The Fidelity of Local Ordinal Encoding

Javid Sadr, Sayan Mukherjee, Keith Thoresz, Pawan Sinha
Center for Biological and Computational Learning
Department of Brain and Cognitive Sciences, MIT
Cambridge, Massachusetts, 02142 USA
{sadr,sayan,thorek,sinha}@ai.mit.edu

Abstract

A key question in neuroscience is how to encode sensory stimuli such as images and sounds. Motivated by studies of response properties of neurons in the early cortical areas, we propose an encoding scheme that dispenses with absolute measures of signal intensity or contrast and uses, instead, only local ordinal measures. In this scheme, the structure of a signal is represented by a set of equalities and inequalities across adjacent regions. In this paper, we focus on characterizing the fidelity of this representation strategy. We develop a regularization approach for image reconstruction from ordinal measures and thereby demonstrate that the ordinal representation scheme can faithfully encode signal structure. We also present a neurally plausible implementation of this computation that uses only local update rules. The results highlight the robustness and generalization ability of local ordinal encodings for the task of pattern classification.

1 Introduction

Biological and artificial recognition systems face the challenge of grouping together differing proximal stimuli arising from the same underlying object. How well the system succeeds in overcoming this challenge is critically dependent on the nature of the internal representations against which the observed inputs are matched. The representation schemes should be capable of efficiently encoding object concepts while being tolerant to their appearance variations. In this paper, we introduce and characterize a biologically plausible representation scheme for encoding signal structure. The scheme employs a simple vocabulary of local ordinal relations, of the kind that early sensory neurons are capable of extracting.
Our results so far suggest that this scheme possesses several desirable characteristics, including tolerance to object appearance variations, computational simplicity, and low memory requirements. We develop and demonstrate our ideas in the visual domain, but they are intended to be applicable to other sensory modalities as well.

The starting point for our proposal lies in studies of the response properties of neurons in the early sensory cortical areas. These response properties constrain the kinds of measurements that can plausibly be included in our representation scheme.

Figure 1: (a) A schematic contrast response curve for a primary visual cortex neuron. The response of the neuron saturates at low contrast values. (b) An idealization of (a). This unit can be thought of as an ordinal comparator, providing information only about contrast polarity but not its magnitude.

In the visual domain, many striate cortical neurons have rapidly saturating contrast response functions [1, 4]. Their tendency to reach ceiling-level responses at low contrast values renders these neurons sensitive primarily to local ordinal, rather than metric, relations. We propose to use an idealization of such units as the basic vocabulary of our representation scheme (figure 1). In this scheme, objects are encoded as sets of local ordinal relations across image regions. As discussed below, this very simple idea seems well suited to handling the photometric appearance variations that real-world objects exhibit.

Figure 2: The challenge for a representation scheme: to construct stable descriptions of objects despite radical changes in appearance.

As figure 2 shows, variations in illumination significantly alter the individual brightness of different parts of the face, such as the eyes, cheeks, and forehead. Therefore, absolute image brightness distributions are unlikely to be adequate for classifying all of these images as depicting the same underlying object.
Even the contrast magnitudes across different parts of the face change greatly under different lighting conditions. While the absolute luminance and contrast magnitude information is highly variable across these images, Thoresz and Sinha [9] have shown that one can identify some stable ordinal measurements. Figure 3 shows several pairs of average brightness values over localized patches for each of the three images included in figure 2. Certain regularities are apparent. For instance, the average brightness of the left eye is always less than that of the forehead, irrespective of the lighting conditions. The relative magnitudes of the two brightness values may change, but the sign of the inequality does not. In other words, the ordinal relationship between the average brightnesses of the <left-eye, forehead> pair is invariant under lighting changes. Figure 3 shows several other such pair-wise invariances. It seems, therefore, that local ordinal relations may encode the stable facial attributes across different illumination conditions.

Figure 3: The absolute brightnesses and their relative magnitudes change under different lighting conditions, but several pair-wise ordinal relationships stay invariant.

An additional advantage of using ordinal relations is their natural robustness to sensor noise. Thus, it would seem that local ordinal representations may be well suited for devising compact representations, robust against large photometric variations, for at least some classes of objects. Notably, for similar reasons, ordinal measures have also been shown to be a powerful tool for simple, efficient, and robust stereo image matching [3]. In what follows, we address an important open question regarding the expressiveness of the ordinal representation scheme.
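The pair-wise invariances described above are easy to illustrate numerically. The sketch below (with made-up patch brightnesses) treats a lighting change as a monotonic transform of average patch brightness; real illumination changes are only approximately monotonic over a face, which is why the paper verifies the invariances empirically rather than assuming them.

```python
import numpy as np

# Hypothetical average patch brightnesses (e.g. left eye, forehead,
# cheek, chin) under a reference lighting condition.
patches = np.array([52.0, 180.0, 120.0, 95.0])

def ordinal_signature(values):
    """Signs of all pairwise brightness differences: the ordinal code."""
    return np.sign(values[:, None] - values[None, :])

# A monotonically increasing intensity transform (gain, gamma, offset)
# stands in for a global lighting/contrast change.
lit = 0.4 * patches ** 1.3 + 10.0

# The magnitudes of the differences change, but every sign is preserved.
assert np.array_equal(ordinal_signature(patches), ordinal_signature(lit))
print("ordinal signature preserved under monotonic lighting change")
```

Note that the <left-eye, forehead> relation corresponds to a single entry of this signature matrix staying at −1 across conditions.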
Given that this scheme ignores absolute luminance and contrast magnitude information, an obvious question that arises is whether such a crude representation strategy can encode object/image structure with any fidelity.

2 Information Content of Local Ordinal Encoding

Figure 4 shows how we define ordinal relations between an image region pa and its immediate neighbors pb ∈ {pa1, . . . , pa8}. In the conventional rectilinear grid, when all image regions pa are considered, four of the eight relations are redundant; we encode the remaining four as {1, 0, −1} based on the difference in luminance between two neighbors being positive, zero, or negative, respectively.

Figure 4: Ordinal relationships between an image region pa and its neighbors. Example: I(pa) < I(pa1) = I(pa2) < I(pa3) < I(pa4) > I(pa5) < I(pa6) < I(pa7) < I(pa8).

To demonstrate the richness of information encoded by this scheme, we compare the original image to one produced by a function that reconstructs the image using local ordinal relationships as constraints. Our reconstruction function has the form

$$ f(x) = w \cdot \phi(x), \tag{1} $$

where x = {i, j} is the position of a pixel, f(x) is its intensity, φ is a map from the input space into a high (possibly infinite) dimensional space, w is a hyperplane in this high-dimensional space, and u · v denotes an inner product. Infinitely many reconstruction functions could satisfy the given ordinal constraints. To make the problem well-posed, we regularize [10] the reconstruction function subject to the ordinal constraints, as done in ordinal regression for ranking document retrieval results [5]. Our regularization term is a norm in a Reproducing Kernel Hilbert Space (RKHS) [2, 11].
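Before turning to the reconstruction machinery, the encoding step itself can be sketched directly. The fragment below (illustrative only; the choice of the four non-redundant directions is one valid selection consistent with the description above) computes the {−1, 0, 1} codes for the interior pixels of an image:

```python
import numpy as np

def local_ordinal_code(img):
    """Sign of the luminance difference between each interior pixel and
    four of its eight neighbors; the remaining four relations are
    redundant on a rectilinear grid (they are the neighbors' own codes)."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]                  # interior pixels p_a
    neighbors = {                         # four non-redundant directions
        "right":      img[1:-1, 2:],
        "down_left":  img[2:, :-2],
        "down":       img[2:, 1:-1],
        "down_right": img[2:, 2:],
    }
    return {name: np.sign(c - nb).astype(int) for name, nb in neighbors.items()}

img = np.array([[10, 20, 30],
                [10, 25, 30],
                [ 5, 25, 40]])
code = local_ordinal_code(img)
print(code["right"])   # the single interior pixel (25) vs. its right neighbor (30)
```

Equal-luminance neighbors yield 0, matching the strict-equality case of the constraints below.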
Minimizing the norm in an RKHS subject to the ordinal constraints corresponds to the following convex constrained quadratic optimization problem:

$$ \min_{w,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_p \xi_p \tag{2} $$

subject to

$$ \theta(\delta_p)\, w \cdot \left( \phi(x_{pa}) - \phi(x_{pb}) \right) \ge |\delta_p| - \xi_p \quad \forall p, \qquad \xi_p \ge 0, \tag{3} $$

where the function θ(y) = +1 for y ≥ 0 and −1 otherwise; p is the index over all pairwise ordinal relations between all pixels pa and their local neighbors pb (as depicted in figure 4); ξp are slack variables penalized by C (the trade-off between smoothness and the ordinal constraints); and δp takes integer values {−1, 0, 1} denoting the ordinal relation (less than, equal to, or greater than, respectively) between pa and pb. For the case δp = 0, the inequality in (3) becomes a strict equality.

Taking the dual of (2) subject to constraints (3) results in the following convex quadratic optimization problem, which has only box constraints:

$$ \max_{\alpha}\ \sum_p |\delta_p|\, \alpha_p - \tfrac{1}{2} \sum_p \sum_q \alpha_p \alpha_q \tilde{K}_{pq} \tag{4} $$

subject to

$$ 0 \le \alpha_p \le C \ \text{ if } \delta_p > 0, \qquad -C \le \alpha_p \le C \ \text{ if } \delta_p = 0, \qquad -C \le \alpha_p \le 0 \ \text{ if } \delta_p < 0, \tag{5} $$

where αp are the dual Lagrange multipliers, and the elements of the matrix K̃ have the form

$$ \tilde{K}_{pq} = \left( \phi(x_{pa}) - \phi(x_{pb}) \right) \cdot \left( \phi(x_{qa}) - \phi(x_{qb}) \right) = K(x_{pa}, x_{qa}) - K(x_{pb}, x_{qa}) - K(x_{pa}, x_{qb}) + K(x_{pb}, x_{qb}), $$

where K(y, x) = φ(y) · φ(x) using the standard kernel trick [8]. In this paper we use only Gaussian kernels, K(y, x) = exp(−||x − y||² / 2σ²). The reconstruction function f(x), obtained by optimizing (4) subject to the box constraints (5), has the following form:

$$ f(x) = \sum_p \alpha_p \left( K(x, x_{pa}) - K(x, x_{pb}) \right). \tag{6} $$

Note that in general many of the αp values may be zero; these terms do not contribute to the reconstruction, and the corresponding constraints in (3) were not required.

Figure 5: Reconstruction results from the regularization approach. (a) Original images. (b) Reconstructed images. (c) Absolute difference between original and reconstruction. (d) Histogram of absolute difference.
The remaining αp with absolute value less than C satisfy the inequality constraints in (3), whereas those with absolute value at C violate them. Figure 5 depicts two typical reconstructions performed by this algorithm. The difference images and error histograms suggest that the reconstructions closely match the source images.

3 Discussion

Our reconstruction results suggest that the local ordinal representation can faithfully encode image structure. Thus, even though individual ordinal relations are insensitive to absolute luminance or contrast magnitude, a set of such relations implicitly encodes metric information. In the context of the human visual system, this result suggests that the rapidly saturating contrast response functions of the early visual neurons do not significantly hinder their ability to convey accurate image information to subsequent cortical stages.

An important question that arises here concerns the strengths and limitations of local ordinal encoding. The first key limitation is that for any choice of neighborhood size over which ordinal relations are extracted, there are classes of images for which the local ordinal representation will be unable to encode the metric structure. For a neighborhood of size n, an image with regions of different luminance embedded in a uniform background and mutually separated by a distance greater than n would constitute such an image. In general, sparse images present a problem for this representation scheme, as might foveal or cortical "magnification," for example. This issue could be addressed by using ordinal relations across multiple scales, perhaps in an adaptive way that varies with the smoothness or sparseness of the stimulus. Second, the regularization approach above seems biologically implausible.
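For concreteness, once the dual problem (4)-(5) has been solved, evaluating the reconstruction function (6) reduces to sums of kernel differences. A minimal sketch, with made-up multipliers standing in for the output of a QP solver:

```python
import numpy as np

def gaussian_kernel(y, x, sigma=1.0):
    """K(y, x) = exp(-||x - y||^2 / (2 sigma^2))."""
    d = np.asarray(y, float) - np.asarray(x, float)
    return np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))

def reconstruct(x, alphas, pairs_a, pairs_b, sigma=1.0):
    """Evaluate f(x) = sum_p alpha_p (K(x, x_pa) - K(x, x_pb)),
    i.e. equation (6), at pixel position x = (i, j)."""
    return sum(a * (gaussian_kernel(x, pa, sigma) - gaussian_kernel(x, pb, sigma))
               for a, pa, pb in zip(alphas, pairs_a, pairs_b))

# Toy example: two active constraints with invented multipliers; in the
# actual method the alphas come from solving the dual QP (4)-(5).
alphas  = [0.8, -0.3]
pairs_a = [(0, 0), (1, 1)]
pairs_b = [(0, 1), (1, 0)]
print(reconstruct((0, 0), alphas, pairs_a, pairs_b))
```

Pairs whose αp = 0 can simply be dropped from the sum, which is the sparsity noted above.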
Our intent in using this approach for reconstructions was to show, via well-understood theoretical tools, the richness of information that local ordinal representations provide. In order to address the neural plausibility requirement, we have developed a simple relaxation-based approach with purely local update rules of the kind that can easily be implemented by cortical circuitry. Each unit communicates only with its immediate neighbors and modifies its value incrementally up or down (starting from an arbitrary state) depending on the number of ordinal relations in the positive or negative direction. This computation is performed iteratively until the network settles to an equilibrium state. The update rule can be formally stated as

$$ R_{p_a,\,t+1} = R_{p_a,\,t} + \Delta \sum_{p_b} \left( \theta(I_{p_a} - I_{p_b}) - \theta(R_{p_a,\,t} - R_{p_b,\,t}) \right), \tag{7} $$

where R_{pa,t} is the intensity of the reconstructed pixel pa at step t, I_{pa} is the intensity of the corresponding pixel in the original image, Δ is a positive update rate, and θ and pb are as described above. Each term nudges the reconstructed value toward agreement with the corresponding ordinal relation of the original image. Figure 6 shows four examples of image reconstructions performed using this relaxation-based approach.

Figure 6: Reconstruction results from the relaxation approach.

A third potential limitation is that the scheme does not appear to constitute a compact code. If each pixel must be encoded in terms of its relations with all of its eight neighbors, where each relation takes one of three values, {−1, 0, 1}, then what has been gained over the original image, where each pixel is encoded by 8 bits? There are three ways to address this question.

1. Eight relations per pixel is highly redundant – four are sufficient. In fact, as shown in figure 7, the scheme can also tolerate several missing relations.

Figure 7: Five reconstructions, shown here to demonstrate the robustness of local ordinal encoding to missing inputs. From left to right: reconstructions based on 100%, 80%, 60%, 40%, and 20% of the full set of immediate neighbor relations.

2.
An advantage of using ordinal relations is that they can be extracted and transmitted much more reliably than metric ones. These relations share the same spirit as loss functions used in robust statistics [6] and trimmed or Winsorized estimators.

3. The intent of the visual system is often not to encode/reconstruct images with perfect fidelity, but rather to encode the most stable characteristics that can aid in classification. In this context, a few ordinal relations may suffice for encoding objects reliably. Figure 8 shows the results of using fewer than 20 relations for detecting faces. Clearly, such a small set would not be sufficient for reconstructions, but it works well for classification. Its generalization ability arises because it defines an equivalence class of patterns.

Figure 8: (a) A small collection of ordinal relations, though insufficient for high-fidelity reconstruction, is very effective for pattern classification despite significant appearance variations. (b) Results of using a local ordinal relationship based template to detect face patterns. The program places white dots at the centers of patches classified as faces. (From Thoresz and Sinha, in preparation.)

In summary, the ordinal representation scheme provides a neurally plausible strategy for encoding signal structure. While in this paper we focus on demonstrating the fidelity of this scheme, we believe that its true strength lies in defining equivalence classes of patterns, enabling generalization over appearance variations in objects. Several interesting directions remain to be explored. These include the study of ordinal representations across multiple scales, learning schemes for identifying subsets of ordinal relations consistent across different instances of an object, and the relationship of this work to multidimensional scaling [12] and to the use of truncated, quantized wavelet coefficients as "signatures" for fast, multiresolution image querying [7].
Acknowledgements

We would like to thank Gadi Geiger, Antonio Torralba, Ryan Rifkin, Gonzalo Ramos, and Tabitha Spagnolo. Javid Sadr is a Howard Hughes Medical Institute Pre-Doctoral Fellow.

References

[1] A. Anzai, M. A. Bearse, R. D. Freeman, and D. Cai. Contrast coding by cells in the cat's striate cortex: monocular vs. binocular detection. Visual Neuroscience, 12:77–93, 1995.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
[3] D. Bhat and S. Nayar. Ordinal measures for image correspondence. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 351–357, 1996.
[4] G. C. DeAngelis, I. Ohzawa, and R. D. Freeman. Spatiotemporal organization of simple-cell receptive fields in the cat's striate cortex. I. General characteristics and postnatal development. J. Neurophysiology, 69:1091–1117, 1993.
[5] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Proc. of the Ninth Intl. Conf. on Artificial Neural Networks, pages 97–102, 1999.
[6] P. Huber. Robust Statistics. John Wiley and Sons, New York, 1981.
[7] C. E. Jacobs, A. Finkelstein, and D. H. Salesin. Fast multiresolution image querying. In Computer Graphics Proc., Annual Conf. Series (SIGGRAPH 95), pages 277–286, 1995.
[8] T. Poggio. On optimal nonlinear associative recall. Biological Cybernetics, 19:201–209, 1975.
[9] K. Thoresz and P. Sinha. Qualitative representations for recognition. Vision Sciences Society Abstracts, 1:81, 2001.
[10] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, Washington, D.C., 1977.
[11] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990.
[12] F. W. Young and C. H. Null. MDS of nominal data: the recovery of metric information with ALSCAL. Psychometrika, 53.3:367–379, 1978.
Temporal Coherence, Natural Image Sequences, and the Visual Cortex

Jarmo Hurri and Aapo Hyvärinen
Neural Networks Research Centre
Helsinki University of Technology
P.O. Box 9800, 02015 HUT, Finland
{jarmo.hurri,aapo.hyvarinen}@hut.fi

Abstract

We show that two important properties of the primary visual cortex emerge when the principle of temporal coherence is applied to natural image sequences. The properties are simple-cell-like receptive fields and complex-cell-like pooling of simple cell outputs, which emerge when we apply two different approaches to temporal coherence. In the first approach we extract receptive fields whose outputs are as temporally coherent as possible. This approach yields simple-cell-like receptive fields (oriented, localized, multiscale). Thus, temporal coherence is an alternative to sparse coding in modeling the emergence of simple cell receptive fields. The second approach is based on a two-layer statistical generative model of natural image sequences. In addition to modeling the temporal coherence of individual simple cells, this model includes inter-cell temporal dependencies. Estimation of this model from natural data yields both simple-cell-like receptive fields and complex-cell-like pooling of simple cell outputs. In this completely unsupervised learning, both layers of the generative model are estimated simultaneously from scratch. This is a significant improvement on earlier statistical models of early vision, in which only one layer has been learned and the others have been fixed a priori.

1 Introduction

The functional role of simple and complex cells has puzzled scientists since their response properties were first mapped by Hubel and Wiesel in the 1950s (see, e.g., [1]). The current view of the functionality of sensory neural networks emphasizes learning and the relationship between the structure of the cells and the statistical properties of the information they process (see, e.g., [2]).
In 1996 a major advance was achieved when Olshausen and Field showed that simple-cell-like receptive fields emerge when sparse coding is applied to natural image data [3]. Similar results were obtained with independent component analysis shortly thereafter [4]. In the case of image data, independent component analysis is closely related to sparse coding [5].

In this paper we show that a principle called temporal coherence [6, 7, 8, 9] leads to the emergence of major properties of the primary visual cortex from natural image sequences. Temporal coherence is based on the idea that when processing temporal input, the representation changes as little as possible over time. Several authors have demonstrated the usefulness of this principle using simulated data (see, e.g., [6, 7]). We apply the principle of temporal coherence to natural input, and at the level of early vision, in two different ways.

In the first approach we show that when the input consists of natural image sequences, the maximization of temporal response strength correlation of cell output leads to receptive fields which are similar to simple cell receptive fields. These results show that temporal coherence is an alternative to sparse coding, in that both result in the emergence of simple-cell-like receptive fields from natural input data. Whereas earlier research has focused on establishing a link between temporal coherence and complex cells, our results demonstrate that such a connection exists even at the simple cell level. We will also show how this approach can be interpreted as estimation of a linear latent variable model in which the latent signals have varying variances.

In the second approach we use the principle of temporal coherence to formulate a two-layer generative model of natural image sequences. In addition to single-cell temporal coherence, this model also captures inter-cell temporal dependencies.
We show that when this model is estimated from natural image sequence data, the results include both simple-cell-like receptive fields and a complex-cell-like pooling of simple cell outputs. Whereas in earlier research learning two-layer statistical models of early vision has required fixing one of the layers beforehand, in our model both layers are learned simultaneously.

2 Simple-cell-like receptive fields are temporally coherent features

Our first approach to modeling temporal coherence in natural image sequences can be interpreted either as maximization of temporal coherence of cell outputs, or as estimation of a latent variable model in which the underlying variables have a certain kind of time structure. This situation is analogous to sparse coding, because measures of sparseness can also be used to estimate linear generative models with non-Gaussian independent sources [5]. We first describe our measure of temporal coherence, and then provide the link to latent variable models.

In this paper we restrict ourselves to linear spatial models of simple cells. Linear simple cell models are commonly used in studies concerning the connections between visual input statistics and simple cell receptive fields [3, 4]. (Non-negative and spatiotemporal extensions of this basic framework are discussed in [10].) The linear spatial model uses a set of spatial filters (vectors) w_1, ..., w_K to relate input to output. Let signal vector x(t) denote the input of the system at time t. A vectorization of image patches can be done by scanning images column-wise into vectors; for windows of size N × N this yields vectors of dimension N². The output of the kth filter at time t, denoted by signal y_k(t), is given by y_k(t) = w_k^T x(t). Let matrix W = [w_1 · · · w_K]^T denote a matrix with all the filters as rows. Then the input-output relationship can be expressed in vector form by

$$ y(t) = W x(t), \tag{1} $$

where signal vector y(t) = [y_1(t) · · · y_K(t)]^T.
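As a concrete illustration of the notation, the following sketch (synthetic data; the frame, patch location, and random filters are placeholders, not learned quantities) performs the column-wise vectorization and applies y(t) = W x(t):

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 16, 4                       # patch size and number of filters
frame = rng.random((64, 64))       # one frame of a synthetic sequence

# Scan an N x N patch column-wise into a vector of dimension N^2,
# as described in the text.
i, j = 10, 20
patch = frame[i:i + N, j:j + N]
x = patch.flatten(order="F")       # column-wise (Fortran-order) vectorization

# W stacks the K spatial filters w_1, ..., w_K as rows; the filter
# outputs are y(t) = W x(t).
W = rng.standard_normal((K, N * N))
y = W @ x
print(x.shape, y.shape)            # (256,) (4,)
```

In the actual model the rows of W are the quantities being learned; here they are random placeholders.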
Temporal response strength correlation, the objective function, is defined by

$$ f(W) = \sum_{k=1}^{K} E_t \left\{ g(y_k(t))\, g(y_k(t - \Delta t)) \right\}, \tag{2} $$

where the nonlinearity g is strictly convex, even (rectifying), and differentiable. The symbol Δt denotes a delay in time. The nonlinearity g measures the strength (amplitude) of the response of the filter, and emphasizes large responses over small ones (see [10] for additional discussion). Examples of choices for this nonlinearity are g₁(α) = α², which measures the energy of the response, and g₂(α) = ln cosh α, which is a robustified version of g₁.

Figure 1: Illustration of nonstationarity of variance. (A) A temporally uncorrelated signal y(t) with nonstationary variance. (B) Plot of y²(t).

A set of filters which has a large temporal response strength correlation is such that the same filters often respond strongly at consecutive time points, outputting large (either positive or negative) values. This means that the same filters will respond strongly over short periods of time, thereby expressing temporal coherence of a population code. A detailed discussion of the difference between temporal response strength correlation and sparseness, including several control experiments, can be found in [10].

To keep the outputs of the filters bounded, we enforce the unit variance constraint on each of the output signals y_k(t). Additional constraints are needed to keep the filters from converging to the same solution, so we force their outputs to be uncorrelated. A gradient projection method can be used to maximize (2) under these constraints. The initial value of W is selected randomly. See [10] for details.

The interpretation of maximization of objective function (2) as estimation of a generative model is based on the concept of sources with nonstationary variances [11, 12].
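The objective (2) is easy to compute from sampled filter outputs. The sketch below uses the g = ln cosh nonlinearity and a synthetic signal whose variance is modulated by a slow envelope, i.e., the kind of nonstationary-variance source discussed in the text; the envelope itself is an invented example.

```python
import numpy as np

def temporal_response_strength_correlation(Y, dt=1):
    """Objective (2): sum over filters of the time-averaged product
    g(y_k(t)) g(y_k(t - dt)), with the robust nonlinearity g = ln cosh."""
    g = np.log(np.cosh(Y))
    return np.sum(np.mean(g[:, dt:] * g[:, :-dt], axis=1))

rng = np.random.default_rng(0)
T, K = 2000, 3

# A signal with nonstationary variance: a slowly varying envelope times
# white noise, so response strengths are correlated at nearby times.
envelope = 0.2 + 2.0 * np.abs(np.sin(np.linspace(0, 8, T)))
coherent = envelope * rng.standard_normal((K, T))
incoherent = rng.standard_normal((K, T))

# Normalize both to unit variance, mirroring the unit variance constraint.
coherent /= coherent.std(axis=1, keepdims=True)
incoherent /= incoherent.std(axis=1, keepdims=True)

print(temporal_response_strength_correlation(coherent) >
      temporal_response_strength_correlation(incoherent))
```

The comparison illustrates why maximizing (2) favors temporally coherent responses over white noise of the same variance.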
The linear generative model for x(t), the counterpart of equation (1), is similar to the one in [13, 3]:

$$ x(t) = A y(t). \tag{3} $$

Here A = [a_1 · · · a_K] denotes a matrix which relates the image patch x(t) to the activities of the simple cells, so that each column a_k, k = 1, ..., K, gives the feature that is coded by the corresponding simple cell. The dimension of x(t) is typically larger than the dimension of y(t), so that (1) is generally not invertible but an underdetermined set of linear equations. A one-to-one correspondence between W and A can be established by computing the pseudoinverse solution A = W^T (W W^T)^{-1}.

The nonstationarity of the variances of the sources y(t) means that their variances change over time, and the variance of a signal is correlated at nearby time points. An example of a signal with nonstationary variance is shown in Figure 1. It can be shown [12] that optimization of a cumulant-based criterion, similar to equation (2), can separate independent sources with nonstationary variances. Thus, the maximization of the objective function can also be interpreted as estimation of generative models in which the activity levels of the sources vary over time and are temporally correlated. As was noted above, this situation is analogous to the application of measures of sparseness to estimate linear generative models with non-Gaussian sources.

The algorithm was applied to natural image sequence data, which was sampled from a subset of the image sequences used in [14]. The number of samples was 200,000, Δt was 40 ms, and the sampled image patches were of size 16×16 pixels. Preprocessing consisted of temporal decorrelation, subtraction of local mean, and normalization [10], and dimensionality reduction from 256 to 160 using principal component analysis [5] (this degree of reduction retains 95% of signal energy).

Figure 2: Basis vectors estimated using the principle of temporal coherence. The vectors were estimated from natural image sequences by optimizing temporal response strength correlation (2) under unit energy and uncorrelatedness constraints (here nonlinearity g(α) = ln cosh α). The basis vectors have been ordered according to E_t{g(y_k(t)) g(y_k(t − Δt))}, that is, according to their "contribution" to the final objective value (vectors with largest values at top left).

Figure 2 shows the basis vectors (columns of matrix A) which emerge when temporal response strength correlation is maximized for this data. The basis vectors are oriented, localized, and have multiple scales. These are the main features of simple cell receptive fields [1]. A quantitative analysis, showing that the resulting receptive fields are similar to those obtained using sparse coding, can be found in [10], where the details of the experiments are also described.

3 Inter-cell temporal dependencies yield simple cell output pooling

3.1 Model

Temporal response strength correlation, equation (2), measures the temporal coherence of individual simple cells. In terms of the generative model described above, this means that the nonstationary variances of the different y_k(t)'s have no interdependencies. In this section we add another layer to the generative model presented above to extend the theory to simple cell interactions, and to the level of complex cells.

Like in the generative model described at the end of the previous section, the output layer of the model (see Figure 3) is linear, and maps signed cell responses to image features. But in contrast to the previous section, or to models used in independent component analysis [5] or basic sparse coding [3], we do not assume that the components of y(t) are independent. Instead, we model the dependencies between these components with a multivariate autoregressive model in the first layer of our model.
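The pseudoinverse correspondence A = W^T (W W^T)^{-1}, used throughout to map between the filters W and the basis A, can be checked numerically. The sketch below uses the same dimensions as the experiments (160 filters on 256-dimensional inputs), with random filters standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 160, 256                    # number of filters and input dimension
W = rng.standard_normal((K, D))    # placeholder filters (rows of W)

# One-to-one correspondence between analysis filters W and basis A:
# the pseudoinverse solution A = W^T (W W^T)^{-1}.
A = W.T @ np.linalg.inv(W @ W.T)

# Sanity checks: W A = I_K, so x = A y satisfies W x = y for any y.
assert np.allclose(W @ A, np.eye(K), atol=1e-8)
y = rng.standard_normal(K)
assert np.allclose(W @ (A @ y), y, atol=1e-8)
print("pseudoinverse correspondence verified")
```

Since D > K, the map x = A y is not onto the whole input space, which matches the non-invertibility noted in the text.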
Let abs(y(t)) = [|y_1(t)| · · · |y_K(t)|]^T, let v(t) denote a driving noise signal, and let M denote a K × K matrix. Our model is a multidimensional first-order autoregressive process, defined by

$$ \operatorname{abs}(y(t)) = M \operatorname{abs}(y(t - \Delta t)) + v(t). \tag{4} $$

As in independent component analysis, we also need to fix the scale of the latent variables, by defining E_t{y_k²(t)} = 1 for k = 1, ..., K.

Figure 3: The two layers of the generative model. Let abs(y(t)) = [|y_1(t)| · · · |y_K(t)|]^T denote the amplitudes of simple cell responses. In the first layer, the driving noise signal v(t) generates the amplitudes of simple cell responses via an autoregressive model. The signs of the responses are generated randomly between the first and second layer to yield signed responses y(t). In the second layer, natural video x(t) is generated linearly from simple cell responses. In addition to the relations shown here, the generation of v(t) is affected by M abs(y(t − Δt)) to ensure non-negativity of abs(y(t)). See text for details.

There are dependencies between the driving noise v(t) and the output strengths abs(y(t)), caused by the non-negativity of abs(y(t)). To take these dependencies into account, we use the following formalism. Let u(t) denote a random vector with components which are statistically independent of each other. We define

$$ v(t) = \max\left( -M \operatorname{abs}(y(t - \Delta t)),\ u(t) \right), $$

where, for vectors a and b, max(a, b) = [max(a_1, b_1) · · · max(a_n, b_n)]^T. We assume that u(t) and abs(y(t)) are uncorrelated.

To make the generative model complete, a mechanism for generating the signs of the cell responses y(t) must be included. We specify that the signs are generated randomly with equal probability for plus or minus after the strengths of the responses have been generated. Note that one consequence of this is that the different y_k(t)'s are uncorrelated.
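The two-layer generative model can be simulated directly. The sketch below uses a made-up matrix M, a toy basis A, and invented noise scales; note that v(t) = max(−M abs(y(t − Δt)), u(t)) makes the amplitude update equivalent to clipping at zero componentwise.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T = 4, 500

# First layer: amplitudes follow the first-order autoregressive model
# abs(y(t)) = M abs(y(t - dt)) + v(t), with the driving noise defined as
# v(t) = max(-M abs(y(t - dt)), u(t)) so amplitudes stay non-negative.
M = 0.4 * np.eye(K) + 0.1              # invented similarity structure
amp = np.zeros((K, T))
amp[:, 0] = np.abs(rng.standard_normal(K))
for t in range(1, T):
    pred = M @ amp[:, t - 1]
    u = 0.3 * rng.standard_normal(K)   # independent components of u(t)
    amp[:, t] = pred + np.maximum(-pred, u)   # = max(pred + u, 0)

# Signs are drawn with equal probability after the amplitudes, so the
# signed responses y_k(t) are mutually uncorrelated.
signs = rng.choice([-1.0, 1.0], size=(K, T))
y = signs * amp

# Second layer: the image sequence is generated linearly, x(t) = A y(t).
A = rng.standard_normal((16, K))       # toy basis for 16-dim "patches"
x = A @ y
print(x.shape)
```

Large off-diagonal entries of M would make the amplitudes of the corresponding cell pairs covary over time, which is exactly the inter-cell dependency the model is meant to capture.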
In the estimation of the model, this uncorrelatedness property is used as a constraint. When this is combined with the unit variance (scale) constraints described above, the resulting set of constraints is the same as in the approach described in Section 2.

In equation (4), a large positive matrix element M(i, j), or M(j, i), indicates that there is strong temporal coherence between the output strengths of cells i and j. Thinking in terms of grouping temporally coherent cells together, matrix M can be thought of as containing similarities (reciprocals of distances) between different cells. We will use this property in the experimental section to derive a topography of simple cell receptive fields from M.

3.2 Estimation of the model

To estimate the model defined above we need to estimate both M and W (the pseudoinverse of A). We first show how to estimate M, given W. We then describe an objective function which can be used to estimate W, given M. Each iteration of the estimation algorithm consists of two steps. During the first step M is updated and W is kept constant; during the second step these roles are reversed.

First, regarding the estimation of M, consider a situation in which W is kept constant. It can be shown that M can be estimated by an approximative method of moments, the estimate being

$$ M \approx \beta\, E_t\!\left\{ \left( \operatorname{abs}(y(t)) - E_t\{\operatorname{abs}(y(t))\} \right) \left( \operatorname{abs}(y(t - \Delta t)) - E_t\{\operatorname{abs}(y(t))\} \right)^T \right\} \times E_t\!\left\{ \left( \operatorname{abs}(y(t)) - E_t\{\operatorname{abs}(y(t))\} \right) \left( \operatorname{abs}(y(t)) - E_t\{\operatorname{abs}(y(t))\} \right)^T \right\}^{-1}, \tag{5} $$

where β > 1. Since this multiplier has only a constant linear effect in the objective function given below, its value does not change the optima, so we can set β = 1 in the optimization. (Details are given in [15].) The resulting estimator is the same as the optimal least mean squares linear predictor in the case of unconstrained v(t). The estimation of W is more complicated.
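With β = 1, the method-of-moments estimate (5) is a lagged amplitude covariance multiplied by the inverse of the instantaneous one. A sketch (centering each segment at its own sample mean is a small simplification of the shared mean E_t{abs(y(t))} in (5), and the white-noise input is synthetic):

```python
import numpy as np

def estimate_M(Y, dt=1):
    """Approximative method-of-moments estimate (5) with beta = 1:
    lagged covariance of the amplitudes times the inverse of their
    instantaneous covariance."""
    amp = np.abs(Y)
    a = amp[:, dt:] - amp[:, dt:].mean(axis=1, keepdims=True)
    b = amp[:, :-dt] - amp[:, :-dt].mean(axis=1, keepdims=True)
    lagged = a @ b.T / (amp.shape[1] - dt)
    c = amp - amp.mean(axis=1, keepdims=True)
    instant = c @ c.T / amp.shape[1]
    return lagged @ np.linalg.inv(instant)

rng = np.random.default_rng(0)
Y = rng.standard_normal((4, 5000))
M_hat = estimate_M(Y)

# For white noise the amplitudes have no temporal structure, so the
# estimate should be close to the zero matrix.
print(np.round(np.abs(M_hat).max(), 3))
```

On signals generated by the model itself, the same estimator recovers (a scaled version of) the true M, which is the role it plays in the alternating algorithm.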
A rigorous derivation of an objective function based on well-known estimation principles is very difficult, because the statistics involved are non-Gaussian and the processes have difficult interdependencies. Therefore, instead of deriving an objective function from first principles, we derived an objective function heuristically, and verified through simulations that the objective function is capable of estimating the two-layer model. The objective function is a weighted sum of the covariances of filter output strengths at times t − ∆t and t, defined by

f(W, M) = Σ_{i=1}^{K} Σ_{j=1}^{K} M(i, j) cov{|yi(t)|, |yj(t − ∆t)|}.  (6)

In the actual estimation algorithm, W is updated by employing a gradient projection approach to the optimization of (6) under the constraints. The initial value of W is selected randomly. The fact that the algorithm described above is able to estimate the two-layer model has been verified through extensive simulations (details can be found in [15]).

3.3 Experiments

The estimation algorithm was run on the same data set as in the previous experiment (see Section 2). The extracted matrices A and M can be visualized simultaneously by using the interpretation of M as a similarity matrix (see Section 3.1). Figure 4 illustrates the basis vectors – that is, columns of A – laid out at spatial coordinates derived from M in a way explained below. The resulting basis vectors are again oriented, localized and multiscale, as in the previous experiment. The two-dimensional coordinates of the basis vectors were determined from M using multidimensional scaling (see figure caption for details). The temporal coherence between the outputs of two cells i and j is reflected in the distance between the corresponding receptive fields: the larger the elements M(i, j) and M(j, i) are, the closer the receptive fields are to each other.
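For completeness, the objective (6) used in the estimation step is cheap to evaluate for a candidate W; the sketch below omits the constraints and the gradient projection update:

```python
import numpy as np

def objective(W, M, X):
    """f(W, M) = sum_ij M[i, j] cov(|y_i(t)|, |y_j(t - dt)|), eq. (6),
    with y(t) = W x(t); X holds one input frame x(t) per row."""
    Y = np.abs(X @ W.T)                     # output strengths |y(t)|
    Y = Y - Y.mean(axis=0)
    C = Y[1:].T @ Y[:-1] / (len(Y) - 1)     # lagged covariances
    return float(np.sum(M * C))

# toy check data (arbitrary sizes)
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
W = rng.normal(size=(3, 5))
M = rng.normal(size=(3, 3))
f1 = objective(W, M, X)
```

Because f is linear in M, the constant multiplier β mentioned above indeed cannot move the optima.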
We can see that local topography emerges in the results: those basis vectors which are close to each other seem to be mostly coding for similarly oriented features at nearby spatial positions. This kind of grouping is characteristic of the pooling of simple cell outputs at the complex cell level [1].¹ Thus, the estimation of our two-layer model from natural image sequences yields both simple-cell-like receptive fields, and grouping similar to the pooling of simple cell outputs. Linear receptive fields emerge in the second layer (matrix A), and cell output grouping emerges in the first layer (matrix M). Both of these layers are estimated simultaneously. This is a significant improvement on earlier statistical models of early vision, because no a priori fixing of either of these layers is needed.

¹Some global topography also emerges: those basis vectors which code for horizontal features are on the left in the figure, while those that code for vertical features are on the right.

Figure 4: Results of estimating the two-layer generative model from natural image sequences. Basis vectors (columns of A) plotted at spatial coordinates given by applying multidimensional scaling to M. Matrix M was first converted to a non-negative similarity matrix Ms by subtracting min_{i,j} M(i, j) from each of its elements, and by setting each of the diagonal elements to the value 1. Multidimensional scaling was then applied to Ms by interpreting entries Ms(i, j) and Ms(j, i) as similarity measures between cells i and j. Some of the resulting coordinates were very close to each other, so tight cell clusters were magnified for purposes of visual display. Details are given in [15].

4 Conclusions

We have shown in this paper that when the principle of temporal coherence is applied to natural image sequences, both simple-cell-like receptive fields and complex-cell-like pooling of simple cell outputs emerge. These results were obtained with two different approaches to temporal coherence.
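The layout procedure of Figure 4 can be approximated with classical multidimensional scaling. The sketch below converts similarities to distances by taking reciprocals, which follows the "reciprocals of distances" reading above but is not necessarily the exact recipe of [15]:

```python
import numpy as np

def layout_from_M(M, n_dims=2):
    """Plot coordinates for cells from the temporal-coherence matrix M,
    via classical (Torgerson) multidimensional scaling."""
    Ms = M - M.min()                      # shift to non-negative similarities
    Ms = (Ms + Ms.T) / 2                  # use M(i, j) and M(j, i) symmetrically
    D = 1.0 / (Ms + 1e-3)                 # similarity -> distance (reciprocal)
    np.fill_diagonal(D, 0.0)              # zero self-distances
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
    w, V = np.linalg.eigh(B)              # eigh returns ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_dims]    # keep the top n_dims components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# two tightly coherent pairs of cells: (0, 1) and (2, 3)
M_demo = np.array([[1.0, 0.9, 0.1, 0.1],
                   [0.9, 1.0, 0.1, 0.1],
                   [0.1, 0.1, 1.0, 0.9],
                   [0.1, 0.1, 0.9, 1.0]])
P = layout_from_M(M_demo)
```

In the demo layout the coherent pairs end up close together and the two pairs far apart, mirroring how grouped receptive fields cluster in Figure 4.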
The first used temporally coherent simple cell outputs, and the second was based on a temporal two-layer generative model of natural image sequences. Simple-cell-like receptive fields emerge in both cases, and the output pooling emerges as a local topographic property in the case of the two-layer generative model. These results are important for two reasons. First, to our knowledge this is the first time that localized and oriented receptive fields with different scales have been shown to emerge from natural data using the principle of temporal coherence. In some models of invariant visual representations [8, 16] simple cell receptive fields are obtained as by-products, but learning is strongly modulated by complex cells, and the receptive fields seem to lack the important properties of spatial localization and multiresolution. Second, in earlier research on statistical models of early vision, learning two-layer models has required a priori fixing of one of the layers. This is not needed in our two-layer model, because both layers emerge simultaneously in a completely unsupervised manner from the natural input data. References [1] Stephen E. Palmer. Vision Science – Photons to Phenomenology. The MIT Press, 1999. [2] Eero P. Simoncelli and Bruno A. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193–1216, 2001. [3] Bruno A. Olshausen and David Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996. [4] Anthony Bell and Terrence J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997. [5] Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. Independent Component Analysis. John Wiley & Sons, 2001. [6] Peter Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991. [7] James Stone. 
Learning visual parameters using spatiotemporal smoothness constraints. Neural Computation, 8(7):1463–1492, 1996. [8] Christoph Kayser, Wolfgang Einhäuser, Olaf Dümmer, Peter König, and Konrad Körding. Extracting slow subspaces from natural videos leads to complex cells. In Georg Dorffner, Horst Bischof, and Kurt Hornik, editors, Artificial Neural Networks – ICANN 2001, volume 2130 of Lecture notes in computer science, pages 1075–1080. Springer, 2001. [9] Laurenz Wiskott and Terrence J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002. [10] Jarmo Hurri and Aapo Hyvärinen. Simple-cell-like receptive fields maximize temporal coherence in natural video. Neural Computation, 2003. In press. [11] Kiyotoshi Matsuoka, Masahiro Ohya, and Mitsuru Kawamoto. A neural net for blind separation of nonstationary signals. Neural Networks, 8(3):411–419, 1995. [12] Aapo Hyvärinen. Blind source separation by nonstationarity of variance: A cumulant-based approach. IEEE Transactions on Neural Networks, 12(6):1471–1474, 2001. [13] Aapo Hyvärinen and Patrik O. Hoyer. A two-layer sparse coding model learns simple and complex cell receptive fields and topography from natural images. Vision Research, 41(18):2413– 2423, 2001. [14] J. Hans van Hateren and Dan L. Ruderman. Independent component analysis of natural image sequences yields spatio-temporal filters similar to simple cells in primary visual cortex. Proceedings of the Royal Society of London B, 265(1412):2315–2320, 1998. [15] Jarmo Hurri and Aapo Hyvärinen. A two-layer dynamic generative model of natural image sequences. Submitted. [16] Teuvo Kohonen, Samuel Kaski, and Harri Lappalainen. Self-organized formation of various invariant-feature filters in the adaptive-subspace SOM. Neural Computation, 9(6):1321–1344, 1997.
How the Poverty of the Stimulus Solves the Poverty of the Stimulus

Willem Zuidema
Language Evolution and Computation Research Unit and Institute for Cell, Animal and Population Biology
University of Edinburgh
40 George Square, Edinburgh EH8 9LL, United Kingdom
jelle@ling.ed.ac.uk

Abstract

Language acquisition is a special kind of learning problem because the outcome of learning of one generation is the input for the next. That makes it possible for languages to adapt to the particularities of the learner. In this paper, I show that this type of language change has important consequences for models of the evolution and acquisition of syntax.

1 The Language Acquisition Problem

For both artificial systems and non-human animals, learning the syntax of natural languages is a notoriously hard problem. All healthy human infants, in contrast, learn any of the approximately 6000 human languages rapidly, accurately and spontaneously. Any explanation of how they accomplish this difficult task must specify the (innate) inductive bias that human infants bring to bear, and the input data that is available to them. Traditionally, the inductive bias is termed – somewhat unfortunately – "Universal Grammar", and the input data "primary linguistic data". Over the last 30 years or so, a view on the acquisition of the syntax of natural language has become popular that has put much emphasis on the innate machinery. In this view, which one can call the "Principles and Parameters" model, the Universal Grammar specifies most aspects of syntax in great detail [e.g. 1]. The role of experience is reduced to setting a limited number (30 or so) of parameters. The main argument for this view is the argument from the poverty of the stimulus [2]. This argument states that children have insufficient evidence in the primary linguistic data to induce the grammar of their native language. Mark Gold [3] provides the most well-known formal basis for this argument.
Gold introduced the criterion "identification in the limit" for evaluating the success of a learning algorithm: given an unbounded sequence of training samples, the hypotheses of the algorithm should, after finitely many samples, become identical and equivalent to the target. Gold showed that the class of context-free grammars is not learnable in this sense by any algorithm from positive samples alone (and neither are other superfinite classes). This proof is based on the fact that no matter how many samples from an infinite language a learning algorithm has seen, the algorithm can not decide with certainty whether the samples are drawn from the infinite language or from a finite language that contains all samples. Because natural languages are thought to be at least as complex as context-free grammars, and negative feedback is assumed to be absent in the primary linguistic data, Gold's analysis, and subsequent work in learnability theory [1], is usually interpreted as strong support for the argument from the poverty of the stimulus, and, in the extreme, for the view that grammar induction is fundamentally impossible (a claim that Gold would not subscribe to). Critics of this "nativist" approach [e.g. 4, 5] have argued for different assumptions on the appropriate grammar formalism (e.g. stochastic context-free grammars), the available primary data (e.g. semantic information) or the appropriate learnability criterion. In this paper I will take a different approach. I will present a model that induces context-free grammars without a priori restrictions on the search space, semantic information or negative evidence. Gold's negative results thus apply. Nevertheless, acquisition of grammar is successful in my model, because another process is taken into account as well: the cultural evolution of language.
2 The Language Evolution Problem

Whereas in language acquisition research the central question is how a child acquires an existing language, in language evolution research the central question is how this language and its properties have emerged in the first place. Within the nativist paradigm, some have suggested that the answer to this question is that Universal Grammar is the product of evolution under selection pressures for communication [e.g. 6]. Recently, several formal models have been presented to evaluate this view. For this paper, the most relevant of those is the model of Nowak et al. [7]. In that model it is assumed that there is a finite number of grammars, that newcomers (infants) learn their grammar from the population, that more successful grammars have a higher probability of being learned, and that mistakes are made in learning. The system can thus be described in terms of the changes in the relative frequencies xi of each grammar type i in the population. The first result that Nowak et al. obtain is a "coherence threshold". This threshold is the necessary condition for grammatical coherence in a population, i.e. for a majority of individuals to use the same grammar. They show that this coherence depends on the chance that a child has to correctly acquire its parents' grammar. This probability is described with the parameter q. Nowak et al. show analytically that there is a minimum value for q to keep coherence in the population. If q is lower than this value, all possible grammar types are equally frequent in the population and the communicative success is minimal. If q is higher than this value, one grammar type is dominant; the communicative success is much higher than before and reaches 100% if q = 1. The second result relates this required fidelity (called q_c) to a lower bound (b_c) on the number of sample sentences that a child needs. Nowak et al.
make the crucial assumption that all languages are equally expressive and equally different from each other. With that assumption they can show that b_c is proportional to the total number of possible grammars N. Of course, the actual number of sample sentences b is finite; Nowak et al. conclude that only if N is relatively small can a stable grammar emerge in a population. I.e., the population dynamics require a restrictive Universal Grammar. The models of Gold and Nowak et al. have in common that they implicitly assume that every possible grammar is equally likely to become the target grammar for learning. If even the best possible learning algorithm cannot learn such a grammar, the set of allowed grammars must be restricted. There is, however, reason to believe that this assumption is not the most useful for language learning. Language learning is a very particular type of learning problem, because the outcome of the learning process at one generation is the input for the next. The samples from which a child learns with its learning procedure are therefore biased by the learning of previous generations that used the same procedure [8]. In [9] and other papers, Kirby, Hurford and students have developed a framework to study the consequences of that fact. In this framework, called the "Iterated Learning Model" (ILM), a population of individuals is modeled that can each produce and interpret sentences, and that have a language acquisition procedure to learn grammar from each other. In the ILM one individual (the parent) presents a relatively small number of examples of form-meaning pairs to the next individual (the child). The child then uses these examples to induce his own grammar. In the next iteration the child becomes the parent, and a new individual becomes the child. This process is repeated many times.
Interestingly, Kirby and Hurford have found that in these iterated transmission steps the language becomes easier and easier to learn, because the language adapts to the learning algorithm by becoming more and more structured. The structure of language in these models thus emerges from the iteration of learning. The role of biological evolution, in this view, is to shape the learning algorithms, such that the complex result of iterated learning is biologically adaptive [10]. In this paper I will show that if one adopts this view on the interactions between learning, cultural evolution and biological evolution, models such as those of Gold [3] and Nowak et al. [7] can no longer be taken as evidence for an extensive, innate pre-specification of human language.

3 A Simple Model of Grammar Induction

To study the interactions between language adaptation and language acquisition, I have first designed a grammar induction algorithm that is simple, but can nevertheless deal with some non-trivial induction problems. The model uses context-free grammars to represent linguistic abilities. In particular, the representation is limited to grammars G where all rules are of one of the following forms: (1) A → t, (2) A → BC, (3) A → Bt. The nonterminals A, B, C are elements of the nonterminal alphabet Vnt, which includes the start symbol S. t is a string of terminal symbols from the terminal alphabet Vt.¹ For determining the language L of a certain grammar G I use simple depth-first exhaustive search of the derivation tree. For computational reasons, the depth of the search is limited to a certain depth d, and the string length is limited to length l. The set of sentences (L' ⊆ L) used in training and in communication is therefore finite (and strictly speaking not context-free, but regular); in production, strings are drawn from a uniform distribution over L'.
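The depth- and length-limited enumeration of L' can be sketched as follows; rules are written as (lhs, rhs) pairs with single uppercase nonterminals and lowercase terminal strings, matching forms (1)–(3), and the limits d and l are passed explicitly:

```python
def enumerate_strings(rules, max_depth, max_len):
    """Depth-first, depth-limited expansion of a grammar whose rules have
    the forms A -> t, A -> BC, A -> Bt (uppercase = nonterminal)."""
    def expand(symbols, depth):
        if depth > max_depth or sum(len(s) for s in symbols) > max_len:
            return set()
        i = next((k for k, s in enumerate(symbols) if s.isupper()), None)
        if i is None:                          # all terminals: one sentence
            return {"".join(symbols)}
        out = set()
        for lhs, rhs in rules:                 # expand the leftmost nonterminal
            if lhs == symbols[i]:
                out |= expand(symbols[:i] + rhs + symbols[i + 1:], depth + 1)
        return out
    return expand(["S"], 0)

# a recursive toy grammar: S -> Xd, X -> XX, X -> abc
rules = [("S", ["X", "d"]), ("X", ["X", "X"]), ("X", ["abc"])]
L_prime = enumerate_strings(rules, max_depth=6, max_len=10)
# L_prime == {"abcd", "abcabcd", "abcabcabcd"}: the finite slice of (abc)^n d
```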
The grammar induction algorithm learns from a set of sample strings (sentences) that are provided by a teacher. The design of the learning algorithm is originally inspired by [11] and is similar to the algorithm in [12]. The algorithm fits within a tradition of algorithms that search for compact descriptions of the input data [e.g. 13, 14, 15]. It consists of three operations:

Incorporation: extend the language, such that it includes the encountered string; if string s is not already part of the language, add a rule S → s to the grammar.

¹Note that the restrictions on the rule-types above do not limit the scope of languages that can be represented (they are essentially equivalent to Chomsky Normal Form). They are, however, relevant for the language acquisition algorithm.

Compression: substitute frequent and long substrings with a nonterminal, such that the grammar becomes smaller and the language remains unchanged; for every valid substring z of the right-hand sides of all rules, calculate the compression effect v(z) of substituting z with a nonterminal A; replace all valid occurrences of the substring z' = argmax_z v(z) with A if v(z') > 0, and add a rule A → z' to the grammar. "Valid substrings" are those substrings which can be replaced while keeping all rules in the forms 1–3 described above. The compression effect is measured as the difference between the number of symbols in the grammar before and after the substitution. The compression step is repeated until the grammar does not change anymore.

Generalization: equate two nonterminals, such that the grammar becomes smaller and the language larger; for every combination of two nonterminals A and B (B ≠ S), calculate the compression effect v(A, B) of equating A and B. Equate the combination (A', B') = argmax_{A,B} v(A, B) if v(A', B') > 0, i.e. replace all occurrences of B' with A'. The compression effect is measured as the difference between the number of symbols before and after replacing and deleting redundant rules.
The generalization step is repeated until the grammar does not change anymore.

4 Learnable and Unlearnable Classes

The algorithm described above is implemented in C++ and tested on a variety of target grammars.² I will not present a detailed analysis of the learning behavior here, but limit myself to a simple example that shows that the algorithm can learn some (recursive) grammars, while it cannot learn others. The induction algorithm receives three sentences (abcd, abcabcd, abcabcabcd). The incorporation, compression (repeated twice) and generalization steps subsequently yield the following grammars:

(a) Incorporation:  S → abcd;  S → abcabcd;  S → abcabcabcd
(b) Compression:    S → Yd;  S → Xd;  S → Xabcd;  X → YY;  Y → abc
(c) Generalization: S → Xd;  S → Xabcd;  X → XX;  X → abc

In (b) the substrings "abcabc" and "abc" are subsequently replaced by the nonterminals X and Y. In (c) the nonterminals X and Y are equated, which leads to the deletion of the second rule in (b). One can check that the total size of the grammar reduces from 24, to 19, and further down to 16 characters. From this example it is also clear that learning is not always successful. Any of the three grammars above ((a) and (b) are equivalent) could have generated the training data, but with these three input strings the algorithm always yields grammar (c). Consistent with Gold's general proof [3], many target grammars will never be learned correctly, no matter how many input strings are generated. In practice, each finite set of randomly generated strings from some target grammar might yield a different result. Thus, for some number of input strings T, some target grammars are always acquired, some are never acquired, and some are acquired only part of the time.
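The symbol-counting behind the compression effect v(z) can be sketched as below. This toy version ignores the rule-form validity restrictions, so unlike the trace above it replaces "abc" everywhere in one pass; it still lands on a 16-symbol grammar:

```python
def grammar_size(rules):
    """Total number of symbols on both sides of all rules."""
    return sum(len(lhs) + len(rhs) for lhs, rhs in rules)

def compress_once(rules, new_nt):
    """One compression step: substitute the repeated substring with the best
    effect v(z) by a fresh nonterminal, if v(z) > 0 (validity checks omitted)."""
    bodies = [rhs for _, rhs in rules]
    cands = {rhs[i:j] for rhs in bodies
             for i in range(len(rhs)) for j in range(i + 2, len(rhs) + 1)}
    best, best_gain = None, 0
    for z in sorted(cands):                   # sorted for deterministic ties
        n_occ = sum(rhs.count(z) for rhs in bodies)
        # each occurrence shrinks by len(z) - 1; the new rule costs len(z) + 1
        gain = n_occ * (len(z) - 1) - (len(z) + 1)
        if n_occ >= 2 and gain > best_gain:
            best, best_gain = z, gain
    if best is None:
        return rules, False
    compressed = [(lhs, rhs.replace(best, new_nt)) for lhs, rhs in rules]
    compressed.append((new_nt, best))
    return compressed, True

rules = [("S", "abcd"), ("S", "abcabcd"), ("S", "abcabcabcd")]  # size 24
new_rules, changed = compress_once(rules, "X")                  # size 16
```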
If we can enumerate all possible grammars, we can describe this with a matrix Q, where each entry Qij describes the probability that the algorithm, learning from sample strings from a target grammar of type i, will end up with a grammar of type j. Qii is the probability that the algorithm finds the target grammar. To make learning successful, the target grammars that are presented to the algorithm have to be biased. The following section will show that for this we need nothing more than to assume that the output of one learner is the input for the next.

²The source code is available at http://www.ling.ed.ac.uk/~jelle

5 Iterated Learning: the Emergence of Learnability

To study the effects of iterated learning, we extend the model with a population structure. In the new version of the model, individuals (agents, that each represent a generation) are placed in a chain. The first agent induces its grammar from a number E of randomly generated strings. Every subsequent agent (the child) learns its grammar from T sample sentences that are generated by the previous one (the parent). To avoid insufficient expressiveness, we also extend the generalization step with a check whether the number EG of different strings the grammar G can recognize is larger than or equal to E. If not, E − EG random new strings are generated and incorporated in the grammar. Using the matrix Q from the previous section, we can formalize this iterated learning model with the following general equation, where xi is the probability that grammar i is the grammar of the current generation:

∆xi = Σ_{j=0}^{N} xj Qji  (1)

In simulations such as the one of figure 1, communicative success between child and parent – a measure for the learnability of a grammar – rises steadily from a low value (here 0.65) to a high value (here 1.0). In the initial stage the grammar shows no structure, and consequently almost every string that the grammar produces is idiosyncratic.
A child in this stage typically hears strings like "ada", "ddac", "adba", "bcbd", or "cdca" from its parent. It can not discover many regularities in these strings. The child therefore can not do much better than simply reproduce the strings it heard (i.e. T random draws from at least E different strings), and generate random new strings, if necessary to make sure its language obeys the minimum number (E) of strings. However, in these randomly generated strings, sometimes regularities appear. I.e., a parent may use the randomly generated strings "dcac", "bcac", "caac" and "daac". When this happens, the child tends to analyze these strings as different combinations with the building block "ac". Thus, typically, the learning algorithm generates a grammar with the rules S → dcX, S → bcX, S → caX, S → daX, and X → ac. When this happens to another set of strings as well, say with a new rule Y → b, the generalization procedure can decide to equate the nonterminals X and Y. The resulting grammar can then generalize from the observed strings to the unobserved strings "dcb", "bcb", "cab" and "dab". The child still needs to generate random new strings to reach the minimum E, but fewer than in the case considered above. The interesting aspect of this becomes clear when we consider the next step in the simulation, when the child itself becomes the parent of a new child. This child is now presented with a language with more regularities than before, and has a fair chance of correctly generalizing to unseen examples. If, for instance, it only sees the strings "dcac", "bcac", "caac", "bcb", "cab" and "dab", it can, through the same procedure as above, infer that "daac" and "dcb" are also part of the target language.
This means that (i) the child shares more strings with its parent than just the ones it observes, and consequently shows a higher between-generation communicative success, and (ii) regularities that appear in the language by chance have a fair chance to remain in the language. In the process of iterated learning, languages can thus become more structured and better learnable.

Figure 1: Iterated Learning: although initially the target language is unstructured and difficult to learn, over the course of 20 generations (a) the learnability (the fraction of successful communications with the parent) steadily increases, (b) the number of rules steadily decreases (combinatorial and recursive strategies are used), and (c) after an initial phase of overgeneralization, the expressiveness remains close to its minimally required level. Parameters: Vt = {a, b, c, d}, Vnt = {S, X, Y, Z, A, B, C}, T = 30, E = 20, l0 = 3. Shown are the average values of 2 simulations.

Similar results with different formalisms were already reported before [e.g. 11, 16], but here I have used context-free grammars and the results are therefore directly relevant for the interpretation of Gold's proof [3]. Whereas in the usual interpretation of that proof [e.g. 1] it is assumed that we need innate constraints on the search space in addition to a smart learning procedure, here I show that even a simple learning procedure can lead to successful acquisition, because restrictions on the search space automatically emerge in the iteration of learning. If one considers learnability a binary feature – as is common in generative linguistics – this is a rather trivial phenomenon: languages that are not learnable will not occur in the next generation. However, if there are gradations in learnability, the cultural evolution of language can be an intricate process where languages get shaped over many generations.
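Read as a discrete per-generation update, equation (1) above says the new grammar distribution is x' = Qᵀx. A minimal sketch with an arbitrary two-grammar Q:

```python
import numpy as np

def iterate_learning(Q, x0, n_gen):
    """Iterate x_i' = sum_j x_j Q_ji, i.e. x' = Q^T x: the distribution of
    grammars after each generation learns from the previous one.
    Rows of Q sum to 1."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_gen):
        x = Q.T @ x
    return x

# grammar 0 is acquired faithfully far more often than grammar 1
Q = np.array([[0.9, 0.1],
              [0.5, 0.5]])
x_stat = iterate_learning(Q, [0.5, 0.5], 200)
# x_stat ~ (5/6, 1/6): the more learnable grammar dominates
```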
6 Language Adaptation and the Coherence Threshold

When we study this effect in a version of the model where selection does play a role, it is also relevant for the analysis in [7]. The model is therefore extended such that at every generation there is a population of agents, agents of one generation communicate with each other, and the expected number of offspring of an agent (the fitness) is determined by the number of successful interactions it had. Children still acquire their grammar from sample strings produced by their parent. Adapting equation 1, this system can now be described with the following equation, where xi is now the relative fraction of grammar i in the population (assuming an infinite population size):

∆xi = Σ_{j=0}^{N} xj fj Qji − φ xi  (2)

Here, fi is the relative fitness (quality) of grammars of type i and equals fi = Σj xj Fij, where Fij is the expected communicative success from an interaction between an individual of type i and an individual of type j. The relative fitness f of a grammar thus depends on the frequencies of all grammar types, hence it is frequency dependent. φ is the average fitness in the population and equals φ = Σi xi fi. This term is needed to keep the sum of all fractions at 1. This equation is essentially the model of Nowak et al. [7]. Recall that the main result of that paper is a "coherence threshold": a minimum value for the learning accuracy q to keep coherence in the population. In previous work [unpublished] I have reproduced this result and shown that it is robust against variations in the Q-matrix, as long as the value of q (i.e. the diagonal values) remains equal for all grammars.

Figure 2 (horizontal axis: generations, 0–100): Results from a run under fitness-proportional selection. This figure shows that there are regions of grammar space where the dynamics are apparently under the "coherence threshold" [7], while there are other regions where the dynamics are above this threshold.
The parameters, including the number of sample sentences T, are still the same, but the language has adapted itself to the bias of the learning algorithm. Parameters are: Vt = {0, 1, 2, 3}, Vnt = {S, a, b, c, d, e, f}, P = 20, T = 100, E = 100, l0 = 12. Shown are the average values of 20 agents.

Figure 2, however, shows results from a simulation with the grammar induction algorithm described above, where this condition is violated. Whereas in the simulations of figure 1 the target languages have been relatively easy (the initial string length is short, i.e. 6), here the learning problem is very difficult (the initial string length is long, i.e. 12). For a long period the learning is therefore not very successful, but around generation 70 the success suddenly rises. With always the same T (number of sample sentences), and with always the same grammar space, there are regions where the dynamics are apparently under the "coherence threshold", while there are other regions where the dynamics are above this threshold. The language has adapted to the learning algorithm, and, consequently, the coherence in the population does not satisfy the prediction of Nowak et al.

7 Conclusions

I believe that these results have some important consequences for our thinking about language acquisition. In particular, they offer a different perspective on the argument from the poverty of the stimulus, and thus on one of the most central "problems" of language acquisition research: the logical problem of language acquisition. My results indicate that in iterated learning it is not necessary to put the (whole) explanatory burden on the representation bias. Although the details of the grammatical formalism (context-free grammars) and the population structure are deliberately close to [3] and [7] respectively, I do observe successful acquisition of grammars from a class that is unlearnable by Gold's criterion.
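The selection-mutation dynamics of equation (2) can be integrated numerically; the two-grammar F and Q below are arbitrary toy values, with learning accuracy q = 0.95 on the diagonal of Q:

```python
import numpy as np

def replicator_mutator(x0, Q, F, dt=0.1, n_steps=3000):
    """Euler integration of eq. (2): dx_i = sum_j x_j f_j Q_ji - phi x_i,
    with f = F x (frequency-dependent fitness) and phi = x . f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        f = F @ x                       # f_i = sum_j F_ij x_j
        phi = x @ f                     # average fitness; keeps sum(x) = 1
        x = x + dt * (Q.T @ (x * f) - phi * x)
    return x

Q = np.array([[0.95, 0.05],
              [0.05, 0.95]])
F = np.eye(2)                           # success only on a shared grammar
x_final = replicator_mutator([0.7, 0.3], Q, F)
```

With this q the run sits above the coherence threshold: the initially more frequent grammar takes over most of the population.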
Further, I observe grammatical coherence even though many more grammars are allowed in principle than Nowak et al. calculate as an upper bound. The reason for these surprising results is that language acquisition is a very particular type of learning problem: it is a problem where the target of the learning process is itself the outcome of a learning process. That opens up the possibility for language itself to adapt to the language acquisition procedure of children. In such iterated learning situations [11], learners are only presented with targets that other learners have been able to learn. Isn't this the traditional Universal Grammar in disguise? Learnability is – consistent with the undisputed proof of [3] – still achieved by constraining the set of targets. However, unlike in usual interpretations of this proof, these constraints are not strict (some grammars are better learnable than others, allowing for an infinite "Grammar Universe"), and they are not a priori: they are the outcome of iterated learning. The poverty of the stimulus is now no longer a problem; instead, the ancestors' poverty is the solution for the child's.

Acknowledgments

This work was performed while I was at the AI Laboratory of the Vrije Universiteit Brussel. It builds on previous work that was done in close collaboration with Paulien Hogeweg of Utrecht University. I thank her and Simon Kirby, John Batali, Aukje Zuidema and my colleagues at the AI Lab and the LEC for valuable hints, questions and remarks. Funding from the Concerted Research Action fund of the Flemish Government and the VUB, from the Prins Bernhard Cultuurfonds and from a Marie Curie Fellowship of the European Commission is gratefully acknowledged.

References

[1] Stefano Bertolo, editor. Language Acquisition and Learnability. Cambridge University Press, 2001.
[2] Noam Chomsky. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA, 1965.
[3] E. M. Gold. Language identification in the limit.
Information and Control (now Information and Computation), 10:447-474, 1967.
[4] Michael A. Arbib and Jane C. Hill. Language acquisition: Schemas replace universal grammar. In John A. Hawkins, editor, Explaining Language Universals. Basil Blackwell, New York, USA, 1988.
[5] J. Elman, E. Bates, et al. Rethinking Innateness. MIT Press, 1996.
[6] Steven Pinker and Paul Bloom. Natural language and natural selection. Behavioral and Brain Sciences, 13:707-784, 1990.
[7] Martin A. Nowak, Natalia Komarova, and Partha Niyogi. Evolution of universal grammar. Science, 291:114-118, 2001.
[8] Terrence Deacon. The Symbolic Species: The Co-evolution of Language and the Human Brain. The Penguin Press, 1997.
[9] S. Kirby and J. Hurford. The emergence of linguistic structure: An overview of the iterated learning model. In Angelo Cangelosi and Domenico Parisi, editors, Simulating the Evolution of Language, chapter 6, pages 121-148. Springer Verlag, London, 2002.
[10] Kenny Smith. Natural selection and cultural selection in the evolution of communication. Adaptive Behavior, 2003. To appear.
[11] Simon Kirby. Syntax without natural selection: How compositionality emerges from vocabulary in a population of learners. In C. Knight et al., editors, The Evolutionary Emergence of Language. Cambridge University Press, 2000.
[12] J. Gerard Wolff. Language acquisition, data compression and generalization. Language & Communication, 2(1):57-89, 1982.
[13] A. Stolcke. Bayesian Learning of Probabilistic Language Models. PhD thesis, Dept. of Electrical Engineering and Computer Science, University of California at Berkeley, 1994.
[14] Menno van Zaanen and Pieter Adriaans. Comparing two unsupervised grammar induction systems: Alignment-based learning vs. EMILE. In Ben Kröse et al., editors, Proceedings of BNAIC 2001, 2001.
[15] Zach Solan, Eytan Ruppin, David Horn, and Shimon Edelman. Automatic acquisition and efficient representation of syntactic structures. This volume.
[16] Henry Brighton. Compositional syntax from cultural transmission. Artificial Life, 8(1), 2002.
Morton-Style Factorial Coding of Color in Primary Visual Cortex Javier R. Movellan Institute for Neural Computation University of California San Diego La Jolla, CA 92093-0515 movellan@inc.ucsd.edu Thomas Wachtler Sloan Center for Theoretical Neurobiology The Salk Institute La Jolla, CA 92037, USA thomas@salk.edu Thomas D. Albright Howard Hughes Medical Institute The Salk Institute La Jolla, CA 92037, USA tom@salk.edu Terrence Sejnowski Computational Neurobiology Laboratory The Salk Institute La Jolla, CA 92037, USA terry@salk.edu Abstract We introduce the notion of Morton-style factorial coding and illustrate how it may help understand information integration and perceptual coding in the brain. We show that by focusing on average responses one may miss the existence of factorial coding mechanisms that become only apparent when analyzing spike count histograms. We show evidence suggesting that the classical/non-classical receptive field organization in the cortex effectively enforces the development of Morton-style factorial codes. This may provide some cues to help understand perceptual coding in the brain and to develop new unsupervised learning algorithms. While methods like ICA (Bell & Sejnowski, 1997) develop independent codes, in Morton-style coding the goal is to make two or more external aspects of the world become independent when conditioning on internal representations. In this paper we introduce the notion of Morton-style factorial coding and illustrate how it may help analyze information integration and perceptual organization in the brain. In the neurosciences factorial codes are often studied in the context of mean tuning curves. A tuning curve is called separable if it can be expressed as the product of terms selectively influenced by different stimulus dimensions. Separable tuning curves are taken as evidence of factorial coding mechanisms. 
In this paper we show that by focusing on average responses one may miss the existence of factorial coding mechanisms that become only apparent when analyzing spike count histograms. Morton (1969) analyzed a wide variety of psychophysical experiments on word perception and showed that they could be explained using a model in which stimulus and context have separable effects on perception. More precisely, in Morton's model the joint effect of stimulus and context on a perceptual representation can be obtained by multiplying terms selectively controlled by stimulus and by context, i.e.,
P(r_i | s_j, c_k) = θ_{ij} λ_{ik} / Σ_{i'} θ_{i'j} λ_{i'k},   (1)

where P(r_i | s_j, c_k) is the empirical probability of perceiving the perceptual alternative r_i in response to stimulus s_j in context c_k, θ_{ij} represents the support of stimulus s_j for percept r_i, and λ_{ik} the support of the context c_k for percept r_i. Massaro (1987b, 1987a, 1989a) has shown that this form of factorization describes accurately a wide variety of psychophysical studies in domains such as word recognition, phoneme recognition, audiovisual speech recognition, and recognition of facial expressions. Morton-style factorial codes used to be taken as evidence for a feedforward coding mechanism (Massaro, 1989b) but Movellan & McClelland (2001) showed that neural networks with feedback connections can develop factorial codes when they follow an architectural constraint named "channel separability". Channel separability is defined as follows: First we identify the neurons which have a direct influence on the observed responses (e.g., the set of neurons that affect an electrode). For a given set of response units, the stimulus channel is defined as the set of units modulated by the stimulus provided the response specification units are excised from the rest of the network. The context channel is the set of units modulated by the context provided the response units are excised from the rest of the network. Two channels are called separable if they have no units in common. Channel separability implies that the influences of an information source upon the channel of another information source should be mediated via the response specification units (see Figure 1). While the models used in Movellan and McClelland (2001) are a simplification of actual neural circuits, the analysis suggests that the form of separability expressed in the Morton-Massaro model may be a useful paradigm for the study of information integration in the brain. Indeed it is quite remarkable that the functional organization of cortex into classical/non-classical receptive fields provides a separable architecture (see Figure 1). Such organization may be nature's way of enforcing Morton-style perceptual coding.
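The multiplicative integration rule in Eq. (1) is easy to sketch numerically. This is a minimal illustration, not the authors' code; the support matrices below are made-up numbers, not fitted values from any study:

```python
import numpy as np

# Morton-Massaro multiplicative integration: the probability of percept i
# given stimulus j and context k is the normalized product of a stimulus
# support theta[i, j] and a context support lam[i, k].
theta = np.array([[0.9, 0.2],   # support of each stimulus for percept 0
                  [0.1, 0.8]])  # support of each stimulus for percept 1
lam = np.array([[0.7, 0.3],     # support of each context for percept 0
                [0.3, 0.7]])    # support of each context for percept 1

def morton_massaro(theta, lam):
    """Return P[i, j, k] = theta[i, j] * lam[i, k] / sum_i' theta[i', j] * lam[i', k]."""
    prod = theta[:, :, None] * lam[:, None, :]      # (percepts, stimuli, contexts)
    return prod / prod.sum(axis=0, keepdims=True)   # normalize over percepts

P = morton_massaro(theta, lam)
```

Every (stimulus, context) cell of P is a proper probability distribution over percepts, and stimulus and context enter only through their own support terms.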
In this paper we present evidence in favor of this view by investigating how color is encoded in primary visual cortex. It is well known that stimuli of equal chromaticity can evoke different color percepts, depending on the visual context (Wesner & Shevell, 1992; Brown & MacLeod, 1997). Context-dependent responses to color stimuli have been found in V4 (Zeki, 1983). More recently the last three authors of this article investigated the chromatic tuning properties of V1 cells in response to stimuli presented in different chromatic contexts (Wachtler, Sejnowski, & Albright, 2003). The experiment showed that the background color, outside the cell's classical receptive field, had a significant effect on the response to colors inside the receptive field. No attempt was made to model the form of such influence. In this paper we analyze quantitatively the results of that experiment and show that a large proportion of these neurons adhered to the Morton-Massaro law, i.e., stimulus and context had a separable influence on the spike count histograms of these cells.

1 Methods

The animal preparation and methods of this experiment are described in Wachtler et al. (in press) in great detail. Here we briefly describe the portion of the experiment relevant to us. Two adult female rhesus monkeys were used in the study. Extracellular potentials from single isolated neurons were recorded from two macaque monkeys. The monkeys were awake and were required to fixate a small fixation target for the duration of each trial (2500 ms). Amplified electrical activity from the cortex was passed to a data acquisition system for spike detection and sorting. Once a neuron was isolated, its receptive field was determined using flashed and moving bars of different size, orientation, and color.

Figure 1: Left: A network with separable context and stimulus processing channels. Right: The arrows connecting the stimulus to the unit in the center represent the classical receptive field of that unit. External inputs affecting the classical receptive field are called "stimuli" and all the other inputs are called "background". In this preparation the stimulus and background channels are separable.

All the neurons recorded had receptive fields at eccentricities between and . Once the receptive fields were located, the color tuning of the neurons was mapped by flashing 8 stimuli of different chromaticity. The stimuli were homogeneous color squares, centered on and at least twice as large as the receptive field of the neuron under study. They were flashed for 500 ms. Chromaticity was defined in a color space similar to the one used in Derrington, Krauskopf, and Lennie (1984). Cone excitations were calculated on the basis of the human cone fundamentals proposed by Stockman (Stockman, MacLeod, & Johnson, 1993). The origin of the color space corresponded to a homogeneous gray background to which the animal had been adapted (luminance 48 cd/m²). The three coordinate axes of the color space corresponded to L versus M-cone contrast, S-cone contrast, and achromatic luminance. The 8 color stimuli were isoluminant with the gray background, had a fixed color contrast (distance from origin of color space) and had chromatic directions corresponding to polar angles . After several presentations of the stimuli, the chromatic directions for which the neurons showed a clear response were determined, and one of them was selected as the second background condition. In the second condition, the color of the background changed during stimulus presentation (i.e., for 500 ms) to a different color.
This color was isoluminant with the gray background, was in the direction of a stimulus color to which the cell showed clear response, but was of lower chromatic contrast than the stimulus colors. In subsequent trials combinations of the 8 stimulus and 2 background conditions were presented in random order. For each trial we recorded the number of spikes in a 100 ms window starting 50 ms after stimulus onset. This time window was chosen because color tuning was usually more pronounced in the first response phase as compared to later periods of the response and because it maximized the effects of context. Data were recorded for a total of 94 units. Of these, 20 neurons were selected for having the strongest background effect and a minimum of 16 trials per condition. No other criteria were used for the selection of these neurons.

2 Results

Figure 2 shows example tuning curves of 4 different neurons. The thick lines represent the average response for a particular color stimulus in the plane defined by the first two chromatic axes. The dark curve represents responses for the gray background condition. The light curve represents responses for the color background condition. The boxes around the tuning curves represent average response rates as a function of stimulus onset for the two background conditions. Testing whether a code is factorial is like testing for the absence of interaction terms in Analysis of Variance (ANOVA). The complexity (i.e., degrees of freedom) of an ANOVA model without interaction terms is identical to the complexity of the Morton-Massaro model. When testing for interaction effects we analyze whether the addition of interaction terms provides significant improvement in data fit over a simple additive model. In our case we investigate whether the addition of non-factorial terms provides a significant improvement in data fit over the factorial Morton-Massaro model.
For each neuron there were 8 stimulus conditions, 2 background conditions, and 10 response alternatives, one per bin in the spike count histogram. The probabilities of the spike count histogram add up to one; thus, there is a total of 8 × 2 × (10 − 1) = 144 independent probability estimates per neuron. In this case the Morton-Massaro model requires
81 parameters (Movellan & McClelland, 2001), thus there is a total of 63 nonfactorial terms. For each neuron we fitted the Morton-Massaro model and performed a standard likelihood test to see whether the additional nonfactorial terms improved data fit significantly (i.e., whether the deviations from the Morton-Massaro factorial model were significant). We found that of the 20 neurons only 5 showed significant deviations from the Morton-Massaro model (chi-square test, 63 degrees of freedom, ). While the Morton-Massaro model had 81 parameters many of them were highly redundant. We also evaluated a 30 parameter version of the model by performing PCA independently on the stimulus and on the context parameters of the full model and deleting coefficients with small eigenvalues. The 30 parameter model provided fits almost indistinguishable from the 81 parameter model. In this case only 4 neurons showed significant deviations from the model (chi-square, 124 df, ). On a pool of 20 neurons compliant with the Morton-Massaro model one would expect the test to mistakenly reject 1 neuron by chance. Rejection of 4 or more neurons out of 20 is not inconsistent with the idea that all the neurons were in fact compliant with the Morton-Massaro model ( , binomial test). Figure 3 shows the obtained and predicted spike count histograms for a typical neuron. The top row represents the 8 stimulus conditions with gray background. The bottom row shows the 8 conditions with color background. Lines represent spike count histograms predicted by the Morton-Massaro model, dots represent obtained spike count histograms. In order to test the statistical power of the likelihood-ratio test, we generated 20 neurons with random histograms. The histograms were unimodal, with peak response randomly selected between 0 and 9, with fall-offs similar to those found in the actual neurons and with the same number of observations per condition as in the actual neurons.
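A likelihood-ratio (G²) test of this kind compares the fitted model against the saturated model (the empirical histograms). The sketch below is an illustrative reimplementation under stated assumptions, not the authors' code; the toy counts and the df value are placeholders:

```python
import numpy as np
from scipy.stats import chi2

def likelihood_ratio_test(n_obs, p_model, df):
    """Compare a fitted model against the saturated model.
    n_obs: observed counts, shape (conditions, bins); p_model: model
    probabilities of the same shape; df: number of extra parameters of
    the saturated model.  Returns the G2 statistic and its p-value."""
    p_sat = n_obs / n_obs.sum(axis=-1, keepdims=True)  # saturated-model MLE
    mask = n_obs > 0                                   # 0 * log(...) terms vanish
    g2 = 2.0 * np.sum(n_obs[mask] * np.log(p_sat[mask] / p_model[mask]))
    return g2, chi2.sf(g2, df)

# Toy data: 16 (stimulus x background) conditions, 10 spike-count bins,
# 100 trials each, generated from the model being tested.
rng = np.random.default_rng(1)
p_true = rng.dirichlet(np.ones(10), size=16)
n = np.array([rng.multinomial(100, p) for p in p_true])
g2, pval = likelihood_ratio_test(n, p_true, df=63)
```

Since the saturated model maximizes the likelihood, G² is always non-negative, and large values (relative to the chi-square df) indicate significant non-factorial structure.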
We then fitted the 81-parameter Morton-Massaro model to each of these neurons and tested it using a likelihood ratio test. All the simulated neurons exhibited statistically significant deviations from the model (chi-square, 63 df, ) suggesting that the test was quite sensitive. Finally, for comparison purposes we tested a model of information integration that uses the same number of parameters as the Morton-Massaro model but in which the stimulus and context terms are combined additively instead of multiplicatively, i.e.,
P(r_i | s_j, c_k) = (θ_{ij} + λ_{ik}) / Σ_{i'} (θ_{i'j} + λ_{i'k}).   (2)

Figure 2: Effect of the stimulus and background on the chromatic mean tuning curves of 4 neurons. The thick dark and light lines show mean responses in the isoluminant plane (x axis: L−M cone variation; y axis: S cone variation) for the two background conditions. Black: gray background; Light: colored background. The 8 boxes around each tuning curve show the average response rate as a function of the time from stimulus onset for the two background conditions.

Figure 3: Predicted (lines) and obtained (dots) spike count histograms for a typical neuron. The horizontal axis represents spike counts in a 100 ms window. The vertical axis represents probabilities. Each row represents a different background condition. Each column represents a different stimulus condition.

After fitting the new model, we performed a likelihood-ratio test. 80% of the neurons showed significant deviations from this model (chi-square, 63 df, ).

3 Relation to Tuning Curve Separability

In neuroscience separability is commonly studied in the context of mean tuning curves. For example, a tuning curve is called (multiplicatively) separable if the conditional expected value of a neuron's response can be decomposed as the product of two different factors each selectively influenced by a single stimulus dimension. An important aspect of the Morton-Massaro model is that it applies to entire response histograms, not to expected values. If the Morton-Massaro model holds, then separability appears in the following sense: If we are allowed to see the response histograms for all the stimuli in background condition A and the response histogram for a reference stimulus in background condition B, then it should be possible to predict the response histograms for any stimulus in background condition B. For example, by looking at the top row of Figure 3 and one of the cells of the bottom row of Figure 3, it should be possible to reproduce all the other cells in the bottom row.
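The prediction just described (all histograms under background A, plus one reference histogram under background B, determine every histogram under B) can be sketched directly. Here the histograms are generated from a factorial model with random support values, so the reconstruction is exact up to floating point; all quantities are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = rng.random((10, 8))   # support of 8 stimuli for 10 spike-count bins
lam = rng.random((10, 2))     # support of 2 backgrounds
P = theta[:, :, None] * lam[:, None, :]
P /= P.sum(axis=0, keepdims=True)             # P[bin, stimulus, background]

def predict_background_B(P_A, p_ref_A, p_ref_B):
    """P_A: all histograms (bins x stimuli) under background A;
    p_ref_A, p_ref_B: the reference-stimulus histogram under A and B."""
    ratio = p_ref_B / p_ref_A                 # proportional to lam_B / lam_A
    P_B = P_A * ratio[:, None]
    return P_B / P_B.sum(axis=0, keepdims=True)

P_B_hat = predict_background_B(P[:, :, 0], P[:, 0, 0], P[:, 0, 1])
# P_B_hat reproduces P[:, :, 1], the entire second background condition
```

The per-bin ratio of the two reference histograms recovers the background support up to a constant, which the final normalization absorbs.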
Obviously if we can predict response histograms then we can also predict tuning curves, since they are based on averages of response histograms. Most importantly, there are forms of separability of the tuning curve that become apparent only when studying the entire response histogram. Figure 4 illustrates this fact with an example. The figure shows the tuning curves of a particular neuron from an experiment fitted using the Morton-Massaro model. These curves were obtained by fitting the entire spike count histograms for each stimulus and background condition, and then obtaining the mean response for the predicted histograms. The large open circles represent the obtained average responses. The dots represent 95% confidence intervals around those responses. Note that the two tuning curves do not appear separable in a discernible way (it is not possible to predict curve B by looking at curve A and a single point of curve B). Separability becomes apparent only when the entire histogram is analyzed, not just the tuning curves based on response averages.

Figure 4: Tuning curves for a typical neuron as predicted by the Morton-Massaro model. The two curves represent the average response of the neuron to isoluminant stimuli, for two different background conditions. The elongated curve corresponds to the homogeneous gray background and the circular curve to the colored background. The open dots are the obtained mean responses. The dots represent 95% confidence intervals of those responses. Note that the predicted curves do not appear separable in a classic sense. However, since they are generated by Morton's model the underlying code is factorial. This becomes apparent only when one looks at spike count histograms, not just mean tuning curves.

4 Discussion

We introduced the notion of Morton-style factorial coding and illustrated how it may help analyze information integration and perceptual organization in the brain.
We showed that by focusing on average responses one may miss the existence of factorial coding mechanisms that become only apparent when analyzing spike count histograms. The results of our study suggest that V1 represents color using a Morton-style factorial code. This may provide some cues to help understand perceptual coding in the brain and to develop new unsupervised learning algorithms. While methods like ICA (Bell & Sejnowski, 1997) develop independent codes, in Morton-style coding the goal is to make two or more external aspects of the world become independent when conditioning on internal representations. Morton-style coding is optimal when the statistics of stimulus and background exhibit a particular property: when conditioning on each possible response category (i.e., spike counts) the empirical likelihood ratios of stimulus and background factorize. Our study suggests that Morton coding of color in natural scenes should be optimal or approximately optimal, a prediction that can be tested via statistical analysis of color in natural scenes.

Acknowledgments

This project was supported by NSF's grant ITR IIS-0223052.

5 References

Bell, A., & Sejnowski, T. (1997). The 'independent components' of natural scenes are edge filters. Vision Research, 37(23), 3327-3338.
Brown, R. O., & MacLeod, D. I. A. (1997). Color appearance depends on the variance of surround colors. Current Biology, (7), 844-849.
Derrington, A. M., Krauskopf, J., & Lennie, P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 241-265.
Domingos, P., & Pazzani, M. (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Journal of Machine Learning, 29, 103-130.
Massaro, D. W. (1987a). Categorical perception: A fuzzy logical model of categorization behavior. In S. Harnad (Ed.), Categorical perception. Cambridge, England: Cambridge University Press.
Massaro, D. W. (1987b).
Speech perception by ear and eye: A paradigm for psychological research. Hillsdale, NJ: Erlbaum.
Massaro, D. W. (1989a). Perceiving talking faces. Cambridge, Massachusetts: MIT Press.
Massaro, D. W. (1989b). Testing between the TRACE model and the fuzzy logical model of speech perception. Cognitive Psychology, 21, 398-421.
Morton, J. (1969). The interaction of information in word recognition. Psychological Review, 76, 165-178.
Movellan, J. R., & McClelland, J. L. (2001). The Morton-Massaro law of information integration: Implications for models of perception. Psychological Review, (1), 113-148.
Stockman, A., MacLeod, D. I. A., & Johnson, N. E. (1993). Spectral sensitivities of the human cones. Journal of the Optical Society of America A, (10), 2491-2521.
Wachtler, T., Sejnowski, T. J., & Albright, T. D. (2003). Representation of color stimuli in awake macaque primary visual cortex. Neuron, 37, 1-20.
Wesner, M. F., & Shevell, S. K. (1992). Color perception within a chromatic context: Changes in red/green equilibria caused by noncontiguous light. Vision Research, (32), 1623-1634.
Zeki, S. (1983). Colour coding in cerebral cortex: the responses of wavelength selective and colour-coded cells in monkey visual cortex to changes in wavelength composition. Neuroscience, 9, 767-781.
One-Class LP Classifier for Dissimilarity Representations

Elżbieta Pękalska¹, David M.J. Tax² and Robert P.W. Duin¹
¹Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
²Fraunhofer Institute FIRST.IDA, Kekuléstr. 7, D-12489 Berlin, Germany
ela@ph.tn.tudelft.nl, davidt@first.fraunhofer.de

Abstract

Problems in which abnormal or novel situations should be detected can be approached by describing the domain of the class of typical examples. These applications come from the areas of machine diagnostics, fault detection, illness identification or, in principle, refer to any problem where little knowledge is available outside the typical class. In this paper we explain why proximities are natural representations for domain descriptors and we propose a simple one-class classifier for dissimilarity representations. By the use of linear programming an efficient one-class description can be found, based on a small number of prototype objects. This classifier can be made (1) more robust by transforming the dissimilarities and (2) cheaper to compute by using a reduced representation set. Finally, a comparison to a comparable one-class classifier by Campbell and Bennett is given.

1 Introduction

The problem of describing a class or a domain has recently gained a lot of attention, since it can be identified in many applications. The area of interest covers all the problems where the specified targets have to be recognized and the anomalies or outlier instances have to be detected. Those might be examples of any type of fault detection, abnormal behavior, rare illnesses, etc. One possible approach to class description problems is to construct one-class classifiers (OCCs) [13]. Such classifiers are concept descriptors, i.e. they refer to all possible knowledge that one has about the class.
An efficient OCC built in a feature space can be found by determining a minimal volume hypersphere around the data [14, 13] or by determining a hyperplane such that it separates the data from the origin as well as possible [11, 12]. By the use of kernels [15] the data is implicitly mapped into a higher-dimensional inner product space and, as a result, an OCC in the original space can yield a nonlinear and non-spherical boundary; see e.g. [15, 11, 12, 14]. Those approaches are convenient for data already represented in a feature space. In some cases, there is, however, a lack of good or suitable features due to the difficulty of defining them, e.g. in the case of strings, graphs or shapes. To avoid the definition of an explicit feature space, we have already proposed to address kernels as general proximity measures [10] and not only as symmetric, (conditionally) positive definite functions of two variables [2]. Such a proximity should directly arise from an application; see e.g. [8, 7]. Therefore, our reasoning starts not from a feature space, as in the case of the other methods [15, 11, 12, 14], but from a given proximity representation. Here, we address general dissimilarities. The basic assumption that an instance belongs to a class is that it is similar to examples within this class. The identification procedure is realized by a proximity function equipped with a threshold, determining whether an instance is a class member or not. This proximity function can be e.g. a distance to an average representative, or a set of selected prototypes. The data represented by proximities is thus more natural for building the concept descriptors, i.e. OCCs, since the proximity function can be directly built on them. In this paper, we propose a simple and efficient OCC for general dissimilarity representations, discussed in Section 2, found by the use of linear programming (LP).
Section 3 presents our method together with a dissimilarity transformation to make it more robust against objects with large dissimilarities. Section 4 describes the experiments conducted and discusses the results. Conclusions are summarized in Section 5.

2 Dissimilarity representations

Although a dissimilarity measure D provides a flexible way to represent the data, there are some constraints. Reflectivity and positivity conditions are essential to define a proper measure; see also [10]. For our convenience, we also adopt the symmetry requirement. We do not require that D is a strict metric, since non-metric dissimilarities may naturally be found when shapes or objects in images are compared, e.g. in computer vision [4, 7]. Let z and p_i refer to objects to be compared. A dissimilarity representation can now be seen as a dissimilarity kernel based on the representation set R = {p_1, ..., p_N} and realized by a mapping D(z, R): F → R^N, defined as D(z, R) = [D(z, p_1), ..., D(z, p_N)]^T. R controls the dimensionality of the dissimilarity space D(·, R). Note also that F expresses a conceptual space of objects, not necessarily a feature space. Therefore, to emphasize that objects, like z or p_i, might not be feature vectors, they will not be printed in bold. The compactness hypothesis (CH) [5] is the basis for object recognition. It states that similar objects are close in their representations. For a dissimilarity measure D, this means that D(r, s) is small if objects r and s are similar. If we demand that D(r, s) = 0 if and only if the objects r and s are identical, this implies that they belong to the same class. This can be extended by assuming that all objects s such that D(r, s) < ε, for a sufficiently small ε, are so similar to r that they are members of the same class. Consequently, D(r, t) ≈ D(s, t) for other objects t.
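The mapping D(z, R) defined above is straightforward to sketch; this is a minimal illustration assuming a toy city-block measure on 2D points, whereas in general D can come from any pairwise comparison of shapes, graphs or strings:

```python
import numpy as np

def dissim_rep(z, R, D):
    """Map object z to the vector [D(z, p_1), ..., D(z, p_N)] for the
    representation set R = {p_1, ..., p_N}."""
    return np.array([D(z, p) for p in R])

# Toy measure and representation set; these choices are illustrative only.
cityblock = lambda a, b: float(np.abs(np.asarray(a) - np.asarray(b)).sum())
R = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 2.0])]
d = dissim_rep(np.array([1.0, 1.0]), R, cityblock)   # a point in D(., R)
```

Each prototype in R contributes one coordinate, so |R| controls the dimensionality of the dissimilarity space.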
Therefore, for dissimilarity representations satisfying the above continuity, the reverse of the CH holds: objects similar in their representations are similar in reality and belong, thereby, to the same class [6, 10]. Objects with large distances are assumed to be dissimilar. When the set R contains objects from the class of interest, then objects z with large D(z, R) are outliers and should be remote from the origin in this dissimilarity space. This characteristic will be used in our OCC. If the dissimilarity measure D is a metric, then all vectors D(z, R) lie in an open prism, unbounded from above (the prism is bounded if D is bounded) and bounded from below by a hyperplane on which the objects from R lie. In principle, z may be placed anywhere in the dissimilarity space D(·, R) only if the triangle inequality is completely violated. This is, however, not possible from the practical point of view, because then both the CH and its reverse would not be fulfilled. Consequently, this would mean that D has lost its discriminating property of being small for similar objects. Therefore, the measure D, if not a metric, has to be only slightly non-metric (i.e. the triangle inequalities are only somewhat violated) and, thereby, D(z, R) will still lie either in the prism or in its close neighbourhood.

3 The linear programming dissimilarity data description

To describe a class in a non-negative dissimilarity space, one could minimize the volume of the prism, cut by a hyperplane P: w^T D(z, R) = ρ that bounds the data from above (note that non-negative dissimilarities impose both ρ ≥ 0 and w_i ≥ 0). However, this might not be a feasible task. A natural extension is to minimize the volume of a simplex with the main vertex being the origin and the other vertices v_j resulting from the intersection of P and the axes of the dissimilarity space (v_j is a vector of all zero elements except for v_jj = ρ/w_j, given that w_j ≠ 0).
Assume now that there are M non-zero weights of the hyperplane P, so effectively, P is constructed in R^M. From geometry we know that the volume V of such a simplex can be expressed as V = (V_Base/M!) · (ρ/||w||_2), where V_Base is the volume of the base, defined by the vertices v_j. The minimization of h = ρ/||w||_2, i.e. the Euclidean distance from the origin to P, is then related to the minimization of V. Let {D(p_i, R)}, i = 1, ..., N, with N = |R|, be a dissimilarity representation, bounded by a hyperplane P, i.e. w^T D(p_i, R) ≤ ρ for i = 1, ..., N, such that the L_q distance to the origin d_q(0, P) = ρ/||w||_p is the smallest (i.e. q satisfies 1/p + 1/q = 1 for p ≥ 1) [9]. This means that P can be determined by minimizing ρ − ||w||_p. However, when we require ||w||_p = 1 (to avoid any arbitrary scaling of w), the construction of P can be solved by the minimization of ρ only. The mathematical programming formulation of such a problem is [9, 1]:

    min  ρ
    s.t. w^T D(p_i, R) ≤ ρ,  i = 1, 2, ..., N,
         ||w||_p = 1,  ρ ≥ 0.                                   (1)

If p = 2, then P is found such that h is minimized, yielding a quadratic optimization problem. A much simpler LP formulation, realized for p = 1, is of our interest. Knowing that ||w||_2 ≤ ||w||_1 ≤ √M ||w||_2 and by the assumption of ||w||_1 = 1, after simple calculations, we find that ρ ≤ h = ρ/||w||_2 ≤ √M ρ. Therefore, by minimizing d_∞(0, P) = ρ (with ||w||_1 = 1), h will be bounded, and the volume of the simplex considered as well. By the above reasoning and (1), a class represented by dissimilarities can be characterized by a linear proximity function with the weights w and the threshold ρ. Our one-class classifier C_LPDD, the Linear Programming Dissimilarity-data Description, is then defined as:

    C_LPDD(D(z, ·)) = I( Σ_{w_j ≠ 0} w_j D(z, p_j) ≤ ρ ),       (2)

where I is the indicator function.
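The hard-margin program (1) with p = 1 is a small linear program. A sketch of a solver using scipy's linprog follows; the toy dissimilarity matrix is illustrative, not data from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def lpdd_hard(D):
    """Solve min rho s.t. D w <= rho, sum_j w_j = 1, w >= 0, rho >= 0,
    over the variables x = [w_1, ..., w_N, rho]."""
    N = D.shape[0]
    c = np.zeros(N + 1)
    c[-1] = 1.0                                 # minimize rho
    A_ub = np.hstack([D, -np.ones((N, 1))])     # D w - rho <= 0
    b_ub = np.zeros(N)
    A_eq = np.hstack([np.ones((1, N)), np.zeros((1, 1))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (N + 1))
    return res.x[:-1], res.x[-1]                # w, rho

# City-block dissimilarities between the corners of the unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(-1)
w, rho = lpdd_hard(D)
accept = D @ w <= rho + 1e-6     # all target objects fall inside the boundary
```

For this symmetric toy problem the optimum is ρ = 1, and every training object satisfies the boundary constraint; the sparsity of w (the support objects) comes for free from the LP solver.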
The proximity function is found as the solution to a soft margin formulation (which is a straightforward extension of the hard margin case) with ν ∈ (0, 1] being the upper bound on the outlier fraction for the target class:

    min  ρ + (1/(νN)) Σ_{i=1}^N ξ_i
    s.t. w^T D(p_i, R) ≤ ρ + ξ_i,  i = 1, 2, ..., N,
         Σ_j w_j = 1,  w_j ≥ 0,  ρ ≥ 0,  ξ_i ≥ 0.              (3)

In the LP formulations, sparse solutions are obtained, meaning that only some w_j are positive. Objects corresponding to such non-zero weights will be called support objects (SO). The left plot of Fig. 1 is a 2D illustration of the LPDD. The data is represented in a metric dissimilarity space, and by the triangle inequality the data can only be inside the prism indicated by the dashed lines. The LPDD boundary is given by the hyperplane, as close to the origin as possible (by minimizing ρ), while still accepting (most) target objects. By the discussion in Section 2, the outliers should be remote from the origin. (Note that P is not expected to be parallel to the prism's bottom hyperplane.)

Proposition. In (3), ν ∈ (0, 1] is the upper bound on the outlier fraction for the target class, i.e. the fraction of objects that lie outside the boundary; see also [11, 12]. This means that (1/N) Σ_{i=1}^N (1 − C_LPDD(D(p_i, ·))) ≤ ν.

Figure 1: Illustrations of the LPDD in the dissimilarity space (left) and the LPSD in the similarity space (right). The dashed lines indicate the boundary of the area which contains the genuine objects. The LPDD tries to minimize the max-norm distance from the bounding hyperplane to the origin, while the LPSD tries to attract the hyperplane towards the average of the distribution.

The proof goes analogously to the proofs given in [11, 12]. Intuitively, the proof follows this: assume we have found a solution of (3).
If ρ is increased slightly, the term Σᵢ ξᵢ in the objective function will change proportionally to the number of points that have non-zero ξᵢ (i.e. the outlier objects). At the optimum of (3) it must therefore hold that Nν ≥ #outliers.

Scaling dissimilarities. If D is unbounded, then some atypical objects of the target class (i.e. objects with large dissimilarities) might badly influence the solution of (3). Therefore, we propose a nonlinear, monotonic transformation of the distances to the interval [0, 1], such that locally the distances are scaled linearly while globally all large distances become close to 1. A function with such properties is the sigmoid (the hyperbolic tangent can also be used), i.e. Sigm(x) = 2/(1 + e^{−x/s}) − 1, where s controls the 'slope' of the function, i.e. the size of the local neighborhoods. The transformation is applied element-wise to the dissimilarity representation, so that Dₛ(z, pᵢ) = Sigm(D(z, pᵢ)). Unless stated otherwise, the C_LPDD will be trained on Dₛ.

A linear programming OCC on similarities. Recently, Campbell and Bennett have proposed an LP formulation for novelty detection [3]. They start their reasoning from a feature space induced by positive definite kernels K(S, S) based on the set S = {x₁, …, x_N}. They restricted themselves to (modified) RBF kernels, i.e. K(xᵢ, xⱼ) = e^{−D(xᵢ, xⱼ)²/(2s²)}, where D is either the Euclidean or the L₁ (city block) distance. In what follows, we will refer to RBF_p as the 'Gaussian' kernel based on the L_p distance. Here, to be consistent with our LPDD method, we rewrite their soft-margin LP formulation (a hard-margin formulation is then obvious) to include a trade-off parameter ν (which lacks, however, the interpretation it has in the LPDD), as follows:

    min (1/N) Σ_{i=1}^N (wᵀK(xᵢ, S) + ρ) + (1/(νN)) Σ_{i=1}^N ξᵢ
    s.t. wᵀK(xᵢ, S) + ρ ≥ −ξᵢ, i = 1, 2, …, N,
         Σ_j w_j = 1, w_j ≥ 0, ξᵢ ≥ 0.
(4)

Since K can be any similarity representation, for simplicity we will call this method the Linear Programming Similarity-data Description (LPSD). The C_LPSD is then defined as:

    C_LPSD(K(z, ·)) = I( Σ_{w_j ≠ 0} w_j K(z, x_j) + ρ ≥ 0 ).   (5)

In the right plot of Fig. 1, a 2D illustration of the LPSD is shown. Here, the data is represented in a similarity space, such that all objects lie in a hypercube between 0 and 1. Objects remote from the representation objects will be close to the origin. The hyperplane is optimized to have minimal average output for the whole target set. This does not necessarily mean a good separation from the origin or a small volume of the OCC, possibly resulting in an unnecessarily high outlier acceptance rate.

[Figure 2: One-class hard-margin LP classifiers for artificial 2D data: the LPDD on the Euclidean representation (top row) and the LPSD based on RBF₂ (bottom row). From left to right, s takes the values 0.3d, 0.4d, 0.5d, d, 3d, where d is the average distance. Support objects are marked by squares.]

Extensions. Until now, the LPDD and LPSD were defined for square (dis)similarity matrices. If the computation of (dis)similarities is very costly, one can consider a reduced representation set R_red ⊂ R, consisting of n ≪ N objects. The dissimilarity or similarity representations are then given as rectangular matrices D(R, R_red) or K(S, S_red), respectively. Both formulations (3) and (4) remain the same, with the only change that R/S is replaced by R_red/S_red. Another reason to consider reduced representations is robustness against outliers. How to choose such a set is beyond the scope of this paper.
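Returning to the scaling step above: the element-wise sigmoid transform for unbounded dissimilarities is a one-liner. A sketch (the median-distance choice of s is our own heuristic, used only for illustration):

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
D = cdist(X, X)                        # possibly unbounded dissimilarities

def sigm(d, s):
    """Element-wise sigmoidal rescaling to [0, 1): roughly linear for d << s,
    saturating towards 1 for d >> s."""
    return 2.0 / (1.0 + np.exp(-d / s)) - 1.0

s = np.median(D[D > 0])                # heuristic 'slope' (our choice)
Ds = sigm(D, s)                        # D_s(z, p_i) = Sigm(D(z, p_i))
```

Since Sigm(x) = tanh(x/(2s)), small distances are scaled approximately by 1/(2s) while large ones are compressed towards 1, which is exactly the robustness property motivated in the text.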
4 Experiments

Artificial datasets. First, we illustrate the LPDD and the LPSD methods on two artificial datasets, both originally created in a 2D feature space. The first dataset contains two clusters with objects represented by Euclidean distances. The second dataset contains one uniform, square cluster contaminated with three outliers; its objects are represented by a slightly non-metric L₀.₉₅ dissimilarity, i.e. d₀.₉₅(x, y) = [Σᵢ |xᵢ − yᵢ|^0.95]^{1/0.95}. In Fig. 2, the first dataset together with the decision boundaries of the LPDD and the LPSD in the theoretical input space is shown. The parameter s used in all plots refers either to the scaling parameter of the sigmoid function for the LPDD (based on Dₛ) or to the scaling parameter of the RBF kernel. The pictures show similar behavior of both the LPDD and the LPSD; the LPDD tends to be just slightly tighter around the target class.

[Figure 3: One-class LP classifiers, trained with ν = 0.1, for artificial uniformly distributed 2D data with 3 outliers: the LPDD on the Euclidean representation (top row) and the LPSD based on RBF₂ (bottom row). From left to right, s takes the values 0.7d_m, d_m, 1.6d_m, 3d_m, 8d_m, where d_m is the median of all the distances. e refers to the error on the target set. Support objects are marked by squares.]
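The L₀.₉₅ dissimilarity used for the second dataset is indeed slightly non-metric: the triangle inequality can fail. A quick check (the three points are our own construction):

```python
import numpy as np

def d_p(x, y, p=0.95):
    """Minkowski-type dissimilarity; for p < 1 it is non-metric."""
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

# A right-angle detour: for p < 1 the direct 'distance' exceeds the two legs.
a = np.array([0.0, 0.0])
b = np.array([1.0, 0.0])
c = np.array([1.0, 1.0])
direct = d_p(a, c)             # (1 + 1)^(1/0.95) ~ 2.074
via_b = d_p(a, b) + d_p(b, c)  # 1 + 1 = 2
```

Here `direct > via_b`, violating the triangle inequality, which is why the theory for this representation must allow non-metric data.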
[Figure 4: One-class LP classifiers, trained with ν = 0.1, for artificial 2D data: the LPDD on the L₀.₉₅ representation (top row) and the LPSD based on RBF₀.₉₅ (bottom row). The same setting as in Fig. 3 is used, only with the L₀.₉₅ non-metric dissimilarities instead of the Euclidean ones. Note that the median distance has changed, and consequently the s values as well.]

[Figure 5: One-class LP classifiers, trained with ν = 0.1, for artificial uniformly distributed 2D data with 3 outliers, given by non-metric rectangular 50×6 L₀.₉₅ dissimilarity representations. The upper row shows the LPDD's results and the bottom row the LPSD's results with the kernel RBF₀.₉₅. The objects of the reduced sets R_red and S_red are marked by triangles; note that they differ from left to right. e refers to the error on the target set. Support objects are marked by squares.]

This becomes clearer in Figs. 3 and 4, where three outliers lying outside a single uniformly distributed cluster should be ignored when an OCC with a soft margin is trained. From these figures, we can observe that the LPDD gives a tighter class description, one that is also more robust both to the choice of the scaling parameter and to the outliers.
The same is observed when the L₀.₉₅ dissimilarity is used instead of the Euclidean distance. Fig. 5 presents some results for the reduced representations, in which just 6 objects are randomly chosen for the set R_red. In the left four plots, R_red contains objects from the uniform cluster only, and both methods perform equally well. In the right four plots, on the other hand, R_red contains an outlier. It can be seen that for a suitable scaling s, no outliers become support objects in the LPDD, which is often the case for the LPSD; see also Figs. 3 and 4. Also, a crucial difference between the LPDD and the LPSD can be observed w.r.t. the support objects: in case of the LPSD (applied to a non-reduced representation) they lie on the boundary, while in case of the LPDD they tend to lie 'inside' the class.

Condition monitoring. Fault detection is an important problem in machine diagnostics: failure to detect faults can lead to machine damage, while false alarms can lead to unnecessary expenses. As an example, we consider the detection of four types of fault in ball-bearing cages, a dataset [16] considered in [3]. Each data instance consists of 2048 samples of acceleration taken with a Brüel & Kjær vibration analyser. After pre-processing with a discrete Fast Fourier Transform, each signal is characterized by 32 attributes. The dataset consists of five categories: normal behavior (NB), corresponding

Table 1: The errors of the first and second kind (in %) of the LPDD and LPSD on two dissimilarity representations for the ball-bearing data. The reduced representations are based on 180 objects.
Euclidean representation:

    Method        | Optimal s | # of SO | NB  | T1  | T2   | T3   | T4
    LPDD          | 200.4     | 10      | 1.4 | 0.0 | 45.0 | 69.8 | 70.0
    LPDD-reduced  | 65.3      | 17      | 1.1 | 0.0 | 20.2 | 47.5 | 50.9
    LPSD          | 320.0     | 8       | 1.3 | 0.0 | 46.7 | 71.7 | 74.5
    LPSD-reduced  | 211.2     | 6       | 0.6 | 0.0 | 39.9 | 67.1 | 69.5

L₁ dissimilarity representation:

    Method        | Optimal s | # of SO | NB  | T1  | T2  | T3   | T4
    LPDD          | 566.3     | 12      | 1.3 | 0.0 | 1.6 | 20.9 | 19.8
    LPDD-reduced  | 329.5     | 10      | 1.3 | 0.0 | 2.3 | 18.7 | 16.9
    LPSD          | 1019.3    | 8       | 0.9 | 0.0 | 2.2 | 27.9 | 27.2
    LPSD-reduced  | 965.7     | 5       | 0.3 | 0.0 | 3.5 | 26.3 | 27.5

(The NB column gives the error on normal instances; T1–T4 give the errors on the four fault types.)

to measurements made from new ball-bearings, and four types of anomalies, say T1–T4, corresponding either to a damaged outer race or cage, or to a badly worn ball-bearing. To compare our LPDD method with the LPSD method, we performed experiments in the same way as described in [3], making use of the same training set and independent validation and test sets; see Fig. 6. The optimal values of s were found for both the LPDD and LPSD methods by use of the validation set, on the Euclidean and L₁ dissimilarity representations.

Figure 6: Fault detection data.

    Set | Train | Valid. | Test
    NB  | 913   | 913    | 913
    T1  |       | 747    | 747
    T2  |       | 913    | 996
    T3  |       |        | 996
    T4  |       |        | 996

The results are presented in Table 1. It can be concluded that the L₁ representation is far more suitable for fault detection, especially for fault types T3 and T4, which were unseen during validation. The LPSD performs better on normal instances (it yields a smaller error) than the LPDD. This is to be expected, since its boundary is less tight, by which fewer support objects (SO) are needed. On the other hand, the LPSD deteriorates w.r.t. outlier detection. Note also that the reduced representation, based on 180 randomly chosen target objects (≈20%), mostly yields significantly better performance in outlier detection for the LPDD, and in target acceptance for the LPSD. Therefore, we can conclude that if a failure in fault detection has higher costs than the cost of misclassifying target objects, our approach should be recommended.
5 Conclusions

We have proposed the Linear Programming Dissimilarity-data Description (LPDD) classifier, built directly on dissimilarity representations. This method is efficient in the sense that only some objects are needed for the computation of dissimilarities in the test phase. The novelty of our approach lies in its formulation for general dissimilarity measures, which, we think, is more natural for class descriptors. Since dissimilarity measures might be unbounded, we have also proposed to transform dissimilarities by the sigmoid function, which makes the LPDD more robust against objects with large dissimilarities. We emphasized the possibility of using the LP procedures for rectangular dissimilarity/similarity representations, which is especially useful when (dis)similarities are costly to compute. The LPDD was applied to artificial and real-world datasets and compared to the LPSD detector proposed in [3]. For all considered datasets, the LPDD yields a more compact target description than the LPSD. The LPDD is also more robust against outliers in the training set, in particular when only some objects are considered for a reduced representation. Moreover, with a proper scaling parameter s of the sigmoid function, the support objects in the LPDD do not contain outliers, while it seems difficult for the LPSD to achieve the same. In the original formulation, the support objects of the LPSD tend to lie on the boundary, while for the LPDD they lie mostly 'inside' the boundary; removing such an object will therefore not drastically change the LPDD. In summary, our LPDD method can be recommended when the failure to detect outliers is more expensive than the cost of a false alarm. It is also possible to enlarge the description of the LPDD by adding a small constant to ρ. Such a constant should be related to the dissimilarity values in the neighborhood of the boundary; how to choose it remains an open issue for further research.

Acknowledgements.
This work is partly supported by the Dutch Organization for Scientific Research (NWO) and the European Community Marie Curie Fellowship. The authors are solely responsible for information communicated and the European Commission is not responsible for any views or results expressed.

References

[1] K.P. Bennett and O.L. Mangasarian. Combining support vector and mathematical programming methods for induction. In B. Schölkopf, C.J.C. Burges, and A.J. Smola, editors, Advances in Kernel Methods, Support Vector Learning, pages 307–326. MIT Press, Cambridge, MA, 1999.
[2] C. Berg, J.P.R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag, 1984.
[3] C. Campbell and K.P. Bennett. A linear programming approach to novelty detection. In Neural Information Processing Systems, pages 395–401, 2000.
[4] M.P. Dubuisson and A.K. Jain. Modified Hausdorff distance for object matching. In 12th Internat. Conference on Pattern Recognition, volume 1, pages 566–568, 1994.
[5] R.P.W. Duin. Compactness and complexity of pattern recognition problems. In Internat. Symposium on Pattern Recognition 'In Memoriam Pierre Devijver', pages 124–128, Royal Military Academy, Brussels, 1999.
[6] R.P.W. Duin and E. Pękalska. Complexity of dissimilarity based pattern classes. In Scandinavian Conference on Image Analysis, 2001.
[7] D.W. Jacobs, D. Weinshall, and Y. Gdalyahu. Classification with non-metric distances: Image retrieval and class representation. IEEE Trans. on PAMI, 22(6):583–600, 2000.
[8] A.K. Jain and D. Zongker. Representation and recognition of handwritten digits using deformable templates. IEEE Trans. on PAMI, 19(12):1386–1391, 1997.
[9] O.L. Mangasarian. Arbitrary-norm separating plane. Operations Research Letters, 24(1-2):15–23, 1999.
[10] E. Pękalska, P. Paclik, and R.P.W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2(2):175–211, 2001.
[11] B. Schölkopf, J.C. Platt, A.J. Smola, and R.C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13:1443–1471, 2001.
[12] B. Schölkopf, R.C. Williamson, A.J. Smola, J. Shawe-Taylor, and J.C. Platt. Support vector method for novelty detection. In Neural Information Processing Systems, 2000.
[13] D.M.J. Tax. One-class classification. PhD thesis, Delft University of Technology, The Netherlands, 2001.
[14] D.M.J. Tax and R.P.W. Duin. Support vector data description. Machine Learning, 2002, accepted.
[15] V. Vapnik. The Nature of Statistical Learning. Springer, N.Y., 1995.
[16] http://www.sidanet.org.
Regularized Greedy Importance Sampling

Finnegan Southey, Dale Schuurmans, Ali Ghodsi
School of Computer Science, University of Waterloo
{djsouth,dale,aghodsi}@cs.uwaterloo.ca

Abstract

Greedy importance sampling is an unbiased estimation technique that reduces the variance of standard importance sampling by explicitly searching for modes in the estimation objective. Previous work has demonstrated the feasibility of implementing this method and proved that the technique is unbiased in both discrete and continuous domains. In this paper we present a reformulation of greedy importance sampling that eliminates the free parameters from the original estimator, and introduce a new regularization strategy that further reduces variance without compromising unbiasedness. The resulting estimator is shown to be effective for difficult estimation problems arising in Markov random field inference. In particular, improvements are achieved over standard MCMC estimators when the distribution has multiple peaked modes.

1 Introduction

Many inference problems in graphical models can be cast as determining the expected value of a random variable of interest, f, given observations drawn according to a target distribution P. That is, we are interested in computing
E_P[f] = Σ_x f(x)P(x). Unfortunately, in natural situations P is usually not in a form that we can sample from efficiently. For example, in standard Bayesian network inference, P corresponds to the posterior distribution over the unobserved variables given an assignment to the evidence variables in a given network. It is usually not possible to sample from this distribution directly, nor to efficiently evaluate or even approximate it at given points [2]. It is therefore necessary to consider restricted architectures or heuristic and approximate algorithms to perform these tasks [6, 3]. Among the most convenient and successful techniques for performing inference are stochastic methods which are guaranteed to converge to a correct solution in the limit of large random samples [7, 14, 4]. These methods can be easily applied to complex inference problems that overwhelm deterministic approaches. The family of stochastic inference methods can be grouped into the independent Monte Carlo methods (importance sampling and rejection sampling [7, 4]) and the dependent Markov Chain Monte Carlo (MCMC) methods (Gibbs sampling, Metropolis sampling, and Hybrid Monte Carlo) [7, 5, 8, 14]. The goal of all these methods is to simulate drawing a random sample from a target distribution P, defined by a graphical model, that is hard to sample from directly. In this paper we improve the greedy importance sampling (GIS) technique introduced in [12, 11]. GIS attempts to improve the variance of importance sampling by explicitly searching for important regions in the target distribution P. Previous work has shown that search can be incorporated in an importance sampler while maintaining unbiasedness, leading to improved estimation in simple problems. However, the drawbacks of the previous GIS method are that it has free parameters whose settings affect estimation performance, and that its importance weights are directed at achieving unbiasedness without necessarily being directed at reducing variance.
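The plain importance-sampling estimator that all of these methods build on can be sketched in a few lines; the toy discrete target, uniform proposal, and variable names below are our own illustrative setup:

```python
import numpy as np

# Toy discrete target P on {0,...,4} and a uniform proposal Q.
P = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
Q = np.full(5, 0.2)
f = np.arange(5.0)                       # random variable of interest
exact = float(f @ P)                     # E_P[f], available here by enumeration

rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=200_000)     # draw points from Q
w = P[x] / Q[x]                          # importance weights w(x) = P(x)/Q(x)
estimate = float(np.mean(w * f[x]))      # unbiased importance-sampling estimate
```

With a proposal that covers the target reasonably well, the estimate concentrates around the exact expectation; the difficulties discussed below arise precisely when Q misses the high-probability regions of P.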
In this paper, we introduce a new, parameterless form of greedy importance sampling that performs comparably to the previous method given its best parameter settings. We then introduce a new weight calculation scheme that preserves unbiasedness, but provides further variance reduction by "regularizing" the contributions each search path gives to the estimator. We find that the new procedure significantly improves the original technique and achieves competitive results on difficult estimation problems arising in large discrete domains, such as those posed by Boltzmann machines. Below we first review the generalized importance sampling procedure that forms the core of our estimators, before describing the innovations that lead to improved estimators.

2 Generalized importance sampling

Importance sampling is a useful technique for estimating E_P[f] when P cannot be sampled from directly. The basic idea is to draw independent points x₁, …, xₙ according to a simpler proposal distribution Q, but then weight those points according to w(x) = P(x)/Q(x)
. Assuming that we can evaluate P(x), the weighted sample can be used to estimate desired expectations (Figure 1).¹ The unbiasedness of this procedure is easy to establish, since for a random variable f the expected weighted value of f under Q is E_Q[w(x)f(x)] = Σ_x Q(x) (P(x)/Q(x)) f(x) = Σ_x P(x) f(x) = E_P[f]
. (For simplicity we will focus on the discrete case in this paper.) The main difficulty with importance sampling is that even though it is an effective estimation technique when Q approximates P over most of the domain, it performs poorly when Q does not have reasonable mass in the high-probability regions of P. A mismatch of this type results in a high-variance estimator, since the sample will almost always contain unrepresentative points but will intermittently be dominated by a few high-weight points. The idea behind greedy importance sampling (GIS) [11, 12] is to avoid generating under-weight samples by explicitly searching for significant regions in the target distribution P. To develop a provably unbiased GIS procedure, it is useful to first consider a generalization of standard importance sampling that can be proved to yield unbiased estimates. The generalized importance sampling procedure introduced in [12] operates by sampling deterministic blocks of points instead of individual points (Figure 1). Here, to each domain point x we associate a fixed block B_x = {x_1, …, x_{k_x}}, where k_x is the length of block B_x. When x is drawn from the proposal distribution, we recover the block B_x and add the block points to the sample.² Ensuring unbiasedness then reduces to weighting the sampled points appropriately. To this end, [12] introduces an auxiliary weighting scheme that can be used to obtain unbiased estimates: to each pair of points x, x′ (such that x′ ∈ B_x) one associates a weight λ(x, x′), where intuitively λ(x, x′) is the weight that initiating point x assigns to sample point x′ in its block B_x. The λ(x, x′) values can be arbitrary as long

¹Unfortunately, for standard inference problems in graphical models it is usually not possible to evaluate P(x) directly but rather just P̂(x) = cP(x) for some unknown constant c. However, it is still possible to apply the "indirect" importance sampling procedure shown in Figure 1 by assigning indirect weights ŵ(x) = P̂(x)/Q(x) and renormalizing.
The drawback of the indirect procedure is that it is no longer unbiased at small sample sizes, but instead only becomes unbiased in the large sample limit [4]. To keep the presentation simple we will focus on the "direct" form of importance sampling described in Figure 1 and establish unbiasedness for that case—keeping in mind that every extended form of importance sampling we discuss below can be converted to an "indirect" form.

²There is no restriction on the blocks other than that they be finite—blocks can overlap and need not even contain their initiating point x—however their union has to cover the sample space, and Q cannot put zero probability on initiating points, which would leave sample points uncovered.

"Direct" importance sampling:
  Draw x₁, …, xₙ independently according to Q.
  Weight each point by w(xᵢ) = P(xᵢ)/Q(xᵢ).
  Estimate E_P[f] by (1/n) Σᵢ f(xᵢ) w(xᵢ).

"Indirect" importance sampling:
  Draw x₁, …, xₙ independently according to Q.
  Weight each point by ŵ(xᵢ) = P̂(xᵢ)/Q(xᵢ), where P̂ = cP for some unknown constant c.
  Estimate E_P[f] by Σᵢ f(xᵢ) ŵ(xᵢ) / Σⱼ ŵ(xⱼ).

"Generalized" importance sampling:
  Draw x₁, …, xₙ independently according to Q.
  For each xᵢ, recover its block B_{xᵢ} = {xᵢ,₁, …, xᵢ,ₖᵢ}.
  Create a large sample out of the blocks: x₁,₁, …, x₁,ₖ₁, …, xₙ,₁, …, xₙ,ₖₙ.
  Weight each xᵢ,ⱼ ∈ B_{xᵢ} by w(xᵢ,ⱼ) = (P(xᵢ,ⱼ)/Q(xᵢ)) λ(xᵢ, xᵢ,ⱼ).
  Estimate E_P[f] by (1/n) Σᵢ Σⱼ f(xᵢ,ⱼ) w(xᵢ,ⱼ) (direct form).

Figure 1: Basic importance sampling procedures.

as they satisfy

    Σ_x λ(x, x′) δ(x′ ∈ B_x) = 1                              (1)

for every x′. (Here δ(x′ ∈ B_x) = 1 if x′ ∈ B_x and 0 otherwise.) That is, for each destination point x′, the total incoming λ-weight has to sum to 1. In fact, it is quite easy to prove that this yields unbiased estimates [12], since the expected weighted value of f when sampling initiating points x under Q is
E_Q[ Σ_{x′ ∈ B_x} f(x′) w_x(x′) ]
  = Σ_x Q(x) Σ_{x′ ∈ B_x} f(x′) (P(x′)/Q(x)) λ(x, x′)
  = Σ_x Σ_{x′ ∈ B_x} λ(x, x′) P(x′) f(x′)
  = Σ_{x′} P(x′) f(x′) Σ_x λ(x, x′) δ(x′ ∈ B_x)
  = Σ_{x′} P(x′) f(x′)
  = E_P[f].

Crucially, this argument does not depend on how the block decomposition is chosen or how the λ-weights are set, so long as they satisfy (1). That is, one could fix any block decomposition and weighting scheme, even one that depends on the target distribution P and random variable f, without affecting the unbiasedness of the procedure. Intuitively, this works because the block structure and weighting scheme are fixed a priori, and unbiasedness is achieved by sampling blocks and assigning fair weights to the points. The generality of this outcome allows one to consider a wide range of alternative importance sampling schemes, while employing appropriate λ-weights to cancel any bias. In particular, we will determine blocks on-line by following deterministic greedy search paths.

3 Parameter-free greedy importance sampling

Our first contribution in this paper is to derive an efficient greedy importance sampling (GIS) procedure that involves no free parameters, unlike the proposal in [12]. One key motivating principle behind GIS is to realize that the optimal proposal distribution for estimating E_P[f] with standard importance sampling is Q*(x) ∝ |f(x)|P(x), which minimizes the resulting variance [10]. GIS attempts to overcome a poor proposal distribution by explicitly searching for points that maximally increase the objective f(x)P(x) (Figure 2). The primary difficulty in implementing GIS is finding ways to assign the auxiliary weights λ(x, x′) so that they satisfy the constraint (1). If this can be achieved, the resulting GIS procedure will be unbiased via the arguments of the previous section. However, the λ-weights must not only satisfy the constraint (1), they must also be efficiently calculable from a given sample.

"Greedy" importance sampling:
  Draw x₁, …, xₙ independently from Q.
  For each xᵢ, let xᵢ,₁ = xᵢ and compute the block Bᵢ = {xᵢ,₁, xᵢ,₂, …, xᵢ,ₖᵢ} by taking local steps in the direction of maximum f(x)P(x) until a local maximum is reached.
  Weight each xᵢ,ⱼ ∈ Bᵢ by w(xᵢ,ⱼ) = (P(xᵢ,ⱼ)/Q(xᵢ)) λ(xᵢ, xᵢ,ⱼ), where λ is defined in (2).
  Create the final sample from the blocks B₁, …, Bₙ.
  Estimate E_P[f] by (1/n) Σᵢ Σⱼ f(xᵢ,ⱼ) w(xᵢ,ⱼ).

Figure 2: "Greedy" importance sampling procedure (left); the Section 4 weight matrix Λ for the toy chain domain (right).

A computationally efficient λ-weighting scheme can be determined by distributing weight in a search tree in a top-down manner. Note that to verify (1) for a domain point x′, we have to consider every search path that starts at some other point x and passes through x′. If the search is deterministic (which we assume), then the set of search paths entering x′ forms a tree. Let T_{x′} denote the tree of points that lead into x′. In principle, the tree will have unbounded depth, since the greedy search procedure does not stop until it has reached a local maximum. Therefore, to ensure that the incoming weight sums to 1, we distribute weight down the tree from level 0 (the root, x′) to levels 1, 2, … according to a convergent series, choosing the total weight λ_ℓ allocated to level ℓ so that Σ_{ℓ≥0} λ_ℓ = 1.³ (Finite depth bounds will be handled automatically below.) Having established the total weight at level ℓ, we must then determine how much of that weight is allocated to a particular point at that level. Given the entire search tree this would be trivial, but the greedy search paths will typically provide only a single branch of the tree. We accomplish the allocation by recursively dividing the weight equally amongst branches, starting at the root of the tree. Thus, if b_{x′} is the inward branching factor at the root, we divide the level-1 weight by b_{x′}. Then, following the path to a desired point x, we successively divide the remaining weight at each point by the observed branching factors until we reach x. In the case b_x = 0, x has no predecessors, and we compensate by adding the mass of the missing subtree to x's weight. This scheme is efficient to compute because we require only the branching factors along a given search path to correctly allocate the weight.
This yields the following weighting scheme, which runs in linear time and exactly satisfies the constraint (1): given a start point x₀ and a search path x₀, x₁, …, x_j from x₀ to x_j, we assign the weight λ(x₀, x_j) by

    λ(x₀, x_j) = λ_j / (b_{x₁} b_{x₂} ⋯ b_{x_j})              if b_{x₀} > 0,
    λ(x₀, x_j) = (Σ_{k ≥ j} λ_k) / (b_{x₁} b_{x₂} ⋯ b_{x_j})  if b_{x₀} = 0,     (2)

where b_x denotes the inward branching factor of point x and λ_j is the total weight allocated to level j. A simple induction proof can be used to show that Σ_{x₀} λ(x₀, x_j) = 1. Therefore, the new λ-weighting scheme provides an efficient unbiased method for implementing GIS that does not use any free parameters.

4 Variance reduction

While GIS reduces variance by searching, the λ-weight correction scheme outlined above is designed only to correct bias and does not specifically address variance issues. However,

³We merely chose the simplest heavy-tailed convergent series available.
A . Assume the search is constrained to move between adjacent points so that from every initial point the greedy search will move to the right until it hits point . Any / -weighting scheme for this domain can be expressed as a matrix, , shown in Figure 2, where row corresponds to the search block retrieved by starting at point . Note that the constraint (1) amounts to requiring that the columns of sum to A . However, it is the rows of that correspond to search blocks sampled during estimation. If we assume a uniform proposal distribution then gives the column vector of block estimates that correspond to each start point. The variance of the overall estimator then becomes equal to the variance of the column vector . In particular, if each row produces the same estimate, the estimator will have zero variance. We conclude that zero variance is achieved iff equals a constant. Thus, the unbiasedness constraints behave orthogonally to the zero variance constraints: unbiasedness imposes a constraint on columns of whereas zero variance imposes a constraint on rows of . An optimal estimator will satisfy both sets of constraints. Since there are constraints in total and %A variables, one can apparently solve for a zero variance unbiased estimator (for 3 ). However, it turns out that the constraint matrix does not have full rank, and it is not always possible to achieve zero bias and variance for given . Nevertheless, one can obtain an optimal GIS estimator by solving a quadratic program for the which minimizes variance subject to satisfying the linear unbiasedness constraints. The point of this simple example is not to propose a technique that explicitly enumerates the domain in order to construct a minimum variance GIS estimator. (Although the above discussion applies to any finite domain—all one needs to do is encode the search topology in the weight matrix .) 
Rather, the point is to show that a significant amount of flexibility remains in setting the λ-weights—even after the unbiasedness constraints have been satisfied—and that this additional flexibility can be exploited to reduce variance. We can now extend these ideas to a more realistic, general situation. To reduce the variance of the GIS estimator developed in Section 3, our idea is to equalize the block totals among different search paths. The main challenge is to adjust the λ-weights in a way that equalizes block totals without introducing bias, and without requiring excessive computational overhead. Here we follow the style of local correction employed in Section 3. First note that, when traversing a path from x₀ to x_j, the blocks sampled by GIS produce estimates of the form Σ_k f(x_k) (P(x_k)/Q(x₀)) λ(x₀, x_k). Now consider an intermediate point x_k in the search. This point will have been arrived at via some predecessor x_{k−1}, but we could have arrived at x_k via any one of its possible predecessors y. We would like to equalize the block totals that would have been obtained by arriving via any one of these predecessor points. The key to maintaining unbiasedness is to ensure that any weight calculation performed at a point in a search tree is consistent, regardless of the path taken to reach that point. Since we cannot anticipate the initial points, it is only convenient to equalize the subtotals from the predecessors y, through x_k, and up to the root x_j. Let S_k denote the total sum obtained by the points from x_k onward, i.e. from x_k to x_j. We equalize the different predecessor totals by determining factors β_y which make these subtotals equal across the predecessors y. This scales the parent quantity S_k on each path to compensate for differences between predecessors. The equalization and unbiasedness constraints form a linear system whose solution we rescale to obtain positive β. The β are computed starting at the end of the block and working backwards.
The results can be easily incorporated into the GIS procedure by multiplying the original w-weights in (2) by the product of the beta factors along the search path. Importantly, at a given search point, any of its predecessors will calculate the same beta-correction scheme locally, regardless of which predecessor is actually sampled. This means that the correction scheme is not sample-dependent but fixed ahead of time. It is easy to prove that any fixed beta-weighting scheme that satisfies the normalization constraint over the predecessors of each point, and is applied to an unbiased w-weighting, will satisfy (1). The benefit of this scheme is that it reduces variance while preserving unbiasedness.4

5 Empirical results: Markov random field estimation

To investigate the utility of the GIS estimators we conducted experiments on inference problems in Markov random fields. Markov random fields are an important class of undirected graphical model which includes Boltzmann machines as a special case [1]. These models are known to pose intractable inference problems for exact methods. Typically, standard MCMC methods such as Gibbs sampling and Metropolis sampling are applied to such problems, but their success is limited owing to the fact that these estimators tend to get trapped in local modes [7]. Moreover, improved MCMC methods such as Hybrid Monte Carlo [8] cannot be directly applied to these models because they require continuous sample spaces, whereas Boltzmann machines and other random field models define distributions on a discrete domain. Standard importance sampling is also a poor estimation strategy for these models because a simple proposal distribution (like uniform) has almost no chance of sampling in relevant regions of the target distribution [7]. Explicitly searching for modes would therefore seem to provide an effective estimation strategy for these problems. We consider a generalization of Boltzmann machines that defines a joint distribution over a set of discrete variables x_1, ..., x_n according to

  P(x_1, ..., x_n) = (1/Z_T) exp(-E(x)/T), where E(x) = sum_{(i,j)} theta_{ij}(x_i, x_j) + sum_i theta_i(x_i).

Here T is the "temperature" of the model and E(x) defines the "energy" of configuration x; the functions theta_{ij} and theta_i define the local energy between pairs of variables and individual variables respectively; and Z_T is a normalization constant. Exact inference in such a model is difficult because the normalization constant Z_T is typically unknown. Moreover, Z_T is usually not possible to obtain exactly because it is defined as an exponentially large sum that is not prone to simplification.5 We experimented with two classes of generalized Boltzmann machines: generalized Ising models, where the underlying graph is a 2-dimensional grid, and random models, where the graph is generated by randomly choosing links between variables. For each model, the function values were chosen randomly from a standard normal distribution. We considered the objective functions f(x) = E(x) (expected energy); f(x) = sum_i x_i (expected number of 1's in a configuration); and f(x) = sum_{(i,j)} x_i x_j (expected number of pairwise "and's" in a configuration). The latter two objectives are summaries of the quantities needed to estimate gradients in standard Boltzmann machine learning algorithms [1]. This would seem to be an ideal model on which to test our methods. We conducted experiments by fixing a model and temperature and running the estimators for a fixed amount of CPU time. Each estimator was re-run 1000 times to estimate its root mean squared error (RMSE) on small models where exact answers could be calculated, or its standard deviation (STD) on large models where no exact answer is feasible. We compared estimators by controlling their run time (given a reasonable C implementation), not just their sample size, because the different estimators incur different computational overheads, and run time is the only convenient way to draw a fair comparison. For example, GIS methods require a substantial amount of additional computation to find the greedy search paths and calculate inward branching factors, and consequently they must use substantially smaller sample sizes than their counterparts to ensure a fair comparison. However, the GIS estimators still seem to obtain reasonable results despite their sample size disadvantage. For the GIS procedures we implemented a simple search that only ascends in P, not f*P, and we only used a uniform proposal distribution in all our experiments.

4 This variance reduction scheme applies naturally to unbiased direct estimators. With indirect estimators, bias is typically more problematic than variance. Therefore, for indirect GIS we employ an alternative beta-weighting scheme that attempts to maximize total block weight.

5 Interesting recent progress has been made on developing exact and approximate sampling methods for the special case of Ising models [9, 15, 13].

Figure 3: Estimating average energy in a random field model (table shows RMSE for E[energy]).

  Method   Avg SS   T=1.0   T=0.5   T=0.25   T=0.1   T=0.05   T=0.025
  IS        5094    27.75   68.96   145.97   374.04  749.42   1503.73
  GISold    1139    13.89   12.93    12.96    13.35   10.46     12.59
  GISnew    1015    14.31   13.73    13.94    15.25   11.78     11.03
  GISreg    1015     3.01    4.10     5.57     6.61    6.20      7.72
  Gibbs    36524     0.21    0.37     4.44    21.86   53.44    108.13
  Metro    35885     0.28    0.53     5.75    24.56   56.16    122.46

(The accompanying plots show RMSE versus temperature for GISreg and Gibbs on 4x4 through 8x8 grids.)

Figure 4: Estimating average "sum of and's" in a random field model (table shows RMSE).

  Method   Avg SS   T=1.0   T=0.5   T=0.25   T=0.1   T=0.05   T=0.025
  IS        4764     6.10    8.42     9.60    10.45   10.15     10.15
  GISold    1125     6.33    5.16     4.03     2.57    0.64      0.43
  GISnew    1015     6.09    5.16     4.30     2.85    0.61      0.15
  GISreg    1015     3.56    3.06     2.43     0.90    0.17      0.05
  Gibbs    22730     0.33    0.36     0.59     0.70    1.41      1.54
  Metro    25789     0.37    0.43     0.63     0.76    1.30      1.41

(The accompanying plots show RMSE versus temperature for GISreg and Gibbs on 4x4 through 8x8 grids.)
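At toy scale, a generalized Boltzmann machine of the form above and its exact expectations can be written down directly by enumeration (the theta values are random illustrative stand-ins, not the paper's models; real experiments need estimators precisely because this enumeration is exponential in n):

```python
import itertools
import math
import random

random.seed(0)

# Tiny generalized Boltzmann machine: 3 binary variables, fully connected,
# with pairwise and single-variable energy terms drawn from N(0, 1).
n = 3
pairs = [(0, 1), (0, 2), (1, 2)]
theta_pair = {p: random.gauss(0, 1) for p in pairs}
theta_single = [random.gauss(0, 1) for _ in range(n)]

def energy(x):
    """E(x) = sum of pairwise and single-variable energy terms."""
    e = sum(theta_pair[(i, j)] * x[i] * x[j] for (i, j) in pairs)
    return e + sum(theta_single[i] * x[i] for i in range(n))

def exact_expectations(T):
    """Return Z_T and E_P[energy] by brute force over all 2^n configurations."""
    Z, acc = 0.0, 0.0
    for x in itertools.product([0, 1], repeat=n):
        w = math.exp(-energy(x) / T)
        Z += w
        acc += w * energy(x)
    return Z, acc / Z

for T in (1.0, 0.25, 0.05):
    Z, mean_energy = exact_expectations(T)
    print(f"T={T}: E[energy] = {mean_energy:.4f}")
```

Lowering T concentrates the distribution on the low-energy configurations, which is exactly the regime where the mode-trapping behavior of the MCMC samplers discussed below becomes visible.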
We also only report results for the indirect versions of all importance samplers (cf. Figure 1). Figures 3 and 4 show typical outcomes of our experiments. The table in Figure 3 shows results for estimating expected energy in a generalized Ising model as the temperature is dropped from 1.0 to 0.025. Figure 4 shows comparable results for estimating the "sum of and's". Standard importance sampling (IS) is a poor estimator in this domain, even when it is able to use 4.5 times as many data points as the GIS estimators, and it becomes particularly poor as the temperature drops. Among GIS estimators, the new, parameter-free version introduced in Section 3 (GISnew) compares favorably to the previous technique of [12] (GISold). The regularized GIS from Section 4 (GISreg) is clearly superior to either. Next, comparing the importance sampling approaches to the MCMC methods, we see the dramatic effect of temperature reduction. Owing to their simplicity (and an efficient implementation), the MCMC samplers were able to gather about 20 to 30 times as many data points as the GIS estimators in the same amount of time. The effect of this substantial sample size advantage is that the MCMC methods demonstrate far better performance at high temperatures, apparently owing to an evidential advantage. However, as the temperature is lowered, a well known effect takes hold as the low-energy configurations begin to dominate the distribution. At low temperatures the modes around the low-energy configurations become increasingly peaked, and standard MCMC estimators become trapped in modes from which they are unable to escape [8, 7]. This results in very poor estimates that are dominated by arbitrary modes. Figures 3 and 4 show the RMSE curves of Gibbs sampling and GISreg, side by side, as temperature is decreased in different models. In contrast to the MCMC procedures, the GIS procedures exhibit almost no accuracy loss as the temperature is lowered, and in fact sometimes improve their performance.
There seems to be a clear advantage for GIS procedures in sharply peaked distributions, and they appear to be much more robust to varying steepness in the underlying distribution. At warmer temperatures, however, the MCMC methods are clearly superior. It is important to note that greedy importance sampling is not equivalent to adaptive importance sampling (AIS): sample blocks are completely independent in GIS, whereas sample points are not independent in AIS. Nevertheless, GIS can benefit from adapting the proposal distribution in the same way as standard IS. Clearly we cannot propose GIS methods as a replacement for MCMC approaches, and in fact we believe that useful hybrid combinations are possible. Our goal in this research is to better understand a novel approach to estimation that appears to be worth investigating. Much work remains to be done in reducing computational overhead and investigating additional variance reduction techniques.

References

[1] D. Ackley, G. Hinton, and T. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9:147-169, 1985.
[2] P. Dagum and M. Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60:141-153, 1993.
[3] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. Artificial Intelligence, 93:1-27, 1997.
[4] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57:1317-1339, 1989.
[5] W. Gilks, S. Richardson, and D. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman and Hall, 1996.
[6] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. In Learning in Graphical Models. Kluwer, 1998.
[7] D. MacKay. Introduction to Monte Carlo methods. In Learning in Graphical Models. Kluwer, 1998.
[8] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical report, 1993.
[9] J. Propp and D. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9:223-253, 1996.
[10] R. Rubinstein. Simulation and the Monte Carlo Method. Wiley, New York, 1981.
[11] D. Schuurmans. Greedy importance sampling. In Proceedings NIPS-12, 1999.
[12] D. Schuurmans and F. Southey. Monte Carlo inference via greedy importance sampling. In Proceedings UAI, 2000.
[13] R. Swendsen, J. Wang, and A. Ferrenberg. New Monte Carlo methods for improved efficiency of computer simulations in statistical mechanics. In The Monte Carlo Method in Condensed Matter Physics. Springer, 1992.
[14] M. Tanner. Tools for Statistical Inference: Methods for Exploration of Posterior Distributions and Likelihood Functions. Springer, New York, 1993.
[15] D. Wilson. Sampling configurations of an Ising system. In Proceedings SODA, 1999.
Modeling Midazolam's Effect on the Hippocampus and Recognition Memory

Kenneth J. Malmberg, Department of Psychology, Indiana University, Bloomington, IN 47405
René Zeelenberg, Department of Psychology, Indiana University, Bloomington, IN 47405; rzeelenb@indiana.edu
Richard M. Shiffrin, Departments of Cognitive Science and Psychology, Indiana University, Bloomington, IN 47405; shiffrin@indiana.edu

Abstract

The benzodiazepine Midazolam causes dense, but temporary, anterograde amnesia, similar to that produced by hippocampal damage. Does the action of Midazolam on the hippocampus cause less storage, or less accurate storage, of information in episodic long-term memory? We used a simple variant of the REM model [18] to fit data collected by Hirshman, Fisher, Henthorn, Arndt, and Passannante [9] on the effects of Midazolam, study time, and normative word frequency on both yes-no and remember-know recognition memory. That a simple strength model fit well was contrary to the expectations of Hirshman et al. More important, within the Bayesian-based REM modeling framework, the data were consistent with the view that Midazolam causes less accurate storage, rather than less storage, of information in episodic memory.

1 Introduction

Damage to the hippocampus (and nearby regions), often caused by lesions, leaves normal cognitive function intact in the short term, including long-term memory retrieval, but prevents learning of new information. We have found a way to begin to distinguish two alternative accounts for this learning deficit: Does damage cause less storage, or less accurate storage, of information in long-term episodic memory?
We addressed this question by using the REM model of recognition memory [18] to fit data collected by Hirshman and colleagues [9], who tested recognition memory in normal participants given either saline (control group) or Midazolam, a benzodiazepine that temporarily causes anterograde amnesia with effects that generally mimic those found after hippocampal damage.

2 Empirical findings

The participants in Hirshman et al. [9] studied lists of words that varied in normative word frequency (i.e., low-frequency vs. high-frequency) and the amount of time allocated for study (either not studied, or studied for 500, 1200, or 2500 ms per word). These variables are known to have a robust effect on recognition memory in normal populations: Low-frequency (LF) words are better recognized than high-frequency (HF) words, and an increase in study time improves recognition performance. In addition, the probability of responding 'old' to studied words (termed hit rate, or HR) is higher for LF words than for HF words, and the probability of responding 'old' to unstudied words (termed false alarm rate, or FAR) is lower for LF words than for HF words. This pattern of data is commonly known as a "mirror effect" [7]. In Hirshman et al. [9], participants received either saline or Midazolam and then studied a list of words. After a delay of about an hour they were shown studied words ('old') and unstudied words ('new'), and asked to give old-new recognition and remember/know judgments. The HR and FAR findings are depicted in Figure 1 as the large circles (filled for LF test words and unfilled for HF test words). The results from the saline condition, given in the left panel, replicate the standard effects in the literature: In the figure, the points labeled with zero study time give FARs (for new test items), and the other points give HRs (for old test items).
Thus we see that the saline group exhibits better performance for LF words and a mirror effect: For LF words, FARs are lower and HRs are higher. The Midazolam group of course gave lower performance overall (see right panel). More critically, the pattern of results differs from that for the saline group: The mirror effect was lost; LF words produced both lower FARs and lower HRs.

Figure 1. Yes-no recognition data from Hirshman et al. and predictions of a REM model (left panel: saline condition; right panel: Midazolam condition; HRs and FARs plotted against study time). Zero ms study time refers to 'new' items, so those data give the false-alarm rate (FAR). Data shown for non-zero study times give hit rates (HRs). Only the REM parameter c varies between the saline and Midazolam conditions. The fits are based on 300 Monte Carlo simulations using g_LF = .325, g = .40, g_HF = .45, w = 16, t_1 = 4, b = .8, u* = .025, c_Sal = .77, c_Mid = .25, Crit_O/N = .92. LF = low-frequency words and HF = high-frequency words.

The participants also indicated whether their 'old' judgments were made on the basis of "remembering" the study event or on the basis of "knowing" the word was studied even though they could not explicitly remember the study event [5]. Data are shown in Figure 2. Of greatest interest for present purposes, "know" and "remember" responses were differently affected by the word-frequency and drug manipulations.
In the Midazolam condition, the conditional probability of a "know" judgment (given an "old" response) was consistently higher than that of a "remember" judgment (for both HF and LF words). Moreover, these probabilities were hardly affected by study time. A different pattern was obtained in the saline condition. For HF words, the conditional probability of a "know" judgment was higher than that of a "remember" judgment, but the difference decreased with study time. Finally, for LF words, the conditional probability of a "know" judgment was higher than that of a "remember" judgment for nonstudied foils, but for studied targets the conditional probability of a "remember" judgment was higher than that of a "know" judgment. The recognition and remember/know results were interpreted by Hirshman et al. [9] to require a dual-process account; in particular, the authors argued against "memory strength" accounts [4, 6, 11]. Although not the main message of this note, it will be of some interest to memory theorists that our present results show this conclusion to be incorrect.

Figure 2. Remember/know data from Hirshman et al. and predictions of a REM model (panels show the conditional probabilities p("remember" | "old") and p("know" | "old") against study time for HF and LF words in the Midazolam and saline conditions). The parameter values are those listed in the caption for Figure 1, plus there are two remember/know criteria: For the saline group, Crit_R/K = 1.52; for the Midazolam group, Crit_R/K = 1.30.

3 A REM model for recognition and remember/know judgments

A common way to conceive of recognition memory is to posit that memory is probed with the test item, and that the recognition decision is based on a continuous random variable that is often conceptualized as the resultant strength, intensity, or familiarity [6]. If the familiarity exceeds a subjective criterion, then the subject responds "old"; otherwise, a "new" response is made [8]. A subclass of this type of model accounts for the word-frequency mirror effect by assuming that there exist four underlying distributions of familiarity values, such that the means of these distributions are arranged along a familiarity scale in the following manner: μ(LF-new) < μ(HF-new) < μ(HF-old) < μ(LF-old). The left side of Figure 3 displays this relation graphically. A model of this type can predict the recognition findings of Hirshman et al. (in press) if the effect of Midazolam is to rearrange the underlying distributions on the familiarity scale such that μ(LF-old) < μ(HF-old). The right side of Figure 3 displays this relation graphically. The REM model of the word-frequency effect described by Shiffrin and Steyvers [13, 18, 19] is a member of this class of models, as we describe next.

REM [18] assumes that memory traces consist of vectors V, of length w, of nonnegative integer feature values. Zero represents no information about a feature. Otherwise the values for a given feature are assumed to follow the geometric probability distribution given as Equation 1:

  P(V = j) = (1-g)^(j-1) g, for j = 1 and higher.   (1)

Thus higher integer values represent feature values that are less likely to be encountered in the environment. REM adopts a "feature-frequency" assumption [13]: the lexical/semantic traces of lower-frequency words are generated with a lower value of g (i.e., g_LF < g_HF). These lexical/semantic traces represent general knowledge (e.g., the orthographic, phonological, semantic, and contextual characteristics of a word) and have very many non-zero feature values, most of which are encoded correctly. Episodic traces represent the occurrence of stimuli in a certain environmental context; they are built of the same feature types as lexical/semantic traces, but tend to be incomplete (have many zero values) and inaccurate (the values do not necessarily represent correctly the values of the presented event). When a word is studied, an incomplete and error-prone representation of the word's lexical/semantic trace is stored in a separate episodic image. The probability that a feature will be stored in the episodic image after t time units of study is given as Equation 2:

  1 - (1 - u*)^t,   (2)

where u* is the probability of storing a feature in an arbitrary unit of time. The number of attempts, t_j, at storing a content feature for an item studied for j units of time is computed from Equation 3:

  t_j = t_{j-1} (1 + b e^{-j}),   (3)

where b is a rate parameter and t_1 is the number of attempts at storing a feature in the first 1 s of study.

Figure 3. Arrangement of the means of the theoretical distributions of strength-based models that may give rise to Hirshman et al.'s findings (left: saline, with μ(LF-new) < μ(HF-new) < μ(HF-old) < μ(LF-old) along the familiarity scale; right: Midazolam, with the old-item distributions reversed). HF and LF = high- and low-frequency words, respectively.
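A minimal simulation of this storage process (the helper names are ours, the parameter values are merely in the spirit of the fits reported above, and study time is passed directly as t rather than through the attempt schedule of Equation 3):

```python
import numpy as np

rng = np.random.default_rng(1)
w, g, u_star, c = 16, 0.40, 0.025, 0.7   # illustrative REM parameters

def lexical_trace():
    # Eq. 1: each feature value follows a geometric distribution with parameter g.
    return rng.geometric(g, size=w)

def episodic_image(lexical, t):
    # Eq. 2: a feature is stored after t attempts with probability 1-(1-u*)^t.
    stored = rng.random(w) < 1.0 - (1.0 - u_star) ** t
    # Stored features are copied correctly with probability c; otherwise the
    # value is resampled from the base-rate geometric distribution.
    correct = rng.random(w) < c
    values = np.where(correct, lexical, rng.geometric(g, size=w))
    return np.where(stored, values, 0)       # 0 = no information stored

trace = lexical_trace()
short_study = episodic_image(trace, t=10)
long_study = episodic_image(trace, t=100)
print((short_study > 0).sum(), (long_study > 0).sum())
```

On average the longer-studied image contains more non-zero features, which is the study-time effect the model needs; the copy-correct parameter c is the quantity at issue in the Midazolam analysis below.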
Thus, increased study time increases the storage of features, but the gain in the amount of information stored diminishes as the item is studied longer. Features that are not copied from the lexical/semantic trace are represented by a value of 0. If storage of a feature does occur, the feature value is correctly copied from the word's lexical/semantic trace with probability c. With probability 1-c the value is incorrectly copied and sampled randomly from the long-run base-rate geometric distribution, a distribution defined by g such that g_HF > g > g_LF. At test, a probe made with context features only is assumed to activate the episodic traces, I_j, of the n list items and no others. Then the content features of the probe cue are matched in parallel to the activated traces. For each episodic trace, I_j, the system notes the values of the features of I_j that match the corresponding feature of the cue (n_ijm stands for the number of matching values in the j-th image that have value i), and the number of mismatching features (n_jq stands for the number of mismatching values in the j-th image). Next, a likelihood ratio, λ_j, is computed for each I_j:

  λ_j = (1-c)^{n_jq} * prod_{i=1}^{inf} [ (c + (1-c) g (1-g)^{i-1}) / (g (1-g)^{i-1}) ]^{n_ijm}   (4)

λ_j is the likelihood ratio for the j-th image. It can be thought of as a match strength between the retrieval cue and I_j. It gives the probability of the data (the matches and mismatches) given that the retrieval cue and the image represent the same word (in which case features are expected to match, except for errors in storage) divided by the probability of the data given that the retrieval cue and the image represent different words (in which case features match only by chance). The recognition decision is based on the odds, Φ, giving the probability that the test item is old divided by the probability that the test item is new [18].
This is just the average of the likelihood ratios:

  Φ = (1/n) sum_{j=1}^{n} λ_j   (5)

If the odds exceed a criterion, then an "old" response is made. The default criterion is 1.0 (which maximizes probability correct), although subjects could of course deviate from this setting. Thus an "old" response is given when there is more evidence that the test word is old. Matching features contribute evidence that an item is old (contribute factors to the product in Eq. 4 greater than 1.0) and mismatching features contribute evidence that an item is new (contribute factors less than 1.0). REM predicts an effect of study time because storage of more non-zero features increases the number of matching target-trace features; this factor outweighs the general increase in variance produced by greater numbers of non-zero features in all vectors. REM predicts a LF HR advantage because the matching of the more uncommon features associated with LF words produces greater evidence that the item is old than the matching of the more common features associated with HF words. For foils, however, every feature match is due to chance; such matching occurs more frequently for HF than LF words because HF features are more common [12]. This factor outweighs the higher diagnosticity of matches for the LF words, and HF words are predicted to have higher FARs than LF words. Much evidence points to the critical role of the hippocampal region in storing episodic memory traces [1, 14, 15, 16, 20]. Interestingly, Midazolam has been shown to affect the storage, but not the retrieval, of memory traces [17]. As described above, there are two parameters in REM that affect the storage of features in memory: u* determines the number of features that get stored, and c determines the accuracy with which features get stored. In order to lower performance, it could be assumed that Midazolam reduces the values of either or both of these parameters.
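The retrieval computation in Equations 4 and 5 can be sketched directly (again with illustrative parameters and our own helper names; the simplified storage step mirrors the sketch of Equations 1 and 2 above):

```python
import numpy as np

rng = np.random.default_rng(2)
w, g, c = 16, 0.40, 0.7                     # illustrative REM parameters

def likelihood_ratio(probe, image):
    # Eq. 4: each mismatch contributes (1-c); a match on value v contributes
    # (c + (1-c) g(1-g)^(v-1)) / (g(1-g)^(v-1)).
    lam = 1.0
    for p, v in zip(probe, image):
        if v == 0:
            continue                        # unstored feature: no evidence
        base = g * (1.0 - g) ** (v - 1)     # base rate of value v (Eq. 1)
        lam *= (c + (1.0 - c) * base) / base if p == v else (1.0 - c)
    return lam

def odds(probe, images):
    # Eq. 5: the odds are the average likelihood ratio over activated traces.
    return float(np.mean([likelihood_ratio(probe, im) for im in images]))

def store(trace):
    # Simplified episodic storage: half the features stored, copied
    # correctly with probability c, else resampled from the base rate.
    stored = rng.random(w) < 0.5
    correct = rng.random(w) < c
    vals = np.where(correct, trace, rng.geometric(g, size=w))
    return np.where(stored, vals, 0)

words = [rng.geometric(g, size=w) for _ in range(4)]
images = [store(t) for t in words]
target, foil = words[0], rng.geometric(g, size=w)
print(odds(target, images), odds(foil, images))
```

For a foil probe the expected likelihood ratio of each image is 1, so the odds hover around 1.0; a studied target typically matches many of the uncommon features of its own image and drives the odds well above the 1.0 criterion, which is the strength signal the decision rule operates on.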
However, Hirshman et al.'s data constrain which of these possibilities is viable. Let us assume that Midazolam only causes the hippocampal region to store fewer features, relative to the saline condition (i.e., u* is reduced). In REM, this causes fewer terms in the product given by Eq. 4, and a lower value for the result, on average. Hence, if Midazolam causes fewer features to be stored, subjects should approach chance-level performance for both HF and LF words: LF(FAR) ≈ HF(FAR) ≈ LF(HR) ≈ HF(HR). However, Hirshman et al. found that the difference between the LF and HF FARs was not affected by Midazolam. In REM this difference would not be much affected, if at all, by changes in criterion, or by changes in c that one might assume Midazolam induces. Thus, within the framework of REM, the main effect of Midazolam on the functioning of the hippocampal region is not to reduce the number of features that get stored. Alternatively, let us assume that Midazolam causes the hippocampal region to store "noisier" episodic traces, as opposed to traces with fewer non-zero features, instantiated in REM by decreasing the value of the c parameter (the parameter that governs correct copying of a feature value). Decreasing c only slightly affects the false alarm rates, because these FARs are based on chance matches.1 However, decreasing c causes the LF and HF old-item distributions (see Figure 3) to approach the LF and HF new-item distributions; when the decrease is large enough, this factor must cause the LF and HF old-item distributions to reverse position. The reversal occurs because the HF retrieval cues used to probe memory have more common features (on average) than the LF retrieval cues, a factor that comes to dominate when the true "signal" (matching features in the target trace) begins to disintegrate into noise (due to the lowering of c). Figure 1 shows predictions of a REM model incorporating the assumption that only c varies between the saline and Midazolam groups, and only at storage. For retrieval, the same c value was used in both the saline and Midazolam conditions to calculate the likelihoods in Equation 4 (an assumption consistent with retrieval tuned to the participant's lifetime learning, and consistent with prior findings showing that Midazolam affects the storage of traces and not their retrieval [17]). The criterion for an old/new judgment was set to .92, rather than the normatively optimal value of 1.0, in order to obtain a good quantitative fit, but the criterion did not vary between the Midazolam and saline groups, and therefore is not of consequence for the present article. Within the REM framework, then, the main effect of Midazolam is to cause the hippocampal region to store noisier episodic traces.

These conclusions are based on the recognition data. We turn next to the remember/know judgments. We chose to model remember/know judgments in what is probably the simplest way. The approach is based on the models described by Donaldson [4] and Hirshman and Master [10, 11]. As described above, an "old" decision is given when the familiarity (i.e., activation, or in REM terms the odds) associated with a test word exceeds the yes-no criterion. When this happens, it is assumed that a higher remember/know criterion is set. Words whose familiarity exceeds the higher remember/know criterion are given the "remember" response, and a "know" response is given when the remember/know criterion is not exceeded. Figure 2 shows that this model predicts the effects of Midazolam and saline both qualitatively and quantitatively.
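This two-criterion decision rule reduces to a few lines (the criterion values below are illustrative, in the spirit of the fits reported here):

```python
# One familiarity (odds) value, two criteria: at or below the yes-no
# criterion the response is 'new'; above it, 'remember' if the higher
# remember/know criterion is also exceeded, otherwise 'know'.
def respond(odds, crit_old=0.92, crit_rk=1.40):
    if odds <= crit_old:
        return "new"
    return "remember" if odds > crit_rk else "know"

print(respond(0.5), respond(1.1), respond(2.0))  # new know remember
```

The single underlying strength axis is the point of contention: both response types are read off one odds value, with no separate recollection process.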
TIllS fit was obtained by' using slightly different renlenlber~know criteria in the saline and 'Midazolam conditions (1.40 and 1.26 in the saline and Midazolam conditions, respectively), but aJl the qualitative effects are predicted correctly even\vhen the same criterion is adopted for remembetlknow. 1 Slight din-'erences are predicted depending on the interrelations of ,g~ gl1f~ and gLf These predictions pro'lide a.n existence proof that Hirshman et aL [9] were a bit hasty in. usin.g tlleir data to reject single...process tnodels of the present type [4, 11]:t an.d sho\v that single- versus dual-process models \\lQuld hav·e to be distinguished on the basis of other sorts of studies. There is already a large literature devoted to this as-yet-unresolved issue [10], and spa.ce prevents discussion here. Thus far we detnonstrated tlle sufticien.cy of a model assulning that lVHdazolanl reduc.es storage acc·uracy rathe-r than storage quantity, an.d have argued that the reverse assumption cannot 'Vvork. \Vhat degree of Inixture of tllese assumptions tnight be conlpatible with the data'? A.l1 ans"ver "~lould require an exhaust.ive ex:ploration. of the paralnet.er s.pace" but \¥e found that tD.e use of a 50~/Q reduced value of y* for the Midazola.m group (11*suI == .02; Y*rrti*i == .01) predicted an LF-Fi\R. advantage that deviated from the data by bein.g noticeably snlaller in. the Midazolanl than saline condition. Within. the RE.1\1 fratnework this result suggests the maill effect of l\1idazolalu (possibly all tIle effect) is on ~ (accuracy of storage) rather than Otll1* (quantity of storage). AJtern.atively~ i.t is possible to conceive of a much more complex RE·M model that assurnes that the effect of IVIidazolatll is to reduce the aOlount of storage. Accordillg1.y~ one might assunle th.at relatively little in.f1)rnlation is stored. in. m.emory in the Mid.azo]am. 
cOl1dition.~ an.d that the retrieval cue is Inatch.ed primarily aga.inst traces stored prior to tl1e experiment Such a modeL Inightpredict Hirshman et at "5 tin.din.gs bec.ause· once again. targets will only be randonlly similar to contents of m.emory.. Ho\vever, suell a lTIodel is far tnore com:plex. than. the InQdel described above. Perhaps, future research will provide data that requires a Olore complex m.odel~ but for n.O\V the simple m.odel presented here is sufficien.t+ 4 Neurosc.ientific. Speculations The }lippocatnpus (proper) consists of approximately 'I O~/~ C]ABAergic intern.euron.s, and these intern.eurons are th.ought to control tbe firing of the remaining 909/~ of the hippocan1pal principle neurons [21]. Some of the principle neur011S are gra.nule neurons and SOlne are pyramidal neurons~ The granule cells are associated ,vitb. a rhythmic pattern of neuronal activity k~llown as theta,,vaves [1]~ Tl1eta \\laves are associated ",tith exploratory activities in both animals [1.6] and hUlnans [2]~ activities in \vhic.h infortnation about novel situations is being acquired. Midazolam is a. benzodiazepine~ and benzodiazepines inhibit the tiring of (]ABAergic interneurons in the hippocampus [3]. Hence, if tv1idazolan) inhibits the tiring of those cells that regulate the orderly firing of the vac;t majority of hippocampal cel1s!l then it is a reasonable to speculate that the result is a "noisier" episodic memory trace~ The a.rgUlnent that?vt idazolaln causes noisier storage rather than less storage raises tb.e question whether a sitnilar process produces th.e silnilar effects caused by hippocampal lesions or other sorts of datnage (e.g.. Korsakoff's syndrolne). l'hi8 question could be explored in future research. Refel~ences [1 ]Bazsaki~ Gy (1989), T\vo..stagc mode1 of memory trace formation: A role for HnoisyH brain stales. Neu70sciencelj 31 j 55l-510. [2] Caplan, J. B.1 R.aghavachari, S. and Madscn~ J. R..,Kahana, M. 1 (2001). 
Distinct patterns of brain oscillations underlie two basic parameters of human maze learning. J. of Neurophys., 86, 368-380. [3] Deadwyler, S. A., West, M., & Lynch, G. (1979). Activity of dentate granule cells during learning: differentiation of perforant path input. Brain Res., 169, 29-43. [4] Donaldson, W. (1996). The role of decision processes in remembering and knowing. Memory & Cognition, 24, 523-533. [5] Gardiner, J. M. (1988). Functional aspects of recollective experience. Memory & Cognition, 16, 309-313. [6] Gillund, G., & Shiffrin, R. M. (1984). A retrieval model for both recognition and recall. Psych. Rev., 91, 1-67. [7] Glanzer, M., & Adams, J. K. (1985). The mirror effect in recognition memory. Memory & Cognition, 13, 8-20. [8] Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. New York: Wiley. [9] Hirshman, E., Fisher, J., Henthorn, T., Arndt, J., & Passannante, A. (in press). Midazolam amnesia and dual-process models of the word-frequency mirror effect. J. of Memory and Language, 47, 499-516. [10] Hirshman, E., & Henzler, A. (1998). The role of decision processes in conscious memory. Psych. Sci., 9, 61-64. [11] Hirshman, E., & Master, S. (1997). Modeling the conscious correlates of recognition memory: Reflections on the Remember-Know paradigm. Memory & Cognition, 25, 345-352. [12] Malmberg, K. J., & Murnane, K. (2002). List composition and the word-frequency effect for recognition memory. J. of Exp. Psych.: Learning, Memory, and Cognition, 28, 616-630. [13] Malmberg, K. J., Steyvers, M., Stephens, J. D., & Shiffrin, R. M. (in press). Feature frequency effects in recognition memory. Memory & Cognition. [14] Marr, D. (1971). Simple memory: a theory for archicortex. Proceedings of the Royal Society, London B, 262, 23-81. [15] McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995).
Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psych. Rev., 102, 419-457. [16] O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Clarendon Press. [17] Polster, M., McCarthy, R., O'Sullivan, G., Gray, P., & Park, G. (1993). Midazolam-induced amnesia: Implications for the implicit/explicit memory distinction. Brain & Cognition, 22, 244-265. [18] Shiffrin, R. M., & Steyvers, M. (1997). A model for recognition memory: REM - retrieving effectively from memory. Psychonomic Bulletin & Review, 4, 145-166. [19] Shiffrin, R. M., & Steyvers, M. (1998). The effectiveness of retrieval from memory. In M. Oaksford & N. Chater (Eds.), Rational models of cognition (pp. 73-95). London: Oxford University Press. [20] Squire, L. R. (1987). Memory and the Brain. New York: Oxford. [21] Vizi, E. S., & Kiss, J. P. (1998). Neurochemistry and pharmacology of the major hippocampal transmitter systems: Synaptic and nonsynaptic interactions. Hippocampus, 8, 566-607.
|
2002
|
103
|
2,107
|
Fast Kernels for String and Tree Matching S. V. N. Vishwanathan Dept. of Comp. Sci. & Automation Indian Institute of Science Bangalore, 560012, India vishy@csa.iisc.ernet.in Alexander J. Smola Machine Learning Group, RSISE Australian National University Canberra, ACT 0200, Australia Alex.Smola@anu.edu.au

Abstract

In this paper we present a new algorithm suitable for matching discrete objects such as strings and trees in linear time, thus obviating dynamic programming with quadratic time complexity. Furthermore, prediction cost in many cases can be reduced to linear cost in the length of the sequence to be classified, regardless of the number of support vectors. This improvement on the currently available algorithms makes string kernels a viable alternative for the practitioner.

1 Introduction

Many problems in machine learning require the classifier to work with a set of discrete examples. Common examples include biological sequence analysis, where data is represented as strings [4], and Natural Language Processing (NLP), where the data is in the form of a parse tree [3]. In order to apply kernel methods one defines a measure of similarity between discrete structures via a feature map φ : X → H_k. Here X is the set of discrete structures (e.g., the set of all parse trees of a language) and H_k is a Hilbert space. Furthermore, dot products then lead to kernels

k(x, x') = ⟨φ(x), φ(x')⟩   (1)

where x, x' ∈ X. The success of a kernel method employing k depends both on the faithful representation of discrete data and an efficient means of computing k. This paper presents a means of computing kernels on strings [15, 7, 12] and trees [3] in linear time in the size of the arguments, regardless of the weighting that is associated with any of the terms, plus linear time complexity for prediction, regardless of the number of support vectors.
This is a significant improvement, since the so-far fastest methods [8, 3] rely on dynamic programming, which incurs a quadratic cost in the length of the argument. Note that the method we present here is far more general than strings and trees, and it can be applied to finite state machines, formal languages, automata, etc. to define new kernels [14]. However, for the scope of the current paper we limit ourselves to a fast means of computing extensions of the kernels of [15, 3, 12].

In a nutshell, our idea works as follows: assume we have a kernel k(x, x') = Σ_{i∈I} φ_i(x) φ_i(x'), where the index set I may be large, yet the number of nonzero entries is small in comparison to |I|. Then an efficient way of computing k is to sort the sets of nonzero entries of φ(x) and φ(x') beforehand and count only matching non-zeros. This is similar to the dot product of sparse vectors in numerical mathematics. As long as the sorting is done in an intelligent manner, the cost of computing k is linear in the combined number of nonzero entries. In order to use this idea for matching strings (which have a quadratically increasing number of substrings) and trees (which can be transformed into strings), efficient sorting is realized by compressing the set of all substrings into a suffix tree. Moreover, dictionary keeping allows us to use arbitrary weightings for each of the substrings and still compute the kernels in linear time.

2 String Kernels

We begin by introducing some notation. Let A be a finite set which we call the alphabet. The elements of A are characters. Let $ be a sentinel character such that $ ∉ A. Any x ∈ A^k for k = 0, 1, 2, … is called a string. The empty string is denoted by ε, and A* represents the set of all non-empty strings defined over the alphabet A. In the following we will use s, t, u, v, w, x, y, z ∈ A* to denote strings and a, b, c ∈ A to denote characters.
|x| denotes the length of x, uv ∈ A* the concatenation of two strings u, v, and au the concatenation of a character and a string. We use x[i:j] with 1 ≤ i ≤ j ≤ |x| to denote the substring of x between locations i and j (both inclusive). If x = uvw for some (possibly empty) u, v, w, then u is called a prefix of x, v is called a substring (also denoted by v ⊑ x), and w is called a suffix of x. Finally, num_y(x) denotes the number of occurrences of y in x. The type of kernels we will be studying are defined by

k(x, x') := Σ_{s⊑x, s'⊑x'} w_s δ_{s,s'} = Σ_{s∈A*} num_s(x) num_s(x') w_s.   (2)

That is, we count the number of occurrences of every string s in both x and x' and weight it by w_s, where the latter may be a weight chosen a priori or after seeing data, e.g., for inverse document frequency counting [11]. This includes a large number of special cases:
• Setting w_s = 0 for all |s| > 1 yields the bag-of-characters kernel, counting simply single characters.
• The bag-of-words kernel is generated by requiring s to be bounded by whitespace.
• Setting w_s = 0 for all |s| > n yields limited range correlations of length n.
• The k-spectrum kernel takes into account substrings of length k [12]. It is achieved by setting w_s = 0 for all |s| ≠ k.
• TFIDF weights are achieved by first creating a (compressed) list of all s including frequencies of occurrence, and subsequently rescaling w_s accordingly.

All these kernels can be computed efficiently via the construction of suffix trees, as we will see in the following sections. However, before we do so, let us turn to trees. The latter are important for two reasons: first, since the suffix tree representation of a string will be used to compute kernels efficiently, and secondly, since we may wish to compute kernels on trees, which will be carried out by reducing trees to strings and then applying a string kernel.

3 Tree Kernels

A tree is defined as a connected directed graph with no cycles.
A node with no children is referred to as a leaf. A subtree rooted at node n is denoted by T_n, and t ⊑ T is used to indicate that t is a subtree of T. If a set of nodes in the tree, along with the corresponding edges, forms a tree, then we define it to be a subset tree. If every node n of the tree contains a label, denoted by label(n), then the tree is called a labeled tree. If only the leaf nodes contain labels then the tree is called a leaf-labeled tree. Kernels on trees can be defined by defining kernels on matching subset trees, as proposed by [3], or (more restrictively) by defining kernels on matching subtrees. In the latter case we have

k(T, T') = Σ_{t⊑T, t'⊑T'} w_t δ_{t,t'}.   (3)

Ordering Trees An ordered tree is one in which the child nodes of every node are ordered as per the ordering defined on the node labels. Unless there is a specific inherent order on the trees we are given (which is, e.g., the case for parse trees), the representation of trees is not unique. For instance, the two equivalent unlabeled trees of Figure 1 can be obtained from each other by reordering the nodes.

Figure 1: Two equivalent trees

To order trees we assume that a lexicographic order is associated with the labels, if they exist. Furthermore, we assume that the additional symbols '[', ']' satisfy '[' < ']', and that ']', '[' < label(n) for all labels. We will use these symbols to define tags for each node as follows:
• For an unlabeled leaf n define tag(n) := [].
• For a labeled leaf n define tag(n) := [label(n)].
• For an unlabeled node n with children n1, …, nc, sort the tags of the children in lexicographical order such that tag(ni) ≤ tag(nj) if i < j, and define tag(n) = [tag(n1)tag(n2)…tag(nc)].
• For a labeled node perform the same operations as above and set tag(n) = [label(n)tag(n1)tag(n2)…tag(nc)].

For instance, the root nodes of both trees depicted above would be encoded as [[][[][]]].
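As a sanity check, the recursive tag construction above can be sketched directly in Python. This naive version (names and representation are ours, not the paper's) sorts full child tags at every node, so it runs in quadratic rather than the log-linear time of Theorem 1, and it uses Python's default character ordering, which may order siblings differently from the example string in the text while still being invariant under leaf permutations:

```python
def tag(node):
    """Canonical tag of an ordered tree.

    A node is a pair (label_or_None, children); a leaf has an empty
    child list.  Sorting the child tags before concatenation makes the
    tag invariant under any permutation of the children.
    """
    label, children = node
    child_tags = sorted(tag(c) for c in children)
    return "[" + (label or "") + "".join(child_tags) + "]"

# The two equivalent unlabeled trees of Figure 1 receive the same tag.
leaf = (None, [])
t1 = (None, [leaf, (None, [leaf, leaf])])
t2 = (None, [(None, [leaf, leaf]), leaf])
assert tag(t1) == tag(t2)
```

A production version would compare tags via pointers, as in the proof of Theorem 1, instead of materializing the strings at every level.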
We now prove that the tag of the root node is, indeed, a unique identifier and that it can be constructed in log-linear time.

Theorem 1 Denote by T a binary tree with l nodes and let A be the maximum length of a label. Then the following properties hold for the tag of the root node:
1. tag(root) can be computed in (A + 2)(l log2 l) time and linear storage in l.
2. Substrings s of tag(root) starting with '[' and ending with a balanced ']' correspond to subtrees T' of T, where s is the tag on T'.
3. Arbitrary substrings s of tag(root) correspond to subset trees T' of T.
4. tag(root) is invariant under permutations of the leaves and allows the reconstruction of a unique element of the equivalence class (under permutation).

Proof We prove claim 1 by induction. The tag of a leaf can be constructed in constant time by storing [, ], and a pointer to the label of the leaf (if it exists), that is, in 3 operations. Next assume that we are at node n, with children n1, n2. Let T_n contain l_n nodes and T_{n1} and T_{n2} contain l1, l2 nodes respectively. By our induction assumption we can construct the tags for n1 and n2 in (A + 2)(l1 log2 l1) and (A + 2)(l2 log2 l2) time respectively. Comparing the tags of n1 and n2 costs at most (A + 2) min(l1, l2) operations, and the tag itself can be constructed in constant time and linear space by manipulating pointers. Without loss of generality we assume that l1 ≤ l2. Thus, the time required to construct tag(n) (normalized by A + 2) is

l1(log2 l1 + 1) + l2 log2 l2 = l1 log2(2 l1) + l2 log2 l2 ≤ l_n log2 l_n.   (4)

One way of visualizing our ordering is by imagining that we perform a DFS (depth first search) on the tree T and emit a '[', followed by the label of the node, when we visit a node for the first time, and a ']' when we leave a node for the last time. It is clear that a balanced substring s of tag(root) is emitted only when the corresponding DFS on T' is completed. This proves claim 2.
We can emit a substring of tag(root) only if we can perform a DFS on the corresponding set of nodes. This implies that these nodes constitute a tree and hence by definition are subset trees of T. This proves claim 3. Since leaf nodes do not have children, their tag is clearly invariant under permutation. For an internal node we perform lexicographic sorting on the tags of its children. This removes any dependence on permutations. This proves the invariance of tag(root) under permutations of the leaves. Concerning the reconstruction, we proceed as follows: each tag of a subtree starts with '[' and ends in a balanced ']', hence we can strip the first [] pair from the tag, take whatever is left outside brackets as the label of the root node, and repeat the procedure with the balanced [ … ] entries for the children of the root node. This will construct a tree with the same tag as tag(root), thus proving claim 4. ∎

An extension to trees with d nodes is straightforward (the cost increases to d log2 d of the original cost), yet the proof, in particular (4), becomes more technical without providing additional insight, hence we omit this generalization for brevity.

Corollary 2 Kernels on trees T, T' can be computed via string kernels, if we use tag(T), tag(T') as strings. If we require that only balanced [ … ] substrings have nonzero weight w_s, then we obtain the subtree matching kernel defined in (3).

This reduces the problem of tree kernels to string kernels, and all we need to show in the following is how the latter can be computed efficiently. For this purpose we need to introduce suffix trees.

4 Suffix Trees and Matching Statistics

Definition The suffix tree is a compacted trie that stores all suffixes of a given text string. We denote the suffix tree of the string x by S(x). Moreover, let nodes(S(x)) be the set of all nodes of S(x) and let root(S(x)) be the root of S(x).
For a node w, father(w) denotes its parent, T(w) denotes the subtree rooted at the node, lvs(w) denotes the number of leaves in the subtree, and path(w) := w is the path from the root to the node. That is, we use the path w from root to node as the label of the node w (see Figure 2 for the suffix tree of ababc). We denote by words(S(x)) the set of all strings w such that wu ∈ nodes(S(x)) for some (possibly empty) string u, which means that words(S(x)) is the set of all possible substrings of x. For every t ∈ words(S(x)) we define ceil(t) as the node w such that w = tu and u is the shortest (possibly empty) substring such that w ∈ nodes(S(x)). Similarly, for every t ∈ words(S(x)) we define floor(t) as the node w such that t = wu and u is the shortest (possibly empty) substring such that w ∈ nodes(S(x)).

Figure 2: Suffix tree of ababc

Given a string t and a suffix tree S(x), we can decide if t ∈ words(S(x)) in O(|t|) time by just walking down the corresponding edges of S(x). If the sentinel character $ is added to the string x, then it can be shown that for any t ∈ words(S(x)), lvs(ceil(t)) gives us the number of occurrences of t in x [5]. The idea works as follows: all suffixes of x starting with t have to pass through ceil(t), hence we simply have to count the occurrences of the sentinel character, which can be found only in the leaves. Note that a simple depth first search (DFS) of S(x) will enable us to calculate lvs(w) for each node in S(x) in O(|x|) time and space. Let aw be a node in S(x), and let v be the longest suffix of w such that v ∈ nodes(S(x)). An unlabeled edge aw → v is called a suffix link in S(x). A suffix link of the form aw → w is called atomic. It can be shown that all the suffix links in a suffix tree are atomic [5, Proposition 2.9]. We add suffix links to S(x) to allow us to perform efficient string matching: suppose we found that aw is a substring of x by parsing the suffix tree S(x). It is clear that w is also a substring of x.
If we want to locate the node corresponding to w, it would be wasteful to parse the tree again. Suffix links help us locate this node in constant time. The suffix tree building algorithms make use of this property of suffix links to perform the construction in linear time. The suffix tree construction algorithm of [13] constructs the suffix tree and all such suffix links in linear time.

Matching Statistics Given strings x, y with |x| = n and |y| = m, the matching statistics of x with respect to y are defined by v, c ∈ N^n, where v_i is the length of the longest substring of y matching a prefix of x[i:n], v̄_i := i + v_i − 1, c_i is a pointer to ceil(x[i:v̄_i]), and c̄_i is a pointer to floor(x[i:v̄_i]) in S(y). For an example see Table 1.

Table 1: Matching statistics of abba with respect to S(ababc).
  String:           a    b    b      a
  v_i:              2    1    2      1
  ceil(x[i:v̄_i]):   ab   b    babc$  ab

For a given y one can construct v, c corresponding to x in linear time. The key observation is that v_{i+1} ≥ v_i − 1, since if x[i:v̄_i] is a substring of y then certainly x[i+1:v̄_i] is also a substring of y. Besides this, the matching substring in y that we find must have x[i+1:v̄_i] as a prefix. The Matching Statistics algorithm [2] exploits this observation and uses it to cleverly walk down the suffix links of S(y) in order to compute the matching statistics in O(|x|) time. More specifically, the algorithm works by maintaining a pointer p_i = floor(x[i:v̄_i]). It then finds p_{i+1} = floor(x[i+1:v̄_i]) by first walking down the suffix link of p_i and then walking down the edges corresponding to the remaining portion of x[i+1:v̄_i] until it reaches floor(x[i+1:v̄_i]). Now v̄_{i+1} can be found easily by walking from p_{i+1} along the edges of S(y) that match the string x[i+1:n], until we can go no further. The value of v_1 is found by simply walking down S(y) to find the longest prefix of x which matches a substring of y.
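For reference, the v-vector of the matching statistics can also be computed naively by direct substring search. The Python sketch below (our own illustrative code, 0-based indices unlike the 1-based x[i:j] of the text) costs roughly O(|x|^2 |y|), whereas the suffix-link walk of [2] described above achieves O(|x|):

```python
def matching_statistics(x, y):
    """v[i] = length of the longest substring of y that matches a
    prefix of x[i:], i.e. the v-vector of the matching statistics."""
    v = []
    for i in range(len(x)):
        length = 0
        # Extend the match one character at a time while the candidate
        # prefix of x[i:] still occurs somewhere in y.
        while i + length < len(x) and x[i:i + length + 1] in y:
            length += 1
        v.append(length)
    return v

# Reproduces the v-row of Table 1.
assert matching_statistics("abba", "ababc") == [2, 1, 2, 1]
```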
Matching substrings Using v and c we can read off the number of matching substrings in x and y. The useful observation here is that the only substrings which occur in both x and y are those which are prefixes of the x[i:v̄_i]. The number of occurrences of a substring in y can be found via lvs(ceil(w)) (see Section 4). The two lemmas below formalize this.

Lemma 3 w is a substring of x iff there is an i such that w is a prefix of x[i:n]. The number of occurrences of w in x can be calculated by finding all such i.

Lemma 4 The set of matching substrings of x and y is the set of all prefixes of the x[i:v̄_i].

Proof Let w be a substring of both x and y. By the above lemma there is an i such that w is a prefix of x[i:n]. Since v_i is the length of the maximal prefix of x[i:n] which is a substring of y, it follows that v_i ≥ |w|. Hence w must be a prefix of x[i:v̄_i]. ∎

5 Weights and Kernels

From the previous sections we know how to determine the set of all longest prefixes x[i:v̄_i] of x[i:n] in y in linear time. The following theorem uses this information to compute kernels efficiently.

Theorem 5 Let x and y be strings and let c and v be the matching statistics of x with respect to y. Assume that

W(y, t) = Σ_{s ∈ prefix(v)} w_{us}, where u = floor(t) and t = uv,   (5)

can be computed in constant time for any t. Then k(x, y) can be computed in O(|x| + |y|) time as

k(x, y) = Σ_{i=1}^{|x|} val(x[i:v̄_i]) = Σ_{i=1}^{|x|} val(c̄_i) + lvs(ceil(x[i:v̄_i])) · W(y, x[i:v̄_i]),   (6)

where val(t) := lvs(ceil(t)) · W(y, t) + val(floor(t)) and val(root) := 0.

Proof We first show that (6) can indeed be computed in linear time. We know that for S(y) the number of leaves can be computed in linear time, and likewise c, v. By the assumption on W(y, t) and by exploiting the recursive nature of val(t), we can compute val(n) for all the nodes n of S(y) by a simple top-down procedure in O(|y|) time.
Also, due to the recursion, the second equality of (6) holds, and we may compute each term in constant time by a simple lookup for val(c̄_i) and computation of W(y, x[i:v̄_i]). Since we have |x| terms, the whole procedure takes O(|x|) time, which proves the O(|x| + |y|) time complexity. Now we prove that (6) really computes the kernel. We know from Lemma 4 that the sum in (2) can be decomposed into the sum over matches between y and each of the prefixes of x[i:v̄_i] (this takes care of all the substrings in x matching with y). This reduces the problem to showing that each term in the sum of (6) corresponds to the contribution of all prefixes of x[i:v̄_i]. Assume we descend down the path x[i:v̄_i] in S(y) (e.g., for the string bab with respect to the tree of Figure 2 this would correspond to (root, b, bab)); then each of the prefixes t along the path (e.g., (b, ba, bab) for the example tree) occurs exactly as many times as lvs(ceil(t)) does. In particular, prefixes ending on the same edge occur the same number of times. This allows us to bracket the sums efficiently, and W(y, t) simply is the sum along an edge, from floor(t) down to t. Unwrapping val(t) shows that this is simply the sum over the occurrences on the path of t, which proves our claim. ∎

So far, our claim hinges on the fact that W(y, t) can be computed in constant time, which is far from obvious at first glance. We now show that this is a reasonable assumption in all practical cases.

Length Dependent Weights If the weights w_s depend only on |s| we have w_s = w_{|s|}. Define W̄_j := Σ_{i=1}^{j} w_i and compute its values beforehand up to W̄_J, where J ≥ |x| for all x. Then it follows that

W(y, t) = Σ_{j=|floor(t)|+1}^{|t|} w_j = W̄_{|t|} − W̄_{|floor(t)|},   (7)

which can be computed in constant time.
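The precomputation behind Eq. (7) amounts to a single table of cumulative weights. A minimal Python sketch (function names are ours, not the paper's; the weight choice mirrors the exponential decay used later in the experiments):

```python
def cumulative_weights(w, J):
    """Wbar[j] = sum_{i=1}^{j} w(i), precomputed once up to some J >= |x|."""
    Wbar = [0.0]
    for j in range(1, J + 1):
        Wbar.append(Wbar[-1] + w(j))
    return Wbar

def edge_weight(Wbar, t_len, floor_len):
    """Eq. (7): W(y, t) = Wbar[|t|] - Wbar[|floor(t)|], the total weight
    of the prefixes of t lying on the edge from floor(t) down to t."""
    return Wbar[t_len] - Wbar[floor_len]

# Exponentially decaying weights w_i = 0.75**i.
Wbar = cumulative_weights(lambda i: 0.75 ** i, 10)
```

After the O(J) precomputation, each W(y, t) lookup is a constant-time subtraction, which is exactly what Theorem 5 requires.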
Examples of such weighting schemes are the kernels suggested by [15], where w_i = λ^i, by [7], where w_i = 1, and by [10], where w_i = δ_{i1}.

Generic Weights In the case of generic weights, we have several options: recall that one will often want to compute m^2 kernels k(x, x'), given m strings x ∈ X. Hence we could build the suffix trees for the x_i beforehand and annotate each of the nodes and characters on the edges explicitly (at super-linear cost per string), which means that later, for the dot products, we will only need to perform a table lookup of W(x, x'[i:v̄_i]). However, there is an even more efficient mechanism, which can even deal with dynamic weights, depending on the relative frequency of occurrence of the substrings in all of X. We can build a suffix tree Σ of all strings in X. Again, this can be done in time linear in the total length of all the strings (simply consider the concatenation of all strings). It can be shown that for all x and all i, x[i:v̄_i] will be a node in this tree. Leaf-counting allows us to compute these dynamic weights efficiently, since Σ contains all the substrings. For W(x, x'[i:v̄_i]) we make the simplifying assumption that w_s = φ(|s|) · φ(freq(s)), that is, w_s depends on length and frequency only. Now note that all the strings ending on the same edge in Σ will have the same weights assigned to them. Hence, we can rewrite (5) as

W(y, t) = Σ_{s∈prefix(t)} w_s − Σ_{s∈prefix(floor(t))} w_s = φ(freq(t)) Σ_{i=|floor(t)|+1}^{|t|} φ(i),   (8)

where u = floor(t), t = uv, and s ∈ prefix(v). By precomputing Σ_i φ(i) we can evaluate (8) in constant time. The benefit of (8) is twofold: we can compute the weights of all the nodes of Σ in time linear in the total length of the strings in X. Secondly, for arbitrary x we can compute W(y, t) in constant time, thus allowing us to compute k(x_i, x') in O(|x_i| + |x'|) time.

Linear Time Prediction Let X_s = {x_1, x_2, …, x_m} be the set of support vectors.
Recall that, for prediction in a Support Vector Machine, we need to compute f(x) = Σ_{i=1}^{m} α_i k(x_i, x), which implies that we need to combine the contributions due to matching substrings from each one of the support vectors. We first construct S(X_s) in linear time by using the algorithm of [1]. In S(X_s), we associate the weight α_i with each leaf associated with the support vector x_i. For a node v ∈ nodes(S(X_s)) we modify the definition of lvs(v) to be the sum of the weights associated with the subtree rooted at node v. A straightforward application of the matching statistics algorithm of [2] shows that we can find the matching statistics of x with respect to all strings in X_s in O(|x|) time. Now Theorem 5 can be applied unchanged to compute f(x). A detailed account and proof can be found in [14]. In summary, we can classify texts in linear time regardless of the size of the training set. This makes SVMs for large-scale text categorization practically feasible. Similar modifications can also be applied for training SMO-like algorithms on strings.

6 Experimental Results

For a proof of concept we tested our approach on a remote homology detection problem¹ [9], using Stafford Noble's SVM package² as the training algorithm. A length-weighted kernel was used, and we assigned weights w_s = λ^{|s|} for all substring matches of length greater than 3, regardless of triplet boundaries. To evaluate performance we computed the ROC50 scores.³

Figure 3: Total number of families for which an SVM classifier exceeds a ROC50 score threshold (our kernel with λ = 0.75 vs. the spectrum kernel).

Being a proof of concept, we did not try to tune the soft margin SVM parameters (the main point of the paper being the introduction of a novel means of evaluating string kernels efficiently rather than applications; a separate paper focusing on applications is in preparation).
Table 3 contains the ROC50 scores for the spectrum kernel with k = 3 [12] and our string kernel with λ = 0.75. We tested with λ ∈ {0.25, 0.5, 0.75, 0.9} and report the best results here. As can be seen, our kernel outperforms the spectrum kernel on nearly every family in the dataset. It should be noted that this is the first method to allow users to specify weights rather arbitrarily for all possible lengths of matching sequences and still be able to compute kernels in O(|x| + |x'|) time, plus to predict on new sequences in O(|x|) time, once the set of support vectors is established.⁴

7 Conclusion

We have shown that string kernels need not come at a super-linear cost in SVMs and that prediction can be carried out at cost linear only in the length of the argument, thus providing optimal run-time behaviour. Furthermore, the same algorithm can be applied to trees. The methodology pointed out in our paper has several immediate extensions: for instance, we may consider coarsening levels for trees by removing some of the leaves. For not-too-unbalanced trees (we assume that the tree shrinks at least by a constant factor at each coarsening), computation of the kernel over all coarsening levels can then be carried out at cost still linear in the overall size of the tree. The idea of coarsening can be extended to approximate string matching. If we remove characters, this amounts to the use of wildcards. Likewise, we can consider the strings generated by finite state machines and thereby compare the finite state machines themselves. This leads to kernels on automata and other dynamical systems. More details and extensions can be found in [14].

¹Details and data available at www.cse.ucsc.edu/research/compbio/discriminative.
²Available at www.cs.columbia.edu/compbio/svm.
³The ROC50 score [6, 12] is the area under the receiver operating characteristic curve (the plot of true positives as a function of false positives) up to the first 50 false positives.
A score of 1 indicates perfect separation of positives from negatives, whereas a score of 0 indicates that none of the top 50 sequences selected by the algorithm were positives.
⁴[12] obtain an O(k|x|) algorithm in the (somewhat more restrictive) case of w_s = δ_k(|s|).

Acknowledgments We would like to thank Patrick Haffner, Daniela Pucci de Farias, and Bob Williamson for comments and suggestions. This research was supported by a grant of the Australian Research Council. SVNV thanks Trivium India Software and Netscaler Inc. for their support.

References
[1] A. Amir, M. Farach, Z. Galil, R. Giancarlo, and K. Park. Dynamic dictionary matching. Journal of Computer and System Science, 49(2):208-222, October 1994.
[2] W. I. Chang and E. L. Lawler. Sublinear approximate string matching and biological applications. Algorithmica, 12(4/5):327-344, 1994.
[3] M. Collins and N. Duffy. Convolution kernels for natural language. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2001. MIT Press.
[4] R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. Biological Sequence Analysis: Probabilistic models of proteins and nucleic acids. Cambridge University Press, 1998.
[5] R. Giegerich and S. Kurtz. From Ukkonen to McCreight and Weiner: A unifying view of linear-time suffix tree construction. Algorithmica, 19(3):331-353, 1997.
[6] M. Gribskov and N. L. Robinson. Use of receiver operating characteristic (ROC) analysis to evaluate sequence matching. Computers and Chemistry, 20(1):25-33, 1996.
[7] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, Computer Science Department, UC Santa Cruz, 1999.
[8] R. Herbrich. Learning Kernel Classifiers: Theory and Algorithms. MIT Press, 2002.
[9] T. S. Jaakkola, M. Diekhans, and D. Haussler. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 7:95-114, 2000.
[10] T. Joachims.
Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169-184, Cambridge, MA, 1999. MIT Press.
[11] E. Leopold and J. Kindermann. Text categorization with support vector machines: How to represent text in input space? Machine Learning, 46(3):423-444, March 2002.
[12] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Proceedings of the Pacific Symposium on Biocomputing, pages 564-575, 2002.
[13] E. Ukkonen. On-line construction of suffix trees. Algorithmica, 14(3):249-260, 1995.
[14] S. V. N. Vishwanathan. Kernel Methods: Fast Algorithms and Real Life Applications. PhD thesis, Indian Institute of Science, Bangalore, India, November 2002.
[15] C. Watkins. Dynamic alignment kernels. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 39-50, Cambridge, MA, 2000. MIT Press.
|
2002
|
104
|
2,108
|
Expected and Unexpected Uncertainty: ACh and NE in the Neocortex Angela Yu Peter Dayan Gatsby Computational Neuroscience Unit 17 Queen Square, London WC1N 3AR, United Kingdom. feraina@gatsby.ucl.ac.uk dayan@gatsby.ucl.ac.uk Abstract Inference and adaptation in noisy and changing, rich sensory environments are rife with a variety of specific sorts of variability. Experimental and theoretical studies suggest that these different forms of variability play different behavioral, neural and computational roles, and may be reported by different (notably neuromodulatory) systems. Here, we refine our previous theory of acetylcholine’s role in cortical inference in the (oxymoronic) terms of expected uncertainty, and advocate a theory for norepinephrine in terms of unexpected uncertainty. We suggest that norepinephrine reports the radical divergence of bottom-up inputs from prevailing top-down interpretations, to influence inference and plasticity. We illustrate this proposal using an adaptive factor analysis model. 1 Introduction Animals negotiating rich environments are faced with a set of hugely complex inference and learning problems, involving many forms of variability. They can be unsure which context presently pertains, cues can be systematically more or less reliable, and relationships amongst cues can change smoothly or abruptly. Computationally, such different forms of variability need to be represented, manipulated, and wielded in different ways. There is ample behavioral evidence that can be interpreted as suggesting that animals do make and respect these distinctions,5 and there is even some anatomical, physiological and pharmacological evidence as to which neural systems are engaged.29 Perhaps best delineated is the involvement of neocortical acetylcholine (ACh) in uncertainty. 
Following seminal earlier work [11,14], we suggested [6,35] that ACh reports on the uncertainty associated with a top-down model, and thus controls the integration of bottom-up and top-down information during inference. A corollary is that ACh should also control the way that bottom-up information influences the learning of top-down models. Intuitively, this cholinergic signal reports on expected uncertainty, such that ACh levels are high when top-down information is not expected to support good predictions about bottom-up data and should be modified according to the incoming data. We formally demonstrated the inference aspects of this idea [6,35] using a hidden Markov model (HMM), in which top-down uncertainty derives from slow contextual changes. In extending this quantitative model to learning, we found, surprisingly, that it violated our qualitative theory of ACh. That is, in the HMM model, greater uncertainty in the top-down model (ie a lower posterior responsibility for the predominant context), reported by higher ACh levels, leads to comparatively slower learning about that context. By contrast, we had expected that higher ACh should lead to faster learning, since it would indicate that the top-down model is potentially inadequate. In resolving this conflict, we realized that, at least in this particular HMM framework, we had incorrectly fused different sorts of uncertainty. As a further consequence, by thinking more generally about contextual change, we also realized the formal need for a signal reporting on unexpected uncertainty, that is, on strong violation of top-down predictions that are expected to be correct. There is suggestive empirical evidence that one of many roles for neocortical norepinephrine (NE) is reporting this [29]; it is also consonant with various existing theories associated with NE. In sum, we suggest that expected and unexpected uncertainty play complementary but distinct roles in representational inference and learning.
Both forms of uncertainty are postulated to decrease the influence of top-down information on representational inference and to increase the rate of learning. However, unexpected uncertainty rises whenever there is a global change in the world, such as a context change, while expected uncertainty is a more subtle quantity dependent on internal representations of properties of the world. Here, we start by outlining some of the evidence for the individual and joint roles of ACh and NE in uncertainty. In Section 3, we describe a simple, adaptive factor analysis model that clarifies the uncertainty notions. Differential effects induced by disrupting ACh and NE are discussed in Section 4, accompanied by a comparison to impairments found in animals. 2 ACh and NE ACh and NE are delivered to the cortex from a small number of subcortical nuclei: NE originates solely in the locus coeruleus, while the primary sources of ACh are nuclei in the basal forebrain (nucleus basalis magnocellularis, mainly targeting the neocortex, and the medial septum, mainly targeting the hippocampus). Cortical innervation by these modulators is extensive, targeting all cortical regions and layers [9,30]. As is typical for neuromodulators, physiological studies indicate that the effects of direct application of ACh or NE are confusingly diverse.
Within a small cortical area, iontophoresis or perfusion of ACh or NE (or their agonists) may cause synaptic facilitation or suppression, depending on the cell and on whether the firing is spontaneous or stimulus-evoked; it may also induce direct hyperpolarization or depolarization [9,10,17]. Direct application of either neuromodulator or its agonists, paired with sensory stimulation, results in a general enhancement of stimulus-evoked responses, as well as an increased propensity for experience-dependent reorganization of cortical maps (in contrast, depletion of either substance attenuates cortical plasticity) [9]. More interestingly, ACh and NE both seem to selectively suppress intracortical and feedback synaptic transmission while enhancing thalamocortical processing [8,12,13,15,17,18,20]. Based on these roughly similar anatomical and physiological properties, the cholinergic and noradrenergic systems have been attributed correspondingly similar general computational roles, such as modulating the signal-to-noise ratio in sensory processing [9,10]. However, the effects of ACh and NE depletion in animal behavioral studies, as well as microdialysis of the neuromodulators under different conditions, point to more specific and distinct computational roles for ACh and NE. In our previous work on ACh [6,35], we suggested that it reports on expected uncertainty, ie uncertainty associated with estimated parameters in an internal model of the external world. This is consistent with results from animal conditioning experiments, in which animals learn faster about stimuli with variable predictive consequences [24]. A series of lesion studies indicates that cortical ACh innervation is essential for this sort of faster learning [14]. In contrast to ACh, a large body of experimental data associates NE with the specific ability to learn new underlying relationships in the world, especially those contradicting existent knowledge.
Locus coeruleus (LC) neurons fire phasically and robustly to novel objects encountered during free exploration [34], novel sensory stimuli [25,28], unpredicted changes in stimulus properties such as presentation time [2], the introduction of an association of a stimulus with reinforcement [19,28,32], and the extinction or reversal of that association [19,28]. Moreover, this activation of NE neurons habituates rapidly when there is no predictive value or contingent response associated with the stimuli, and also disappears when conditioning is expressed at a behavioral level [28]. There are few sophisticated behavioral studies into the interactions between ACh and NE. However, it is known that NE and ACh both rise when contingencies in an operant conditioning task are changed, but while the NE level rapidly habituates, the ACh level is elevated in a more sustained fashion [3,28]. In a task designed to tax sustained attention, lesions of the basal forebrain cholinergic neurons induced persistent impairments [22], while deafferentation of cortical adrenergic inputs did not result in significant impairment compared to controls [21]. One of the best worked-out computational theories of the drive and function of NE is that of Aston-Jones, Cohen and their colleagues [1,33]. They studied NE in the context of vigilance and attention in well-learned tasks, showing how NE neurons are driven by selective, task-relevant stimuli, and how, under the influence of increased electrotonic coupling in the locus coeruleus, a transition from a high-tonic, low-phasic activity mode to a low-tonic, high-phasic activity mode is associated with improved behavioral performance, through NE’s suggested effect of increasing the signal-to-noise ratio of target cortical cells. This is a very impressive theory, with neural and computational support. However, its focus on well-learned tasks means that other drives of NE activity (particularly novelty) and effects (particularly plasticity) are downplayed, and a link to ACh is only a secondary concern.
We focus on these latter aspects, proposing that NE reports unexpected uncertainty, ie uncertainty induced by a mismatch between prediction and observation, such as when there is a dramatic change in the external environment. We do not claim that this is the only role of NE, but we do see it as an important complement to other suggestions.

3 Inference and Learning in Adaptive Factor Analysis

Our previous model of the role of ACh in cortical inference involved a generative scheme with a discrete contextual variable c_t, evolving over time with slow Markov dynamics, a discrete representational variable y_t that was stochastically determined by c_t, and a noisy observed variable x_t ~ N(y_t, σ²) (normal distribution). The inferential task was to determine P(y_t | x_1, ..., x_t); the HMM structure makes this interesting because top-down (c_t) and bottom-up (x_t) information have to be integrated. Top-down information can be uncertain, in which case mainly bottom-up information should be used to infer y_t. We suggested that ACh reports the uncertainty in the top-down context, namely approximately 1 − P(c_t = c̃_t | x_1, ..., x_t), where c̃_t is the most likely value of the context. ACh thereby reports expected uncertainty, as in the qualitative picture above, and appropriately controls cortical inference. However, if one also considers learning, for instance if the mapping from contexts to representations is unknown, then the less certain the animal is that c̃_t is the true contextual state, the less learning is accorded to c̃_t. This is exactly the opposite of what we should expect according to our empirically supported arguments above. In fact, this way of viewing ACh is also not consistent with a more systematic reading [5,16] of Holland & Gallagher’s cholinergic results [14], which imply that ACh is better seen as a report of uncertainty in parameters rather than uncertainty in states. In order to model this more fitting picture of ACh, we need an explicit model of parameter uncertainty. We constrain the problem to a single, implicit context. It is easiest (and perhaps more realistic) to develop the new picture in a continuous space, in which the parameter governing the relationship between the context and y is μ (scalar for convenience), which is imperfectly known (hence the parameter uncertainty, reported by ACh), and indeed can change. Again, μ stochastically specifies y through a normal distribution. Specifying how μ can change over time requires making an assumption about the nature of the context. In particular, novelty plays a critical role in model evolution.
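The integration step at the heart of this account, a top-down Gaussian prior weighed against a bottom-up Gaussian likelihood, is just precision-weighted averaging. A minimal sketch (illustrative numbers, not taken from the paper):

```python
def combine(mu_td, var_td, x_bu, var_bu):
    """Posterior mean and variance of a Gaussian top-down prior
    multiplied by a Gaussian bottom-up likelihood."""
    prec = 1.0 / var_td + 1.0 / var_bu
    mean = (mu_td / var_td + x_bu / var_bu) / prec
    return mean, 1.0 / prec

# Confident top-down model (low uncertainty): inference stays near the prior mean
m_low, _ = combine(0.0, 0.1, 4.0, 1.0)    # -> 4/11, close to 0
# Uncertain top-down model (high uncertainty): inference follows the input
m_high, _ = combine(0.0, 10.0, 4.0, 1.0)  # -> 40/11, close to 4
```

The larger the top-down variance (the quantity the theory assigns to ACh), the more the posterior is dominated by the bottom-up observation, which is exactly the qualitative behavior the model formalizes below.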
[Figure 1: Adaptive factor analysis model. (a) The 2-layer adaptive factor analysis model, as specified by Eqs. 1 & 2. (b) A sample sequence of generated data points; four major shifts in μ occurred (including the initial μ_0), marked by their projections into x space. (c) The same sequence viewed in y space. (d) Scatter plot of |y_t − μ_t| against |y_t − x̄_t| for three values of the top-down uncertainty Σ: larger Σ corresponds to greater reliance on the observation x_t rather than on the estimate of μ_t for inferring y_t, while an intermediate value of Σ exactly balances top-down against bottom-up uncertainty in the inference of y_t.]

In general, we might expect small amounts of novelty, as models continually readjust, and we can allow for this by modeling continual small changes in μ. However, in order to allow for the possibility of macroscopic changes implied by substantial novelty (as reported by NE), which are of evident importance in many experiments, we must add a specific component to the model. The interaction between microscopic and macroscopic novelty is essentially the interaction between ACh and NE. In all, assume that

y_t ~ N(μ_t, τ_y²),   x_t ~ N(g y_t, τ_x² I),   μ_t = μ_{t−1} + η_t + s_t γ_t   (1)

η_t ~ N(0, τ_η²),   γ_t ~ N(0, τ_γ²),   P(s_t = 1) = π,   P(s_t = 0) = 1 − π   (2)

with the initial value μ_0 ~ N(0, τ_0²) (see Figure 1). We will see later that the binary variable s_t is the key to the model of NE; it comes from the assumption that there can occasionally (with probability π) be dramatic changes in a model that force its radical revision. The projection vector g is another parameter; we assume it is known and fixed. Figure 1(b) & (c) show a sample sequence from a particular setting of the model: the output can be quite noisy, although there are clear underlying regularities in y.

At time t, consider the case that we can make the approximation μ_t ~ N(μ̊_t, Σ̊_t), where μ̊_t is the estimate of μ_t and Σ̊_t is its variance (uncertainty), which is reported by ACh. Here, the open circles indicate that this estimate is made before x_t is observed. We first consider how the ACh term influences inference about y_t; we then go on to study learning. For inference, it can easily be shown that the posterior over y_t is Gaussian with mean

ŷ_t = [ (Σ̊_t + τ_y²)⁻¹ μ̊_t + τ_x⁻² gᵀx_t ] / [ (Σ̊_t + τ_y²)⁻¹ + τ_x⁻² gᵀg ]   (3)

whence the effect of ACh is exactly as in our qualitative picture. The more uncertainty (ie the larger Σ̊_t), the smaller the role of the top-down expectation μ̊_t in determining ŷ_t. Examples of just such effects can be found in Figure 1(d).

For learning, start with the distribution of μ_t given x_t and assume s_t = 0. In this case, writing x̄_t = gᵀx_t / gᵀg for the projection of the observation into y space, the posterior over μ_t is the product of two Gaussians, one from the prior N(μ̊_t, Σ̊_t) and one from the likelihood N(x̄_t; μ_t, τ_y² + τ_x²/gᵀg), with the obvious semantics for the product of two Gaussian distributions. This is almost exactly the standard form of a Kalman filter update for μ, and leads to standard results, such as the variance of the estimate going initially like 1/t, but ultimately reaching an asymptote which balances the rate of change from τ_η² against the rate of new information from the x_t. Importantly, in this simple model, the uncertainty in μ does not depend on the prediction errors x̄_t − μ̊_t, but rather changes as a function only of time. However, if one takes into account the possibility that s_t = 1, then the posterior distribution for μ_t is the two-component mixture

p(μ_t | x_t) = P(s_t = 0 | x_t) p(μ_t | x_t, s_t = 0) + P(s_t = 1 | x_t) p(μ_t | x_t, s_t = 1)   (4)

As t increases, the number of mixture components in the posterior distribution increases exponentially as 2^t, since each setting of the length-t binary string s_1 s_2 ... s_t is, barring probability-zero accidents, associated with a different component in the mixture. Thus, just as for switching state-space models [7], exact inference is impractical. One possibility would be to use variational approximations [7,23]. From the neural perspective of the involvement of neuromodulators, we propose an approximate learning algorithm in which signals reporting uncertainty, corresponding to our conceptual roles for ACh and NE, control the interactions between the (approximate) posterior at time t − 1, based on x_1, ..., x_{t−1}, and the bottom-up information relayed by the new observation, x_t.

To control the exponential expansion in the hidden space, we approximate the posterior p(μ_t | x_1, ..., x_t) by a single Gaussian N(μ̂_t, Σ_t), where μ̂_t is our best estimate of μ_t after observing x_t, and Σ_t, corresponding to the ACh level, is the uncertainty in that estimate; under s_t = 0 the pre-observation quantities are then μ̊_t = μ̂_{t−1} and Σ̊_t = Σ_{t−1} + τ_η². In general, we might consider the NE level as reporting the posterior responsibility of the s_t = 1 component of the equivalent mixture of equation 4. Even more straightforwardly, we can measure a Z-score, namely the prediction error scaled by the uncertainty in our estimates: z_t = |x̄_t − μ̊_t| / σ_t, where σ_t² = Σ̊_t + τ_y² + τ_x²/gᵀg, assuming that s_t = 0. Whenever z_t exceeds a threshold value z_0, ie x_t is unlikely to have come from an unmodified version of the current component, we assume ŝ_t = 1; otherwise, ŝ_t = 0. Now the learning problem reduces to a modified version of the Kalman filter:
ρ_t² = Σ_{t−1} + τ_η² + T_t   (prediction variance about μ_t)   (5)

K_t = ρ_t² / (ρ_t² + τ_y² + τ_x²/gᵀg)   (Kalman gain)   (6)

Σ_t = (1 − K_t) ρ_t²   (corrected variance)   (7)

μ̂_t = μ̂_{t−1} + K_t (x̄_t − μ̂_{t−1})   (estimated mean)   (8)

The difference from the conventional Kalman filter is the additional component T_t of the transition noise variance, which depends on ŝ_t: T_t = 0 if ŝ_t = 0, and T_t = τ_γ² if ŝ_t = 1. Closer examination indicates that the ACh (Σ_t) and NE (z_t) signals have the desired semantics. In the learning algorithm, large uncertainty about the mean estimate, Σ_t, results in a large Kalman gain, K_t, which causes a large shift in μ̂. Large Σ_t also weakens the influence of top-down information in inference, as in equation 3. High NE levels also lead to faster learning: a large z_t means ŝ_t = 1, which adds τ_γ² to the prediction variance (rather than 0, had ŝ_t been 0), ultimately resulting in a large Kalman gain and thus fast shifting of μ̂. High NE levels also enhance the dominance of bottom-up information in inference via their interaction with ACh: large z_t promotes large Σ_t. Note that this system predicts interesting reciprocal relationships between ACh and NE: higher ACh leads to smaller normalized prediction errors and therefore less active NE signalling, whereas greater NE would generally increase estimator uncertainty and thus the ACh level.

Figure 2(a) shows an example sequence μ_0, μ_1, ... generated from the model (same parameters as in Figure 1), and the means estimated by our approximate learning algorithm.

[Figure 2: Approximate learning algorithm. (a) Observations projected into y space, the actual μ_t, and the estimated means: the general pattern of μ_t is captured, though details may differ. (b) Traces of the ACh and NE signals: the ACh level rises whenever ŝ_t = 1 is detected (the NE level exceeds the threshold z_0) and then smoothly falls; the NE level is a constant monitor of prediction error. (c) Mean summed squared error of the estimate of μ as a function of z_0, with standard errors of the means over trials; model parameters were the same as in Figure 1.]

The learning algorithm is clearly able to adjust to major changes in μ, although more subtle changes in μ can escape detection, such as the third large shift in μ. Figure 2(b) shows that higher ACh (Σ_t) and NE (z_t) levels both correspond to fast learning, ie fast shifting of μ̂. However, whereas NE is a constant monitor of prediction errors and fluctuates accordingly with every data point, ACh falls smoothly and predictably, and depends on the observations only when global changes in the environment have been detected. Figure 2(c) shows a ladle-shaped dependence of the estimation error, the mean squared difference between μ̂_t and μ_t, on the threshold value z_0; for the particular setting of model parameters used here, learning is optimal for an intermediate z_0.

4 Differential Effects of Disrupting ACh and NE Signalling

The different roles of the NE (z_t) and ACh (Σ_t) signals can be teased apart by disrupting each and observing the subsequent effects on learning in our model. We will examine several different manipulations of z_t and Σ_t that disrupt normal learning, and relate the results to impairments observed under experimental manipulation of ACh or NE levels in animals. Of course, the complete experimental circumstances are far more complicated; we consider the general nature of the effects.

First, we simulate depletion of cortical NE by clamping the NE signal to 0. An example is shown in Figure 3(a). By ruling out the possibility of ŝ_t = 1, the system is unable to cope with abrupt, global changes in the world, ie when μ shifts. The mean error over trials (same setting as in Figure 2(c)) without NE is more than an order of magnitude larger than that of both full approximate learning and exact learning. This is consistent with the large errors of similar magnitude in Figure 2(c) for very large z_0, which effectively blocks the NE system from reporting global changes. However, as long as the underlying parameters remain the same, ie μ does not change greatly, the inference process functions normally, as we can see in the early steps in Figure 3(a). These results are consistent with experimental observations: NE-lesioned animals are impaired in learning changes in reinforcement contingencies [26,28], but have little difficulty with previously learned discrimination tasks [21].

We can also simulate depletion of cortical ACh by clamping Σ_t to a small constant value. Figure 3(b) shows that severe damage is done to the learning algorithm, but the inference symptoms are distinct from those of NE depletion. A permanently small Σ_t corresponds to over-confidence in the estimate of μ, thus making adaptation of that estimate slow, similar to NE depletion. However, because the NE system is still intact, the system is able to detect when x_t dramatically differs from the prediction (which is often, since μ̂ is slow to adapt and leaves little room for variance), and thus to base inference of y_t directly on the bottom-up information x_t. Thus, inference is less impaired than learning, which has also been observed in ACh-lesioned animals [31].

[Figure 3: Disrupting NE and ACh signals. (a) NE signal set to 0. (b) ACh signal set to a small constant. Learning of μ is poor under both manipulations, but inference under ACh depletion is less impaired.]

Moreover, the system exhibits a peculiar hesitancy in inference, ie it constantly switches back and forth between relying on the top-down estimate of y_t, based on μ̂, and the bottom-up estimate, based on x̄_t. This tendency is particularly severe when the new μ is similar to the previous one, which can be thought of as a form of interference. Interestingly, hippocampal cholinergic deafferentation in animals also brings about a stronger susceptibility to interference compared with controls [10].

Saturation of ACh and NE is also easy to model, by setting Σ_t and z_t very high at all times. The effects of these two manipulations are similar: both cause the estimation of μ and the inference of y to depend strongly on the observation (data not shown). The performance decrements in the estimation of μ and in inference about y are functions of the output noise in our model, and do not worsen when there are global changes in contingencies. Unfortunately, directly relevant experimental data are scarce. Administration of cholinergic agonists in the cortex has failed to induce impairments in tasks with changing contingencies, consistent with our predictions. However, to our knowledge, cholinergic and noradrenergic agonists have not yet been administered in combination with systematic manipulation of variability in the predictive consequences of stimuli, and so the validity of our predictions remains to be tested.

5 Discussion

We have suggested that ACh and NE report expected and unexpected uncertainty in representational learning and inference. As such, high levels of ACh and NE should both correspond to faster learning about the environment and to enhancement of bottom-up processing in inference. However, whereas NE reports on dramatic changes, ACh has the subtler role of reporting on uncertainties in internal estimates. We formalized these ideas in an adaptive factor analysis model. The model is adaptive in that the mean of the hidden variable is allowed to alter greatly from time to time, capturing the idea of a generally stable context which occasionally undergoes large changes, leading to substantial novelty in inputs. As exact learning is intractable, we proposed an approximate learning algorithm in which the roles for ACh and NE are clear, and demonstrated that it performs learning and inference competently. Moreover, by disrupting one or both of the ACh and NE signalling systems, we showed that the two systems have interacting but distinct patterns of malfunctioning that qualitatively resemble experimental results in animal studies.
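To make the approximate learner and the lesion manipulations concrete, here is a minimal sketch. This is not the authors' code: all parameter values are hypothetical, the observation is collapsed to its scalar projection into y space, and the jump is modeled additively, so it illustrates the style of model rather than reproducing the paper's figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generative model: slow drift in mu with occasional dramatic jumps
# (hypothetical parameter values)
T, pi_jump = 200, 0.02
tau_eta, tau_gamma, tau_y, tau_x = 0.1, 5.0, 1.0, 0.5

mu = np.zeros(T)
for t in range(1, T):
    jump = rng.random() < pi_jump            # s_t = 1: occasional dramatic change
    mu[t] = mu[t - 1] + rng.normal(0.0, tau_gamma if jump else tau_eta)
y = mu + rng.normal(0.0, tau_y, T)           # representational variable
x_bar = y + rng.normal(0.0, tau_x, T)        # observation projected into y space

def run_filter(obs, z0=3.0, ne_lesion=False, ach_clamp=None):
    """Modified Kalman filter: Sigma plays the role of ACh, z of NE."""
    obs_var = tau_y ** 2 + tau_x ** 2        # variance of the observation about mu
    mu_hat, Sigma = 0.0, 10.0
    mus, achs, nes = [], [], []
    for xt in obs:
        z = abs(xt - mu_hat) / np.sqrt(Sigma + tau_eta ** 2 + obs_var)  # NE signal
        s_hat = (z > z0) and not ne_lesion   # lesioned NE never reports a change
        rho2 = Sigma + tau_eta ** 2 + (tau_gamma ** 2 if s_hat else 0.0)  # prediction variance
        K = rho2 / (rho2 + obs_var)          # Kalman gain
        Sigma = (1.0 - K) * rho2             # corrected variance
        mu_hat = mu_hat + K * (xt - mu_hat)  # estimated mean
        if ach_clamp is not None:            # ACh depletion: clamp the uncertainty
            Sigma = ach_clamp
        mus.append(mu_hat); achs.append(Sigma); nes.append(z)
    return np.array(mus), np.array(achs), np.array(nes)

mu_est, ach, ne = run_filter(x_bar)
err_full = np.mean((mu_est - mu) ** 2)
err_no_ne = np.mean((run_filter(x_bar, ne_lesion=True)[0] - mu) ** 2)
```

In this sketch, clamping the NE signal typically inflates the tracking error after large shifts in μ, while clamping Sigma (ACh) slows adaptation even though the NE signal still fires, mirroring the qualitative dissociation described above.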
There is no single collection of definitive experimental studies, and teasing apart the effects of NE and ACh is tricky, since they appear to share many properties. Our model helps to understand why, and should also help with the design of experiments to clarify the relationship. Of course, the adaptive factor analysis model is overly simple in many ways. In particular, it only considers one particular context, and so refers all the uncertainty to the parameters of that context. This is exactly the complement of our previous model [6,35], which referred all the uncertainty to the choice of context rather than the parameters within each context. The main conceptual difference is that the idea that ACh reports on the latter form of contextual uncertainty sits ill with the data on how uncertainty boosts learning; this fits better within the present model. Given multiple contexts, which could formally be handled within the framework of a mixture model, the tricky issue is to decide whether the parameters of the current context have changed, or whether a new (or pre-existing) context has taken over. Exploring this is important work for the future. More generally, a thoroughly hierarchical and non-linear model is clearly required, at a minimum, as a way of addressing some of the complexities of cortical inference. Acknowledgement We are very grateful to Zoubin Ghahramani and Maneesh Sahani for helpful discussions. Funding was from the Gatsby Charitable Foundation and the NSF. References [1] Aston-Jones, G, Rajkowski, J, & Cohen, J (1999) Biol Psychiatry 46:1309-1320. [2] Carli, M, Robbins, TW, Evenden, JL, & Everitt, BJ (1983) Behav Brain Res 9:361-80. [3] Dalley, JW et al. (2001) J Neurosci 21:4908-4914. [4] Daw, ND, Kakade, S, & Dayan, P (2001) Neural Networks 15:603-616. [5] Dayan, P, Kakade, S, & Montague, PR (2000) In NIPS 2000:451-457. [6] Dayan, P & Yu, A (2002) In NIPS 2002. [7] Ghahramani, Z & Hinton, G (2000) Neural Computation 12:831-64.
[8] Gil, Z, Conners, BW, & Amitai, Y (1997) Neuron 19:679-86. [9] Gu, Q (2002) Neuroscience 111:815-835. [10] Hasselmo, ME (1995) Behavioural Brain Research 67:1-27. [11] Hasselmo, ME, Wyble, BP & Wallenstein, GV (1996) Hippocampus 6:693-708. [12] Hasselmo, ME & Cekic, M (1996) Behavioural Brain Research 79:153-161. [13] Hasselmo, ME et al (1997) J Neurophysiology 78:393-408. [14] Holland, PC & Gallagher, M (1999) Trends in Cognitive Sciences 3:65-73. [15] Hsieh, CY, Cruikshank, SJ, & Metherate, R (2000) Brain Research 880:51064. [16] Kakade, S & Dayan, P (2002) Psychological Review 109:533-544. [17] Kimura, F, Fukuada, M, & Tsumoto, T (1999) Eur J Neurosci 11:3597-3609. [18] Kobayashi, M et al. (1999) European Journal of Neuroscience 12:264-272. [19] Mason, ST & Iversen, SD (1978) Brain Res 150:135-48. [20] McCormick, DA (1989) Trends Neurosci 12:215-221. [21] McGaughy, J, Sandstrom, M, et al (1997) Behav Neurosci 111:646-52. [22] McGaughy, J & Sarter, M (1998) Behav Neurosci 112:1519-25. [23] Minka, TP (2001) A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT. [24] Pearce, JM & Hall, G (1980) Psychological Review 87:532-552. [25] Rajkowski, J, Kubiak, P, & Aston-Jones, G (1994) Brain Res Bull 35:607-16. [26] Robbins, TW (1984) Psychological Medicine 14:13-21. [27] Robbins, TW, Everitt, BJ, & Cole, BJ (1985) Physiological Psychology 13:127-150. [28] Sara, SJ, Vankov, A, & Herve, A (1994) Brain Res Bull 35:457-65. [29] Sara, SJ (1998) Comptes Rendus de l’Academie des Sciences Serie III 321:193-198. [30] Sarter, M & Bruno, JP (1997) Brain Research Reviews 23:28-46. [31] Sarter, M, Holley, LA, & Matell, M (2000) In SFN 2000 abstracts. [32] Sullivan, RM (2001) Integrative Physiological and Behavioral Science 36:293-307. [33] Usher, M, et al. (1999) Science 283:549-554. [34] Vankov, A, Herve-Minvielle, A, & Sara, SJ (1995) Eur J Neurosci 109:903-911. [35] Yu, A & Dayan, P (2002) Neural Networks 15:719-730
|
2002
|
105
|
2,109
|
Feature Selection and Classification on Matrix Data: From Large Margins To Small Covering Numbers Sepp Hochreiter and Klaus Obermayer Department of Electrical Engineering and Computer Science Technische Universität Berlin 10587 Berlin, Germany {hochreit,oby}@cs.tu-berlin.de Abstract We investigate the problem of learning a classification task for datasets which are described by matrices. Rows and columns of these matrices correspond to objects, where row and column objects may belong to different sets, and the entries in the matrix express the relationships between them. We interpret the matrix elements as being produced by an unknown kernel which operates on object pairs, and we show that - under mild assumptions - these kernels correspond to dot products in some (unknown) feature space. Minimizing a bound on the generalization error of a linear classifier, obtained using covering numbers, we derive an objective function for model selection according to the principle of structural risk minimization. The new objective function has the advantage that it allows the analysis of matrices which are not positive definite, and not even symmetric or square. We then consider the case that row objects are interpreted as features. We suggest an additional constraint which imposes sparseness on the row objects, and show that the method can then be used for feature selection. Finally, we apply this method to data obtained from DNA microarrays, where “column” objects correspond to samples, “row” objects correspond to genes, and matrix elements correspond to expression levels. Benchmarks are conducted using standard one-gene classification, as well as support vector machines and K-nearest neighbors after standard feature selection. Our new method extracts a sparse set of genes and provides superior classification results.
1 Introduction Many properties of sets of objects can be described by matrices, whose rows and columns correspond to objects and whose elements describe the relationship between them. One typical case are so-called pairwise data, where rows as well as columns of the matrix represent the objects of the dataset (Fig. 1a) and where the entries of the matrix denote similarity values which express the relationships between objects.

[Figure 1: Two typical examples of matrix data (see text). (a) Pairwise data. Row (A-L) and column (A-L) objects coincide. (b) Feature vectors. Column objects (A-G) differ from row objects (α-λ); the latter are interpreted as features.]

Another typical case occurs if objects are described by a set of features (Fig. 1b).
In this case, the column objects are the objects to be characterized, the row objects correspond to their features, and the matrix elements denote the strength with which a feature is expressed in a particular object. In the following we consider the task of learning a classification problem on matrix data. We consider the case that class labels are assigned to the column objects of the training set. Given the matrix and the class labels, we then want to construct a classifier with good generalization properties. From all the possible choices we select classifiers from the support vector machine (SVM) family [1, 2], and we use the principle of structural risk minimization [15] for model selection - because of its recent success [11] and its theoretical properties [15]. Previous work on large margin classifiers for datasets, where objects are described by feature vectors and where SVMs operate on the column vectors of the matrix, is abundant. However, there is one serious problem which arises when the number of features becomes large and comparable to the number of objects: without feature selection, SVMs are prone to overfitting, despite the complexity regularization which is implicit in the learning method [3]. Rather than being sparse in the number of support vectors, the classifier should be sparse in the number of features used for classification. This relates to the result [15] that the number of features provides an upper bound on the number of “essential” support vectors. Previous work on large margin classifiers for datasets, where objects are described by their mutual similarities, was centered around the idea that the matrix of similarities can be interpreted as a Gram matrix (see e.g. Hochreiter & Obermayer [7]).
Work along this line, however, was so far restricted to the cases (i) that the Gram matrix is positive definite (although methods have been suggested to modify indefinite Gram matrices in order to restore positive definiteness [10]) and (ii) that row and column objects are from the same set (pairwise data) [7]. In this contribution we extend the Gram matrix approach to matrix data where row and column objects belong to different sets. Since we can no longer expect that the matrices are positive definite (or even square), a new objective function must be derived. This is done in the next section, where an algorithm for the construction of linear classifiers is derived using the principle of structural risk minimization. Section 3 is concerned with the question under what conditions matrix elements can indeed be interpreted as vector products in some feature space. The method is specialized to pairwise data in Section 4. A sparseness constraint for feature selection is introduced in Section 5. Section 6, finally, contains an evaluation of the new method for DNA microarray data as well as benchmark results with standard classifiers which are based on standard feature selection procedures.

2 Large Margin Classifiers for Matrix Data

In the following we consider two sets X and Z of objects, which are described by feature vectors x and z. Based on the feature vectors x we construct a linear classifier defined through the classification function

f(x) = ⟨w, x⟩ + b,   (1)

where ⟨·,·⟩ denotes a dot product. The zero isoline of f is a hyperplane which is parameterized by its unit normal vector ŵ and by its perpendicular distance b/∥w∥₂ from the origin. The hyperplane's margin γ with respect to X is given by

γ = min_{x∈X} |⟨ŵ, x⟩ + b/∥w∥₂|.   (2)

Setting γ = ∥w∥₂⁻¹ allows us to treat normal vectors w which are not normalized, if the margin is normalized to 1. According to [15] this is called the "canonical form" of the separating hyperplane.
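As a small illustration of eqs. (1) and (2), the following numpy sketch (hypothetical toy data, not from the paper) evaluates the classification function and the margin of a hyperplane given in canonical form:

```python
import numpy as np

# Toy data: 4 points in R^2 and a hyperplane f(x) = <w, x> + b.
X = np.array([[2.0, 1.0], [3.0, 2.0], [-2.0, -1.0], [-3.0, -2.5]])
w = np.array([1.0, 1.0])
b = 0.0

f = X @ w + b                      # classification function, eq. (1)
w_hat = w / np.linalg.norm(w)      # unit normal vector

# Margin of the hyperplane with respect to X, eq. (2);
# for this data gamma = 3/sqrt(2), attained at the first and third point.
gamma = np.min(np.abs(X @ w_hat + b / np.linalg.norm(w)))
```
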
The hyperplane with the largest margin is then obtained by minimizing ∥w∥₂² for a margin which equals 1. It has been shown [14, 13, 12] that the generalization error of a linear classifier, eq. (1), can be bounded from above with probability 1 − δ by the bound B,

B(L, a/γ, δ) = (2/L) ( log₂( EN( γ/(2a), F, 2L ) ) + log₂( 4La/(δγ) ) ),   (3)

provided that the training classification error is zero and f(x) is bounded by −a ≤ f(x) ≤ a for all x drawn i.i.d. from the (unknown) distribution of objects. L denotes the number of training objects x, γ denotes the margin, and EN(ϵ, F, L) the expected ϵ-covering number of a class F of functions that map data objects from T to [0, 1] (see Theorem 7.7 in [14] and Proposition 19 in [12]). In order to obtain a classifier with good generalization properties we suggest to minimize a/γ under proper constraints. a is not known in general, however, because the probability distribution of objects (in particular its support) is not known. In order to avoid this problem we approximate a by the range m = 0.5 (max_i ⟨ŵ, xⁱ⟩ − min_i ⟨ŵ, xⁱ⟩) of values in the training set and minimize the quantity B(L, m/γ, δ) instead of eq. (3). Let X := (x¹, x², …, x^L) be the matrix of feature vectors of L objects from the set X and Z := (z¹, z², …, z^P) be the matrix of feature vectors of P objects from the set Z. The objects of set X are labeled, and we summarize all labels using a label matrix Y: [Y]_ij := y_i δ_ij ∈ R^{L×L}, where δ is the Kronecker delta. Let us consider the case that the feature vectors X and Z are unknown, but that we are given the matrix K := XᵀZ of the corresponding scalar products. The training set is then given by the data matrix K and the corresponding label matrix Y. The principle of structural risk minimization is implemented by minimizing an upper bound on (m/γ)², given by ∥Xᵀw∥₂², as can be seen from

m/γ ≤ ∥w∥₂ max_i |⟨ŵ, xⁱ⟩| ≤ √( Σ_i ⟨w, xⁱ⟩² ) = ∥Xᵀw∥₂.
The constraints f(xⁱ) = y_i imposed by the training set are taken into account using the expressions 1 − ξ⁺ᵢ ≤ y_i (⟨w, xⁱ⟩ + b) ≤ 1 + ξ⁻ᵢ, where ξ⁺ᵢ, ξ⁻ᵢ ≥ 0 are slack variables which should also be minimized. We thus obtain the optimization problem

min_{w,b,ξ⁺,ξ⁻}  ½ ∥Xᵀw∥₂² + M⁺ 1ᵀξ⁺ + M⁻ 1ᵀξ⁻   (4)
s.t.  Y⁻¹ (Xᵀw + b1) − 1 + ξ⁺ ≥ 0
      Y⁻¹ (Xᵀw + b1) − 1 − ξ⁻ ≤ 0
      ξ⁺, ξ⁻ ≥ 0.

M⁺ penalizes wrong classification and M⁻ penalizes absolute values of f exceeding 1. For classification, M⁻ may be set to zero. Note that the quadratic expression in the objective function is convex, which follows from ∥Xᵀw∥₂² = wᵀ X Xᵀ w and the fact that X Xᵀ is positive semidefinite. Let α̃⁺, α̃⁻ be the dual variables for the constraints imposed by the training set, α̃ := α̃⁺ − α̃⁻, and α a vector with α̃ = Y (XᵀZ) α. Two cases must be treated: α is not unique or does not exist. First, if α is not unique we choose α according to Section 5. Second, if α does not exist we set α = (ZᵀX Y⁻ᵀ Y⁻¹ XᵀZ)⁻¹ ZᵀX Y⁻ᵀ α̃, where Y⁻ᵀY⁻¹ is the identity. The optimality conditions require that the following derivatives of the Lagrangian L are zero: ∂L/∂b = 1ᵀY⁻¹α̃, ∂L/∂w = X Xᵀ w − X Y⁻¹ α̃, ∂L/∂ξ± = M± 1 − α̃± + µ±, where µ⁺, µ⁻ ≥ 0 are the Lagrange multipliers for the slack variables. We obtain ZᵀX Xᵀ (w − Z α) = 0, which is ensured by w = Z α, 0 = 1ᵀ (XᵀZ) α, α̃ᵢ ≤ M⁺, and −α̃ᵢ ≤ M⁻. The Karush-Kuhn-Tucker conditions give b = (1ᵀY1)/(1ᵀ1) if α̃ᵢ < M⁺ and −α̃ᵢ < M⁻. In the following we set M⁺ = M⁻ = M and C := M ∥Y (XᵀZ)∥⁻¹_row, so that ∥α∥∞ ≤ C implies ∥α̃∥∞ ≤ ∥Y (XᵀZ)∥_row ∥α∥∞ ≤ M, where ∥·∥_row is the row-sum norm. We then obtain the following dual problem of eq. (4):

min_α  ½ αᵀ KᵀK α − 1ᵀ Y K α   (5)
s.t.  1ᵀ K α = 0 ,  |αᵢ| ≤ C.

If M⁺ ≠ M⁻ we must add another constraint. For M⁻ = 0, for example, we have to add Y K (α⁺ − α⁻) ≥ 0. If a classifier has been selected according to eq. (5), a new example u is classified according to the sign of

f(u) = ⟨w, u⟩ + b = Σ_{i=1}^P αᵢ ⟨zⁱ, u⟩ + b.   (6)

The optimal classifier is selected by optimizing eq. (5), and as long as a = m holds true for all possible objects x (which are assumed to be drawn i.i.d.), the generalization error is bounded by eq. (3). If outliers are rejected, the condition a = m can always be enforced. For large training sets the number of rejections is small: the probability P{|⟨w, x⟩| > m} that an outlier occurs can be bounded with confidence 1 − δ using the additive Chernoff bounds (e.g. [15]):

P{|⟨w, x⟩| > m} ≤ √( −log δ / (2L) ).   (7)

Note, however, that not all outliers are misclassified, and the trivial bound on the generalization error is still of the order L⁻¹.

3 Kernel Functions, Measurements and Scalar Products

In the last section we have assumed that the matrix K is derived from scalar products between the feature vectors x and z which describe the objects from the sets X and Z. For all practical purposes, however, the only information available is summarized in the matrices K and Y. The feature vectors are not known, and it is even unclear whether they exist. In order to apply the results of Section 2 to practical problems, the following question remains to be answered: what are the conditions under which the measurement operator k(·, z) can indeed be interpreted as a scalar product between feature vectors and under which the matrix K can be interpreted as a matrix of kernel evaluations? In order to answer these questions, we make use of the following theorems. Let L₂(H) denote the set of functions h from H with ∫ h²(x) dx < ∞, and ℓ₂ the set of infinite vectors (a₁, a₂, …) where Σᵢ aᵢ² converges.

Theorem 1 (Singular Value Expansion) Let H₁ and H₂ be Hilbert spaces. Let α be from L₂(H₁) and let k be a kernel from L₂(H₂, H₁) which defines a Hilbert-Schmidt operator T_k : H₁ → H₂,

(T_k α)(x) = f(x) = ∫ k(x, z) α(z) dz.   (8)

Then there exists an expansion k(x, z) = Σₙ sₙ eₙ(z) gₙ(x) which converges in the L₂-sense.
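The dual problem (5) is a small convex quadratic program and can be sketched with a generic solver. The following is a minimal illustration on hypothetical toy data (the variable names X, Z, K, C and the use of scipy's SLSQP solver are my choices, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical toy data: L = 6 column objects, P = 3 row objects, 3 features.
X = rng.normal(size=(3, 6))            # feature vectors of column objects
Z = rng.normal(size=(3, 3))            # feature vectors of row objects
y = np.array([1, 1, 1, -1, -1, -1])    # labels of the column objects
K = X.T @ Z                            # data matrix K = X^T Z  (L x P)
C = 10.0

# Dual problem (5): min_a 0.5 a'K'Ka - 1'YKa  s.t. 1'Ka = 0, |a_i| <= C.
# Note 1'YK = y'K since Y = diag(y).
def objective(a):
    return 0.5 * a @ (K.T @ K) @ a - (y @ K) @ a

cons = {"type": "eq", "fun": lambda a: np.ones(len(y)) @ K @ a}
bounds = [(-C, C)] * K.shape[1]
res = minimize(objective, np.zeros(K.shape[1]), bounds=bounds,
               constraints=[cons], method="SLSQP")
alpha = res.x

# Classify via eq. (6): w = Z alpha; b = (1'Y1)/(1'1) is the label mean.
w = Z @ alpha
b = y.mean()
predictions = np.sign(X.T @ w + b)
```
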
The sₙ ≥ 0 are the singular values of T_k, and eₙ ∈ H₁, gₙ ∈ H₂ are the corresponding orthonormal functions.

Corollary 1 (Linear Classification in ℓ₂) Let the assumptions of Theorem 1 hold and let ∫_{H₁} (k(x, z))² dz ≤ K² for all x. Let ⟨·,·⟩_{H₁} be a dot product in H₁. We define w := (⟨α, e₁⟩_{H₁}, ⟨α, e₂⟩_{H₁}, …) and φ(x) := (s₁g₁(x), s₂g₂(x), …). Then the following holds true:

• w, φ(x) ∈ ℓ₂, where ∥w∥²_{ℓ₂} = ∥α∥²_{H₁}, and
• ∥f∥²_{H₂} = ⟨T*_k T_k α, α⟩_{H₁}, where T*_k is the adjoint operator of T_k, and the following sum converges absolutely and uniformly:

f(x) = ⟨w, φ(x)⟩_{ℓ₂} = Σₙ sₙ ⟨α, eₙ⟩_{H₁} gₙ(x).   (9)

Eq. (9) is a linear classifier in ℓ₂. φ maps vectors from H₂ into the feature space. We define a second mapping from H₁ to the feature space by ω(z) := (e₁(z), e₂(z), …). For α = Σ_{i=1}^P αᵢ δ(zⁱ), where δ(zⁱ) is the Dirac delta, we recover the discrete classifier (6), and w = Σ_{i=1}^P αᵢ ω(zⁱ). We observe that ∥f∥²_{H₂} = αᵀ KᵀK α = ∥Xᵀw∥₂². A problem may arise if zⁱ belongs to a set of measure zero which does not obey the singular value decomposition of k. If this occurs, δ(zⁱ) may be set to the zero function. Theorem 1 tells us that any measurement kernel k applied to objects x and z can be expressed, for almost all x and z, as k(x, z) = ⟨φ(x), ω(z)⟩, where ⟨·,·⟩ defines a dot product in some feature space. Hence, we can define a matrix X := (φ(x¹), φ(x²), …, φ(x^L)) of feature vectors for the L column objects and a matrix Z := (ω(z¹), ω(z²), …, ω(z^P)) of feature vectors for the P row objects, and apply the results of Section 2.

4 Pairwise Data

An interesting special case occurs if row and column objects coincide. This kind of data is known as pairwise data [5, 4, 8], where the objects to be classified serve as features and vice versa. As in Section 3 we can expand the measurement kernel via singular value decomposition, but that would introduce two different mappings (φ and ω) into the feature space.
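In the pairwise setting a single symmetric similarity matrix takes the place of XᵀZ, and such a matrix need not be positive semidefinite. The following numpy sketch (hypothetical similarity matrix, my own illustration) shows how an eigenvalue decomposition still yields an embedding, provided inner products are taken relative to the signature of the eigenvalues:

```python
import numpy as np

# Hypothetical symmetric similarity matrix that is NOT positive semidefinite.
K = np.array([[ 1.0, 0.9, -0.8],
              [ 0.9, 1.0,  0.2],
              [-0.8, 0.2,  1.0]])

nu, E = np.linalg.eigh(K)       # eigenvalues nu_n, eigenvectors as columns of E
S = np.sign(nu)                 # signature of the space
Phi = E * np.sqrt(np.abs(nu))   # row i = embedding of object i: (sqrt|nu_n| e_n(i))_n

# Signed inner products <phi(i), phi(j)> with signature S recover K exactly:
K_rec = Phi @ np.diag(S) @ Phi.T
```

One eigenvalue of this K is negative, so an unsigned Euclidean embedding cannot reproduce it; the signature bookkeeping is exactly what Corollary 2 below formalizes as classification in a Minkowski space.
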
We will use one map for row and column objects and perform an eigenvalue decomposition. The consequence is that eigenvalues may be negative (see the following theorem).

Theorem 2 (Eigenvalue Expansion) Let definitions and assumptions be as in Theorem 1. Let H₁ = H₂ = H and let k be symmetric. Then there exists an expansion k(x, z) = Σₙ νₙ eₙ(z) eₙ(x) which converges in the L₂-sense. The νₙ are the eigenvalues of T_k with the corresponding orthonormal eigenfunctions eₙ.

Corollary 2 (Minkowski Space Classification) Let the assumptions of Theorem 2 and ∫_H (k(x, z))² dz ≤ K² for all x hold true. We define w := (√|ν₁| ⟨α, e₁⟩_H, √|ν₂| ⟨α, e₂⟩_H, …), φ(x) := (√|ν₁| e₁(x), √|ν₂| e₂(x), …), and write ℓ₂^S for ℓ₂ with a given signature S = (sign(ν₁), sign(ν₂), …). Then the following holds true: ∥w∥²_{ℓ₂^S} = Σₙ sign(νₙ) (√|νₙ| ⟨α, eₙ⟩_H)² = Σₙ νₙ ⟨α, eₙ⟩²_H = ⟨T_k α, α⟩_H, ∥φ(x)∥²_{ℓ₂^S} = Σₙ νₙ eₙ(x)² = k(x, x) in the L₂ sense, and the following sum converges absolutely and uniformly:

f(x) = ⟨w, φ(x)⟩_{ℓ₂^S} = Σₙ νₙ ⟨α, eₙ⟩_H eₙ(x).   (10)

Eq. (10) is a linear classifier in the Minkowski space ℓ₂^S. For the discrete case α = Σ_{i=1}^P αᵢ δ(zⁱ), the normal vector is w = Σ_{i=1}^P αᵢ φ(zⁱ). In comparison to Corollary 1, we have ∥w∥²_{ℓ₂^S} = αᵀ K α and must assume that ∥φ(x)∥²_{ℓ₂^S} converges. Unfortunately, this can be assured in general only for almost all x. If k is both continuous and positive definite and if H is compact, then the sum converges uniformly and absolutely for all x (Mercer).

5 Sparseness and Feature Selection

As mentioned after optimization problem (4), α may not be unique, and an additional regularization term is needed. We choose the regularization term such that it enforces sparseness and can also be used for feature selection: we add the term ϵ∥α∥₁, where ϵ is the regularization parameter. We separate α into a positive part α⁺ and a negative part α⁻ with α = α⁺ − α⁻ and α⁺ᵢ, α⁻ᵢ ≥ 0 [11].
The dual optimization problem is then given by

min_{α⁺,α⁻}  ½ (α⁺ − α⁻)ᵀ KᵀK (α⁺ − α⁻) − 1ᵀ Y K (α⁺ − α⁻) + ϵ 1ᵀ (α⁺ + α⁻)   (11)
s.t.  1ᵀ K (α⁺ − α⁻) = 0 ,  C1 ≥ α⁺, α⁻ ≥ 0.

If α is sparse, i.e. if many αᵢ = α⁺ᵢ − α⁻ᵢ are zero, the classification function f(u) = ⟨w, u⟩ + b = Σ_{i=1}^P (α⁺ᵢ − α⁻ᵢ) ⟨zⁱ, u⟩ + b contains only few terms. This saves on the number of measurements ⟨zⁱ, u⟩ for new objects and leads to improved classification performance due to the reduced number of features zⁱ [15].

6 Application to DNA Microarray Data

We apply our new method to the DNA microarray data published in [9]. Column objects are samples from different brain tumors of the medulloblastoma kind. The samples were obtained from 60 patients, who were treated in a similar way, and the samples were labeled according to whether a patient responded well to chemo- or radiation therapy. Row objects correspond to genes. Transcripts of 7,129 genes were tagged with fluorescent dyes and used as a probe in a binding assay. For every sample-gene pair, the fluorescence of the bound transcripts - a snapshot of the level of gene expression - was measured. This gave rise to a 60 × 7,129 real-valued sample-gene matrix where each entry represents the level of gene expression in the corresponding sample. For more details see [9]. The task is now to construct a classifier which predicts therapy outcome on the basis of samples taken from new patients. The major problem of this classification task is the limited number of samples, given the large number of genes. Therefore, feature selection is a prerequisite for good generalization [6, 16]. We construct the classifier using a two-step procedure. In a first step, we apply our new method to a 59 × 7,129 matrix, where one column object was withheld to avoid biased feature selection. We choose ϵ to be fairly large in order to obtain a sparse set of features.
In a second step, we use the selected features only and apply our method once more to the reduced sample-gene matrix, but now with a small value of ϵ. The C-parameter is used for regularization instead.

  Feature Selection / Classification    F    E
  TrkC                                  1    20
  statistic / SVM                       -    15
  statistic / Comb1                     -    14
  statistic / KNN                       8    13
  statistic / Comb2                     -    12

  Feature Selection / Classification    C      F           E
  P-SVM / C-SVM                         1.0    40/45/50    5/4/5
  P-SVM / C-SVM                         0.01   40/45/50    5/5/5
  P-SVM / P-SVM                         0.1    40/45/50    4/4/5

Table 1: Benchmark results for DNA microarray data (for explanations see text). The table shows the classification error given by the number of wrong classifications ("E") for different numbers of selected features ("F") and for different values of the parameter C. Feature selection is either based on signal-to-noise and t-statistics (denoted "statistic") or on our method (P-SVM). Data are provided for "TrkC"-gene classification, standard SVMs, weighted "TrkC"/SVM (Comb1), K nearest neighbor (KNN), combined SVM/TrkC/KNN (Comb2), and our procedure (P-SVM) used for classification. Except for our method (P-SVM), results were taken from [9].

Table 1 shows the result of a leave-one-out cross-validation procedure, where the classification error is given for different numbers of selected features. Our method (P-SVM) is compared with "TrkC"-gene classification (one-gene classification), standard SVMs, weighted "TrkC"/SVM classification, K nearest neighbor (KNN), and a combined SVM/TrkC/KNN classifier. For the latter methods, feature selection was based on the correlation of features with classes using signal-to-noise statistics and t-statistics [3]. For our method we used C = 1.0 and 0.1 ≤ ϵ ≤ 1.5 for feature selection in step one, which gave rise to 10-1000 selected features. The feature selection procedure (also a classifier) had its lowest misclassification rate between 20 and 40 features. For the construction of the classifier we used ϵ = 0.01 in step two.
Our feature selection method clearly outperforms standard methods: the number of misclassifications is down by a factor of 3 (for 45 selected genes).

Acknowledgments We thank the anonymous reviewers for their hints to improve the paper. This work was funded by the DFG (SFB 618).

References [1] B. E. Boser, I. M. Guyon, and V. N. Vapnik. A training algorithm for optimal margin classifiers. In Proc. of the 5th Annual ACM Workshop on Computational Learning Theory, pages 144-152. ACM Press, Pittsburgh, PA, 1992. [2] C. Cortes and V. N. Vapnik. Support vector networks. Machine Learning, 20:273-297, 1995. [3] T. R. Golub, D. K. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. P. Mesirov, H. Coller, M. Loh, J. R. Downing, M. A. Caligiuri, C. D. Bloomfield, and E. S. Lander. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science, 286(5439):531-537, 1999. [4] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. In NIPS 11, pages 438-444, 1999. [5] T. Graepel, R. Herbrich, B. Schölkopf, A. J. Smola, P. L. Bartlett, K.-R. Müller, K. Obermayer, and R. C. Williamson. Classification on proximity data with LP-machines. In ICANN 99, pages 304-309, 1999. [6] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Mach. Learn., 46:389-422, 2002. [7] S. Hochreiter and K. Obermayer. Classification of pairwise proximity data with support vectors. In The Learning Workshop. Y. LeCun and Y. Bengio, 2002. [8] T. Hofmann and J. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Trans. on Pat. Analysis and Mach. Intell., 19(1):1-14, 1997. [9] S. L. Pomeroy, P. Tamayo, M. Gaasenbeek, L. M. Sturla, M. Angelo, M. E. McLaughlin, J. Y. H. Kim, L. C. Goumnerova, P. M. Black, C. Lau, J. C. Allen, D. Zagzag, J. M. Olson, T. Curran, C. Wetmore, J. A. Biegel, T. Poggio, S. Mukherjee, R. Rifkin, A. Califano, G.
Stolovitzky, D. N. Louis, J. P. Mesirov, E. S. Lander, and T. R. Golub. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415(6870):436-442, 2002. [10] V. Roth, J. Buhmann, and J. Laub. Pairwise clustering is equivalent to classical k-means. In The Learning Workshop. Y. LeCun and Y. Bengio, 2002. [11] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, 2002. [12] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. A framework for structural risk minimisation. In Comp. Learn. Th., pages 68-76, 1996. [13] J. Shawe-Taylor, P. L. Bartlett, R. C. Williamson, and M. Anthony. Structural risk minimization over data-dependent hierarchies. IEEE Transactions on Information Theory, 44:1926-1940, 1998. [14] J. Shawe-Taylor and N. Cristianini. On the generalisation of soft margin algorithms. Technical Report NC2-TR-2000-082, NeuroCOLT2, Department of Computer Science, Royal Holloway, University of London, 2000. [15] V. Vapnik. The Nature of Statistical Learning Theory. Springer, NY, 1995. [16] J. Weston, S. Mukherjee, O. Chapelle, M. Pontil, T. Poggio, and V. Vapnik. Feature selection for SVMs. In NIPS 12, pages 668-674, 2000.
Learning Semantic Similarity Jaz Kandola John Shawe-Taylor Royal Holloway, University of London {jaz, john}@cs.rhul.ac.uk Nello Cristianini University of California, Berkeley nello@support-vector.net Abstract The standard representation of text documents as bags of words suffers from well known limitations, mostly due to its inability to exploit semantic similarity between terms. Attempts to incorporate some notion of term similarity include latent semantic indexing [8], the use of semantic networks [9], and probabilistic methods [5]. In this paper we propose two methods for inferring such similarity from a corpus. The first one defines word-similarity based on document-similarity and vice versa, giving rise to a system of equations whose equilibrium point we use to obtain a semantic similarity measure. The second method models semantic relations by means of a diffusion process on a graph defined by lexicon and co-occurrence information. Both approaches produce valid kernel functions parametrised by a real number. The paper shows how the alignment measure can be used to successfully perform model selection over this parameter. Combined with the use of support vector machines we obtain positive results. 1 Introduction Kernel-based algorithms exploit the information encoded in the inner-products between all pairs of data items (see for example [1]). This matches very naturally the standard representation used in text retrieval, known as the 'vector space model', where the similarity of two documents is given by the inner product between high dimensional vectors indexed by all the terms present in the corpus. The combination of these two methods, pioneered by [6], and subsequently explored by several others, produces powerful methods for text categorization.
However, such an approach suffers from well known limitations, mostly due to its inability to exploit semantic similarity between terms: documents sharing terms that are different but semantically related will be considered as unrelated. A number of attempts have been made to incorporate semantic knowledge into the vector space representation. Semantic networks have been considered [9], whilst others use co-occurrence analysis where a semantic relation is assumed between terms whose occurrence patterns in the documents of the corpus are correlated [3]. Such methods are also limited in their flexibility, and the question of how to infer semantic relations between terms or documents from a corpus remains an open issue. In this paper we propose two methods to model such relations in an unsupervised way. The structure of the paper is as follows. Section 2 provides an introduction to how semantic similarity can be introduced into the vector space model. Section 3 derives a parametrised class of semantic proximity matrices from a recursive definition of similarity of terms and documents. A further parametrised class of kernels based on alternative similarity measures inspired by considering diffusion on a weighted graph of documents is given in Section 4. In Section 5 we show how the recently introduced alignment measure [2] can be used to perform model selection over the classes of kernels we have defined. Positive experimental results with the methods are reported in Section 5 before we draw conclusions in Section 6. 2 Representing Semantic Proximity Kernel based methods are an attractive choice for inferring relations from textual data since they enable us to work in a document-by-document setting rather than in a term-by-term one [6]. In the vector space model, a document is represented by a vector indexed by the terms of the corpus. Hence, the vector will typically be sparse with non-zero entries for those terms occurring in the document. 
Two documents that use semantically related but distinct words will therefore show no similarity. The aim of a semantic proximity matrix [3] is to correct for this by indicating the strength of the relationship between terms that, even though distinct, are semantically related. The semantic proximity matrix P is indexed by pairs of terms a and b, with the entry P_ab = P_ba giving the strength of their semantic similarity. If the vectors corresponding to two documents are d_i, d_j, their inner product is now evaluated through the kernel k(d_i, d_j) = d_i' P d_j, where x' denotes the transpose of the vector or matrix x. The symmetry of P ensures that the kernel is symmetric. We must also require that P is positive semidefinite in order to satisfy Mercer's conditions. In this case we can decompose P = R'R for some matrix R, so that we can view the semantic similarity as a projection into a semantic space φ: d ↦ Rd, since k(d_i, d_j) = d_i' P d_j = ⟨Rd_i, Rd_j⟩. The purpose of this paper is to infer (or refine) the similarity measure between examples by taking into account higher order correlations, thereby performing unsupervised learning of the proximity matrix from a given corpus. We will propose two methods based on two different observations. The first method exploits the fact that the standard representation of text documents as bags of words gives rise to an interesting duality: while documents can be seen as bags of words, terms can simultaneously be viewed as bags of documents, namely the documents that contain them. In such a model, two documents that have highly correlated term-vectors are considered as having similar content. Similarly, two terms that have correlated document-vectors will have a semantic relation. This is of course only a first order approximation, since the knock-on effect of the two similarities on each other needs to be considered.
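The role of the semantic proximity matrix can be made concrete with a tiny numpy sketch (hypothetical four-term vocabulary and toy P, my own illustration): two documents using distinct but related terms get zero bag-of-words similarity, but a nonzero kernel k(d_i, d_j) = d_i' P d_j.

```python
import numpy as np

# Hypothetical 4-term vocabulary; terms 0 and 1 are near-synonyms.
d1 = np.array([1.0, 0.0, 0.0, 0.0])   # document using term 0
d2 = np.array([0.0, 1.0, 0.0, 0.0])   # document using the related term 1

plain = d1 @ d2                        # plain inner product: 0.0, no similarity

# Semantic proximity matrix P (P_ab = strength of the relation of terms a, b).
P = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

k = d1 @ P @ d2                        # kernel now reflects the relation: 0.8

# Since P is PSD we can factor P = R'R and read the kernel as <R d1, R d2>:
R = np.linalg.cholesky(P).T            # upper-triangular R with R'R = P
```
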
We show that it is possible to define term-similarity based on document-similarity, and vice versa, to obtain a system of equations that can be solved in order to obtain a semantic proximity matrix P. The second method exploits the representation of a lexicon (the set of all words in a given corpus) as a graph, where the nodes are indexed by words and where co-occurrence is used to establish links between nodes. Such a representation has been studied recently, giving rise to a number of topological properties [4]. We consider the idea that higher order correlations between terms can affect their semantic relations as a diffusion process on such a graph. Although there can be exponentially many paths connecting two given nodes in the graph, the use of diffusion kernels [7] enables us to obtain the level of semantic relation between any two nodes efficiently, so inferring the semantic proximity matrix from data.

3 Equilibrium Equations for Semantic Similarity

In this section we consider the first of the two methods outlined in the previous section. Here the aim is to create recursive equations for the relations between documents and between terms. Let X be the feature-example (term-document in the case of text data) matrix in a possibly kernel-defined feature space, so that X'X gives the kernel matrix K and XX' gives the correlations between different features over the training set. We denote this latter matrix with G. Consider the similarity matrices defined recursively by

K̂ = λ X'ĜX + K   and   Ĝ = λ X K̂ X' + G.   (1)

We can interpret this as augmenting the similarity given by K through indirect similarities measured by G, and vice versa. The factor λ < ∥K∥⁻¹ ensures that the longer range effects decay exponentially. Our first result characterizes the solution of the above recurrences.
Proposition 1 Provided λ < ∥K∥⁻¹ = ∥G∥⁻¹, the kernels K̂ and Ĝ that solve the recurrences (1) are given by

K̂ = K(I − λK)⁻¹   and   Ĝ = G(I − λG)⁻¹.

Proof: First observe that

K(I − λK)⁻¹ = (1/λ)(I − λK)⁻¹ − (1/λ)I,

since (1/λ)(I − λK)⁻¹ − (1/λ)I = (1/λ)(I − λK)⁻¹ (I − (I − λK)) = K(I − λK)⁻¹. Now if we substitute the second recurrence into the first we obtain

K̂ = λ² X'X K̂ X'X + λ X'X X'X + K
  = λ² K (K(I − λK)⁻¹) K + λK² + K
  = λ² K ( (1/λ)(I − λK)⁻¹ − (1/λ)I ) K + λK² + K
  = λ K(I − λK)⁻¹ K + K
  = K(I − λK)⁻¹ (λK + (I − λK))
  = K(I − λK)⁻¹,

showing that the expression does indeed satisfy the recurrence. By the symmetry of the definition, the expression for Ĝ also satisfies its recurrence. ∎

In view of the form of the solution we introduce the following definition:

Definition 2 (von Neumann Kernel) Given a kernel K, the derived kernel K̂(λ) = K(I − λK)⁻¹ will be referred to as the von Neumann kernel.

Note that we can view K̂(λ) as a kernel based on the semantic proximity matrix P = λĜ + I, since X'PX = X'(λĜ + I)X = λX'ĜX + K = K̂(λ). Hence, the solution Ĝ defines a refined similarity between terms/features. In the next section, we will consider the second method of introducing semantic similarity, derived from viewing the terms and documents as vertices of a weighted graph.
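The closed form of Proposition 1 is easy to check numerically. The following numpy sketch (hypothetical 2×2 kernel matrix, my own illustration) verifies that the von Neumann kernel equals the geometric series Σ_{t≥1} λ^{t−1} K^t and satisfies the substituted recurrence K̂ = λ²K K̂ K + λK² + K:

```python
import numpy as np

K = np.array([[2.0, 0.5],
              [0.5, 1.0]])                       # hypothetical kernel matrix
lam = 0.2                                        # must satisfy lam < 1/||K||
assert lam < 1.0 / np.linalg.norm(K, 2)

I = np.eye(2)
K_vn = K @ np.linalg.inv(I - lam * K)            # von Neumann kernel K(I - lam K)^-1

# Truncated geometric series sum_{t=1}^T lam^(t-1) K^t converges to the same matrix:
series = sum(lam ** (t - 1) * np.linalg.matrix_power(K, t) for t in range(1, 60))
assert np.allclose(K_vn, series)

# The recurrence obtained by eliminating G_hat also holds:
assert np.allclose(K_vn, lam**2 * K @ K_vn @ K + lam * K @ K + K)
```
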
Although the number of possible paths between any two given nodes can grow exponentially, results from spectral graph theory have recently been used by [7] to show that it is possible to compute the similarity between any two given nodes efficiently, without examining all possible paths. It is also possible to show that the similarity measure obtained in this way is a valid kernel function: the exponentiation operation used in the definition naturally yields the Mercer conditions required for valid kernel functions. An alternative insight into semantic similarity, to that presented in Section 2, is afforded if we multiply out the expression for K̂(λ),

K̂(λ) = K(I − λK)⁻¹ = Σ_{t=1}^∞ λ^{t−1} K^t.

The entries in the matrix K^t are given by

K^t_{ij} = Σ_{u ∈ {1,…,m}^t : u₁=i, u_t=j}  Π_{ℓ=1}^{t−1} K_{u_ℓ u_{ℓ+1}},

that is, the sum of the products of the weights over all paths of length t that start at vertex i and finish at vertex j in the weighted graph on the examples. If we view the connection strengths as channel capacities, the entry K^t_{ij} can be seen to measure the sum over all routes of the products of the capacities. If the entries are all positive and for each vertex the sum of the connections is 1, we can view the entry as the probability that a random walk beginning at vertex i is at vertex j after t steps. It is for these reasons that the kernels defined using these combinations of powers of the kernel matrix have been termed diffusion kernels [7]. A similar equation holds for Ĝ^t. Hence, examples that lie in a cluster of similar examples become more strongly related, and similar features that occur in a cluster of related features are drawn together in the semantic proximity matrix P. We should stress that the emphasis of this work is not on its diffusion connections, but on its relation to semantic proximity. It is this link that motivates the alternative decay factors considered below.
The kernel K̂ combines these indirect link kernels with an exponentially decaying weight. This suggests an alternative weighting scheme that shows faster decay for increasing path length,

K̄(λ) = K Σ_{t=0}^∞ (λ^t K^t)/t! = K exp(λK).

The next proposition gives the semantic proximity matrix corresponding to K̄(λ).

Proposition 3 Let K̄(λ) = K exp(λK). Then K̄(λ) corresponds to the semantic proximity matrix exp(λG).

Proof: Let X = UΣV' be the singular value decomposition of X, so that K = VΛV' is the eigenvalue decomposition of K, where Λ = Σ'Σ. We can write K̄ as

K̄ = VΛ exp(λΛ)V' = X'UΣ⁻¹Λ exp(λΛ)Σ⁻¹U'X = X'U exp(λΛ)U'X = X' exp(λG)X,

as required. ∎

The above leads to the definition of the second kernel that we consider.

Definition 4 Given a kernel K, the derived kernel K̄(λ) = K exp(λK) will be referred to as the exponential kernel.

5 Experimental Methods

In the previous sections we have introduced two new kernel adaptations, in both cases parameterized by a positive real parameter λ. In order to apply these kernels to real text data, we need to develop a method of choosing the parameter λ. One possibility would be simply to use cross-validation, as considered by [7]. Rather than adopt this rather expensive methodology, we will use a quantitative measure of agreement between the derived kernels and the learning task known as alignment, which measures the degree of agreement between a kernel and a target [2].

Definition 5 (Alignment) The (empirical) alignment of a kernel k₁ with a kernel k₂ with respect to the sample S is the quantity

A(S, k₁, k₂) = ⟨K₁, K₂⟩_F / √( ⟨K₁, K₁⟩_F ⟨K₂, K₂⟩_F ),

where Kᵢ is the kernel matrix for the sample S using kernel kᵢ, and where we use the following definition of the inner product between Gram matrices, corresponding to the Frobenius inner product:

⟨K₁, K₂⟩_F = Σ_{i,j=1}^m K₁(xᵢ, xⱼ) K₂(xᵢ, xⱼ).   (2)
From a text categorization perspective this can also be viewed as the cosine of the angle between the Gram matrices $K_1$ and $K_2$, regarded as $m^2$-dimensional vectors. If we consider $K_2 = yy'$, where $y$ is the vector of outputs ($+1/-1$) for the sample, then
$$A(S, K, yy') = \frac{\langle K, yy'\rangle_F}{\sqrt{\langle K, K\rangle_F \langle yy', yy'\rangle_F}} = \frac{y'Ky}{m\|K\|_F}. \quad (3)$$
The alignment has been shown to possess several convenient properties [2]. Most notably, it can be computed efficiently before any training of the kernel machine takes place, based only on training data information; and since it is sharply concentrated around its expected value, its empirical value is stable with respect to different splits of the data. We have developed a method for choosing $\lambda$ to optimize the alignment of the resulting matrix $\bar K(\lambda)$ or $\tilde K(\lambda)$ to the target labels on the training set. This method follows similar results presented in [2], but here the parameterization is non-linear in $\lambda$, so we cannot solve for the optimal value in closed form. Instead we seek the optimum using a line search over the range of possible values of $\lambda$ for the point at which the derivative of the alignment with respect to $\lambda$ is zero. The next two propositions give equations that are satisfied at this point.

Proposition 6 If $\lambda^* = \operatorname{argmax}_{\lambda} A(S, \bar K(\lambda), yy')$ and $v_i, \lambda_i$ are the eigenvector/eigenvalue pairs of the kernel matrix $K$, then
$$\sum_{i=1}^m \lambda_i^2 e^{\lambda^*\lambda_i}\langle v_i, y\rangle^2 \sum_{i=1}^m \lambda_i^2 e^{2\lambda^*\lambda_i} = \sum_{i=1}^m \lambda_i e^{\lambda^*\lambda_i}\langle v_i, y\rangle^2 \sum_{i=1}^m \lambda_i^3 e^{2\lambda^*\lambda_i}.$$
Proof: First observe that $\bar K(\lambda) = VMV' = \sum_{i=1}^m \mu_i v_i v_i'$, where $M_{ii} = \mu_i(\lambda) = \lambda_i \exp(\lambda\lambda_i)$. We can express the alignment of $\bar K(\lambda)$ as
$$A(S, \bar K(\lambda), yy') = \frac{\sum_{i=1}^m \mu_i(\lambda)\langle v_i, y\rangle^2}{m\sqrt{\sum_{i=1}^m \mu_i(\lambda)^2}}.$$
This is a differentiable function of $\lambda$, and so at its maximal value the derivative will be zero. Taking the derivative of this expression and setting it equal to zero gives the condition in the proposition statement.
Proposition 7 If $\lambda^* = \operatorname{argmax}_{\lambda \in (0, \|K\|^{-1})} A(S, \tilde K(\lambda), yy')$, and $v_i, \lambda_i$ are the eigenvector/eigenvalue pairs of the kernel matrix $K$, then
$$\sum_{i=1}^m \frac{\lambda_i^2 \langle v_i, y\rangle^2}{(1 - \lambda^*\lambda_i)^2} \sum_{i=1}^m \frac{\lambda_i^2}{(1 - \lambda^*\lambda_i)^2} = \sum_{i=1}^m \frac{\lambda_i \langle v_i, y\rangle^2}{1 - \lambda^*\lambda_i} \sum_{i=1}^m \frac{\lambda_i^3}{(1 - \lambda^*\lambda_i)^3}.$$
Proof: The proof is identical to that of Proposition 6, except that $M_{ii} = \mu_i(\lambda) = \lambda_i (1 - \lambda\lambda_i)^{-1}$.

Definition 8 (Line Search) Optimization of the alignment can take place by using a line search over the values of $\lambda$ to find a maximum point of the alignment, by seeking points at which the equations given in Propositions 6 and 7 hold.

5.1 Results

To demonstrate the performance of the proposed algorithm on text data, the Medline1033 dataset commonly used in text processing [3] was used. This dataset contains 1033 documents and 30 queries obtained from the National Library of Medicine; in this work we focus on query 20. A Bag of Words kernel was used [6]. Stop words and punctuation were removed from the documents, and the Porter stemmer was applied to the words. The terms in the documents were weighted according to a variant of the tf-idf scheme, given by $\log(1 + tf) \cdot \log(m/df)$, where $tf$ represents the term frequency, $df$ the document frequency, and $m$ the total number of documents. A support vector classifier (SVC) was used to assess the performance of the derived kernels on the Medline dataset. A 10-fold cross-validation procedure was used to find the optimal value of the capacity control parameter $C$. Having selected the optimal $C$, the SVC was re-trained ten times using ten random training and test dataset splits. Error results for the different algorithms are presented together with F1 values.
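Using the eigenform of the alignment that appears in the proof of Proposition 6, the line search reduces to evaluating a scalar function of $\lambda$. The sketch below is a simple grid-based stand-in for the derivative conditions of Propositions 6 and 7 (the function names are ours, and $y$ is assumed to be a $\pm 1$ label vector so that $\|yy'\|_F = m$):

```python
import numpy as np

def alignment_curve(K, y, lams, kind="exp"):
    """Empirical alignment A(S, K(lam), yy') as a function of lam.

    Uses A = sum_i mu_i <v_i,y>^2 / (m sqrt(sum_i mu_i^2)), where
    mu_i = lam_i exp(lam lam_i) for the exponential kernel and
    mu_i = lam_i / (1 - lam lam_i) for the von Neumann kernel
    (the latter requires lam < 1/||K||). Assumes +/-1 labels y.
    """
    evals, V = np.linalg.eigh(K)
    proj = (V.T @ y) ** 2  # <v_i, y>^2
    m = len(y)
    out = []
    for lam in lams:
        if kind == "exp":
            mu = evals * np.exp(lam * evals)
        else:
            mu = evals / (1.0 - lam * evals)
        out.append(mu @ proj / (m * np.sqrt(mu @ mu)))
    return np.array(out)

def best_lambda(K, y, lams, kind="exp"):
    """Grid line search: the lam in `lams` with maximal alignment."""
    return lams[int(np.argmax(alignment_curve(K, y, lams, kind)))]
```

A root-finder applied to the derivative conditions of Propositions 6 and 7 would locate the same maximum; the grid version is shown only because it is compact.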
The F1 measure is a popular statistic used in the information retrieval community for comparing the performance of algorithms, typically on uneven data. It can be computed as $F1 = \frac{2PR}{P + R}$, where $P$ represents precision, i.e. the proportion of selected items that the system classified correctly, and $R$ represents recall, i.e. the proportion of the target items that the system selected. The line search procedure was applied to find the optimal value of $\lambda$ for the diffusion kernels. All of the results are averaged over 10 random splits, with the standard deviation given in brackets. Table 1 shows the results of using the Bag of Words kernel matrix (B) and the exponential kernel matrix (K).

TRAIN  ALIGN          SVC ERROR      F1             LAMBDA
K80    0.851 (0.012)  0.017 (0.005)  0.795 (0.060)  0.197 (0.004)
B80    0.423 (0.007)  0.022 (0.007)  0.256 (0.351)
K50    0.863 (0.025)  0.018 (0.006)  0.783 (0.074)  0.185 (0.008)
B50    0.390 (0.009)  0.024 (0.004)  0.456 (0.265)
K20    0.867 (0.029)  0.019 (0.004)  0.731 (0.089)  0.147 (0.04)
B20    0.325 (0.009)  0.030 (0.005)  0.349 (0.209)

Table 1: Medline dataset - mean and associated standard deviation of alignment, SVC error, and F1 values for an SVC trained using the Bag of Words kernel (B) and the exponential kernel (K). The index represents the percentage of training points.

TRAIN  ALIGN          SVC ERROR      F1             LAMBDA
K80    0.758 (0.015)  0.017 (0.004)  0.765 (0.020)  0.032 (0.001)
B80    0.423 (0.007)  0.022 (0.007)  0.256 (0.351)
K50    0.766 (0.025)  0.018 (0.005)  0.701 (0.066)  0.039 (0.008)
B50    0.390 (0.009)  0.024 (0.004)  0.456 (0.265)
K20    0.728 (0.012)  0.028 (0.004)  0.376 (0.089)  0.029 (0.07)
B20    0.325 (0.009)  0.030 (0.005)  0.349 (0.209)

Table 2: Medline dataset - mean and associated standard deviation of alignment, SVC error, and F1 values for an SVC trained using the Bag of Words kernel (B) and the von Neumann kernel (K). The index represents the percentage of training points.
Table 2 presents the results of using the von Neumann kernel matrix (K) together with the Bag of Words kernel matrix for different sizes of the training data; the index again represents the percentage of training points. The first column of both Tables 1 and 2 shows the alignments of the Gram matrices to the rank-one labels matrix for different sizes of training data. In both cases the results indicate that the alignment of the diffusion kernels to the labels is greater than that of the Bag of Words kernel matrix by more than the sum of the standard deviations, across all sizes of training data. The second column of each table gives the support vector classifier (SVC) error obtained using the diffusion Gram matrices and the Bag of Words Gram matrix. The SVC error for the diffusion kernels shows a decrease with increasing alignment value. F1 values are also shown, and in all instances show an improvement for the diffusion kernel matrices. An interesting observation can be made regarding the F1 value for the von Neumann kernel matrix trained using 20% of the data (K20 in Table 2). Despite an increase in alignment value and a reduction of SVC error, its F1 value does not increase as much as that of the exponential kernel trained using the same proportion of the data (K20 in Table 1). This observation suggests that the von Neumann kernel needs more data to be effective; this will be investigated in future work.

6 Conclusions

We have proposed and compared two different methods to model the notion of semantic similarity between documents, by implicitly defining a proximity matrix $P$ in a way that exploits high-order correlations between terms. The two methods differ in the way the matrix is constructed. In one view, we propose a recursive definition of document similarity that depends on term similarity and vice versa.
By solving the resulting system of kernel equations, we effectively learn the parameters of the model ($P$) and construct a kernel function for use in kernel-based learning methods. In the other approach, we model semantic relations as a diffusion process in a graph whose nodes are the documents and whose edges incorporate first-order similarity. Diffusion efficiently takes into account all possible paths connecting two nodes, and propagates the 'similarity' between two remote documents that share 'similar terms'. The kernel resulting from this model is known in the literature as the 'diffusion kernel'. We have experimentally demonstrated the validity of the approach on text data, using a novel method to set the adjustable parameter $\lambda$ in the kernels by optimising their 'alignment' to the target on the training set. For the dataset partitions considered, substantial improvements in performance over the traditional Bag of Words kernel matrix were obtained using the diffusion kernels and the line search method. Despite this success, for large imbalanced datasets such as those encountered in text classification tasks the computational complexity of constructing the diffusion kernels may become prohibitive. Faster kernel construction methods are being investigated for this regime.

References

[1] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, Cambridge, UK, 2000.
[2] N. Cristianini, J. Shawe-Taylor, and J. Kandola. On kernel target alignment. In Proceedings of Neural Information Processing Systems, NIPS '01, 2002.
[3] N. Cristianini, J. Shawe-Taylor, and H. Lodhi. Latent semantic kernels. Journal of Intelligent Information Systems, 18(2):127-152, 2002.
[4] R. Ferrer and R. V. Sole. The small world of human language. Proceedings of the Royal Society of London Series B - Biological Sciences, pages 2261-2265, 2001.
[5] T. Hofmann. Probabilistic latent semantic indexing.
In Research and Development in Information Retrieval, pages 50-57, 1999.
[6] T. Joachims. Text categorization with support vector machines. In Proceedings of the European Conference on Machine Learning (ECML), 1998.
[7] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete structures. In Proceedings of the International Conference on Machine Learning (ICML 2002), 2002.
[8] T. A. Letsche and M. W. Berry. Large-scale information retrieval with latent semantic indexing. Information Sciences, 100(1-4):105-137, 1997.
[9] G. Siolas and F. d'Alché-Buc. Support vector machines based on a semantic kernel for text categorization. In IEEE-IJCNN 2000, 2000.
How Linear are Auditory Cortical Responses? Maneesh Sahani, Gatsby Unit, UCL, 17 Queen Sq., London, WC1N 3AR, UK. maneesh@gatsby.ucl.ac.uk. Jennifer F. Linden, Keck Center, UCSF, San Francisco, CA 94143-0732. linden@phy.ucsf.edu

Abstract By comparison to some other sensory cortices, the functional properties of cells in the primary auditory cortex are not yet well understood. Recent attempts to obtain a generalized description of auditory cortical responses have often relied upon characterization of the spectrotemporal receptive field (STRF), which amounts to a model of the stimulus-response function (SRF) that is linear in the spectrogram of the stimulus. How well can such a model account for neural responses at the very first stages of auditory cortical processing? To answer this question, we develop a novel methodology for evaluating the fraction of stimulus-related response power in a population that can be captured by a given type of SRF model. We use this technique to show that, in the thalamo-recipient layers of primary auditory cortex, STRF models account for no more than 40% of the stimulus-related power in neural responses.

1 Introduction

A number of recent studies have suggested that spectrotemporal receptive field (STRF) models [1, 2], which are linear in the stimulus spectrogram, can describe the spiking responses of auditory cortical neurons quite well [3, 4]. At the same time, other authors have pointed out significant non-linearities in auditory cortical responses [5, 6], or have emphasized both linear and non-linear response components [7, 8]. Some of the differences in these results may well arise from differences in the stimulus ensembles used to evoke neuronal responses. However, even for a single type of stimulus, it is extremely difficult to put a number to the proportion of the response that is linear or non-linear, and so to judge the relative contributions of the two components to the stimulus-evoked activity.
The difficulty arises because repeated presentations of identical stimulus sequences evoke highly variable responses from neurons at intermediate stages of perceptual systems, even in anaesthetized animals. While this variability may reflect meaningful changes in the internal state of the animal or may be completely random, from the point of view of modelling the relationship between stimulus and neural response it must be treated as noise. As previous authors have noted [9, 10], this noise complicates the evaluation of the performance of a particular class of stimulus-response function (SRF) model (for example, the class of STRF models) in two ways. First, it makes it difficult to assess the quality of the predictions given by any single model. Perfect prediction of a noisy response is impossible, even in principle, and since the true underlying relationship between stimulus and neural response is unknown, it is unclear what degree of partial prediction could possibly be expected. Second, the noise introduces error into the estimation of the model parameters; consequently, even where direct unbiased evaluations of the predictions made by the estimated models are possible, these evaluations understate the performance of the model in the class that most closely matches the true SRF. The difficulties can be illustrated in the context of the classical statistical measure of the fraction of variance explained by a model, the coefficient of determination or $R^2$ statistic. This is the ratio of the reduction in variance achieved by the regression model (the total variance of the outputs minus the variance of the residuals) to the total variance of the outputs. The total variance of the outputs includes contributions from the noise, and so an $R^2$ of 1 is an unrealistic target, and the actual maximum achievable value is unclear.
Moreover, the reduction of variance on the training data, which appears in the numerator of the $R^2$, includes some "explanation" of noise due to overfitting. The extent to which this happens is difficult to estimate; if the reduction in variance is instead evaluated on test data, estimation errors in the model will lead to an underestimate of the performance of the best model in the class. Hypothesis tests based on $R^2$ compensate for these shortcomings in answering questions of model sufficiency. However, these tests do not provide a way to assess the extent of partial validity of a model class; indeed, it is well known that even the failure of a hypothesis test to reject a specific model class is not sufficient evidence to regard the model as fully adequate. One proposed method for obtaining a more quantitative measure of model performance is to compare the correlation (or, equivalently, squared distance) between the model prediction and a new response measurement to that between two successive responses to the same stimulus [9, 11]; as acknowledged in those proposals, however, this yardstick underestimates the response reliability even after considerable averaging, and so the comparison will tend to overestimate the validity of the SRF model. Measures like $R^2$ that are based on the fractional variance (or, for time series, the power) explained by a model do have some advantages; for example, contributions from independent sources are additive. Here, we develop analytic techniques that overcome the systematic noise-related biases in the usual variance measures, and thus obtain, for a population of neurons, a quantitative estimate of the fraction of stimulus-related response captured by a given class of models. This statistical framework may be applicable to analysis of response functions for many types of neural data, ranging from intracellular recordings to imaging measurements.
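The two biases described here, a training $R^2$ inflated by overfitting and a test $R^2$ deflated by parameter estimation error, are easy to reproduce in a toy regression. The following simulation is purely illustrative and not part of the paper's analysis; all names and parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(y, yhat):
    """Fraction of variance explained: 1 - residual SS / total SS."""
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

# toy regression: 2 informative features out of 20, heavy noise
n, p = 50, 20
w_true = np.zeros(p)
w_true[:2] = [1.0, -1.0]
X_tr, X_te = rng.normal(size=(n, p)), rng.normal(size=(n, p))
y_tr = X_tr @ w_true + rng.normal(scale=2.0, size=n)
y_te = X_te @ w_true + rng.normal(scale=2.0, size=n)

# ordinary least squares overfits the 20 free parameters to 50 points
w_hat = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
r2_train = r_squared(y_tr, X_tr @ w_hat)
r2_test = r_squared(y_te, X_te @ w_hat)
```

The gap between `r2_train` and `r2_test` illustrates the bracket the paper exploits: training performance bounds the ideal model from above, cross-validated performance from below.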
We apply it to extracellular recordings from rodent auditory cortex, quantifying the degree to which STRF models can account for neuronal responses to dynamic random chord stimuli. We find that on average less than half of the reliable stimulus-related power in these responses can be captured by spectrogram-linear STRF models.

2 Signal power

The analysis assumes that the data consist of spike trains or other neural measurements continuously recorded during presentation of a long, complex, rapidly varying stimulus. This stimulus is treated as a discrete-time process. In the auditory experiment considered here, the discretization was set by the duration of regularly clocked sound pulses of fixed length; in a visual experiment, the discretization might be the frame rate of a movie. The neural response can then be measured with the same level of precision, counting action potentials (or integrating measurements) to estimate a response rate for each of $T$ time bins, to obtain a response vector $r$. We propose to measure model performance in terms of the fraction of response power predicted successfully, where "power" is used in the sense of the average squared deviation from the mean, $P(r) = \overline{(r - \bar r)^2}$, the bar denoting an average over time. (An alternative would be to measure information or conditional entropy rates. However, the question of how much relevant information is preserved by a model is different from the question of how accurate a model's prediction is. For example, an information-theoretic measure would not distinguish between a linear model and the same linear model cascaded with an invertible non-linearity.) As argued above, only some part of the total response power is predictable, even in principle; fortunately, this signal power can be estimated by combining repeated responses to the same stimulus sequence. We present a method-of-moments [12] derivation of the relevant estimator below. Suppose we have $N$ responses $r^{(n)} = s + \eta^{(n)}$, where $s$ is the common, stimulus-dependent component (signal) in the response and $\eta^{(n)}$ is the (zero-mean) noise component of the response in the
$n$th trial. The expected power in each response is given by $P(r^{(n)}) \doteq P(s) + P(\eta^{(n)})$, where the symbol $\doteq$ means "equal in expectation". This simple relationship depends only on the noise component having been defined to have zero mean, and holds even if the variance or other property of the noise depends on the signal strength. We now construct two trial-averaged quantities, similar to the sum-of-squares terms used in the analysis of variance (ANOVA) [12]: the power of the average response, $P(\langle r^{(n)}\rangle_n)$, and the average power per response, $\langle P(r^{(n)})\rangle_n$, using $\langle\cdot\rangle_n$ to indicate trial averages. Assuming the noise in each trial is independent (although the noise in different time bins within a trial need not be), we have
$$P(\langle r^{(n)}\rangle_n) \doteq P(s) + \tfrac{1}{N}\langle P(\eta^{(n)})\rangle_n \quad\text{and}\quad \langle P(r^{(n)})\rangle_n \doteq P(s) + \langle P(\eta^{(n)})\rangle_n.$$
Thus, solving for $P(s)$ suggests the following estimator for the signal power:
$$\hat P(s) = \frac{N\, P(\langle r^{(n)}\rangle_n) - \langle P(r^{(n)})\rangle_n}{N - 1}. \quad (1)$$
(A similar estimator for the noise power is obtained by subtracting this expression from $\langle P(r^{(n)})\rangle_n$.) This estimator is unbiased, provided only that the noise distribution has defined first and second moments and is independent between trials, as can be verified by explicitly calculating its expected value. Unlike the sum-of-squares terms encountered in an ANOVA, it is not a $\chi^2$ variate even when the noise is normally distributed (indeed, it is not necessarily positive). However, since each of the power terms in (1) is the mean of at least $T$ numbers, the central limit theorem suggests that $\hat P(s)$ will be approximately normally distributed for recordings that are considerably longer than the time-scale of noise correlation (in the experiment considered here, $T = 3000$). Its variance, which we label equation (2), depends only on the first and second moments of the response distribution: it can be written in closed form in terms of the $T \times T$ covariance matrix $\Sigma$ of the noise (through terms including $\mathrm{Tr}(\Sigma\Sigma)$), the vector formed by averaging each column of $\Sigma$, the average of all the elements of $\Sigma$, and the time-average of the mean response. Substitution of data-derived estimates of these moments into (2) yields a standard error bar for the estimator.
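Equation (1) translates directly into a few lines of code. In this sketch (ours, not the authors'), responses are an $N$-trials-by-$T$-bins array and power is the mean squared deviation from the temporal mean:

```python
import numpy as np

def signal_power(R):
    """Unbiased signal-power estimate from an (N trials x T bins) array.

    Implements P_signal = (N * P(mean response) - mean trial power) / (N - 1),
    with power defined as mean squared deviation from the temporal mean.
    """
    N, T = R.shape
    power = lambda x: np.mean((x - np.mean(x)) ** 2)
    p_avg_resp = power(R.mean(axis=0))           # power of trial-averaged response
    avg_p_resp = np.mean([power(r) for r in R])  # average power per trial
    return (N * p_avg_resp - avg_p_resp) / (N - 1)
```

On a simulated signal buried in trial-to-trial noise, the estimate recovers the power of the common component; on pure noise it hovers around zero, as the unbiasedness argument predicts.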
In this way we have obtained an estimate (with corresponding uncertainty) of the maximum possible signal power that any model could accurately predict, without having assumed any particular distribution or time-independence of the noise.

3 Extrapolating Model Performance

To compare the performance of an estimated SRF model to this maximal value, we must determine the amount of response power successfully predicted by the model. This is not necessarily the power of the predicted response, since the prediction may be inaccurate. Instead, the residual power in the difference $r - \hat r$ between a measured response $r$ and the predicted response $\hat r$ to the same stimulus is taken as an estimate of the error power. (The measured response used for this evaluation, and the stimulus which elicited it, may or may not also have been used to identify the parameters of the SRF model being evaluated; see the explanation of training and test predictive powers below.) The difference between the power in the observed response and the error power gives the predictive power of the model; it is this value that can be compared to the estimated signal power $\hat P(s)$. To be able to describe more than one neuron, an SRF model class must contain parameters that can be adapted to each case. Ideally, the power of the model class to describe a population of neurons would be judged using parameters that produced models closest to the true SRFs (the ideal models), but we do not have a priori knowledge of those parameters. Instead, the parameters must be tuned in each case using the measured neural responses. One way to choose SRF model parameters is to minimize the mean squared error (MSE) between the neural response in the training data and the model prediction for the same stimulus; for example, the Wiener kernel minimizes the MSE for a model based on a finite impulse response filter of fixed length.
This MSE is identical to the error power that would be obtained when the training data themselves are used as the reference measured response $r$. Thus, by minimizing the MSE, we maximize the predictive power evaluated against the training data. The resulting maximum value, hereafter the training predictive power, will overestimate the predictive ability of the ideal model, since the minimum-MSE parameters will be overfit to the training data. (Overfitting is inevitable, because model estimates based on finite data will always capture some stimulus-independent response variability.) More precisely, the expected value of the training predictive power is an upper bound on the true predictive power of the model class; we therefore refer to the training predictive power itself as an upper estimate of the SRF model performance. We can also obtain a lower estimate, defined similarly, by empirically measuring the generalization performance of the model by cross-validation. This provides an unbiased estimate of the average generalization performance of the fitted models; however, since these models are inevitably overfit to their training data, the expected value of this cross-validation predictive power bounds the true predictive power of the ideal model from below, and thereby provides the desired lower estimate. For any one recording, the predictive power of the ideal SRF model of a particular class can only be bracketed between these upper and lower estimates (that is, between the training and cross-validation predictive powers). As the noise in the recording grows, the model parameters will overfit more and more to the noise, and hence both estimates will grow looser.
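The predictive power defined above, observed response power minus residual error power, can be sketched as follows (the function name is ours). Note that a constant prediction scores zero and a perfect prediction scores the full response power:

```python
import numpy as np

def predictive_power(r, r_hat):
    """Predictive power of a prediction r_hat against a measured response r:
    response power minus the power of the residual r - r_hat."""
    power = lambda x: np.mean((x - np.mean(x)) ** 2)
    return power(r) - power(r - r_hat)
```

Evaluated against training data this quantity gives the paper's upper estimate; evaluated against held-out data under cross-validation it gives the lower estimate. Normalizing by the signal power $\hat P(s)$ yields the fraction of predictable power captured.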
Indeed, in high-noise conditions, the model may primarily describe the stimulus-independent (noise) part of the training data, and so the training predictive power might exceed the estimated signal power $\hat P(s)$, while the cross-validation predictive power may fall below zero (that is, the model's predictions may become more inaccurate than simply predicting a constant response). As such, the estimates may not usefully constrain the predictive power on a particular recording. However, assuming that the predictive power of a single model class is similar for a population of similar neurons, the noise dependence can be exploited to tighten the estimates when applied to the population as a whole, by extrapolating within the population to the zero-noise point. This extrapolation allows us to answer the sort of question posed at the outset: how well, in an absolute sense, can a particular SRF model class account for the responses of a population of neurons?

4 Experimental Methods

Extracellular neural responses were collected from the primary auditory cortex of rodents during presentation of dynamic random chord stimuli. Animals (6 CBA/CaJ mice and 4 Long-Evans rats) were anaesthetized with either ketamine/medetomidine or sodium pentobarbital, and a skull fragment over auditory cortex was removed; all surgical and experimental procedures conformed to protocols approved by the UCSF Committee on Animal Research. An ear plug was placed in the left ear, and the sound field created by the free-field speakers was calibrated near the opening of the right pinna.

Figure 1: Signal power in neural responses (signal power plotted against noise power, in spikes²/bin, with a histogram of the number of recordings).

Neural responses (205 recordings collected from 68 recording sites) were recorded in the thalamo-recipient layers
Recordings often reflected the activity of a number of neurons; single neurons were identified by Bayesian spike-sorting techniques [13, 14] whenever possible. All analyses pool data from mice and rats, barbiturate and ketamine/medetomidine anesthesia, high and low frequency stimulation, and single-unit and multi-unit recordings; each group individually matched the aggregate behaviour described here. The dynamic random chord stimulus used in the auditory experiments was similar to that used in a previous study [15], except that the intensity of component tone pulses was variable. Tone pulses were 20 ms in length, ramped up and down with 5 ms cosine gates. The times, frequencies and sound intensities of the pulses were chosen randomly and independently from 20 ms bins in time, 1/12 octave bins covering either 2–32 or 25–100 kHz in frequency, and 5 dB SPL bins covering 25–70 dB SPL in level. At any time point, the stimulus averaged two tone pulses per octave, with an expected loudness of approximately 73 dB SPL for the 2–32 kHz stimulus and 70 dB SPL for the 25–100 kHz stimulus. The total duration of each stimulus was 60 s. At each recording site, the 2–32 kHz stimulus was repeated 20 times, and the 25–100 kHz stimulus was repeated 10 times. Neural responses were binned at 20 ms, and STRFs fit by linear regression of the average spike rate in each bin onto vectors formed from the amplitudes of tone pulses falling within the preceding 300 ms of the stimulus (15 pulse-widths, starting with pulses coincident with the target spike-rate bin). The regression parameters thus included a single filter weight for each frequency-time bin in this window, and an additional offset (or bias) weight. A Bayesian technique known as automatic relevance determination (ARD) [16] was used to improve the STRF estimates. In this case, an additional parameter reflecting the average noise in the response was also estimated. 
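The STRF fit described above, regression of the binned spike rate onto the preceding 15 stimulus frames plus an offset, can be sketched with ordinary least squares. ARD, which the authors actually used to regularize the estimates, is omitted here, and the function name is ours:

```python
import numpy as np

def fit_strf(spec, rate, n_lags=15):
    """Least-squares STRF: regress binned rate onto the preceding n_lags
    frames of the stimulus spectrogram (n_freq x T), plus a bias weight.

    Returns (strf, bias) with strf of shape (n_freq, n_lags), where
    strf[f, l] weights spec[f, t - l]; lag 0 is the frame coincident
    with the target spike-rate bin.
    """
    n_freq, T = spec.shape
    rows = []
    for t in range(n_lags - 1, T):
        # columns t, t-1, ..., t-n_lags+1 in lag order
        rows.append(spec[:, t - n_lags + 1:t + 1][:, ::-1].ravel())
    X = np.column_stack([np.ones(len(rows)), np.array(rows)])
    w = np.linalg.lstsq(X, rate[n_lags - 1:], rcond=None)[0]
    return w[1:].reshape(n_freq, n_lags), w[0]
```

On noiseless simulated data this recovers the generating filter exactly; on real, noisy responses the unregularized solution overfits, which is precisely why the paper substitutes ARD.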
Models incorporating static output non-linearities were fit by kernel regression between the output of the linear model (fit by ARD) and the training data. The kernel employed was Gaussian with a half-width of 0.05 spike/bin; performance at this width was at least as good as that obtained by selecting widths individually for each recording by leave-one-out cross-validation. Cross-validation for lower estimates on model predictive power used 10 disjoint splits into 9/10 training data and 1/10 test data. Extrapolation of the predictive powers in the population, shown in Figs. 2 and 3, was performed using polynomial fits. The degree of the polynomial, determined by leave-one-out cross-validation, was quadratic for the lower estimates in Fig. 3 and linear in all other cases. 5 Results We used the techniques described above to ask how accurate a description of auditory cortex responses could be provided by the STRF. Recordings were binned to match the discretization rate of the stimulus and the signal power estimated using equation (1). Fig. 1 shows the distribution of signal powers obtained, as a scatter plot against the estimated noise power and as a histogram. The error bars indicate standard error intervals based on the estimated variances obtained from equation (2). A total of 92 recordings in the data set (42 from mouse, 50 from rat), shown by filled circles and histogram bars in Fig. 1, had signal power greater than one standard error above zero. The subsequent analysis was confined to these stimulus-responsive recordings. For each such recording we estimated an STRF model by minimum-MSE linear regression, which is equivalent to obtaining the Wiener kernel for the time-series. The training predictive power of this model provided the upper estimate for the predictive power of the model class. The minimum-MSE solution generalizes poorly, and so generates overly pessimistic lower estimates in cross-validation. 
However, the linear regression literature provides alternative parameter estimation techniques with improved generalization ability. In particular, we used a Bayesian hyperparameter optimization technique known as automatic relevance determination (ARD) [16] to find an optimized prior on the regression parameters, and then chose parameters which optimized the posterior distribution under this prior and the training data (this and other similar techniques are discussed in Sahani and Linden, "Evidence Optimization Techniques for Estimating Stimulus-Response Functions", this volume). The cross-validation predictive power of these estimates served as the lower estimate of the model class performance. Fig. 2 shows the upper and lower estimates for the predictive power of the class of linear STRF models in our population of rodent auditory cortex recordings, as a function of the estimated noise level in each recording. The divergence of the estimates at higher noise levels, described above, is evident. At low noise levels the estimates do not converge perfectly: extrapolated to zero noise, the upper estimate is approximately 0.40 of the normalized power and the lower approximately 0.18 (each with a standard-error interval). This gap is indicative of an SRF model class that is insufficiently powerful to capture the true stimulus-response relationship; even if noise were absent, the trained model from the class would only be able to approximate the true SRF in the region of the finite amount of data used for training, and so would perform better on those training data than on test data drawn from outside that region. Fig. 3 shows the same estimates for simulations derived from linear fits to the cortical data. Simulated data were produced by generating Poisson spike trains with mean rates as predicted by the ARD-estimated models for real cortical recordings, rectified so that negative predictions were treated as zero.
Simulated spike trains were then binned and analyzed in the same manner as real spike trains. Since the simulated data are spectrogram-linear by construction, apart from the rectification, we expect the estimates to converge to a value very close to 1 with little separation. This result is evident in Fig. 3. Thus, the analysis correctly reports that virtually all of the response power in these simulations is linearly predictable from the stimulus spectrogram, attesting to the reliability of the extrapolated estimates for the real data in Fig. 2.

Figure 2: Evaluation of STRF predictive power in auditory cortex (normalized linearly predictable power against normalized noise power).

Figure 3: Evaluation of linearity in simulated data (normalized linearly predictable power against normalized noise power).

Some portion of the scatter of the points about the population average lines in Fig. 2 reflects genuine variability in the population, and so the extrapolated scatter at zero noise is also of interest. Intervals containing at least 50% of the population distribution at zero noise were computed for both the upper and lower estimates (assuming normal scatter). These will overestimate the spread of the underlying population distribution because of additional scatter from estimation noise. The variability of STRF predictive power in the population appears unimodal, and the hypothesis that the distributions of the deviations from the regression lines are zero-mean normal cannot be rejected in either case (Kolmogorov-Smirnov test). Thus the treatment of these recordings as coming from a single homogeneous population is reasonable. In Fig.
3, there is a small amount of downward bias and population scatter due to the varying amounts of rectification in the simulations; however, most of the observed scatter is due to estimation error resulting from the incorporation of Poisson noise. The linear model is not constrained to predict non-negative firing rates. To test whether including a static output non-linearity could improve predictions, we also fit models in which the prediction from the ARD-derived STRF estimates was transformed time-point by time-point by a non-parametric non-linearity (see Experimental Methods) to obtain a new firing rate prediction. The resulting cross-validation predictive powers were compared to those of the spectrogram-linear model (data not shown). The addition of a static output nonlinearity contributed very little to the predictive power of the STRF model class. Although the difference in model performance was significant ( # #$# , Wilcoxon signed rank test), the mean normalized predictive power increase with the addition of a static output non-linearity was very small (0.031). 6 Conclusions We have demonstrated a novel way to evaluate the fraction of response power in a population of neurons that can be captured by a particular class of SRF models. The confounding effects of noise on evaluation of model performance and estimation of model parameters are overcome by two key analytic steps. First, multiple measurements of neural responses to the same stimulus are used to obtain an unbiased estimate of the fraction of the response variance that is predictable in principle, against which the predictive power of a model may be judged. 
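The first analytic step, estimating the stimulus-locked (signal) and trial-varying (noise) components of response power from repeated presentations, can be sketched in a few lines of numpy. The function name and the synthetic responses are illustrative assumptions, not the paper's code; the formula corrects the power of the trial-averaged response for its residual noise contribution.

```python
import numpy as np

def signal_noise_power(responses):
    """Estimate signal and noise power from an (R trials x T bins) array
    of responses to repeated presentations of one stimulus. The power of
    the trial-averaged response overestimates the signal power by
    (noise power)/R; the formula below removes that bias."""
    R, T = responses.shape
    total_power = responses.var(axis=1, ddof=0).mean()  # mean per-trial power
    mean_power = responses.mean(axis=0).var(ddof=0)     # power of trial average
    signal = (R * mean_power - total_power) / (R - 1)
    noise = total_power - signal
    return signal, noise

rng = np.random.default_rng(0)
true_rate = np.sin(np.linspace(0, 6 * np.pi, 200))      # underlying signal
trials = true_rate + 0.5 * rng.standard_normal((50, 200))
sig, noi = signal_noise_power(trials)
```

On this synthetic example the estimates recover the variance of the sinusoid (about 0.5) and of the added noise (0.25), independent of the number of trials.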
Second, Bayesian regression techniques are employed to lessen the effects of noise on linear model estimation, and the remaining noise-related bias is eliminated by exploiting the noise dependence of parameter-estimation-induced errors in the predictive power to extrapolate model performance for a population of similar recordings to the zero-noise point. This technique might find broad applicability to regression problems in neuroscience and elsewhere, provided certain essential features of the data considered here are shared: repeated measurements must be made at the same input values in order to estimate the signal power; both inputs and repetitions must be numerous enough for the signal power estimate, which appears in the denominator of the normalized powers, to be well-conditioned; and finally, we must have a group of different regression problems, with different normalized noise powers, that might be expected to instantiate the same underlying model class. Data with these features are commonly encountered in sensory neuroscience, where the sensory stimulus can be reliably repeated. The outputs modelled may be spike trains (as in the present study) or intracellular recordings; local-field, evoked-potential, or optical recordings; or even fMRI measurements. Applying this technique to analysis of the primary auditory cortex, we find that spectrogram-linear response components can account for only 18% to 40% (on average) of the power in extracellular responses to dynamic random chord stimuli. Further, elaborated models that append a static output non-linearity to the linear filter are barely more effective at predicting responses to novel stimuli than is the linear model class alone. Previous studies of auditory cortex have reached widely varying conclusions regarding the degree of linearity of neural responses.
Such discrepancies may indicate that response properties are critically dependent on the statistics of the stimulus ensemble [6, 5, 10], or that cortical response linearity differs between species. Alternatively, as previous measures of linearity have been biased by noise, the divergent estimates might also have arisen from variation in the level of noise power across studies. Our approach represents the first evaluation of auditory cortex response predictability that is free of this potential noise confound. The high degree of response non-linearity we observe may well be a characteristic of all auditory cortical responses, given the many known non-linearities in the peripheral and central auditory systems [17]. Alternatively, it might be unique to auditory cortex responses to noisy sounds like dynamic random chord stimuli, or else may be general to all stimulus ensembles and all sensory cortices. Current and future work will need to be directed toward measurement of auditory cortical response linearity using different stimulus ensembles and in different species, and toward development of non-linear classes of models that predict auditory cortex responses more accurately than spectrogram-linear models.

References

[1] Aertsen, A. M. H. J., Johannesma, P. I. M., & Hermes, D. J. (1980) Biol Cybern 38, 235-248.
[2] Eggermont, J. J., Johannesma, P. M., & Aertsen, A. M. (1983) Q Rev Biophys 16, 341-414.
[3] Kowalski, N., Depireux, D. A., & Shamma, S. A. (1996) J Neurophysiol 76, 3524-3534.
[4] Shamma, S. A. & Versnel, H. (1995) Aud Neurosci 1, 255-270.
[5] Nelken, I., Rotman, Y., & Yosef, O. B. (1999) Nature 397, 154-157.
[6] Rotman, Y., Bar-Yosef, O., & Nelken, I. (2001) Hear Res 152, 110-127.
[7] Nelken, I., Prut, Y., Vaadia, E., & Abeles, M. (1994) Hear Res 72, 206-222.
[8] Calhoun, B. M. & Schreiner, C. E. (1998) Eur J Neurosci 10, 926-940.
[9] Eggermont, J. J., Aertsen, A. M., & Johannesma, P. I. (1983) Hear Res 10, 167-190.
[10] Theunissen, F. E., Sen, K., & Doupe, A. J. (2000) J.
Neurosci. 20, 2315-2331.
[11] Nelken, I., Prut, Y., Vaadia, E., & Abeles, M. (1994) Hear Res 72, 223-236.
[12] Lindgren, B. W. (1993) Statistical Theory. (Chapman & Hall), 4th edition. ISBN 0412041812.
[13] Lewicki, M. S. (1994) Neural Comp 6, 1005-1030.
[14] Sahani, M. (1999) Ph.D. thesis (California Institute of Technology, Pasadena, California).
[15] deCharms, R. C., Blake, D. T., & Merzenich, M. M. (1998) Science 280, 1439-1443.
[16] MacKay, D. J. C. (1994) ASHRAE Transactions 100, 1053-1062.
[17] Popper, A. & Fay, R., eds. (1992) The Mammalian Auditory Pathway: Neurophysiology. (Springer, New York).
2002
Incremental Gaussian Processes. Joaquin Quiñonero-Candela, Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark, jqc@imm.dtu.dk. Ole Winther, Informatics and Mathematical Modelling, Technical University of Denmark, DK-2800 Lyngby, Denmark, owi@imm.dtu.dk. Abstract In this paper, we consider Tipping's relevance vector machine (RVM) [1] and formalize an incremental training strategy as a variant of the expectation-maximization (EM) algorithm that we call Subspace EM (SSEM). Working with a subset of active basis functions, the sparsity of the RVM solution ensures that the number of basis functions, and thereby the computational complexity, is kept low. We also introduce a mean field approach to the intractable classification model that is expected to give a very good approximation to exact Bayesian inference and contains the Laplace approximation as a special case. We test the algorithms on two large data sets with O(10^3-10^4) examples. The results indicate that Bayesian learning of large data sets, e.g. the MNIST database, is realistic. 1 Introduction Tipping's relevance vector machine (RVM) achieves both a sparse solution, like the support vector machine (SVM) [2, 3], and probabilistic predictions, like Bayesian kernel machines based upon Gaussian process (GP) priors over functions [4, 5, 6, 7, 8]. Sparsity is interesting both with respect to fast training and prediction and with respect to ease of interpretation of the solution. Probabilistic predictions are desirable because inference is most naturally formulated in terms of probability theory: we can manipulate probabilities through Bayes' theorem, reject uncertain predictions, etc. It seems that Tipping's relevance vector machine takes the best of both worlds.
It is a GP with a covariance matrix spanned by a small number of basis functions, which reduces the computationally expensive matrix inversion from O(N^3), where N is the number of training examples, to O(M^2 N), where M is the number of basis functions. Simulation studies have shown very sparse solutions, M ≪ N, and good test performance [1]. However, starting the RVM learning with as many basis functions as examples, i.e. one basis function at each training input point, leads to the same complexity as for Gaussian processes (GPs), since in the initial step no basis functions have been removed. This led Tipping to suggest, in an appendix of Ref. [1], an incremental learning strategy that starts with only a single basis function and adds basis functions over the iterations, and to formalize it very recently [9]. The total number of basis functions is kept low because basis functions are also removed. In this paper we formalize this strategy using straightforward expectation-maximization (EM) [10] arguments to prove that the scheme is guaranteed to converge to a local maximum of the likelihood of the model parameters. Reducing the computational burden of Bayesian kernel learning is a subject of current interest. This can be achieved by numerical approximations to matrix inversion [11] and by suboptimal projections onto finite subspaces of basis functions without having an explicit parametric form of such basis functions [12, 13]. Using mixtures of GPs [14, 15] to make the kernel function input-dependent is also a promising technique. None of the Bayesian methods can currently compete in terms of speed with the efficient SVM optimization schemes that have been developed; see e.g. [3]. The rest of the paper is organized as follows: in section 2 we present the extended linear models in a Bayesian perspective, the regression model, and the standard EM approach.
In section 3, a variation of the EM algorithm that we call the Subspace EM (SSEM) is introduced that works well with sparse solution models. In section 4 we present the second main contribution of the paper: a mean field approach to RVM classification. Section 5 gives results for the Mackey-Glass time series and preliminary results on the MNIST handwritten digit database. We conclude in section 6.

2 Regression

An extended linear model is built by transforming the input space by an arbitrary set of basis functions φ_j : R^D → R that perform a non-linear transformation of the D-dimensional input space. A linear model is applied to the transformed space, whose dimension is equal to the number of basis functions M:

y(x_i) = Σ_{j=1}^{M} ω_j φ_j(x_i) = Φ(x_i) · ω    (1)

where Φ(x_i) ≡ [φ_1(x_i), ..., φ_M(x_i)] denotes the i-th row of the design matrix Φ and ω = (ω_1, ..., ω_N)^T is the weight vector. The output of the model is thus a linear superposition of completely general basis functions. While it is possible to optimize the parameters of the basis functions for the problem at hand [1, 16], in this paper we assume that they are given. The simplest possible regression learning scenario can be described as follows: a set of N input-target training pairs {x_i, t_i}_{i=1}^{N} are assumed to be independent and contaminated with Gaussian noise of variance σ^2. The likelihood of the parameters ω is given by

p(t|ω, σ^2) = (2πσ^2)^{-N/2} exp( -‖t - Φω‖^2 / (2σ^2) )    (2)

where t = (t_1, ..., t_N)^T is the target vector. Regularization is introduced in Bayesian learning by means of a prior distribution over the weights. In general, the implied prior over functions is a very complicated distribution. However, for a Gaussian prior on the weights the prior over functions also becomes Gaussian, i.e. a Gaussian process. For the specific choice of a factorized distribution with variances α_j^{-1},

p(ω_j|α_j) = sqrt(α_j / (2π)) exp( -α_j ω_j^2 / 2 )    (3)

the prior over functions p(y|α) is N(0, Φ A^{-1} Φ^T), i.e.
a Gaussian process with covariance function given by

Cov(x_i, x_j) = Σ_{k=1}^{M} (1/α_k) φ_k(x_i) φ_k(x_j)    (4)

where α = (α_0, ..., α_N)^T and A = diag(α_0, ..., α_N). We can now see how sparseness in terms of the basis vectors may arise: if α_k^{-1} = 0, the k-th basis vector Φ_k ≡ [φ_k(x_1), ..., φ_k(x_N)]^T, i.e. the k-th column of the design matrix, will not contribute to the model. Associating a basis function with each input point may thus lead to a model with a sparse representation in the inputs, i.e. a solution spanned by only a subset of all input points. This is exactly the idea behind the relevance vector machine, introduced by Tipping [17]. We will see in the following how this also leads to a lower computational complexity than using a regular Gaussian process kernel. The posterior distribution over the weights, obtained through Bayes' rule, is a Gaussian distribution

p(ω|t, α, σ^2) = p(t|ω, σ^2) p(ω|α) / p(t|α, σ^2) = N(ω|µ, Σ)    (5)

where N(t|µ, Σ) is a Gaussian distribution with mean µ and covariance Σ evaluated at t. The mean and covariance are given by

µ = σ^{-2} Σ Φ^T t    (6)

Σ = (σ^{-2} Φ^T Φ + A)^{-1}    (7)

The uncertainty about the optimal value of the weights captured by the posterior distribution (5) can be used to build probabilistic predictions. Given a new input x*, the model gives a Gaussian predictive distribution over the corresponding target t*:

p(t*|x*, α, σ^2) = ∫ p(t*|x*, ω, σ^2) p(ω|t, α, σ^2) dω = N(t*|y*, σ*^2)    (8)

where

y* = Φ(x*) · µ    (9)

σ*^2 = σ^2 + Φ(x*) · Σ · Φ(x*)^T    (10)

For regression it is natural to use y* and σ* as the prediction and the error bar on the prediction, respectively. The computational complexity of making predictions is thus O(M^2 P + M^3 + M^2 N), where M is the number of selected basis functions (RVs) and P is the number of predictions. The two last terms come from the computation of Σ in eq. (7).
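Eqs. (6), (7), (9) and (10) translate directly into numpy; the Gaussian basis functions, the synthetic data, and the hyperparameter values below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-4, 4, 80)                       # 1-D training inputs
t = np.exp(-0.5 * X**2) + 0.05 * rng.standard_normal(80)

centers = X                                       # one basis per input (RVM-style)
Phi = np.exp(-0.5 * (X[:, None] - centers[None, :])**2)  # Gaussian basis, width 1

sigma2 = 0.05**2                                  # assumed noise variance
alpha = np.ones(Phi.shape[1])                     # prior precisions
A = np.diag(alpha)

# Posterior over weights, eqs. (6)-(7)
Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + A)
mu = Sigma @ Phi.T @ t / sigma2

# Predictive mean and variance at a new input, eqs. (9)-(10)
x_star = 0.5
phi_star = np.exp(-0.5 * (x_star - centers)**2)
y_star = phi_star @ mu
var_star = sigma2 + phi_star @ Sigma @ phi_star
```

Since the target is itself a Gaussian bump, the predictive mean lands close to the true value exp(-0.125), and the predictive variance is never smaller than the noise floor σ².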
The likelihood distribution over the training targets (2) can be marginalized with respect to the weights to obtain the marginal likelihood, which is also a Gaussian distribution:

p(t|α, σ^2) = ∫ p(t|ω, σ^2) p(ω|α) dω = N(t|0, σ^2 I + Φ A^{-1} Φ^T)    (11)

Estimating the hyperparameters {α_j} and the noise σ^2 can be achieved by maximizing (11). This is naturally carried out in the framework of the expectation-maximization (EM) algorithm, since the sufficient statistics of the weights (which act as hidden variables) are available for this type of model. In other cases, e.g. for adapting the length scale of the kernel [4], gradient methods have to be used. For regression, the E-step is exact (the lower bound on the marginal likelihood is made equal to the marginal likelihood) and consists of estimating the mean (6) and covariance (7) of the posterior distribution of the weights (5). For classification, the E-step will be approximate; in this paper we present a mean field approach for obtaining the sufficient statistics. The M-step corresponds to maximizing the expectation of the log marginal likelihood under the posterior, with respect to σ^2 and α, which gives the following update rules:

α_j^new = 1 / ⟨ω_j^2⟩_{p(ω|t,α,σ^2)} = 1 / (µ_j^2 + Σ_jj)

(σ^2)^new = (1/N) ( ‖t - Φµ‖^2 + (σ^2)^old Σ_j γ_j )

where the quantity γ_j ≡ 1 - α_j Σ_jj is a measure of how "well-determined" each weight ω_j is by the data [18, 1]. One can obtain a different update rule that gives faster convergence. Although it is suboptimal in the EM sense, we have never observed it decrease the lower bound on the marginal log-likelihood. The rule, derived in [1], is obtained by differentiation of (11) and an arbitrary choice of independent terms, as is done in [18]. It makes use of the terms {γ_j}:

α_j^new = γ_j / µ_j^2,    (σ^2)^new = ‖t - Φµ‖^2 / (N - Σ_j γ_j)    (12)

In the optimization process many α_j grow to infinity, which effectively deletes the corresponding weight and basis function.
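The α update of eq. (12), iterated together with the posterior computation and pruning of diverging α's, is the core of RVM training. A minimal sketch follows; the data and basis functions are illustrative, and the noise variance is held fixed at its true value for simplicity rather than updated by eq. (12).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60
X = np.linspace(-3, 3, N)
t = np.tanh(X) + 0.05 * rng.standard_normal(N)

# One Gaussian basis function per training input (RVM-style design matrix).
Phi = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 0.7) ** 2)

sigma2 = 0.05 ** 2          # noise variance held fixed for simplicity
alpha = np.ones(N)          # prior precisions
PRUNE = 1e5                 # alphas above this are treated as infinite
active = np.arange(N)

for _ in range(300):
    active = active[alpha[active] < PRUNE]       # delete pruned basis functions
    Phi_a = Phi[:, active]
    Sigma = np.linalg.inv(Phi_a.T @ Phi_a / sigma2 + np.diag(alpha[active]))
    mu = Sigma @ Phi_a.T @ t / sigma2            # posterior mean, eq. (6)
    gamma = 1.0 - alpha[active] * np.diag(Sigma) # well-determinedness of weights
    alpha[active] = gamma / (mu ** 2 + 1e-12)    # MacKay update, eq. (12)

# Final refit on the surviving basis functions.
active = active[alpha[active] < PRUNE]
Phi_a = Phi[:, active]
Sigma = np.linalg.inv(Phi_a.T @ Phi_a / sigma2 + np.diag(alpha[active]))
mu = Sigma @ Phi_a.T @ t / sigma2
pred = Phi_a @ mu
n_rvs = active.size
```

Most α's diverge and their basis functions are deleted, leaving a sparse set of relevance vectors that still fits the underlying function well.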
Note that the EM update and the MacKay update for α_j depend upon the likelihood only implicitly. This means that they are also valid for the classification model we consider below. A serious limitation of the EM algorithm and its variants for problems of this type is that the complexity of computing the covariance of the weights (7) in the E-step is O(M^3 + M^2 N). At least in the first iteration, where no basis functions have been deleted, M = N and we face the same kind of complexity explosion that limits the applicability of Gaussian processes to large training sets. This led Tipping [1] to consider a constructive, or incremental, training paradigm in which one basis function is added before each E-step; since basis functions are removed in the M-step, in practice the total number of basis functions and the complexity remain low [9]. In the following section we introduce a new algorithm that formalizes this procedure and can be proven to increase the marginal likelihood in each step.

3 Subspace EM

We introduce an incremental approach to the EM algorithm, the Subspace EM (SSEM), that can be directly applied to training models like the RVM that rely on a linear superposition of completely general basis functions, both for classification and for regression. Instead of starting with a full model, i.e. with all basis functions present with finite α values, we start with a fully pruned model with all α_j set to infinity; effectively, we start with no model. The model is grown by iteratively moving some α_j previously set to infinity into the active set of α's. The active set at iteration n, R_n, contains the indices of the basis vectors with α less than infinity:

R_1 = {1}
R_n = {i | i ∈ R_{n-1} ∧ α_i ≤ L} ∪ {n}    (13)

where L is an arbitrarily defined, very large finite number. Observe that R_n contains at most one more element (index) than R_{n-1}. If some of the α's indexed by R_{n-1} happen to reach L at the n-th step, R_n can contain fewer elements than R_{n-1}.
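A toy trace of the active-set recursion in eq. (13): each step adds index n, and indices whose α has grown past L are dropped. The α values below are made up to show both growth and pruning.

```python
# Toy illustration of eq. (13). Indices 1 and 3 have alphas that
# (we pretend) diverged during earlier M-steps, so they get pruned
# the next time the active set is updated.
L = 1e6
alpha = [2.0, 1e9, 5.0, 1e9, 3.0, 1.0]   # pretend values after some M-steps
R = set()
history = []
for n in range(len(alpha)):
    R = {i for i in R if alpha[i] <= L} | {n}   # eq. (13)
    history.append(sorted(R))
```

After visiting all six basis functions, only the indices with finite (below-threshold) α remain active, plus the most recently added index.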
In figure 1 we give a schematic description of the SSEM algorithm. At iteration n the E-step is taken only in the subspace spanned by the weights whose indices are in R_n. This reduces the computational complexity of the M-step to O(M^3), where M is the number of relevance vectors. Since the initial value of α_j is infinity for all j, for regression the E-step always yields an equality between the log marginal likelihood and its lower bound. At any step n, the posterior can be exactly projected onto the space spanned by the weights ω_j with j ∈ R_n, because α_k = ∞ for all k not in R_n. Hence, in the regression case, the SSEM never decreases the log marginal likelihood. Figure 2 illustrates the convergence of the SSEM algorithm compared to that of the EM algorithm for regression.

1. Set α_j = L for all j. (L is a very large number.) Set n = 1.
2. Update the set of active indices R_n.
3. Perform an E-step in the subspace of the ω_j with j ∈ R_n.
4. Perform the M-step for all α_j with j ∈ R_n.
5. If all basis functions have been visited, end; else go to 2.

Figure 1: Schematics of the SSEM algorithm.

Figure 2: Training on 400 samples of the Mackey-Glass time series, testing on 2000 cases. Log marginal likelihood as a function of elapsed CPU time (left) and corresponding number of relevance vectors (right) for both SSEM and standard EM.

We perform one EM step each time a new basis function is added to the active set. Once all the examples have been visited, we switch to the batch EM algorithm on the active set until some convergence criterion is satisfied, for example until the relative increase in the likelihood falls below a certain threshold.
In practice some 50 of these batch EM iterations are enough.

4 Classification

Unlike the model discussed above, analytical inference is not possible for classification models. Here we discuss the adaptive TAP mean field approach, initially proposed for Gaussian processes [8], which is readily translated to RVMs. The mean field approach has the appealing features that it retains the computational efficiency of RVMs, is exact for regression, and reduces to the Laplace approximation in the limit where all the variability comes from the prior distribution. We consider binary t = ±1 classification using the probit likelihood with 'input' noise σ^2:

p(t|y(x)) = erf( t y(x) / σ )    (14)

where Dz ≡ e^{-z^2/2} dz / sqrt(2π) and erf(x) ≡ ∫_{-∞}^{x} Dz is an error function (i.e. a cumulative Gaussian distribution). The advantage of using this sigmoid rather than the commonly used 0/1 logistic is that, under the mean field approximation, we can derive an analytical expression for the predictive distribution p(t*|x*, t) = ∫ p(t*|y) p(y|x*, t) dy needed for making Bayesian predictions. Both a variational approach and the advanced mean field approach used here make a Gaussian approximation for p(y|x*, t) [8], with mean given by the regression result y* and variance σ*^2 - σ̂^2, where y* and σ*^2 are given by eqs. (9) and (10). This leads to the following approximation for the predictive distribution:

p(t*|x*, t) = ∫ erf( t* y / σ ) p(y|x*, t) dy = erf( t* y* / σ* )    (15)

However, the mean and covariance of the weights are no longer found by analytical expressions; they have to be obtained from a set of non-linear mean field equations, which also follow from equivalent assumptions of Gaussianity for the training set outputs y(x_i) in averages over reduced (or cavity) posterior averages. In the following, we only state the results that follow from combining the RVM Gaussian process kernel (4) with the results of [8].
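With erf denoting the cumulative Gaussian, the predictive probability of eq. (15) is a one-liner on top of the regression outputs y* and σ*^2; a sketch using only the standard library (the numerical inputs are arbitrary illustrations):

```python
import math

def cum_gauss(x):
    """Cumulative Gaussian, the 'erf' of eq. (14):
    integral of the standard normal density up to x."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def predictive_class_prob(t_star, y_star, var_star):
    """Eq. (15): p(t*|x*, t) = erf(t* y* / sigma*), with var_star the
    predictive variance of eq. (10) (which already includes sigma^2)."""
    return cum_gauss(t_star * y_star / math.sqrt(var_star))

p_plus = predictive_class_prob(+1, 1.2, 1.0)    # arbitrary y* and sigma*^2
p_minus = predictive_class_prob(-1, 1.2, 1.0)
```

By construction the two class probabilities sum to one, and a larger predictive variance pulls the probability toward 1/2, i.e. toward rejecting uncertain predictions.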
The sufficient statistics of the weights are written in terms of a set of O(N) mean field parameters:

µ = A^{-1} Φ^T τ    (16)

Σ = (A + Φ^T Ω Φ)^{-1}    (17)

where τ_i ≡ ∂/∂y_i^c ln Z(y_i^c, V_i^c + σ^2) and

Z(y_i^c, V_i^c + σ^2) ≡ ∫ p(t_i | y_i^c + z sqrt(V_i^c + σ^2)) Dz = erf( t_i y_i^c / sqrt(V_i^c + σ^2) )    (18)

The last equality holds for the likelihood of eq. (14); y_i^c and V_i^c are the mean and variance of the so-called cavity field. The mean value is y_i^c = Φ(x_i) · µ - V_i^c τ_i. The distinction between the different approximation schemes lies solely in the variance V_i^c: V_i^c = 0 gives the Laplace approximation, V_i^c = (Φ A^{-1} Φ^T)_{ii} the so-called naive mean field theory, and an improved estimate is available from the adaptive TAP mean field theory [8]. Lastly, the diagonal matrix Ω is the equivalent of the noise variance in the regression model (compare eqs. (17) and (7)) and is given by Ω_i = -(∂τ_i/∂y_i^c) / (1 + V_i^c ∂τ_i/∂y_i^c). This set of non-linear equations is readily solved (i.e. fast and stable) by making Newton-Raphson updates in µ, treating the remaining quantities as auxiliary variables:

∆µ = (I + A^{-1} Φ^T Ω Φ)^{-1} (A^{-1} Φ^T τ - µ) = Σ (Φ^T τ - A µ)    (19)

The computational complexity of the E-step for classification is larger than in the regression case because it is necessary to construct and invert an M × M matrix many times (typically 20), once for each step of the iterative Newton method.

5 Simulations

We illustrate the performance of the SSEM for regression on the Mackey-Glass chaotic time series, which is well known for its strong non-linearity. In [16] we showed that the RVM has an order of magnitude superior performance over carefully tuned neural networks for time series prediction on the Mackey-Glass series. The inputs are formed by L = 16 samples spaced 6 periods apart, x_k = [z(k-6), z(k-12), ..., z(k-6L)], and the targets are chosen as t_k = z(k), to perform six-steps-ahead prediction (see [19] for details).
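The two forms of the Newton-Raphson step in eq. (19) are algebraically identical, which is easy to verify numerically; the random matrices below merely stand in for a real model state.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 30, 10
Phi = rng.standard_normal((N, M))           # design matrix
A = np.diag(rng.uniform(0.5, 2.0, M))       # prior precisions
Omega = np.diag(rng.uniform(0.1, 1.0, N))   # effective noise terms
tau = rng.standard_normal(N)                # mean field parameters
mu = rng.standard_normal(M)                 # current weight mean

A_inv = np.linalg.inv(A)
Sigma = np.linalg.inv(A + Phi.T @ Omega @ Phi)          # eq. (17)

step1 = np.linalg.solve(np.eye(M) + A_inv @ Phi.T @ Omega @ Phi,
                        A_inv @ Phi.T @ tau - mu)        # first form of eq. (19)
step2 = Sigma @ (Phi.T @ tau - A @ mu)                   # second form
```

The equivalence follows from factoring A^{-1} out of both the matrix being inverted and the residual, leaving exactly Σ of eq. (17) applied to (Φ^T τ - A µ).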
We use Gaussian basis functions of fixed variance ν^2 = 10. The test set comprises 5804 examples. We perform prediction experiments for different sizes of the training set, in each case with 10 repetitions over different partitions of the data into training and test sets. We compare the test error, the number of RVs selected, and the computer time needed for the batch and the SSEM methods. Figure 3 presents the results obtained with the growth method relative to the results obtained with the batch method.

Figure 3: Left: Regression; mean values over 10 repetitions of relative test error, number of RVs, and computer time for the Mackey-Glass data, up to 2400 training examples and 5804 test examples. Right: Classification; log marginal likelihood, test and training error probabilities while training on one class against all the others, 60000 training and 10000 test examples.

As expected, the computer time of the growth method relative to the batch method decreases with the size of the training set. For a few thousand examples the SSEM method is an order of magnitude faster than the batch method. The batch method proved to be faster only for 100 training examples, and could not be used with data sets of thousands of examples on the machine on which we ran the experiments because of its high memory requirements. This is the reason why we ran the comparison only up to 2400 training examples for the Mackey-Glass data set. Our experiments for classification are, at the time of sending this paper to press, very preliminary: we chose a very large data set, the MNIST database of handwritten digits [20], with 60000 training and 10000 test images. The images are of size 28 × 28 pixels.
We use PCA to project them down to 16-dimensional vectors. We perform only a preliminary experiment, a one-against-all binary classification problem, to illustrate that Bayesian approaches to classification can be used on very large data sets with the SSEM algorithm. We train on 13484 examples (the 6742 ones and another 6742 non-one digits selected at random from the rest) and use 800 basis functions for both the batch EM and the Subspace EM. In figure 3 we show the convergence of the SSEM in terms of the log marginal likelihood and the training and test probabilities of error. The test probability of error we obtain is 0.74 percent with the SSEM algorithm and 0.66 percent with the batch EM. Under the same conditions the SSEM needed 55 minutes to do the job, while the batch EM needed 186 minutes. The SSEM produces a machine with 28 basis functions and the batch EM one with 31 basis functions.

6 Conclusion

We have presented a new approach to Bayesian training of linear models, based on a subspace extension of the EM algorithm that we call Subspace EM (SSEM). The new method iteratively builds models from a potentially large library of basis functions. It is especially well suited to models constructed so as to yield a sparse solution, i.e. a solution spanned by a small number M of basis functions, with M much smaller than N, the number of examples. A prime example of this is Tipping's relevance vector machine, which typically produces solutions that are sparser than those of support vector machines. With the SSEM algorithm the computational complexity and memory requirements decrease from O(N^3) and O(N^2) to O(M^2 N) (somewhat higher for classification) and O(NM). For classification, we have presented a mean field approach that is expected to be a very good approximation to exact inference and contains the widely used Laplace approximation as an extreme case.
We have applied the SSEM algorithm to a large regression data set and a large classification data set. Although the latter results are preliminary, we believe they demonstrate that Bayesian learning is possible for very large data sets. Similar methods should also be applicable beyond supervised learning.

Acknowledgments

JQC is funded by the EU Multi-Agent Control Research Training Network, EC TMR grant HPRN-CT-1999-00107. We thank Lars Kai Hansen for very useful discussions.

References

[1] Michael E. Tipping, "Sparse Bayesian learning and the relevance vector machine," Journal of Machine Learning Research, vol. 1, pp. 211-244, 2001.
[2] Vladimir N. Vapnik, Statistical Learning Theory, Wiley, New York, 1998.
[3] Bernhard Schölkopf and Alex J. Smola, Learning with Kernels, MIT Press, Cambridge, 2002.
[4] Carl E. Rasmussen, Evaluation of Gaussian Processes and Other Methods for Non-linear Regression, Ph.D. thesis, Dept. of Computer Science, University of Toronto, 1996.
[5] Chris K. I. Williams and Carl E. Rasmussen, "Gaussian processes for regression," in Advances in Neural Information Processing Systems, 1996, number 8, pp. 514-520.
[6] D. J. C. MacKay, "Gaussian processes: A replacement for supervised neural networks?," Tech. Rep., Cavendish Laboratory, Cambridge University, 1997. Notes for a tutorial at NIPS 1997.
[7] Radford M. Neal, Bayesian Learning for Neural Networks, Springer, New York, 1996.
[8] Manfred Opper and Ole Winther, "Gaussian processes for classification: Mean field algorithms," Neural Computation, vol. 12, pp. 2655-2684, 2000.
[9] Michael Tipping and Anita Faul, "Fast marginal likelihood maximisation for sparse Bayesian models," in International Workshop on Artificial Intelligence and Statistics, 2003.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. R. Statist. Soc. B, vol. 39, pp. 1-38, 1977.
[11] Chris Williams and Matthias Seeger, "Using the Nyström method to speed up kernel machines," in Advances in Neural Information Processing Systems, 2001, number 13, pp. 682-688.
[12] Alex J. Smola and Peter L. Bartlett, "Sparse greedy Gaussian process regression," in Advances in Neural Information Processing Systems, 2001, number 13, pp. 619-625.
[13] Lehel Csató and Manfred Opper, "Sparse representation for Gaussian process models," in Advances in Neural Information Processing Systems, 2001, number 13, pp. 444-450.
[14] Volker Tresp, "Mixtures of Gaussian processes," in Advances in Neural Information Processing Systems, 2000, number 12, pp. 654-660.
[15] Carl E. Rasmussen and Zoubin Ghahramani, "Infinite mixtures of Gaussian process experts," in Advances in Neural Information Processing Systems, 2002, number 14.
[16] Joaquin Quiñonero-Candela and Lars Kai Hansen, "Time series prediction based on the relevance vector machine with adaptive kernels," in International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2002.
[17] Michael E. Tipping, "The relevance vector machine," in Advances in Neural Information Processing Systems, 2000, number 12, pp. 652-658.
[18] David J. C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, no. 3, pp. 415-447, 1992.
[19] Claus Svarer, Lars K. Hansen, Jan Larsen, and Carl E. Rasmussen, "Designer networks for time series processing," in IEEE NNSP Workshop, 1993, pp. 78-87.
[20] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, pp. 2278-2324, 1998.
2002
Fast Transformation-Invariant Factor Analysis. Anitha Kannan, Nebojsa Jojic, Brendan Frey. University of Toronto, Toronto, Canada, {anitha, frey}@psi.utoronto.ca; Microsoft Research, Redmond, WA, USA, jojic@microsoft.com. Abstract Dimensionality reduction techniques such as principal component analysis and factor analysis are used to discover a linear mapping between high-dimensional data samples and points in a lower-dimensional subspace. In [6], Jojic and Frey introduced the mixture of transformation-invariant component analyzers (MTCA), which can account for global transformations such as translations and rotations, perform clustering, and learn local appearance deformations by dimensionality reduction. However, due to the enormous computational requirements of the EM algorithm for learning the model, quadratic in the dimensionality of a data sample, MTCA was not practical for most applications. In this paper, we demonstrate how fast Fourier transforms can reduce the computation to the order of N log N, where N is the dimensionality. With this speedup, we show the effectiveness of MTCA in various applications: tracking, video textures, clustering video sequences, object recognition, and object detection in images. 1 Introduction Dimensionality reduction techniques such as principal component analysis [7] and factor analysis [1] linearly map high-dimensional data samples onto points in a lower-dimensional subspace. In factor analysis, this mapping is defined by the subspace origin and the subspace bases stored in the columns of the factor loading matrix. A mixture of factor analyzers learns to place the data into several learned subspaces. In computer vision, this approach has been widely used in face modeling for learning facial expressions (e.g. [2], [12]). When the variability in the data is due, in part, to small transformations such as translations, scales and rotations, a factor analyzer learns a linearized transformation manifold, which is often sufficient ([4], [11]).
However, for large transformations present in the data, a linear approximation is insufficient. For instance, a factor analyzer trained on a video sequence of a person walking tries to capture a linearized model of large translations (fig. 2a) as opposed to learning local deformations such as the motion of legs and hands (fig. 2c). In [6], it was shown that a discrete hidden transformation variable enables clustering and learning subspaces within clusters, invariant to global transformations. However, experiments were done on images of very low resolution due to the enormous computational cost of the EM algorithm used for learning the model. It is known that the fast Fourier transform (FFT) is very useful in dealing with transformations in images ([3], [13]).

Figure 1: Mixture of transformed component analyzers (MTCA). (a) The generative model, with cluster index c, subspace coordinates, latent image (plus noise), transformation, and generated final image (plus noise). (b) An example of the generative process, where the subspace coordinates and image position are inferred from a captured video sequence.

The main purpose of this work is to show that, under very mild assumptions, we can have an effective implementation of MTCA that reduces the complexity from O(LN²) to O(LN log N), where L is the number of factors, N is the size of the input, and the set of all possible transformations (all discrete shifts) has size N. This means that for 256x256 images, the current implementation will be roughly 4000 times faster. We present experimental results showing the effectiveness of MTCA in various applications: tracking, video textures, clustering video sequences, object recognition and detection.

2 Fast inference and learning in MTCA

Mixture of transformation-invariant component analyzers (MTCA) is used for transformation-invariant clustering and learning a subspace representation within each cluster. The set of transformations to which the model is invariant is specified a priori. Fig. 1a shows the generative model for MTCA. The subspace coordinates form a low-dimensional Gaussian N(0, I) random variable. The cluster index c is a C-valued discrete random variable with a learned prior distribution. The N-dimensional latent image has a mean given by the class mean plus the class-specific factor loading matrix applied to the subspace coordinates, and a class-specific diagonal covariance. An observation is obtained by applying a transformation (drawn from a prior distribution over the transformation set) to the latent image and adding independent Gaussian noise. Fig. 1b illustrates this generative process for a one-class MTCA: the subspace coordinates are used to generate a latent image (without noise), and the horizontal and vertical image positions are used to shift the latent image to obtain the observation. In fact, the subspace coordinates and image positions shown in the figure are inferred from the captured video sequence (see sec. 3). The joint distribution over all variables factorizes, as in [6], into the cluster prior, the prior over subspace coordinates, the Gaussian conditional for the latent image given the class and coordinates, the transformation prior, and the Gaussian conditional for the observed image given the latent image and transformation.
Figure 2: Means and components learned using (a) FA, (b) FA applied to data normalized using a correlation tracker, and (c) transformed component analysis (TCA) applied directly to the data.

Performing inference on the transformation and class requires evaluating their joint probability with the observation (eqn. 1), and the likelihood of an observation is the sum of this joint over all transformations and classes (eqn. 2). The parameters of the model are learned from i.i.d. training examples by maximizing their likelihood using an exact EM algorithm. The only inputs to EM are the training examples, the number of factors, the number of clusters, and the set of all possible transformations. Starting at a random initialization, the EM algorithm for MTCA iterates between an E step, in which it probabilistically fills in the hidden variables by finding their exact posterior, and an M step, in which it updates the parameters. The likelihood of the data (eqn. 2) requires summing over all possible transformations and is very expensive: each of the inference and update equations in [6] has a complexity of O(N²). In this section, we show how these equations can be derived and evaluated in the Fourier domain at a considerably lower computational cost of O(N log N). We focus on inferring the posterior means of the subspace coordinates and of the latent image as examples of efficient computation. Similar mathematical manipulations result in the inferences provided in the appendix.

We assume that the data is represented in a coordinate system in which transformations are discrete shifts with wrap-around. For translations this is a 2D rectangular grid, while for rotations and scales it is shifts in a radial grid (c.f. [3], [13]). We also assume that the post-transformation noise is isotropic, so that the posterior covariances of the latent image and of the subspace coordinates become independent of the transformation. In fact, for isotropic noise it is possible to preset the sensor noise variance (in our experiments we set it to .001); by presetting the sensor noise to a small value, any larger noise actually present in the data is accounted for by the latent image variances.

First, we describe notation that simplifies expressions for transformations corresponding to shifts in input coordinates. Pixel positions are integer vectors in the coordinate system in which the input is measured; for a 2D n×n image, vectors in the input coordinate system, such as the observation and the latent image, are indexed by these integer coordinates, and for diagonal covariance matrices the same coordinate indexes the element corresponding to that pixel.
This notation allows a transformation corresponding to a shift to be represented as a vector in the same coordinate system: shifting the latent image by a shift vector amounts to adding the shift to each pixel coordinate, modulo the image dimensions (wrap-around).

Figure 3: Transformation invariant clustering with and without a subspace model: (a) Parameters of a three-cluster TMG [6], and a three-cluster MTCA. (b) Frames from the video sequence, the corresponding TMG mean, and the object appearance in the corresponding subspace of MTCA. (c) An illustration of the role of the components for the first class: one factor tends to model lighting variation and the other tends to model small out-of-plane rotations.

In the appendix, we show that all expensive operations in inference and learning involve computing a correlation or a convolution. Evaluated for all N shifts, these operations cost only O(N log N) in the frequency domain, while they cost O(N²) in the pixel domain. For notational ease, we refer to individual columns and rows of matrices, write diag(·) for the vector of diagonal elements of a matrix, and use an element-wise product between vectors.

In principal component analysis (PCA), where there is no noise, the data is projected onto the subspace through a projection matrix. Similarly, in MTCA we can derive a class-specific projection matrix that also accounts for the variances, combining the inverse of the noise variances in the input space with the inverse of the noise variances in the projected subspace. The posterior mean of the subspace coordinates for a given observation, class and transformation is obtained by subtracting the latent image mean from the transformation-normalized observation and applying the projection matrix. For each factor, this reduces to a sum over pixels of the shifted observation weighted by the corresponding row of the projection matrix. As this summation, taken over all shifts, is a correlation, it can be efficiently computed for all transformations at the same time in the frequency domain in O(N log N) time.

The inference on the latent image is given by its expected value, which consists of two terms. The first term, dictated by the model parameters and the posterior covariance of the latent image (independent of the observation and the transformation under the isotropic noise assumption), can be easily computed. The second term is a convolution of the observation with the posterior probability map over transformations, and this sum can again be computed efficiently for all transformations at the same time in the frequency domain.

Figure 4: Comparison of FA applied to data normalized for translations using a correlation tracker, and TCA. (a) Frames from the sequence. (b) Shift-normalized frames obtained using the correlation-based tracker, and the reconstruction obtained through the factor analysis model. (c) The corresponding images for the TCA model.

Figure 5: Simulated walk sequence synthesized after training an AR model on the subspace and image motion parameters. The sequence is enlarged for better viewing of translations. The first row contains a few frames from the sequence simulated directly from the model. The second row contains a few frames from the video texture generated by picking the frames in the original sequence for which the recent subspace trajectory was similar to the one generated by the AR model.
!=F < /A ), of TMG and MTCA for various training examples, illustrating better tracking and appearance modelling of MTCA. Figure 6: Clustering faces extracted from a personal database prepared using face detector. (a) Training examples (b) Means, variances and components for two classes learned using MTCA. (c) column contains several photos in which the detector [8] failed to find the face. column contains central 100x100 portion of . column contains central 100x100 portion of - . Modeling a walking person. Fig. 4a. shows three 165x285 frames from a video sequence of a person walking. For effective summarization, we need to learn a compact representation for the dynamically and periodically changing hand and leg movements. A regular PCA or FA will learn a representation that focuses more on learning linearized shifts, and less on the more interesting motion of hands and legs (Fig. 2a.). The traditional approach is to track the object using, for example,a correlation tracker and then learn the subspace model on normalized images. The parameters learned in this fashion are shown in Fig. 2b. Without previously learning a good model, the tracker fails to provide the perfect tracking necessary for precise subspace modelling of limb motion and thus the inferred subspace projection is blurred. (Fig. 2b). As TCA performs tracking and learns appearance model at the same time, not only does it avoids the tracker initialization that plagues the ”tracking first” approaches, but also provides perfectly aligned , ' / and infers a much cleaner projection , ( 7 / . The TCA model can be used to create video textures based on frame reshuffling similar to [10]. However, instead of shuffling frames based directly on pixel similarity, we use the subspace position and image position
generated from an AR process [9], and for each t find the best frame u in the original video /
for which the window , /
, /
' , /
' is the most similar to
' . Then, generated transformation is applied on the normalized image , ' /
. The result is shown in fig. 5b and contains a bit sharper images than the ones simulated directly from the generative model, fig. 5a. We let the simulated walk last longer than in the original sequence letting MTCA live on twice as wide frames. Clustering and face recognition We used a standard face detector [8] to obtain 85 32x32 images of faces of 2 persons, from a personal photo database of a mother and her daughter. In fig. 6a. we present examples from the training set. We learned a MTCA model with 2 classes and 4 factors. To model global lighting variation, we preset one of the factors to be uniform at .01 (see fig. 6b.). This handles linearized version of ambient lighting condition. We also preset another factor to be smoothly varying in brightness (see fig. 6b.) to capture side illumination. The other two components are learned and they model slight appearance deformation such as facial expressions. The model learned to cluster faces in the training example with " accuracy. An interesting application is to use the learned representation of the faces to detect and recognize faces in the original photos. For instance, the face detector did not recognize faces in many photographs (for eg.,fig. 6c), which we were able to using the learned model (fig. 6c). We increased the resolution of model parameters ! .!E ,7! to match the resolution of photos (640x480), padding around the original parameters with uniform mean, zero factors and high variance. Then, we performed inference, inferring most likely class,c, most likely, for that class and , ' /I=< . We also incorporated 3 rotations and 4 scales as possible transformations, in addition to all possible shifts. In fig. 6c , we present three examples which were not in the training set and the face detector we used failed. In all three cases MTCA detected and recognized the face correctly as belonging to class 2. 
4 Conclusion

Mixture of transformation-invariant component analyzers is a technique for modeling visual data that finds the major clusters and major transformations in the data and learns subspace models of appearance. In this paper, we have described how a fast implementation of learning the model is possible through the efficient use of identities from matrix algebra and the use of fast Fourier transforms.

Appendix: EM updates

Before performing inference in the E step, we pre-compute, for all classes, the quantities that are independent of the training examples. For each training example, the transformation-dependent statistics are then evaluated and saved. Computing the posteriors over the transformation and the class c requires evaluating the joint distribution (eqn. 1). To compute this distribution, we require the determinant of the covariance matrix of the observation given the class and transformation, and the Mahalanobis distance between the input and the transformed latent image. With isotropic post-transformation noise, the determinant is independent of the transformation, and expanding the Mahalanobis distance leaves transformation-dependent terms that are all correlations or convolutions. The summation over transformations then takes only O(N) time, once the correlations and convolutions have been computed in O(N log N) time. In the M step, the parameters are updated in closed form from the accumulated posterior statistics, again using only correlations and convolutions evaluated in the frequency domain.

References

[1] Everitt, B.S. An Introduction to Latent Variable Models. Chapman and Hall, New York, NY, 1984.
[2] Frey, B.J., Colmenarez, A. & Huang, T.S. Mixtures of local linear subspaces for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE Computer Society Press: Los Alamitos, CA, 1998.
[3] Frey, B.J. & Jojic, N. Fast, large-scale transformation-invariant clustering. In Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press, 2002.
[4] Ghahramani, Z. & Hinton, G. The EM Algorithm for Mixtures of Factor Analyzers. University of Toronto Technical Report CRG-TR-96-1, 1996.
[5] Hinton, G., Dayan, P. & Revow, M. Modeling the manifolds of images of handwritten digits. IEEE Transactions on Neural Networks, 1997.
[6] Jojic, N. & Frey, B.J. Topographic transformation as a discrete latent variable. In Advances in Neural Information Processing Systems 12. Cambridge, MA: MIT Press, 1999.
[7] Jolliffe, I.T. Principal Component Analysis. Springer-Verlag, New York, NY, 1986.
[8] Li, S.Z., Zhu, L., Zhang, Z.Q. & Zhang, H.J. Learning to Detect Multi-View Faces in Real-Time. In Proceedings of the 2nd International Conference on Development and Learning, June 2002.
[9] Neumaier, A. & Schneider, T. Estimation of parameters and eigenmodes of multivariate autoregressive models. ACM Transactions on Mathematical Software, 2001.
[10] Schödl, A., Szeliski, R., Salesin, D. & Essa, I. Video textures. In Proceedings of SIGGRAPH 2000.
[11] Simard, P., LeCun, Y. & Denker, J. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems, 1993.
[12] Turk, M. & Pentland, A. Face recognition using eigenfaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Maui, Hawaii, 1991.
[13] Wolberg, G. & Zokai, S. Robust image registration using the log-polar transform. In Proceedings of the IEEE International Conference on Image Processing, Canada, 2000.
Dynamic Bayesian Networks with Deterministic Latent Tables

David Barber
Institute for Adaptive and Neural Computation, Edinburgh University, 5 Forrest Hill, Edinburgh, EH1 2QL, U.K.
dbarber@anc.ed.ac.uk

Abstract

The application of latent/hidden variable Dynamic Bayesian Networks is constrained by the complexity of marginalising over latent variables. For this reason either small latent dimensions or Gaussian latent conditional tables linearly dependent on past states are typically considered in order that inference is tractable. We suggest an alternative approach in which the latent variables are modelled using deterministic conditional probability tables. This specialisation has the advantage of tractable inference even for highly complex non-linear/non-Gaussian visible conditional probability tables. This approach enables the consideration of highly complex latent dynamics whilst retaining the benefits of a tractable probabilistic model.

1 Introduction

Dynamic Bayesian Networks are a powerful framework for temporal data models with widespread application in time series analysis [10, 2, 5]. A time series of length T is a sequence of observation vectors V = {v(1), v(2), . . . , v(T)}, where vi(t) represents the state of visible variable i at time t. For example, in a speech application V may represent a vector of cepstral coefficients through time, the aim being to classify the sequence as belonging to a particular phoneme [2, 9]. The power of the Dynamic Bayesian Network is the assumption that the observations may be generated by some latent (hidden) process that cannot be directly experimentally observed. The basic structure of these models is shown in fig(1)[a], where network states are only dependent on a short time history of previous states (the Markov assumption). Representing the hidden variable sequence by H = {h(1), h(2), . . .
, h(T)}, the joint distribution of a first order Dynamic Bayesian Network is

p(V, H) = p(v(1)) p(h(1)|v(1)) Π_{t=1}^{T−1} p(v(t+1)|v(t), h(t)) p(h(t+1)|v(t), v(t+1), h(t))

This is a Hidden Markov Model (HMM) with additional connections from visible to hidden units [9]. The usage of such models is varied, but here we shall concentrate on unsupervised sequence learning. That is, given a set of training sequences V^1, . . . , V^P, we aim to capture the essential features of the underlying dynamical process that generated the data.

Figure 1: (a) A first order Dynamic Bayesian Network containing a sequence of hidden (latent) variables h(1), h(2), . . . , h(T) and a sequence of visible (observable) variables v(1), v(2), . . . , v(T). In general, all conditional probability tables are stochastic – that is, more than one state can be realised. (b) Conditioning on the visible units forms an undirected chain in the hidden space, with cliques {h(1), h(2)}, {h(2), h(3)}, . . . , {h(t−1), h(t)}. Hidden unit inference is achieved by propagating information along both directions of the chain to ensure normalisation.

Denoting the parameters of the model by Θ, learning can be achieved using the EM algorithm, which maximises a lower bound on the likelihood of a set of observed sequences by the procedure [5]:

Θ^new = arg max_Θ Σ_{µ=1}^{P} Σ_{H^µ} p(H^µ|V^µ, Θ^old) log p(H^µ, V^µ, Θ).   (1)

This procedure contains expectations with respect to the distribution p(H|V) – that is, to do learning, we need to infer the hidden unit distribution conditional on the visible variables. p(H|V) is represented by the undirected clique graph, fig(1)[b], in which each node represents a function (dependent on the clamped visible units) of the hidden variables it contains, with p(H|V) being the product of these clique potentials.
In order to do inference on such a graph it is, in general, necessary to carry out a message passing type procedure in which messages are first passed one way along the undirected graph and then back, as in the forward-backward algorithm for HMMs [5]. Only when messages have been passed along both directions of all links can the normalised conditional hidden unit distribution be numerically determined. The complexity of calculating messages is dominated by marginalisation of the clique functions over a hidden vector h(t). In the case of discrete hidden units with S states, this complexity is of the order S², and the total complexity of inference is then O(TS²). For continuous hidden units, the analogous marginalisation requires integration of a clique function over a hidden vector. If the clique function is very low dimensional, this may be feasible. However, in high dimensions, this is typically intractable unless the clique functions are of a very specific form, such as Gaussians. This motivates the Kalman filter model [5], in which all conditional probability tables are Gaussian with means determined by a linear combination of previous states. There have been several attempts to generalise the Kalman filter to include non-linear/non-Gaussian conditional probability tables, but most rely on approximate integration methods based on sampling [3], perturbation, or variational type methods [5]. In this paper we take a different approach.
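The O(TS²) figure quoted above is just the cost of the matrix-vector product inside each message-passing step. A minimal, generic forward-pass sketch (the standard HMM filtering recursion, not code from the paper) makes this concrete:

```python
import numpy as np

def forward_log_likelihood(pi, A, B, obs):
    # pi: (S,) initial state distribution; A: (S, S) transition matrix with
    # A[i, j] = p(h(t+1)=j | h(t)=i); B: (S, V) emission probabilities.
    # The product alpha @ A costs O(S^2) per step, so the pass is O(T S^2).
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for o in obs[1:]:
        c = alpha.sum()                     # normalise to avoid underflow
        log_lik += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]   # one message-passing step
    return log_lik + np.log(alpha.sum())
```

The same likelihood can be obtained by brute-force summation over all S^T hidden paths, which is what the recursion avoids.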
We consider specially constrained networks which, when conditioned on the visible variables, render the hidden unit distribution trivial. The aim is then to be able to consider non-Gaussian and non-linear conditional probability tables (CPTs), and hence richer dynamics in the hidden space.

Figure 2: (a) A first order Dynamic Bayesian Network with deterministic hidden CPTs (represented by diamonds) – that is, the hidden node is certainly in a single state, determined by its parents. (b) An input-output HMM with deterministic hidden variables. (c) Conditioning on the visible variables forms a directed chain in the hidden space which is deterministic. Hidden unit inference can be achieved by forward propagation alone. (d) Integrating out hidden variables gives a cascade style directed visible graph, shown here for only four time steps.

2 Deterministic Latent Variables

The deterministic latent CPT case, fig(2)[a], defines conditional probabilities

p(h(t+1)|v(t+1), v(t), h(t)) = δ(h(t+1) − f(v(t+1), v(t), h(t), θh))   (2)

where δ(x) represents the Dirac delta function for continuous hidden variables, and the Kronecker delta for discrete hidden variables. The vector function f parameterises the CPT, itself having parameters θh. Whilst the restriction to deterministic CPTs appears severe, the model retains some attractive features: the marginal p(V) is non-Markovian, coupling all the variables in the sequence, see fig(2)[d]; and the marginal p(H) is stochastic, whilst hidden unit inference is deterministic, as illustrated in fig(2)[c]. Although not considered explicitly here, input-output HMMs [7], see fig(2)[b], are easily dealt with by a trivial modification of this framework.
For learning, we can dispense with the EM algorithm and calculate the log likelihood of a single training sequence V directly,

L(θv, θh|V) = log p(v(1)|θv) + Σ_{t=1}^{T−1} log p(v(t+1)|v(t), h(t), θv)   (3)

where the hidden unit values are calculated recursively using

h(t+1) = f(v(t+1), v(t), h(t), θh)   (4)

The adjustable parameters of the hidden and visible CPTs are represented by θh and θv respectively. The case of training multiple independently generated sequences V^µ, µ = 1, . . . , P is straightforward and has likelihood Σ_µ L(θv, θh|V^µ). To maximise the log-likelihood, it is useful to evaluate the derivatives with respect to the model parameters. These can be calculated as follows:

dL/dθv = ∂log p(v(1)|θv)/∂θv + Σ_{t=1}^{T−1} ∂/∂θv log p(v(t+1)|v(t), h(t), θv)   (5)

dL/dθh = Σ_{t=1}^{T−1} [∂/∂h(t) log p(v(t+1)|v(t), h(t), θv)] · dh(t)/dθh   (6)

dh(t)/dθh = ∂f(t)/∂θh + [∂f(t)/∂h(t−1)] · dh(t−1)/dθh   (7)

where f(t) ≡ f(v(t), v(t−1), h(t−1), θh). Hence the derivatives can be calculated by deterministic forward propagation of errors, and highly complex functions f and CPTs p(v(t+1)|v(t), h(t)) may be used. Whilst the training of such networks resembles back-propagation in neural networks [1, 6], the models have a stochastic interpretation and retain the benefits inherited from probability theory, including the possibility of a Bayesian treatment.

3 A Discrete Visible Illustration

To make the above framework more explicit, we consider the case of continuous hidden units and discrete, binary visible units, vi(t) ∈ {0, 1}. In particular, we restrict attention to the model:

p(v(t+1)|v(t), h(t)) = Π_{i=1}^{V} σ((2vi(t+1) − 1) Σ_j wij φj(t)),   hi(t+1) = Σ_j uij ψj(t)

where σ(x) = 1/(1 + e^{−x}), and φj(t) and ψj(t) represent fixed functions of the network state (h(t), v(t)). Normalisation is ensured since 1 − σ(x) = σ(−x). This model generalises a recurrent stochastic heteroassociative Hopfield network [4] to include deterministic hidden units dependent on past network states.
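The binary-visible model just defined can be evaluated directly: run the deterministic hidden recursion forward and accumulate the per-step log probabilities. A minimal sketch (generic NumPy with assumed small dimensions, taking phi = psi = the concatenation of hidden and visible states):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sequence_log_likelihood(Wm, U, v):
    # v: (T, V) binary sequence; phi(t) = psi(t) = [h(t), v(t)].
    # log p(v) = sum_t sum_i log sigma((2 v_i(t+1) - 1) [W phi(t)]_i),
    # with the deterministic hidden recursion h(t+1) = U psi(t).
    T, V = v.shape
    h = np.zeros(U.shape[0])
    L = 0.0
    for t in range(T - 1):
        state = np.concatenate([h, v[t]])
        L += np.sum(np.log(sigmoid((2 * v[t + 1] - 1) * (Wm @ state))))
        h = U @ state
    return L
```

Because 1 − σ(x) = σ(−x), each per-step conditional sums to one over all 2^V next visible states, so no explicit normalisation is needed.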
The derivatives of the log likelihood are given by:

dL/dwij = Σ_t (1 − σi(t)) (2vi(t+1) − 1) φj(t)

dL/duij = Σ_{t,k,l} (1 − σk(t)) (2vk(t+1) − 1) wkl φ'l(t) dhl(t)/duij

where σi(t) ≡ σ((2vi(t+1) − 1) Σ_j wij φj(t)), φ'l(t) ≡ dφl(t)/dt, and the hidden unit derivatives are found from the recursions

dhl(t+1)/duij = Σ_k ulk dψk(t)/duij + δil ψj(t),   dψk(t)/duij = Σ_m [∂ψk(t)/∂hm(t)] dhm(t)/duij

We considered a network with simple linear influences, Ψ(t) ≡ Φ(t) ≡ [h(t); v(t)], and restricted connectivity W = [A 0; 0 B], U = [C 0; 0 D], where the parameters to learn are the matrices A, B, C, D. A slice of the network is illustrated in fig(3)[a].

Figure 3: (a) A temporal slice of the network. (b) The training sequence consists of a random set of vectors (V = 3) over T = 10 time steps. (c) The reconstruction using H = 7 hidden units. The initial state v(t = 1) for the recalled sequence was set to the correct initial training value, albeit with one of the values flipped. Note how the dynamics learned is an attractor for the original sequence.

We can easily iterate the hidden states in this case to give

h(t+1) = A h(t) + B v(t) = A^t h(1) + Σ_{t'=0}^{t−1} A^{t'} B v(t − t')

which demonstrates how the hidden state depends on the full past history of the observations. We trained the network using 3 visible units and 7 hidden units to maximise the likelihood of the binary sequence in fig(3)[b]. Note that this sequence contains repeated patterns and therefore could not be recalled perfectly by a model without hidden units. We tested whether the learned model had captured the dynamics of the training sequence by initialising the network in the first visible state of the training sequence, but with one of the values flipped. The network then generated the following hidden and visible states recursively, as plotted in fig(3)[c].
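The analytic derivative with respect to W stated above can be spot-checked against finite differences. A minimal sketch with assumed small dimensions (the recursions for the derivatives with respect to U are analogous and omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
Hd, V, T = 3, 2, 8
Wm = 0.5 * rng.standard_normal((V, Hd + V))
U = 0.5 * rng.standard_normal((Hd, Hd + V))
v = rng.integers(0, 2, size=(T, V))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loglik(Wm):
    # phi(t) = psi(t) = [h(t), v(t)]; deterministic h(t+1) = U phi(t).
    h, L = np.zeros(Hd), 0.0
    for t in range(T - 1):
        phi = np.concatenate([h, v[t]])
        L += np.sum(np.log(sigmoid((2 * v[t + 1] - 1) * (Wm @ phi))))
        h = U @ phi
    return L

def grad_W(Wm):
    # dL/dW_ij = sum_t (1 - sigma_i(t)) (2 v_i(t+1) - 1) phi_j(t)
    h, g = np.zeros(Hd), np.zeros_like(Wm)
    for t in range(T - 1):
        phi = np.concatenate([h, v[t]])
        sig = sigmoid((2 * v[t + 1] - 1) * (Wm @ phi))
        g += np.outer((1 - sig) * (2 * v[t + 1] - 1), phi)
        h = U @ phi
    return g
```

Agreement with a numerical gradient confirms that the forward-propagated derivative formula is exact for this model.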
The learned network is an attractor with the training sequence as a stable point, demonstrating that such models are capable of learning attractor recurrent networks more powerful than those without hidden units. Learning is very fast in such networks, and we have successfully applied these models to cases of several hundred hidden and visible unit dimensions.

3.1 Recall Capacity

What effect have the hidden units on the ability of Hopfield networks to recall sequences? By recall, we mean that a training sequence is correctly generated by the network given that only the initial state of the training sequence is presented to the trained network. For the analysis here, we will consider the retrieval dynamics to be completely deterministic; thus if we concatenate both hidden h(t) and visible variables v(t) into the vector x(t) and consider the deterministic hidden function f(y) ≡ thresh(y), which is 1 if y > 0 and zero otherwise, then

xi(t+1) = thresh(Σ_j Mij xj(t)).   (8)

Here Mij are the elements of the weight matrix representing the transitions from time t to time t+1. A desired sequence x̃(1), . . . , x̃(T) can be recalled correctly if we can find a matrix M and real numbers ϵi(t) such that

M [x̃(1), . . . , x̃(T−1)] = [ϵ(2), . . . , ϵ(T)]

where the ϵi(t) are arbitrary real numbers for which thresh(ϵi(t)) = x̃i(t). This system of linear equations can be solved if the matrix [x̃(1), . . . , x̃(T−1)] has rank T−1. The use of hidden units therefore increases the length of temporal sequences that we can store by forming, during learning, appropriate hidden representations h(t) such that the stacked vectors [h(2); v(2)], . . . , [h(T); v(T)] form a linearly independent set. Such vectors are clearly possible to generate if the matrix U is full rank. Thus recall can be achieved if (V + H) ≥ T − 1. The reader might consider forming from a set of linearly dependent patterns v(1), . . .
, v(T) a linearly independent set by injecting the patterns into a higher dimensional space, v(t) → v̂(t), using a non-linear mapping. This would appear to dispense with the need to use hidden units. However, if the same pattern in the training set is repeated at different times in the sequence (as in fig(3)[b]), no matter how complex this non-linear mapping, the resulting vectors v̂(1), . . . , v̂(T) will be linearly dependent. This demonstrates that hidden units not only solve the linear dependence problem for non-repeated patterns, they also solve it for repeated patterns. They are therefore capable of sequence disambiguation, since the hidden unit representations formed are dependent on the full history of the visible units.

4 A Continuous Visible Illustration

To illustrate the application of the framework to continuous visible variables, we consider the simple Gaussian visible CPT model

p(v(t+1)|v(t), h(t)) = exp(−[v(t+1) − g(Ah(t) − Bv(t))]² / (2σ²)) / (2πσ²)^{V/2}

h(t+1) = f(Ch(t) + Dv(t))   (9)

where the functions f and g are in general non-linear functions of their arguments. In the case that f(x) ≡ x and g(x) ≡ x, this model is a special case of the Kalman filter [5]. Training of these models by learning A, B, C, D (σ² was set to 0.02 throughout) is straightforward using the forward error propagation techniques outlined earlier in section (2).

4.1 Classifying Japanese vowels

This UCI machine learning test problem consists of a set of multi-dimensional time series. Nine speakers uttered two Japanese vowels /ae/ successively to form discrete time series with 12 LPC cepstral coefficients. Each utterance forms a time series V whose length is in the range T = 7 to T = 29, and each vector v(t) of the time series contains 12 cepstral coefficients. The training data consists of 30 training utterances for each of the 9 speakers. The test data contains 370 time series, each uttered by one of the nine speakers. The task is to assign each of the test utterances to the correct speaker.
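Classification by model likelihood can be sketched as follows for the linear special case f(x) ≡ x, g(x) ≡ x of eqn (9). The parameter names and shapes here are assumptions for illustration, not the trained models from the experiment:

```python
import numpy as np

def gaussian_seq_loglik(params, v, sigma2=0.02):
    # Log-likelihood of a sequence under the linear case of eqn (9):
    # v(t+1) ~ N(A h(t) - B v(t), sigma2 I), with h(t+1) = C h(t) + D v(t).
    A, B, C, D = params
    h = np.zeros(C.shape[0])
    L = 0.0
    for t in range(len(v) - 1):
        err = v[t + 1] - (A @ h - B @ v[t])
        L += -0.5 * err @ err / sigma2 - 0.5 * len(err) * np.log(2 * np.pi * sigma2)
        h = C @ h + D @ v[t]
    return L

def classify(models, v):
    # Assign the sequence to the model with the highest log-likelihood.
    return int(np.argmax([gaussian_seq_loglik(p, v) for p in models]))
```

One such model would be trained per speaker, and a test utterance assigned to whichever model scores it highest.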
We used the special settings f(x) ≡ x and g(x) ≡ x to see if such a simple network would be able to perform well. We split the training data into a 2/3 train and a 1/3 validation part, then trained a set of 10 models for each of the 9 speakers, with hidden unit dimensions taking the values H = 1, 2, . . . , 10 and using 20 training iterations of conjugate gradient learning [1]. For simplicity, we used the same number of hidden units for each of the nine speaker models. To classify a test utterance, we chose the speaker model which had the highest likelihood of generating the test utterance, using an error of 0 if the utterance was assigned to the correct speaker and an error of 1 otherwise. The errors on the validation set for these 10 models were 6, 6, 3, 5, 5, 5, 4, 5, 6, 3. Based on these validation results, we retrained a model with H = 3 hidden units on all available training data. On the final independent test set, the model achieved an accuracy of 97.3%.

Figure 4: (Left) Five sequences from the model v(t) = sin(2(t − 1) + ϵ1(t)) + 0.1ϵ2(t). (Right) Five sequences from the model v(t) = sin(5(t − 1) + ϵ3(t)) + 0.1ϵ4(t), where ϵi(t) are zero mean unit variance Gaussian noise samples. These were combined to form a training set of 10 unlabelled sequences. We performed unsupervised learning by fitting a two component mixture model. The posterior probability p(i = 1|Vµ) of the 5 sequences on the left belonging to class 1 are (from above) 0.99, 0.99, 0.83, 0.99, 0.96, and for the 5 sequences on the right belonging to class 2 are (from above) 0.95, 0.99, 0.97, 0.97, 0.95, in accord with the data generating process.
This compares favourably with the 96.2% reported for training using a continuous-output HMM with 5 (discrete) hidden states [8]. Although our model is not powerful enough to reconstruct the training data, it learns sufficient information in the data to make reliable classifications. This problem serves to illustrate that such simple models can perform well. An interesting alternative training method not explored here would be to use discriminative learning [7]. Also not explored here is the possibility of using Bayesian methods to set the number of hidden dimensions. 5 Mixture Models Since our models are probabilistic, we can apply standard statistical generalisations to them, including using them as part of an M component mixture model

p(V|Θ) = Σ_{i=1}^{M} p(V|Θi, i) p(i)   (10)

where p(i) denotes the prior mixing coefficients for model i, and each time series component model is represented by p(V|Θi, i). Training mixture models by maximum likelihood on a set of sequences V¹, . . . , V^P is straightforward using the standard EM recursions [1]:

p_new(i) = Σ_{µ=1}^{P} p(V^µ|i, Θi_old) p_old(i) / Σ_{i=1}^{M} Σ_{µ=1}^{P} p(V^µ|i, Θi_old) p_old(i)   (11)

Θi_new = argmax_{Θi} Σ_{µ=1}^{P} p(V^µ|i, Θi_old) log p(V^µ|i, Θi)   (12)

To illustrate this on a simple example, we trained a mixture model with component models of the form described in section (4). The data is a series of 10 one dimensional (V = 1) time series, each of length T = 40. Two distinct models were used to generate 10 training sequences, see fig(4). We fitted a two component mixture model using mixture components of the form (9) (with linear functions f and g), each model having H = 3 hidden units. After training, the model priors were found to be roughly equal (0.49, 0.51), and it was satisfying to find that the separation of the unlabelled training sequences is entirely consistent with the data generation process, see fig(4).
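The EM recursions can be sketched generically. The following is a hedged illustration, not the paper's code: `seq_loglik` and `fit_component` are hypothetical callbacks standing in for the component model's log-likelihood and its weighted maximum-likelihood fit, and the E-step computes the standard posterior responsibilities.

```python
import numpy as np

def em_mixture(seq_loglik, fit_component, thetas, priors, sequences, n_iter=10):
    """Generic EM for an M-component mixture of sequence models.
    seq_loglik(theta, V) -> log p(V | theta);
    fit_component(weights, sequences) -> theta maximizing the weighted
    log-likelihood (the M-step, cf. (12))."""
    M, P = len(thetas), len(sequences)
    priors = np.asarray(priors, dtype=float)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, mu] proportional to p(V^mu | i) p(i)
        logr = np.array([[seq_loglik(thetas[i], V) + np.log(priors[i])
                          for V in sequences] for i in range(M)])
        logr -= logr.max(axis=0, keepdims=True)      # numerical stabilisation
        r = np.exp(logr)
        r /= r.sum(axis=0, keepdims=True)
        priors = r.sum(axis=1) / P                   # mixing weights, cf. (11)
        thetas = [fit_component(r[i], sequences) for i in range(M)]
    return thetas, priors, r
```

With component models as in section 4, `fit_component` would run the weighted conjugate-gradient training described earlier.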
An interesting observation is that, whilst the true data generating process is governed by effectively stochastic hidden transitions, the deterministic hidden model still performs admirably. 6 Discussion We have considered a class of models for temporal sequence processing which are a specially constrained version of Dynamic Bayesian Networks. The constraint was chosen to ensure that inference would be trivial even in high dimensional continuous hidden/latent spaces. Highly complex dynamics may therefore be postulated for the hidden space transitions, and also for the hidden to visible transitions. However, unlike traditional neural networks, the models remain probabilistic (generative models), and hence the full machinery of Bayesian inference is applicable to this class of models. Indeed, whilst not explored here, model selection issues, such as assessing the relevant hidden unit dimension, are greatly facilitated in this class of models. The potential use of this class of models is therefore widespread. An area we are currently investigating is using these models for fast inference and learning in Independent Component Analysis and related areas. In the case that the hidden unit dynamics is known to be highly stochastic, this class of models is arguably less appropriate. However, stochastic hidden dynamics is often used in cases where one believes that the true hidden dynamics is too complex to model effectively (or, rather, deal with computationally), and one uses noise to 'cover' for the lack of complexity in the assumed hidden dynamics. The models outlined here provide an alternative in the case that a potentially complex hidden dynamics form can be assumed, and may also still provide a reasonable solution even in cases where the underlying hidden dynamics is stochastic. This class of models is therefore a potential route to computationally tractable, yet powerful, time series models.

References
[1] C.M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.
[2] H.A. Bourlard and N. Morgan, Connectionist Speech Recognition: A Hybrid Approach, Kluwer, 1994.
[3] A. Doucet, N. de Freitas, and N.J. Gordon, Sequential Monte Carlo Methods in Practice, Springer, 2001.
[4] J. Hertz, A. Krogh, and R. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.
[5] M.I. Jordan, Learning in Graphical Models, MIT Press, 1998.
[6] J.F. Kolen and S.C. Kremer, Dynamic Recurrent Networks, IEEE Press, 2001.
[7] A. Krogh and S.K. Riis, Hidden Neural Networks, Neural Computation 11 (1999), 541–563.
[8] M. Kudo, J. Toyama, and M. Shimbo, Multidimensional Curve Classification Using Passing-Through Regions, Pattern Recognition Letters 20 (1999), no. 11-13, 1103–1111.
[9] L.R. Rabiner and B.H. Juang, An Introduction to Hidden Markov Models, IEEE Transactions on Acoustics, Speech, and Signal Processing 3 (1986), no. 1, 4–16.
[10] M. West and J. Harrison, Bayesian Forecasting and Dynamic Models, Springer, 1999.
2002
Generalized² Linear² Models
Geoffrey J. Gordon
ggordon@cs.cmu.edu
Abstract We introduce the Generalized² Linear² Model, a statistical estimator which combines features of nonlinear regression and factor analysis. A (GL)²M approximately decomposes a rectangular matrix X into a simpler representation f(g(A)h(B)). Here A and B are low-rank matrices, while f, g, and h are link functions. (GL)²Ms include many useful models as special cases, including principal components analysis, exponential-family PCA, the infomax formulation of independent components analysis, linear regression, and generalized linear models. They also include new and interesting special cases, one of which we describe below. We also present an iterative procedure which optimizes the parameters of a (GL)²M. This procedure reduces to well-known algorithms for some of the special cases listed above; for other special cases, it is new. 1 Introduction Let the m × n matrix X contain an independent sample from some unknown distribution. Each column of X represents a training example, and each row represents a measured feature of the examples. It is often reasonable to assume that some of the features are redundant, that is, that there exists a reduced set of l features which contains all or most of the information in X. If the reduced features are linear functions of the original features and the distributions of the elements of X are Gaussian, redundancy means we can write X as the product of two smaller matrices U and V with small sum of squared errors. This factorization is essentially a singular value decomposition: U must span the first l dimensions of the left principal subspace of X, while Vᵀ must span the first l dimensions of the right principal subspace. (Since the above requirements do not uniquely determine U and V, the SVD traditionally imposes additional restrictions which we will ignore in this paper.)
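This rank-l SVD factorization is one line of NumPy. A small sketch on toy data of our own (sizes and noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 12))  # true rank 3
X += 0.01 * rng.standard_normal(X.shape)                        # small noise

l = 3
P, s, Qt = np.linalg.svd(X, full_matrices=False)
U = P[:, :l] * s[:l]     # spans the first l left principal directions
V = Qt[:l, :]            # V^T spans the first l right principal directions
err = np.linalg.norm(X - U @ V) ** 2   # minimal over all rank-l factorizations
```

By the Eckart-Young theorem, no other rank-l pair U, V achieves a smaller sum of squared errors.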
The SVD has a long list of successes in machine learning, including information retrieval applications such as latent semantic analysis [1] and link analysis [2]; pattern recognition applications such as "eigenfaces" [3]; structure from motion algorithms [4]; and data compression tools [5]. Unfortunately, the SVD makes two assumptions which can limit its accuracy as a learning tool. The first assumption is the use of the sum of squared errors between X and UV as a loss function. Squared error loss means that predicting 1000 when the answer is 1010 is as bad as saying −7 when the answer is 3. The second assumption is that the reduced features are linear functions of the original features. Instead, X might be a nonlinear function of UV, and U and V might be nonlinear functions of some other matrices A and B. To address these shortcomings, we propose the model

X̂ = f(g(A)h(B))   (1)

for the expected value of X. We also propose allowing non-quadratic loss functions for the error (X − X̂) and the parameter matrices A and B. The fixed functions f, g, and h are called link functions. By analogy to generalized linear models [6], we call equation (1) a Generalized² Linear² Model: generalized² because it uses link functions for the parameters A and B as well as the prediction X̂, and linear² because, like the SVD, it is bilinear. As long as we choose link and loss functions that match each other (see below for the definition of matching link and loss), there will exist efficient algorithms for finding A and B given X, f, g, and h. Because (1) is a generalization of the SVD, (GL)²Ms are drop-in replacements for SVDs in all of the applications mentioned above, with better reconstruction performance when the SVD's error model is inaccurate. In addition, they open up new applications (see section 7 for one) where an SVD would have been unable to provide a sufficiently accurate reconstruction.
2 Matching link and loss functions Whenever we try to optimize the predictions of a nonlinear model, we need to worry about getting stuck in local minima. One example of this problem is when we try to fit a single sigmoid unit with parameters θ ∈ ℝ^d to training inputs x_i ∈ ℝ^d and target outputs y_i ∈ ℝ under squared error loss L:

ŷ_i = logistic(z_i)    z_i = x_i · θ

Even for small training sets, the number of local minima of L can grow exponentially with the dimension d [7]. On the other hand, if we optimize the same predictions ŷ_i under the logarithmic loss function −Σ_i [y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i)] instead of squared error, our optimization problem is convex. Because the logistic link works with the log loss to produce a convex optimization problem, we say they match each other [7]. Matching link-loss pairs are important because minimizing a convex loss function is usually far easier than minimizing a non-convex one. We can use any convex function F(z) to generate a matching pair of link and loss functions. The loss function which corresponds to F is

D_F(z | y) = F(z) − y ∘ z + F*(y)   (2)

where F*(y) is defined so that min_z D_F(z | y) = 0. (F* is the convex dual of F [8], and D_F is the generalized Bregman divergence from z to y [9].) Expression (2) is nonnegative, and it is globally convex in all of the z_i's (and therefore also in θ, since each z_i is a linear function of θ). If we write f for the gradient of F, the derivative of (2) with respect to z_i is f(z_i) − y_i. So, (2) will be zero if and only if y_i = f(z_i) for all i; in other words, using the loss (2) implies that ŷ_i = f(z_i) is our best prediction of y_i, and f is therefore our matching link function. We will need two facts about convex duals below. The first is that F* is always convex, and the second is that the gradient of F* is equal to f⁻¹. (Also, convex duality is defined even when F, G, and H aren't differentiable. If they are not, replace derivatives by subgradients below.)
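The matching construction is easy to check numerically for the logistic link. Below, F(z) = log(1 + e^z) is the convex potential whose gradient is the logistic function, and F* is its convex dual (the negative binary entropy); the function names are our own:

```python
import numpy as np

F      = lambda z: np.logaddexp(0.0, z)            # convex potential
f      = lambda z: 1.0 / (1.0 + np.exp(-z))        # its gradient: the logistic link
F_star = lambda y: y * np.log(y) + (1 - y) * np.log(1 - y)  # convex dual of F

def matching_loss(z, y):
    """Generalized Bregman divergence D_F(z | y) = F(z) - y*z + F*(y)."""
    return F(z) - y * z + F_star(y)
```

The loss is nonnegative, vanishes exactly when y = f(z), and its derivative with respect to z is f(z) − y, as claimed above.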
3 Loss functions for (GL)²Ms In (GL)²Ms, matching loss functions will be particularly important because we need to deal with three separate nonlinear link functions. We will usually not be able to avoid local minima entirely; instead, the overall loss function will be convex in some groups of parameters if we hold the remaining parameters fixed. We will specify a (GL)²M by picking three link functions and their matching loss functions. We can then combine these individual loss functions into an overall loss function as described in section 4; fitting a (GL)²M will then reduce to minimizing the overall loss function with respect to our parameters. Each choice of links results in a different (GL)²M and therefore potentially a different decomposition of X. The choice of link functions is where we should inject our domain knowledge about what sort of noise there is in X and what parameter matrices A and B are a priori most likely. Useful link functions include f(x) = x (corresponding to squared error and Gaussian noise), f(x) = log x (unnormalized KL-divergence and Poisson noise), and f(x) = (1 + e^{−x})^{−1} (log-loss and Bernoulli noise). The loss functions themselves are only necessary for the analysis; all of our algorithms need only the link functions and (in some cases) their derivatives. So, we can pick the loss functions and differentiate to get the matching link functions; or, we can pick the link functions directly and not worry about the corresponding loss functions. In order for our analysis to apply, the link functions must be derivatives of some (possibly unknown) convex functions. Our loss functions are D_F, D_G, and D_H, where F : ℝ^{m×n} → ℝ, G : ℝ^{m×l} → ℝ, and H : ℝ^{l×n} → ℝ are convex functions. We will abuse notation and call F, G, and H loss functions as well: F is the prediction loss, and its derivative f is the prediction link; it provides our model of the noise in X.
G and H are the parameter losses, and their derivatives g and h are the parameter links; they tell us which values of A and B are a priori most likely. By convention, since F takes an m × n matrix argument, we will say that the input and output of f are also m × n matrices (similarly for g and h). 4 The model and its fixed point equations We will define a (GL)²M by specifying an overall loss function which relates the parameter matrices A and B to the data matrix X. If we write U = g(A) and V = h(B), the (GL)²M loss function is

L(U, V) = F(UV) − X ∘ UV + G*(U) + H*(V)   (3)

Here A ∘ B is the "matrix dot product," often written tr(AᵀB). Expression (3) is a sum of three Bregman divergences, ignoring terms which don't depend on U and V: it is D_F(UV | X) + D_G(0 | U) + D_H(0 | V). The F-divergence tends to pull UV towards X, while the G- and H-divergences favor small U and V. To further justify (3), we can examine what happens when we compute its derivatives with respect to U and V and set them to 0. The result is a set of fixed-point equations that the optimal parameter settings must satisfy:

Uᵀ(X − f(UV)) = B   (4)
(X − f(UV))Vᵀ = A   (5)

To understand these equations, we can consider two special cases. First, if we let G* and H* go to zero (so there is no pressure to keep U and V small), (4) becomes

Uᵀ(X − f(UV)) = 0   (6)

which tells us that each column of the error matrix must be orthogonal to each column of U. Second, if we set the prediction link to be f(UV) = UV, (6) becomes UᵀUV = UᵀX, which tells us that for a given U, we must choose V so that UV reconstructs X with the smallest possible sum of squared errors. 5 Algorithms for fitting (GL)²Ms We could solve equations (4–5) with any of several different algorithms. For example, we could use gradient descent on either U, V or A, B.
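For concreteness, here is a hedged sketch of that first option: plain gradient descent on the loss (3) with a logistic prediction link, identity parameter links (so U = A and V = B), and quadratic parameter losses. The data, sizes, and step size are arbitrary choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l = 8, 10, 2
X = rng.uniform(size=(m, n))            # data in (0,1), matching a logistic link

f = lambda Z: 1.0 / (1.0 + np.exp(-Z))  # prediction link
U = 0.1 * rng.standard_normal((m, l))   # identity parameter links: U = A, V = B
V = 0.1 * rng.standard_normal((l, n))

def loss(U, V):
    """Overall loss (3) with F(Z) = sum log(1+e^Z) and quadratic G*, H*."""
    Z = U @ V
    return (np.logaddexp(0.0, Z).sum() - (X * Z).sum()
            + 0.5 * (U ** 2).sum() + 0.5 * (V ** 2).sum())

before = loss(U, V)
lr = 0.05
for _ in range(500):
    E = f(U @ V) - X                    # gradient of F(UV) - X∘UV w.r.t. UV
    U, V = U - lr * (E @ V.T + U), V - lr * (U.T @ E + V)
```

The updates vanish exactly at points satisfying the fixed-point equations (4)–(5).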
Or, we could use the generalized gradient descent [9] update rule (with learning rate α):

A ← A + α (X − f(UV))Vᵀ
B ← B + α Uᵀ(X − f(UV))

The advantage of these algorithms is that they are simple to implement and don't require additional assumptions on F, G, and H. They can even work when F, G, and H are nondifferentiable, by using subgradients. In this paper, though, we will focus on a different algorithm. Our algorithm is based on Newton's method, and it reduces to well-known algorithms for several special cases of (GL)²Ms. Of course, since the end goal is solving (4–5), this algorithm will not always be the method of choice; instead, any given implementation of a (GL)²M should use the simplest algorithm that works. For our Newton algorithm we will need to place some restrictions on the prediction and parameter loss functions. (These restrictions are only necessary for the Newton algorithm; more general loss functions still give valid (GL)²Ms, but require different algorithms.) First, we will require (4–5) to be differentiable. Second, we will restrict F, G, and H to be separable:

F(Z) = Σ_{ij} F_{ij}(Z_{ij})    G(A) = Σ_i G_i(A_{i·})    H(B) = Σ_j H_j(B_{·j})

These definitions fix most of the second derivatives of L(U, V) to be zero, simplifying and speeding up computation. Write f_{ij}, g_i, and h_j for the respective derivatives. With these restrictions, we can linearize (4) and (5) around our current guess at the parameters, then solve the resulting equations to find updated parameters. To simplify notation, we can think of (4) as n separate equations, one for each column of V. Linearizing with respect to V_{·j} gives:

(Uᵀ D_j U + H̃_j)(V^{new}_{·j} − V_{·j}) = Uᵀ(X_{·j} − f_{·j}(UV_{·j})) − B_{·j}

where the l × l matrix H̃_j is the Hessian of H*_j at V_{·j}, or equivalently the inverse of the Hessian of H_j at B_{·j}; and the m × m diagonal matrix D_j contains the second derivatives of F with respect to the jth column of its argument.
That is, (D_j)_{ii} = f′_{ij}((UV)_{ij}). Now, collecting terms involving V^{new}_{·j} yields:

(Uᵀ D_j U + H̃_j) V^{new}_{·j} = Uᵀ D_j [UV_{·j} + D_j^{−1}(X_{·j} − f_{·j}(UV_{·j}))] + H̃_j [V_{·j} − H̃_j^{−1} B_{·j}]   (7)

We can recognize (7) as a weighted least squares problem with weights √D_j, prior precision H̃_j, prior mean V_{·j} − H̃_j^{−1} B_{·j}, and target outputs UV_{·j} + D_j^{−1}(X_{·j} − f_{·j}(UV_{·j})). Similarly, we can linearize with respect to rows of U to find the equation

U^{new}_{i·}(V D_i Vᵀ + G̃_i) = ((X_{i·} − f_{i·}(U_{i·}V))D_i^{−1} + U_{i·}V) D_i Vᵀ + U_{i·} G̃_i − A_{i·}   (8)

where G̃_i is the Hessian of G*_i and D_i contains the second derivatives of F with respect to the ith row of its argument. We can solve one copy of (7) simultaneously for each column of V, then replace V by V^{new}. Next we can solve one copy of (8) simultaneously for each row of U, then replace U by U^{new}. Alternating between these two updates will tend to reduce (3).¹ 6 Related models There are many important special cases of (GL)²Ms. We derive two in this section; others include principal components analysis, "sensible" PCA, linear regression, generalized linear models, and the weighted majority algorithm. (Our Newton algorithm turns into power iteration for PCA and iteratively-reweighted least squares for GLMs.) (GL)²Ms are related to generalized bilinear models; the latter include some of the above special cases, but not ICA, weighted majority, or the example of section 7. There are natural generalizations of (GL)²Ms to multilinear interactions. Finally, some models such as non-negative matrix factorization [10] and generalized low-rank approximation [11] are cousins of (GL)²Ms: they use a loss function which is convex in either factor with the other fixed but which is not a Bregman divergence. 6.1 Independent components analysis In ICA, we assume that there is a hidden matrix V (the same size as X) of independent random variables, and that X was generated from V by applying a square matrix U. We seek to recover the mixing matrix U and the sources V; in other words, we want to decompose X = UV so that the elements of V are as nearly independent as possible.
The infomax algorithm for ICA assumes that the elements of V follow distributions with heavy tails (i.e., high kurtosis). This assumption helps us find independent components because the sum of two heavy-tailed random variables tends to have lighter tails, so we can search for U by trying to make the elements of V follow a heavy-tailed distribution. In our notation, the fixed point of the infomax algorithm for ICA is

−Uᵀ = tanh(V)Xᵀ   (9)

(see, e.g., equation (11) or (13) of [12]). To reproduce (9), we will let the prediction link f be the identity, and we will let the duals of the parameter loss functions be

G*(U) = −ε log det U    H*(V) = ε Σ_{ij} log cosh V_{ij}

where ε is a small positive real number. Then equations (4) and (5) become

Uᵀ(X − UV) = ε tanh(V)   (10)
(X − UV)Vᵀ = −ε U^{−T}   (11)

since the derivative of log cosh v is tanh v and the derivative of log det U is U^{−T}. Right-multiplying (10) by (UV)ᵀ and substituting in (11) yields

−Uᵀ = tanh(V)(UV)ᵀ   (12)

Now since UV → X as ε → 0, (12) is equivalent to (9) in the limit of vanishing ε.

¹To guarantee convergence, we can check that (3) decreases and reduce our step size if we encounter problems. (Since Uᵀ D_j U, H̃_j, V D_i Vᵀ, and G̃_i are all positive definite, the Newton update directions are descent directions; so, there always exists a small enough step size.) We have not found this check necessary in practice.

6.2 Exponential family PCA To duplicate exponential family PCA [13], we can set the prediction link f arbitrarily and let the parameter links g and h be large multiples of the identity. Our Newton algorithm is applicable under the assumptions of [13], and (7) becomes

(Uᵀ D_j U) V^{new}_{·j} = Uᵀ D_j [UV_{·j} + D_j^{−1}(X_{·j} − f_{·j}(UV_{·j}))]   (13)

Equation (13), along with the corresponding modification of (8), should provide a much faster algorithm than the one proposed in [13], which updates only part of U or V at a time and keeps updating the same part until convergence before moving on to the next one.
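The alternating Newton scheme of section 5 can be sketched compactly in the simplest setting: identity parameter links (so A = U, B = V and the prior Hessians are the identity) and full Newton steps with no step-size control. This is our own illustrative reconstruction, not the author's code:

```python
import numpy as np

def newton_fit(X, U, V, f, fprime, n_iter=5):
    """Alternate the per-column solves of (7) and per-row solves of (8),
    specialised to identity parameter links."""
    l = U.shape[1]
    for _ in range(n_iter):
        for j in range(X.shape[1]):                 # columns of V, cf. (7)
            z = U @ V[:, j]
            Dj = fprime(z)                          # diagonal of D_j
            Hess = U.T @ (Dj[:, None] * U) + np.eye(l)
            rhs = U.T @ (X[:, j] - f(z)) - V[:, j]  # residual of eq. (4)
            V[:, j] += np.linalg.solve(Hess, rhs)
        for i in range(X.shape[0]):                 # rows of U, cf. (8)
            z = U[i, :] @ V
            Di = fprime(z)
            Hess = (V * Di[None, :]) @ V.T + np.eye(l)
            rhs = V @ (X[i, :] - f(z)) - U[i, :]    # residual of eq. (5)
            U[i, :] += np.linalg.solve(Hess, rhs)
    return U, V
```

The footnote's safeguard (halve the step whenever the loss (3) fails to decrease) is omitted here for brevity.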
7 Example: robot belief states Figure 1 shows a map of a corridor in the CMU CS building. A robot navigating in this corridor can sense both side walls and compute an accurate estimate of its lateral position. Unless it is near an observable feature such as the lab door near the middle of the corridor, however, it can't accurately resolve its position along the corridor and it can't tell whether it is pointing left or right. In order to plan to achieve a goal in this environment, the robot must maintain a belief state (a probability distribution representing its best information about the unobserved state variables). The map shows the robot's starting belief state: it is at one end of the corridor facing in, but it doesn't know which end. We collected a training set of 400 belief states by driving the robot along the corridor and feeding its sensor readings to a belief tracker [14]. To simulate a larger environment with greater uncertainty, we artificially reduced sensor range and increased error. Figure 1 shows two of the collected beliefs. Planning is difficult because belief states are high-dimensional: even in this simple world there are 550 states (275 positions at 10cm intervals along the corridor × 2 orientations), so a belief is a vector in ℝ^550. Fortunately, the robot never encounters most belief states. This regularity can make planning tractable: if we can identify a few features which extract the important information from belief states, we can plan in low-dimensional feature space instead of high-dimensional belief space. We factored the matrix of belief states using feature space ranks l = 3, 4, 5. For the prediction link f(Z) we used exp(Z) (componentwise); this link ensures that the predicted beliefs are positive, and treats errors in small probabilities as proportionally more important than errors in large ones. (The matching loss for f is a Poisson log-likelihood or unnormalized KL-divergence.)
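A stripped-down version of this exp-link factorization is easy to experiment with. The sketch below factors a toy matrix of discrete "beliefs" (our own construction, not the paper's data) under the matching unnormalized-KL loss; it fixes the first column of U to all ones, anticipating the normalizing-constant feature of the full model, and uses plain gradient descent rather than the Newton algorithm. All sizes and rates are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l = 20, 40, 3                      # states, beliefs, features
centers = rng.uniform(0, m, n)
rows = np.arange(m)[:, None]
X = np.exp(-0.5 * ((rows - centers[None, :]) / 2.0) ** 2)
X /= X.sum(axis=0, keepdims=True)        # each column is a belief (sums to 1)

U = 0.01 * rng.standard_normal((m, l))
U[:, 0] = 1.0                            # normalizing-constant feature, fixed
V = 0.01 * rng.standard_normal((l, n))
V[0, :] = np.log(1.0 / m)                # start near uniform beliefs

def kl_loss(U, V):                       # F(UV) - X∘UV with F(Z) = sum exp(Z)
    Z = U @ V
    return np.exp(Z).sum() - (X * Z).sum()

before = kl_loss(U, V)
lr = 0.1
for _ in range(500):
    E = np.exp(U @ V) - X                # gradient w.r.t. UV
    Ug, Vg = E @ V.T, U.T @ E
    Ug[:, 0] = 0.0                       # keep the all-ones column fixed
    U -= lr * Ug
    V -= lr * Vg
```

Because the first feature is all ones, the gradient of V's first row is the column sums of exp(UV) minus 1, so the reconstructed beliefs are driven towards proper normalization.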
For the parameter link h we used 10¹² I, corresponding to H* = 10⁻¹² ‖V‖²/2 (a weak bias towards small V).

Figure 1: Belief states. Left panel: overhead map of corridor with initial belief b1; belief state b80 (just before robot finds out which direction it's pointing); belief b90 (just after finding out). Right panel: reconstruction of b80 with 3, 4, and 5 features.

We set G* = 10⁻¹² ‖U‖²/2 + Δ(U), where Δ is 0 when the first column of U contains all 1s and ∞ otherwise. This loss function fixes the first column of U, representing our knowledge that one feature should be a normalizing constant so that each belief sums to 1. The subgradient of G* is 10⁻¹² U + [k, 0], so equation (5) becomes

(X − exp(UV))Vᵀ = 10⁻¹² U + [k, 0]

Here [k, 0] is a matrix with an arbitrary first column and all other elements 0; this matrix has enough degrees of freedom to compensate for the constraints on U. Our Newton algorithm handles this modified fixed point equation without difficulty. So, this (GL)²M is a principled and efficient way to decompose a matrix of probability distributions. So far as we know this model and algorithm have not been described in the literature. Figure 1 shows our reconstructions of a representative belief state using l = 3, 4, 5 features (one of which is a normalizing constant that can be discarded for planning). The l = 5 reconstruction is consistently good across all 400 beliefs, while the l = 4 reconstruction has minor artifacts for some beliefs. A small number of restarts is required to achieve good decompositions for l = 3, where the optimization problem is most constrained. For comparison, a traditional SVD requires a matrix of rank about 25 to achieve the same mean-squared reconstruction error as our rank-3 decomposition.
It requires rank about 85 to match our rank-5 decomposition. Examination of the learned U matrix (not shown) for l = 4 reveals that the corridor is mapped into two smooth curves in feature space, one for each orientation. Corresponding states with opposite orientations are mapped into similar feature vectors for one half of the corridor (where the training beliefs were sometimes confused about orientation) but not the other (where there were no training beliefs that indicated any connection between orientations). Reconstruction artifacts occur when a curve nearly self-intersects and causes confusion between states. This self-intersection happens because of local minima in the loss function; with more flexibility (l = 5) the optimizer is able to untangle the curves and avoid self-intersection. Our success in compressing the belief state translates directly into success in planning; see [15] for details. By comparison, traditional SVD on either the beliefs or the log beliefs produces feature sets which are unusable for planning because they do not achieve sufficiently good reconstruction with few enough features. 8 Discussion We have introduced a new general class of nonlinear regression and factor analysis models, presenting a derivation and algorithm, reductions to previously-known special cases, and a practical example. The model is a drop-in replacement for PCA, but can provide much better reconstruction performance in cases where the PCA error model is inaccurate. Future research includes online algorithms for parameter adjustment; extensions for missing data; and exploration of new link functions. Acknowledgments Thanks to Nick Roy for helpful comments and for providing the data analyzed in section 7. This work was supported by AFRL contract F30602-01-C-0219, DARPA's MICA program, and by AFRL contract F30602-98-2-0137, DARPA's CoABS program. The opinions and conclusions are the author's and do not reflect those of the US government or its agencies.
References
[1] T.K. Landauer, P.W. Foltz, and D. Laham. Introduction to latent semantic analysis. Discourse Processes, 25:259–284, 1998.
[2] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604–632, 1999.
[3] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[4] Carlo Tomasi and Takeo Kanade. Shape and motion from image streams under orthography: a factorization method. Int. J. Computer Vision, 9(2):137–154, 1992.
[5] D.P. O'Leary and S. Peleg. Digital image compression by outer product expansion. IEEE Trans. Communications, 31:441–444, 1983.
[6] P. McCullagh and J.A. Nelder. Generalized Linear Models. Chapman & Hall, London, 2nd edition, 1983.
[7] Peter Auer, Mark Herbster, and Manfred K. Warmuth. Exponentially many local minima for single neurons. In NIPS, vol. 8. MIT Press, 1996.
[8] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, New Jersey, 1970.
[9] Geoffrey J. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999.
[10] Daniel Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In NIPS, vol. 13. MIT Press, 2001.
[11] Nathan Srebro. Personal communication, 2002.
[12] Anthony J. Bell and Terrence J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[13] Michael Collins, Sanjoy Dasgupta, and Robert Schapire. A generalization of principal component analysis to the exponential family. In NIPS, vol. 14. MIT Press, 2002.
[14] D. Fox, W. Burgard, F. Dellaert, and S. Thrun. Monte Carlo localization: Efficient position estimation for mobile robots. In AAAI, 1999.
[15] Nicholas Roy and Geoffrey J. Gordon. Exponential family PCA for belief compression in POMDPs. In NIPS, vol. 15. MIT Press, 2003.
[16] Sam Roweis. EM algorithms for PCA and SPCA. In NIPS, vol. 10. MIT Press, 1998.
2002
Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex Christian K. Machens, Michael Wehr, Anthony M. Zador Cold Spring Harbor Laboratory One Bungtown Rd Cold Spring Harbor, NY 11724 {machens, wehr, zador}@cshl.edu Abstract How do cortical neurons represent the acoustic environment? This question is often addressed by probing with simple stimuli such as clicks or tone pips. Such stimuli have the advantage of yielding easily interpreted answers, but have the disadvantage that they may fail to uncover complex or higher-order neuronal response properties. Here we adopt an alternative approach, probing neuronal responses with complex acoustic stimuli, including animal vocalizations and music. We have used in vivo whole cell methods in the rat auditory cortex to record subthreshold membrane potential fluctuations elicited by these stimuli. Whole cell recording reveals the total synaptic input to a neuron from all the other neurons in the circuit, instead of just its output—a sparse binary spike train—as in conventional single unit physiological recordings. Whole cell recording thus provides a much richer source of information about the neuron’s response. Many neurons responded robustly and reliably to the complex stimuli in our ensemble. Here we analyze the linear component—the spectrotemporal receptive field (STRF)—of the transformation from the sound (as represented by its time-varying spectrogram) to the neuron’s membrane potential. We find that the STRF has a rich dynamical structure, including excitatory regions positioned in general accord with the prediction of the simple tuning curve. We also find that in many cases, much of the neuron’s response, although deterministically related to the stimulus, cannot be predicted by the linear component, indicating the presence of as-yet-uncharacterized nonlinear response properties. 1 Introduction In their natural environment, animals encounter highly complex, dynamically changing stimuli.
The auditory cortex evolved to process such complex sounds. To investigate a system in its normal mode of operation, it therefore seems reasonable to use natural stimuli. The linear response of an auditory neuron can be described in terms of its spectro-temporal receptive field (STRF). The cortical STRF has been estimated using a variety of stimulus ensembles1, including tone pips [1] and dynamic ripples [2]. However, while natural stimuli have long been used to probe cortical responses [3, 4], and have been widely used in other preparations to compute STRFs [5], they have only rarely been used to compute STRFs from cortical neurons [6]. Here we present estimates of the STRF using in vivo whole cell recording. Because whole cell recording measures the total synaptic input to a neuron, rather than just its output— a sparse binary spike train—as in conventional single unit physiological recordings, this technique provides a much richer source of information about the neuron’s response. Whole cell recording also has a different sampling bias from conventional extracellular recording: instead of recording from active neurons with large action potentials (i.e. those that are most easily isolated on the electrode), whole cell recording selects for neurons solely on the basis of the experimenter’s ability to form a gigaohm seal. Using these novel methods, we investigated the computations performed by single neurons in the auditory cortex A1 of rats. 2 Spike responses and subthreshold activity We first used cell-attached methods to obtain well-isolated single unit recordings. We found that many cells in auditory cortex responded only very rarely to the natural stimulus ensemble, making it difficult to characterize the neuron’s input-output relationship effectively. An example of this problem is shown in Fig. 1(b) where a natural stimulus (here, the call of a nightingale) leads to an average of about five spikes during the eight-second-long presentation. 
Such sparse responses are not surprising, since it is well known that many cortical neurons are selective for stimulus transients [7, 8]. One way to circumvent this difficulty is to present stimuli that elicit high firing rates. For example, using dynamic ripple stimuli, an STRF can be constructed with about spikes collected over minutes (average firing rate of approximately spikes/second, or about -fold higher than the rate elicited by the natural stimulus in Fig. 1(b)) [9]. However, such stimuli have, by design, a simple correlational structure, and therefore preclude the investigation of nonlinear response properties driven by higher-order stimulus characteristics. We have therefore adopted an alternative approach based on in vivo whole cell recording, exploiting the fact that although these neurons spike only rarely, they feature strong subthreshold activity. A set of subthreshold voltage traces, obtained by whole-cell recording in which spikes were blocked (only in the neuron being recorded from) with the intracellular sodium channel blocker QX-314 (see Methods), is shown in Fig. 1(c). The responses feature robust stimulus-locked fluctuations of membrane potential, as well as some spontaneous activity. Both the spontaneous and stimulus-locked voltage fluctuations are due to the synchronous arrival of many excitatory postsynaptic potentials (EPSPs). (Note that if spikes had not been blocked pharmacologically, some of the larger EPSPs would have triggered spikes.) Not only do these whole cell recordings avoid the problem of sparse spiking responses, they also provide insight into the computations performed by the input to the neuron's spike generating mechanism.

[¹ Because cortical neurons respond poorly to white noise, this stimulus has not been used to estimate cortical STRFs.]

Figure 1: (a) Spectrogram of the song of a nightingale.
(b) Spike raster plots recorded in cell-attached mode during ten repetitions of the nightingale song from a single neuron in auditory cortex A1. (c) Voltage traces recorded in whole-cell mode during ten repetitions from another neuron in A1.

3 Reliability of responses

A key step in the characterization of the neuron's responses is the separation of the stimulus-locked activity from the stimulus-independent activity ("background noise"). A sample average trace is compared with a single trial in Fig. 2(a). To quantify the amount of stimulus-locked activity, we computed the coherence function between a single response trace and the average over the remaining traces. The coherence measures the frequency-resolved correlation of two time series. This function is shown in Fig. 2(b) for responses to several natural stimuli from the same cell. The coherence function demonstrates that the stimulus-dependent activity is confined to lower frequencies ( Hz). Note that the coherence function provides merely an average over the complete trace; in reality, the coherence can locally be much higher (when all traces feature the same stimulus-locked excursion in membrane potential) or much lower (for instance in the absence of stimulus-locked activity). On average, however, the coherence is approximately the same for all the natural stimuli presented, indicating that all stimuli feature approximately the same level of background activity.

Figure 2: (a) Mean response compared to single trial for a natural stimulus (jaguar mating call). (b) Coherence functions between mean response and single trial for different stimuli. All natural stimuli yield approximately the same relation between signal and noise.
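To make the leave-one-out coherence analysis of Section 3 concrete, here is a minimal sketch. This is not the authors' code; the sampling rate, trial count, and toy 5 Hz "stimulus-locked" component are all assumptions chosen for illustration.

```python
# Sketch (not the authors' code): coherence between one response trace and
# the mean of the remaining traces.  All parameters below are assumed.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                      # assumed sampling rate, Hz
t = np.arange(0, 8.0, 1.0 / fs)  # one 8-second trial

# Toy data: 10 trials = shared stimulus-locked component + private noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * t)            # slow, stimulus-locked (5 Hz)
trials = signal + rng.normal(0, 1.0, (10, t.size))

# Leave-one-out: coherence of trial 0 with the mean of trials 1..9.
single = trials[0]
rest_mean = trials[1:].mean(axis=0)
f, Cxy = coherence(single, rest_mean, fs=fs, nperseg=1024)

# Stimulus-locked power is confined to low frequencies, so the coherence
# should be high near 5 Hz and near zero well above it.
low = Cxy[(f > 4) & (f < 6)].mean()
high = Cxy[(f > 100) & (f < 150)].mean()
print(low > high)  # True
```

The same leave-one-out construction avoids the upward bias that comes from correlating a trial with an average that includes it.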
4 Spectro-temporal receptive field

Having established the mean over trials as a reliable estimate of the stimulus-dependent activity, we next sought to understand the computations performed by the neurons. To mimic the cochlear transform, it has proven useful to describe the stimulus in the time-frequency domain [2]. Discretizing both time and frequency, we describe the stimulus power in the $t$-th time bin and the $f$-th frequency bin by
$s(t, f)$. To compute the time-frequency representation, we used the spectrogram method, which requires a certain choice for the time-frequency tradeoff [10]; several choices were used independently of each other, yielding essentially the same results. In all cases, stimulus power is measured in logarithmic units. The simplest and most widely used model is a linear transform between the stimulus (as represented by the spectrogram) and the response, given by the formula

$$V_{\mathrm{est}}(t) = c + \sum_{f} \sum_{\tau} k(f, \tau)\, s(t - \tau, f) \qquad (1)$$

where $c$ is a constant offset and the parameters $k(f, \tau)$ represent the spectro-temporal receptive field (STRF) of the neuron. Note, though, that the response is usually taken to be the average firing rate [2, 11]; here the response is given by the subthreshold voltage trace. The parameters can be fitted by minimizing the mean-square error between the measured response $V(t)$ and the estimated response $V_{\mathrm{est}}(t)$. This problem is solved by multi-dimensional linear regression. However, a direct, "naive" estimate as obtained by the solution to the regression equations will usually fail, since the stimulus does not properly sample all dimensions in stimulus space. In general, this leads to strong overfitting of the poorly sampled dimensions and poor predictive power of the model. The overfitting can be seen in the noisy structure of the STRF shown in Fig. 3(a). A simple alternative is to penalize the improperly sampled directions, which can be done using ridge regression [12]. Ridge regression minimizes the mean-square error between measured and estimated response while placing a constraint on the sum of the squared regression coefficients. Choosing the constraint such that the predictive power of the model is maximized, we obtained the STRF shown in Fig. 3(b). Note that ridge regression operates on all coefficients uniformly (i.e., the constraint is global), so that the observed smoothness in the estimated STRF represents structure in the data; no local smoothness constraint was applied.
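As a sketch of this fitting procedure (synthetic data, not the recordings used in the paper), the regression of Eq. (1) can be written as a delay embedding of the spectrogram followed by a closed-form ridge solution; the dimensions and penalty below are arbitrary choices.

```python
# Sketch (assumed setup, not the authors' code): STRF estimation by ridge
# regression on a delay-embedded spectrogram, per Eq. (1).
import numpy as np

rng = np.random.default_rng(1)
n_t, n_f, n_lag = 2000, 8, 10      # time bins, frequency bins, history length

S = rng.normal(size=(n_t, n_f))    # toy log-power spectrogram s(t, f)

# True STRF used to synthesize a "membrane potential" response.
k_true = rng.normal(size=(n_lag, n_f))

# Delay embedding: row t holds s(t - tau, f) for tau = 0..n_lag-1.
X = np.zeros((n_t, n_lag * n_f))
for tau in range(n_lag):
    X[tau:, tau * n_f:(tau + 1) * n_f] = S[:n_t - tau]
y = X @ k_true.ravel() + rng.normal(0, 0.5, n_t)

# Ridge estimate: k = (X'X + lam I)^-1 X'y.  The penalty lam plays the role
# of the global constraint on the coefficients described in the text.
lam = 10.0
k_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# With a well-sampled stimulus the estimate recovers the true filter closely.
corr = np.corrcoef(k_hat, k_true.ravel())[0, 1]
print(corr > 0.9)  # True
```

In practice the penalty would be chosen by cross-validation, i.e., to maximize predictive power on held-out data, as described in the text.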
Figure 3: (a) Naive estimate of the STRF via linear regression. Darker pixels denote time-frequency bins with higher power. (b) Estimate of the STRF via ridge regression. (Axes: time in seconds; frequency from 100 to 12800 Hz.)

The STRF displays the neuron's frequency sensitivity, centered around 800–1600 Hz. This range of frequencies matches the neuron's tuning curve, which is measured with short sine tones. The STRF suggests that the neuron essentially integrates frequencies within this range with a time constant of about 100 ms. These types of computations have been previously reported for neurons in auditory cortex [1, 2].

4.1 Spectral analysis of error

How well does the simple linear model predict the subthreshold responses? To assess the predictive power of the model, the STRF was estimated from data obtained for ten different natural stimuli and then tested on an eleventh stimulus. A sample prediction is shown in Fig. 4(a). While the predicted trace roughly captures the occurrence of the EPSPs, it fails to predict their overall shape. This observation can be quantified by spectrally resolving the prediction success. For that purpose, we again used the coherence function, which measures the correlation between the actual response and the predicted response at each frequency. This function is shown in Fig. 4(b). Clearly, the model fails to predict any response fluctuations faster than Hz. As a comparison, recall that the response is reliable up to about Hz (Fig. 2).

Figure 4: (a) Mean response and prediction for a natural stimulus (jaguar mating call). The STRF captures the gross features of the response, but not the fine details.
(b) Coherence function between measured and predicted response.

Figure 5: Squared correlation coefficients between the mean of the measured responses and the predicted response, for each of the eleven stimuli. Linear prediction with the STRF is more effective for some stimuli than others.

4.2 Errors across stimuli

Some of the natural stimuli elicited highly reliable responses that were not at all predicted by the STRF; see Fig. 5. In fact, the example shown in Fig. 4 is one of the best predictions achieved by the model. The failure to predict the responses to some stimuli cannot be attributed to the absence of stimulus-locked activity; as the coherence functions in Fig. 2(b) have shown, all stimuli feature approximately the same proportion of stimulus-locked activity to noise. Rather, such responses indicate a high degree of nonlinearity that dominates the response to some stimuli. This observation is in accord with previous work on neurons in the auditory forebrain of zebra finches [11], where neurons show a high degree of feature selectivity. The nonlinearities seen in subthreshold responses of A1 neurons can partly be attributed to adaptation, to interactions between frequencies [13, 14], and also to off-responses². In general, the linear model performs best if the stimuli are slowly modulated in both time and frequency.

5 Discussion

We have used whole cell patch clamp methods in vivo to record subthreshold membrane potential fluctuations elicited by natural sounds. Subthreshold responses were reliable and (in contrast to the suprathreshold spiking responses) sufficiently rich and robust to permit rapid and efficient estimation of the linear predictor of the neuron's response (the STRF). The present manuscript represents the first analysis of subthreshold responses elicited by natural stimuli in the cortex, or to our knowledge in any system.
STRFs estimated from natural sounds were in general agreement, with respect to gross characteristics such as frequency tuning, with those obtained directly from pure tone pips. The STRFs from complex sounds, however, provided a much more complete view of the neuron's dynamics, so that it was possible to compare the predicted and experimentally measured responses. In many cases the prediction was poor (cf. Fig. 6), indicating strong nonlinearities in the neuron's responses. These nonlinearities include adaptation, two-tone interactions, and off-responses. Explaining these nonlinearities represents an exciting challenge for future research.

[² Off-responses are excitatory responses that occur at the termination of stimuli in some neurons. Because they have the same sign as the on-response, they represent a form of rectifying nonlinearity. Further complications arise because on- and off-responses interact, depending on their spectro-temporal relations [14].]

Figure 6: Summary figure. Altogether cells were recorded in whole cell mode. Shown are the squared correlation coefficients, averaged over all stimuli for a given cell. For many cells, the linear model worked rather poorly, as indicated by low cross correlations.

6 Methods

Sprague-Dawley rats (p18-21) were anesthetized with ketamine (30 mg/kg) and medetomidine (0.24 mg/kg). Whole cell recordings and single unit recordings were made with glass microelectrodes ( MΩ) from primary auditory cortex (A1) using standard methods appropriately modified for the in vivo preparation. During whole cell recordings, sodium action potentials were blocked using the sodium channel blocker QX-314. All natural sounds were taken from an audio CD, sampled at 44,100 Hz. Animal vocalizations were from "The Diversity of Animal Sounds," available from the Cornell Laboratory of Ornithology.
Additional stimuli included pure tones and white noise bursts with 25 ms duration and 5 ms ramp (sampled at 97.656 kHz), and Purple Haze by Jimi Hendrix. Sounds were delivered by a TDT RP2 at 97.656 kHz to a calibrated TDT electrostatic speaker and presented free field in a double-walled sound booth.

References

[1] R. C. deCharms and M. M. Merzenich. Primary cortical representation of sounds by the coordination of action-potential timing. Nature, 381(6583):610–613, 1996.
[2] D. J. Klein, D. A. Depireux, J. Z. Simon, and S. A. Shamma. Robust spectrotemporal reverse correlation for the auditory system: optimizing stimulus design. J Comput Neurosci, 9(1):85–111, 2000.
[3] O. Creutzfeldt, F. C. Hellweg, and C. Schreiner. Thalamocortical transformation of responses to complex auditory stimuli. Exp Brain Res, 39(1):87–104, 1980.
[4] I. Nelken, Y. Rotman, and O. Bar Yosef. Responses of auditory-cortex neurons to structural features of natural sounds. Nature, 397:154–157, 1999.
[5] F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network, 12(3):289–316, 2001.
[6] J. F. Linden, R. C. Liu, M. Kvale, C. E. Schreiner, and M. M. Merzenich. Reverse-correlation analysis of receptive fields in mouse and rat auditory cortex. Society for Neuroscience Abstracts, 27(2):1635, 2001.
[7] P. Heil. Auditory cortical onset responses revisited. II. Response strength. J Neurophysiol, 77(5):2642–2660, 1997.
[8] S. L. Sally and J. B. Kelly. Organization of auditory cortex in the albino rat: sound frequency. J Neurophysiol, 59(5):1627–1638, 1988.
[9] D. A. Depireux, J. Z. Simon, D. J. Klein, and S. A. Shamma. Spectro-temporal response field characterization with dynamic ripples in ferret primary auditory cortex. J Neurophysiol, 85(3):1220–1234, 2001.
[10] L. Cohen. Time-Frequency Analysis. Prentice Hall, 1995.
[11] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-temporal receptive fields of nonlinear auditory neurons obtained by using natural sounds. J. Neurosci., 20(6):2315–2331, 2000.
[12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[13] M. Brosch and C. E. Schreiner. Time course of forward masking tuning curves in cat primary auditory cortex. J Neurophysiol, 77(2):923–943, 1997.
[14] L. Tai and A. Zador. In vivo whole cell recording of synaptic responses underlying two-tone interactions in rat auditory cortex. Society for Neuroscience Abstracts, 27(2):1634, 2001.
Bayesian Models of Inductive Generalization
Neville E. Sanjana & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{nsanjana, jbt}@mit.edu

Abstract

We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam's razor that trades off priors and likelihoods to prevent under- or over-generalization in these flexible spaces. We analyze two published data sets on inductive reasoning as well as the results of a new behavioral study that we have carried out.

1 Introduction

The problem of inductive reasoning — in particular, how we can generalize after seeing only one or a few specific examples of a novel concept — has troubled philosophers, psychologists, and computer scientists since the early days of their disciplines. Computational approaches to inductive generalization range from simple heuristics based on similarity matching to complex statistical models [5]. Here we consider where human inference falls on this spectrum. Based on two classic data sets from the literature and one more comprehensive data set that we have collected, we will argue for models based on a rational Bayesian learning framework [10]. We also confront an issue that has often been side-stepped in previous models of concept learning: the origin of the learner's hypothesis space. We present a simple, unsupervised clustering method for creating hypothesis spaces that, when applied to human similarity judgments and embedded in our Bayesian framework, consistently outperforms the best alternative models of inductive reasoning based on similarity-matching heuristics.
We focus on two related inductive generalization tasks introduced in [6], which involve reasoning about the properties of animals. The first task is to judge the strength of a generalization from one or more specific kinds of mammals to a different kind of mammal: given that animals of kinds $x_1$ and $x_2$ have property $P$, how likely is it that an animal of kind $y$ also has property $P$? For example, $x_1$ might be chimp, $x_2$ might be squirrel, and $y$ might be horse. $P$ is always a blank predicate, such as "is susceptible to the disease blicketitis", about which nothing is known outside of the given examples. Working with blank predicates ensures that people's inductions are driven by their deep knowledge about the general features of animals rather than the details they might or might not know about any one particular property. Stimuli are typically presented in the form of an argument from premises (examples) to conclusion (the generalization test item), as in

    Chimps are susceptible to the disease blicketitis.
    Squirrels are susceptible to the disease blicketitis.
    ---------------------------------------------------
    Horses are susceptible to the disease blicketitis.

and subjects are asked to judge the strength of the argument — the likelihood that the conclusion (below the line) is true given that the premises (above the line) are true. The second task is the same except for the form of the conclusion. Instead of asking how likely the property is to hold for another kind of mammal, e.g., horses, we ask how likely it is to hold for all mammals. We refer to these two kinds of induction tasks as the specific and general tasks, respectively. Osherson et al. [6] present data from two experiments using these tasks. One data set contains human judgments for the relative strengths of 36 specific inferences, each with a different pair of mammals given as examples (premises) but the same test species, horses.
The other set contains judgments of argument strength for 45 general inferences, each with a different triplet of mammals given as examples and the same test category, all mammals. Osherson et al. also published subjects' judgments of similarity for all 45 pairs of the 10 mammals used in their generalization experiments, which they (and we) use to build models of generalization.

2 Previous approaches

There have been several attempts to model the data in [6]: the similarity-coverage model [6], a feature-based model [8], and a Bayesian model [3]. The two factors that determine the strength of an inductive generalization in Osherson et al.'s model [6] are (i) similarity of the animals in the premise(s) to those in the conclusion, and (ii) coverage, defined as the similarity of the animals in the premise(s) to the larger taxonomic category of mammals, including all specific animal types in this domain. To see the importance of the coverage factor, compare the following two inductive generalizations. The chance that horses can get a disease given that we know chimps and squirrels can get that disease seems higher than if we know only that chimps and gorillas can get the disease. Yet simple similarity favors the latter generalization: horses are judged to be more similar to gorillas than to chimps, and much more similar to either primate species than to squirrels. Coverage, however, intuitively favors the first generalization: the set {chimp, squirrel} "covers" the set of all mammals much better than does the set {chimp, gorilla}, and to the extent that a set of examples supports generalization to all mammals, it should also support generalization to horses, a particular type of mammal. Similarity and coverage factors are mixed linearly to predict the strength of a generalization. Mathematically, the prediction is given by
$\alpha \cdot \mathrm{sim}(Y; X) + (1 - \alpha) \cdot \mathrm{sim}(\text{all mammals}; X)$, where $X$ is the set of examples (premises), $Y$ is the test set (conclusion), $\alpha$ is a free parameter, and $\mathrm{sim}(\cdot\,; \cdot)$ is a setwise similarity metric defined to be the sum of each $Y$ element's maximal similarity to the $X$ elements: $\mathrm{sim}(Y; X) = \sum_{y \in Y} \max_{x \in X} \mathrm{sim}(y, x)$. For the specific arguments, the test set has just one element, $Y = \{\text{horse}\}$, so $\mathrm{sim}(Y; X)$ is just the maximum similarity of horses to the example animal types in $X$. For the general arguments, $Y$ = all mammals, which is approximated by the set of all mammal types used in the experiment (see Figure 1). Osherson et al. [6] also consider a sum-similarity model, which replaces the maximum with a sum: $\mathrm{sim}_{\Sigma}(Y; X) = \sum_{y \in Y} \sum_{x \in X} \mathrm{sim}(y, x)$. Summed similarity has more traditionally been used to model human concept learning, and also has a rational interpretation in terms of nonparametric density estimation, but Osherson et al. favor the max-similarity model based on its match to their intuitions for these particular tasks. We examine both models in our experiments. Sloman [8] developed a feature-based model that encodes the shared features between the premise set and the conclusion set as weights in a neural network. Despite some psychological plausibility, this model consistently fit the two data sets significantly worse than the max-similarity model. Heit [3] outlines a Bayesian framework that provides qualitative explanations of various inductive reasoning phenomena from [6]. His model does not constrain the learner's hypothesis space, nor does it embody a generative model of the data, so its predictions depend strictly on well-chosen prior probabilities. Without a general method for setting these prior probabilities, it does not make quantitative predictions that can be compared here.

3 A Bayesian model

Tenenbaum & colleagues have previously introduced a Bayesian framework for learning concepts from examples, and applied it to learning number concepts [10], word meanings [11], as well as other domains. Formally, for the specific inference task, we observe positive examples
$X$ of the concept $C$ and want to compute the probability that a particular test stimulus $y$ belongs to the concept given the observed examples: $p(y \in C \mid X)$. These generalization probabilities are computed by averaging the predictions of a set of hypotheses weighted by their posterior probabilities:

$$p(y \in C \mid X) = \sum_{h \in H} p(y \in C \mid h)\, p(h \mid X) \qquad (1)$$

Hypotheses $h$ pick out subsets of stimuli — candidate extensions of the concept — and $p(y \in C \mid h)$ is just 1 or 0 depending on whether the test stimulus $y$ falls under the subset $h$. In the general inference task, we are interested in computing the probability that a whole test category $Y$ falls under the concept $C$:

$$p(Y \subseteq C \mid X) = \sum_{h \supseteq Y} p(h \mid X) \qquad (2)$$

A crucial component in modeling both tasks is the structure of the learner's hypothesis space $H$.

3.1 Hypothesis space

Elements of the hypothesis space $H$ represent natural subsets of the objects in the domain — subsets likely to be the extension of some novel property or concept. Our goal in building up $H$ is to capture as many hypotheses as possible that people might employ in concept learning, using a procedure that is ideally automatic and unsupervised. One natural way to begin is to identify hypotheses with the clusters returned by a clustering algorithm [11][7]. Here, hierarchical clustering seems particularly appropriate, as people across cultures appear to organize their concepts of biological species in a hierarchical taxonomic structure [1]. We applied four standard agglomerative clustering algorithms [2] (single-link, complete-link, average-link, and centroid) to subjects' similarity judgments for all pairs of 10 animals given in [6]. All four algorithms produced the same output (Figure 1), suggesting a robust cluster structure. We define the base set of clusters to consist of all 19 clusters in this tree. The most straightforward way to define a hypothesis space for Bayesian concept learning is to take $H = H_1$, where each hypothesis consists of one base cluster. We refer to $H_1$ as the "taxonomic hypothesis space". It is clear that $H_1$ alone is not sufficient.
The chance that horses can get a disease given that we know cows and squirrels can get that disease seems much higher than if we know only that chimps and squirrels can get the disease, yet the taxonomic hypotheses consistent with the example sets {cow, squirrel} and {chimp, squirrel} are the same. Bayesian generalization with a purely taxonomic hypothesis space essentially depends only on the least similar example (here, squirrel), ignoring more fine-grained similarity structure, such as that one example in the set {cow, squirrel} is very similar to the target horse even if the other is not. This sense of fine-grained similarity has a clear objective basis in biology, because a single property can apply to more than one taxonomic cluster, either by chance or through convergent evolution. If the disease in question could afflict two distinct clusters of animals, one exemplified by cows and the other by squirrels, then it is much more likely also to afflict horses (since they share most taxonomic clusters with cows) than if the disease afflicted two distinct clusters exemplified by chimps and squirrels. Thus we consider richer hypothesis subspaces $H_2$, consisting of all pairs of taxonomic clusters (i.e., all unions of two clusters from Figure 1, except those already included in $H_1$), and $H_3$, consisting of all triples of taxonomic clusters (except those included in lower layers). We stop with $H_3$ because we have no behavioral data beyond three examples. Our total hypothesis space is then the union of these three layers, $H = H_1 \cup H_2 \cup H_3$. The notion that the hypothesis space of candidate concepts might correspond to the power set of the base clusters, rather than just single clusters, is broadly applicable beyond the domain of biological properties.

Figure 1: Hierarchical clustering of mammals based on similarity judgments in [6] (leaves: horse, cow, elephant, rhino, chimp, gorilla, mouse, squirrel, dolphin, seal). Each node in the tree corresponds to one hypothesis in the taxonomic hypothesis space $H_1$.
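The base-cluster construction of Section 3.1 can be sketched as follows. The similarity matrix, animal subset, and choice of a single (average-link) method are illustrative assumptions; the paper used the published judgments for all 10 mammals and four linkage methods.

```python
# Sketch (toy similarities, not the published judgments): reading every
# cluster of an agglomerative tree out as a base hypothesis.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

animals = ["horse", "cow", "chimp", "gorilla", "mouse", "squirrel"]
# Hypothetical similarity matrix (symmetric, ones on the diagonal).
S = np.array([
    [1.0, 0.9, 0.3, 0.3, 0.2, 0.2],
    [0.9, 1.0, 0.3, 0.3, 0.2, 0.2],
    [0.3, 0.3, 1.0, 0.9, 0.3, 0.3],
    [0.3, 0.3, 0.9, 1.0, 0.3, 0.3],
    [0.2, 0.2, 0.3, 0.3, 1.0, 0.9],
    [0.2, 0.2, 0.3, 0.3, 0.9, 1.0]])
# Convert similarity to distance and cluster with average linkage.
Z = linkage(squareform(1.0 - S), method="average")

# Read out every cluster in the tree: n singletons plus n-1 merges.
n = len(animals)
clusters = [{i} for i in range(n)]
for left, right, _, _ in Z:
    clusters.append(clusters[int(left)] | clusters[int(right)])

base = [frozenset(animals[i] for i in c) for c in clusters]
print(len(base))                            # 2n - 1 = 11 base clusters
print(frozenset(["horse", "cow"]) in base)  # True
```

Each tree node then becomes one hypothesis in $H_1$; unions of two or three of these sets would populate the higher layers.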
If the base system of clusters is sufficiently fine-grained, this framework can parameterize any logically possible concept. It is analogous to other general-purpose representations for concepts, such as disjunctive normal form (DNF) in PAC-Learning, or class-conditional mixture models in density-based classification [5].

3.2 The Bayesian Occam's razor: balancing priors and likelihoods

Given this hypothesis space, Bayesian generalization then requires assigning a prior $p(h)$ and likelihood $p(X \mid h)$ for each hypothesis $h \in H$. Let $k$ be the number of base clusters, and let $h$ be a hypothesis in the $m$-th layer of the hypothesis space, $H_m$, corresponding to a union of $m$ base clusters. A simple but reasonable prior assigns to $h$ the probability of a sequence of $k$ i.i.d. Bernoulli variables with $m$ successes and parameter $\phi$:

$$p(h) = \phi^{m} (1 - \phi)^{k - m} \qquad (3)$$

Intuitively, this choice of prior is like assuming a generative model for hypotheses in which each base cluster has some small independent probability $\phi$ of expressing the concept; the correspondence is not exact because each hypothesis may be expressed as the union of base clusters in multiple ways, and we consider only the minimal union in defining $m$. For $\phi < 1/2$, this prior instantiates a preference for simpler hypotheses — that is, hypotheses consisting of fewer disjoint clusters (smaller $m$). More complex hypotheses receive exponentially lower probability under the prior, and the penalty for complexity increases as $\phi$ becomes smaller. This prior can be applied with any set of base clusters, not just those which are taxonomically structured. We are currently exploring a more sophisticated domain-specific prior for taxonomic clusters defined by a stochastic mutation process over the branches of the tree. Following [10], the likelihood is calculated by assuming that the examples are a random sample (with replacement) of instances from the concept to be learned. Let $n$ be the number of examples, and let the size $|h|$ of each hypothesis be simply the number of animal types it contains. Then $p(X \mid h)$ follows the size principle,

$$p(X \mid h) = \begin{cases} 1 / |h|^{n} & \text{if } h \text{ includes all examples in } X \\ 0 & \text{if } h \text{ does not include all examples in } X \end{cases} \qquad (4)$$

assigning greater likelihood to smaller hypotheses, by a factor that increases exponentially as the number of consistent examples observed increases. Note the tension between priors and likelihoods here, which implements a form of the Bayesian Occam's razor. The prior favors hypotheses consisting of few clusters, while the likelihood favors hypotheses consisting of small clusters. These factors will typically trade off against each other.
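This trade-off can be seen in a toy implementation. The handful of base clusters and the value of $\phi$ below are invented for illustration, and, unlike the model in the paper, this sketch does not deduplicate unions that already appear in a lower layer.

```python
# Sketch only: a few hypothetical base clusters stand in for the 19 clusters
# of Figure 1, and phi = 0.3 is an arbitrary choice.
from itertools import combinations

base = [frozenset(c) for c in (
    {"horse"}, {"cow"}, {"chimp"}, {"squirrel"},
    {"horse", "cow"}, {"chimp", "squirrel"},
    {"horse", "cow", "chimp", "squirrel"})]
phi = 0.3  # smaller phi -> heavier penalty on multi-cluster hypotheses

def hypotheses():
    # Layers H1..H3: unions of 1, 2, or 3 base clusters, each carrying the
    # prior of Eq. (3): phi^m (1 - phi)^(k - m).
    k = len(base)
    for m in (1, 2, 3):
        for combo in combinations(base, m):
            yield frozenset().union(*combo), phi ** m * (1 - phi) ** (k - m)

def p_generalize(y, X):
    # Eq. (1) with the size-principle likelihood of Eq. (4):
    # p(X | h) = 1 / |h|^n for hypotheses containing all n examples.
    num = den = 0.0
    for h, prior in hypotheses():
        if set(X) <= h:
            post = prior / len(h) ** len(X)
            den += post
            num += post if y in h else 0.0
    return num / den

p1 = p_generalize("horse", ["chimp"])
p3 = p_generalize("horse", ["chimp", "chimp", "chimp"])
print(p1 > p3)  # True: repeated examples sharpen generalization
```

The final comparison previews the behavioral prediction tested in Section 4.1: three examples of the same kind support narrower generalization than a single example.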
For any set of examples, we can always cover them under a single cluster if we make the cluster large enough, and we can always cover them with a hypothesis of minimal size (i.e., including no other animals beyond the examples) if we use only singleton clusters and let the number of clusters equal the number of examples. The posterior probability $p(h \mid X)$, proportional to the product of these terms, thus seeks an optimal tradeoff between over- and under-generalization.

4 Model results

We consider three data sets. Data sets 1 and 2 come from the specific and general tasks in [6], described in Section 1. Both tasks drew their stimuli from the same set of 10 mammals shown in Figure 1. Each data set (including the set of similarity judgments used to construct the models) came from a different group of subjects. Our models of the probability of generalization for specific and general arguments are given by Equations 1 and 2, respectively, letting $X$ be the example set that varies from trial to trial and $y$ or $Y$ (respectively) be the fixed test category, horses or all mammals. Osherson et al.'s subjects did not provide an explicit judgment of generalization for each example set, but only a relative ranking of the strengths of all arguments in the general or specific sets. Hence we also converted all models' predictions to ranks for each data set, to enable the most natural comparisons between model and data. Figure 3 shows the (rank) predictions of three models, Bayesian, max-similarity, and sum-similarity, versus human subjects' (rank) confirmation judgments on the general (row 1) and specific (row 2) induction tasks from [6]. Each model had one free parameter ($\phi$ in the Bayesian model, $\alpha$ in the similarity models), which was tuned to the single value that maximized rank-order correlation between model and data jointly over both data sets.
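For concreteness, the similarity-coverage competitor from Section 2 can be sketched directly. All similarity values below are invented placeholders (not the published judgments), and alpha = 0.5 is an arbitrary choice.

```python
# Sketch of the similarity-coverage model; similarity values are hypothetical.
vals = {frozenset(p): v for p, v in {
    ("horse", "cow"): 0.9, ("horse", "chimp"): 0.3, ("horse", "gorilla"): 0.4,
    ("horse", "squirrel"): 0.2, ("cow", "chimp"): 0.25, ("cow", "gorilla"): 0.3,
    ("cow", "squirrel"): 0.3, ("chimp", "gorilla"): 0.95,
    ("chimp", "squirrel"): 0.35, ("gorilla", "squirrel"): 0.3}.items()}

def sim(a, b):
    return 1.0 if a == b else vals[frozenset((a, b))]

def max_sim(Y, X):
    # Sum, over conclusion items, of each item's maximal similarity to X.
    return sum(max(sim(y, x) for x in X) for y in Y)

def sum_sim(Y, X):
    # Sum-similarity variant: replace the max with a sum.
    return sum(sim(y, x) for y in Y for x in X)

def strength(Y, X, mammals, alpha=0.5):
    # Linear mix of conclusion similarity and coverage of all mammals.
    return alpha * max_sim(Y, X) + (1 - alpha) * max_sim(mammals, X)

mammals = ["horse", "cow", "chimp", "gorilla", "squirrel"]
a = strength(["horse"], ["chimp", "squirrel"], mammals)
b = strength(["horse"], ["chimp", "gorilla"], mammals)
print(a > b)  # True: coverage makes {chimp, squirrel} the stronger premise set
```

Even with these toy numbers, the coverage term reproduces the qualitative effect described in Section 2: the diverse set {chimp, squirrel} supports the generalization to horses better than {chimp, gorilla}, although plain similarity to horses favors the latter.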
The best correlations achieved by the Bayesian model in both the general and specific tasks were greater than those achieved by either the max-similarity or sum-similarity models. The sum-similarity model is far worse than the other two — it is actually negatively correlated with the data on the general task — while max-similarity consistently scores slightly worse than the Bayesian model.

4.1 A new experiment: Varying example set composition

In order to provide a more comprehensive test of the models, we conducted a variant of the specific experiment using the same 10 animal types and the same constant test category, horses, but with example sets of different sizes and similarity structures. In both data sets 1 and 2, the number of examples was constant across all trials; we expected that varying the number of examples would cause difficulty for the max-similarity model because it is not explicitly sensitive to this factor. For this purpose, we included five three-premise arguments, each with three examples of the same animal species (e.g., {chimp, chimp, chimp}), and five one-premise arguments with the same five animals (e.g., {chimp}). We also included three-premise arguments where all examples were drawn from a low-level cluster of species in Figure 1 (e.g., {chimp, gorilla, chimp}). Because of the increasing preference for smaller hypotheses as more examples are observed, Bayes will in general make very different predictions in these three cases, but max-similarity will not. This manipulation also allowed us to distinguish the predictions of our Bayesian model from alternative Bayesian formulations [5][3] that do not include the size principle, and thus do not predict differences between generalization from one example and generalization from three examples of the same kind. We also changed the judgment task and cover story slightly, to match more closely the natural problem of inductive learning from randomly sampled examples.
Subjects were told that they were training to be veterinarians, by observing examples of particular animals that had been diagnosed with novel diseases. They were required to judge the probability that horses could get the same disease given the examples observed. This cover story made it clear to subjects that when multiple examples of the same animal type were presented, these instances referred to distinct individual animals. Figure 3 (row 3) shows the models' predicted generalization probabilities along with the data from our experiment: mean ratings of generalization from 24 subjects on 28 example sets, using one, two, or three examples and the same test species (horses) across all arguments. Again we show predictions for the best values of the free parameters. All three models fit best at different parameter values than in data sets 1 and 2, perhaps due to the task differences or the greater range of stimuli here.

Figure 2: Human generalization to the conclusion category horse when given one or three examples of a single premise type (premise categories: cow, chimp, mouse, dolphin, elephant).

Again, the max-similarity model comes close to the performance of the Bayesian model, but it is inconsistent with several qualitative trends in the data. Most notably, we found a difference between generalization from one example and generalization from three examples of the same kind, in the direction predicted by our Bayesian model. Generalization to the test category of horses was greater from singleton examples (e.g., chimp) than from three examples of the same kind (e.g., chimp, chimp, chimp), as shown in Figure 2. This effect was relatively small, but it was observed for all five animal types tested and it was statistically significant in a 2 × 5 (number of examples × animal type) ANOVA.
The max-similarity model, however, predicts no effect here, as do Bayesian accounts that do not include the size principle [3][5]. It is also of interest to ask whether these models are sufficiently robust to make reasonable predictions across all three experiments using a single parameter setting, or to make good predictions on held-out data when their free parameter is tuned on the remaining data. On both criteria, our Bayesian model maintains its advantage over max-similarity: at its single best parameter value, Bayes achieves higher correlations than max-similarity (at its own single best parameter value) on all three data sets, and using Monte Carlo cross validation [9] to tune each model's free parameter (1000 runs for each data set, 80%-20% training-test splits), Bayes obtains higher average test-set correlations on all three data sets.

5 Conclusion

Our Bayesian model offers a moderate but consistent quantitative advantage over the best similarity-based models of generalization, and also predicts qualitative effects of varying sample size that contradict alternative approaches. More importantly, our Bayesian approach has a principled rational foundation, and we have introduced a framework for unsupervised construction of hypothesis spaces that could be applied in many other domains. In contrast, the similarity-based approach requires arbitrary assumptions about the form of the similarity measure: it must include both "similarity" and "coverage" terms, and it must be based on max-similarity rather than sum-similarity. These choices have no a priori justification and run counter to how similarity models have been applied in other domains, leading us to conclude that rational statistical principles offer the best hope for explaining how people can generalize so well from so little data.
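The held-out evaluation used above (repeated random 80%-20% splits, tuning one free parameter on the training split and scoring rank-order correlation on the test split) can be sketched as follows. The toy data and one-parameter model are hypothetical stand-ins:

```python
# Sketch of Monte Carlo cross-validation for tuning a single free
# parameter, scored by rank-order (Spearman) correlation. Data, grid, and
# model below are placeholders, not the paper's stimuli or models.
import random

def spearman(xs, ys):
    """Rank-order correlation (no tie handling; assumes distinct values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

def mc_cv(data, human, model, grid, runs=1000, test_frac=0.2, seed=0):
    """Average test-set rank correlation over random train-test splits."""
    rng = random.Random(seed)
    scores = []
    for _ in range(runs):
        idx = list(range(len(data)))
        rng.shuffle(idx)
        cut = int(len(idx) * (1 - test_frac))
        train, test = idx[:cut], idx[cut:]
        best = max(grid, key=lambda p: spearman(
            [model(data[i], p) for i in train], [human[i] for i in train]))
        scores.append(spearman(
            [model(data[i], best) for i in test], [human[i] for i in test]))
    return sum(scores) / len(scores)

# Placeholder data where a linear one-parameter model fits perfectly:
data = list(range(10))
human = [float(x) for x in data]
avg_test_corr = mc_cv(data, human, lambda x, p: p * x, [0.5, 2.0], runs=20)
```

Because only ranks are compared, any order-preserving model transformation leaves the score unchanged, which is why rank correlation is the natural metric when subjects provide rankings rather than absolute judgments.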
Still, the consistently good performance of the max-similarity model raises an important question for future study: whether a relatively small number of simple heuristics might provide the algorithmic machinery implementing approximate rational inference in the brain. We would also like to understand how people's subjective hypothesis spaces have their origin in the objective structure of their environment. Two plausible sources for the taxonomic hypothesis space used here can both be ruled out. The actual biological taxonomy for these 10 animals, based on their evolutionary history, looks quite different from the subjective taxonomy used here. Substituting the true taxonomic clusters from biology for the base clusters of our model's hypothesis space leads to dramatically worse predictions of people's generalization behavior. Taxonomies constructed from linguistic co-occurrences, by applying the same agglomerative clustering algorithms to similarity scores output from the LSA algorithm [4], also lead to much worse predictions. Perhaps the most likely possibility has not yet been tested. It may well be that by clustering on simple perceptual features (e.g., size, shape, hairiness, speed, etc.), weighted appropriately, we can reproduce the taxonomy constructed here from people's similarity judgments. However, that only seems to push the problem back to the question of what defines the appropriate features and feature weights. We do not offer a solution here, but merely point to this question as perhaps the most salient open problem in trying to understand the computational basis of human inductive inference.

Acknowledgments

Tom Griffiths provided valuable help with statistical analysis. Supported by grants from NTT Communication Science Laboratories and MERL, and an HHMI fellowship to NES.

References

[1] S. Atran. Classifying nature across cultures. In An Invitation to Cognitive Science, volume 3. MIT Press, 1995.
[2] R. Duda, P. Hart, and D. Stork.
Pattern Classification. Wiley, New York, NY, 2001.
[3] E. Heit. A Bayesian analysis of some forms of induction. In Rational Models of Cognition. Oxford University Press, 1998.
[4] T. Landauer and S. Dumais. A solution to Plato's problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240, 1997.
[5] T. Mitchell. Machine Learning. McGraw-Hill, Boston, MA, 1997.
[6] D. Osherson, E. Smith, O. Wilkie, A. López, and E. Shafir. Category-based induction. Psychological Review, 97(2):185–200, 1990.
[7] N. Sanjana and J. Tenenbaum. Capturing property-based similarity in human concept learning. In Sixth International Conference on Cognitive and Neural Systems, 2002.
[8] S. Sloman. Feature-based induction. Cognitive Psychology, 25:231–280, 1993.
[9] P. Smyth. Clustering using Monte Carlo cross-validation. In Second International Conference on Knowledge Discovery and Data Mining, 1996.
[10] J. Tenenbaum. Rules and similarity in concept learning. In S. Solla, T. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 59–65. MIT Press, 2000.
[11] J. Tenenbaum and F. Xu. Word learning as Bayesian inference. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2000.

Figure 3: Model predictions (x-axis) plotted against human confirmation scores (y-axis). Each column shows the results for a particular model; each row is a different inductive generalization experiment, where n indicates the number of examples (premises) in the stimuli. Rank correlations per panel: general task (mammals, n=3): Bayes ρ = 0.94, max-similarity ρ = 0.87, sum-similarity ρ = 0.33; specific task (horse, n=2): ρ = 0.97, 0.91, 0.87; specific task (horse, n=1,2,3): ρ = 0.97, 0.93, 0.39.
Dynamical Constraints on Computing with Spike Timing in the Cortex

Arunava Banerjee and Alexandre Pouget
Department of Brain and Cognitive Sciences
University of Rochester, Rochester, New York 14627
{arunavab, alex}@bcs.rochester.edu

Abstract

If the cortex uses spike timing to compute, the timing of the spikes must be robust to perturbations. Based on a recent framework that provides a simple criterion to determine whether a spike sequence produced by a generic network is sensitive to initial conditions, and on numerical simulations of a variety of network architectures, we argue, within the limits set by our model of the neuron, that it is unlikely that precise sequences of spike timings are used for computation under conditions typically found in the cortex.

1 Introduction

Several models of neural computation use the precise timing of spikes to encode information. For example, Abeles et al. have proposed synchronous volleys of spikes (synfire chains) as a candidate for representing information in the cortex [1]. More recently, Maass has demonstrated how spike timing in general, not merely synfire chains, can be utilized to perform nonlinear computations [6]. For any of these schemes to function, the timing of the spikes must be robust to small perturbations; i.e., small perturbations of spike timing should not result in successively larger fluctuations in the timing of subsequent spikes. To use the terminology of dynamical systems theory, the network must not exhibit sensitivity to initial conditions. Indeed, reliable computation would simply be impossible if the timing of spikes were sensitive to the slightest source of noise, such as synaptic release variability or thermal fluctuations in the opening and closing of ionic channels. Diesmann et al. have recently examined this issue for the particular case of synfire chains in feed-forward networks [4].
They have demonstrated that the propagation of a synfire chain over several layers of integrate-and-fire neurons can be robust to 2 Hz of random background activity and to a small amount of noise in the spike timings. The question we investigate here is whether this result generalizes to the propagation of any arbitrary spatiotemporal configuration of spikes through a recurrent network of neurons. This question is central to any theory of computation in cortical networks using spike timing, since it is well known that the connectivity between neurons in the cortex is highly recurrent. Although there have been earlier attempts at resolving similar issues, the applicability of the results is limited by the model of the neuron [8] or the pattern of propagated spikes [5] considered. Before we can address this question in a principled manner, however, we must confront a couple of confounding issues. First stands the problem of stationarity. As is well known, Lyapunov characteristic exponents of trajectories are limit quantities that are guaranteed to exist (almost surely) in classical dynamical systems that are stationary. In systems such as the cortex that receive a constant barrage of transient inputs, it is questionable whether such a concept bears much relevance. Fortunately, our simulations indicate that convergence or divergence of trajectories in cortical networks can occur very rapidly (within 200-300 msec). Assuming that external inputs do not change drastically over such short time scales, one can reasonably apply the results from analysis under stationary conditions to such systems. Second, the issues of how a network should be constructed so as to generate a particular spatiotemporal pattern of spikes, as well as whether a given spatiotemporal pattern of spikes can be generated in principle, remain unresolved in the general setting.
It might be argued that without such knowledge, any classification of spike patterns into sensitive and insensitive classes is inherently incomplete. However, as shall be demonstrated later, sensitivity to initial conditions can be inferred under relatively weak conditions. In addition, we shall present simulation results from a variety of network architectures to support our general conclusions. The remainder of the paper is organized as follows. In Section 2, we briefly review relevant aspects of the dynamical system corresponding to a recurrent neuronal network as formulated in [2] and formally define "sensitivity to initial conditions". In Section 3, we present simulation results from a variety of network architectures. In Section 4, we interpret these results formally, which in turn leads us to an additional set of experiments. In Section 5, we draw conclusions regarding the issue of computation using spike timing in cortical networks based on these results.

2 Spike dynamics

A detailed exposition of an abstract dynamical system that models recurrent systems of biological neurons was presented in [2]. Here, we recount those aspects of the system that are relevant to the present discussion. Based on the intrinsic nature of the processes involved in the generation of postsynaptic potentials (PSP's) and of those involved in the generation of action potentials (spikes), it was shown that the state of a system of neurons can be specified by enumerating the temporal positions of all spikes generated in the system over a bounded past. For example, in Figure 1, the present state of the system is described by the positions of the spikes (solid lines) in the shaded region at t = 0, and the state of the system at a future time T is specified by the positions of the spikes (solid lines) in the shaded region at t = T.
Each internal neuron i in the system is assigned a membrane potential function P_i(·) that takes as its input the present state and generates the instantaneous potential at the soma of neuron i. It is the particular instantiation of the set of functions P_i(·) that determines the nature of the neurons as well as their connectivity in the network. Consider now the network in Figure 1, initialized at the particular state described by the shaded region at t = 0. Whenever the integration of the PSP's from all presynaptic spikes to a neuron, combined with the hyperpolarizing effects of its own spikes (the precise nature of the combination specified by P_i(·)), brings its membrane potential above threshold, the neuron emits a new spike. If the spikes in the shaded region at t = 0 were perturbed in time (dotted lines), this would result in a perturbation on the new spike. The size of the new perturbation would depend upon the positions of the spikes in the shaded region, the nature of P_i(·), and the sizes of the old perturbations. This scenario would in turn repeat to produce further perturbations on future spikes. In essence, any initial set of perturbations would propagate from spike to spike to produce a set of perturbations at any arbitrary future time t = T.

Figure 1: Schematic diagram of the spike dynamics of a system of neurons. Input neurons are colored gray and internal neurons black. Spikes are shown in solid lines and their corresponding perturbations in dotted lines. Note that spikes generated by the input neurons are not perturbed. Gray boxes demarcate a bounded past history starting at time t. The temporal positions of all spikes in the boxes specify the state of the system at times t = 0 and t = T.
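A toy version of this spike-generation rule can be sketched as follows. The alpha-function PSP, exponential afterhyperpolarization, and all constants are illustrative assumptions, not the paper's fitted P_i(·):

```python
# Toy sketch of the spike-generation rule described above: the somatic
# potential is a sum of PSP kernels from presynaptic spikes plus
# afterhyperpolarization (AHP) kernels from the neuron's own spikes; a
# spike is emitted on a threshold crossing. All kernels and constants are
# illustrative, not the paper's P_i.
import math

def psp(t, amp=0.5, tau=10.0):
    """Illustrative excitatory PSP (alpha function), peaking at t = tau."""
    return amp * (t / tau) * math.exp(1.0 - t / tau) if t > 0 else 0.0

def ahp(t, amp=-10.0, tau=30.0):
    """Afterhyperpolarization from the neuron's own spike; its slow decay
    produces a relative refractory period."""
    return amp * math.exp(-t / tau) if t > 0 else 0.0

def simulate(input_spikes, threshold=2.0, dt=0.1, duration=200.0):
    """Return the neuron's own spike times (msec), with a 1 msec absolute
    refractory period and no voltage reset, as in the text."""
    own = []
    t = 0.0
    while t < duration:
        v = sum(psp(t - s) for s in input_spikes)
        v += sum(ahp(t - s) for s in own)
        if v >= threshold and (not own or t - own[-1] > 1.0):
            own.append(t)
        t += dt
    return own

# Ten coincident input spikes at t = 10 drive the potential over threshold;
# a single input PSP (peak 0.5 mV here) stays far below it.
burst_spikes = simulate([10.0] * 10)
```

Perturbing the input spike times shifts the threshold-crossing time of the output spike, which is exactly the perturbation-propagation mechanism the analysis below quantifies.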
It is of considerable importance to note at this juncture that while the specification of the network architecture and the synaptic weights determines the precise temporal sequence of spikes generated by the network, the relative size of successive perturbations is determined by the temporal positions of the spikes in successive state descriptions at the instant of the generation of each new spike. If it can be demonstrated that there are particular classes of state descriptions that lead to large relative perturbations, one can deduce the qualitative aspects of the dynamics of a network armed with only a general description of its architecture. A formal analysis in Section 4 will bring to light such a classification. Let column vectors x and y denote, respectively, perturbations on the spikes of internal neurons at times t = 0 and t = T. We pad each vector with as many zeroes as there are input spikes in the respective state descriptions. Let A_T denote the matrix such that y = A_T x. Let B and C be the matrices described in [3] that discard the rigid translational components from the final and initial perturbations. Then, the dynamics of the system is sensitive to initial conditions if lim_{T→∞} ||B · A_T · C|| = ∞. If instead lim_{T→∞} ||B · A_T · C|| = 0, the dynamics is insensitive to initial conditions. A few comments are in order here. First, our interest lies not in the precise values of the Lyapunov characteristic exponents of trajectories (where they exist), but in whether the largest exponent is greater than or less than zero. Furthermore, the class of trajectories that satisfy either of the above criteria is larger (although not necessarily in measure) than the class of trajectories that have definite exponents. Second, input spikes are free parameters that have to be constrained in some manner if the above criteria are to be well-defined.
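The flavor of this criterion can be illustrated with a toy linear model (not the construction in [3]): each new perturbation is a weighted sum of the previous k perturbations with weights summing to 1, the rigid time-shift component is projected out at each step (the role of B and C), and the average log growth of the residual norm stands in for the largest exponent. The slope samplers below are hypothetical distributions:

```python
# Toy illustration of the norm-growth criterion, not the paper's exact
# matrix construction. Weights are sampled slopes normalized to sum to 1;
# the all-equal (rigid time-shift) component is removed each step.
import math
import random

def growth_rate(slope_sampler, steps=3000, k=20, seed=2):
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in range(k)]
    log_growth = 0.0
    for _ in range(steps):
        slopes = [slope_sampler(rng) for _ in range(k)]
        total = sum(slopes) or 1e-12
        new = sum(s / total * vi for s, vi in zip(slopes, v))
        v = v[1:] + [new]
        m = sum(v) / k
        v = [vi - m for vi in v]        # discard the rigid translation
        norm = math.sqrt(sum(vi * vi for vi in v))
        log_growth += math.log(norm)
        v = [vi / norm for vi in v]     # renormalize for stability
    return log_growth / steps

# All-positive slopes: convex averaging damps perturbations.
pos_rate = growth_rate(lambda r: r.uniform(0.5, 1.5))
# Mixed-sign slopes: occasional near-zero weight sums amplify them.
mixed_rate = growth_rate(lambda r: r.uniform(-1.0, 1.5))
```

With all slopes positive the update is a convex combination, so the residual norm contracts and the estimated rate is negative; admitting negative slopes lets the normalized weights become large, raising the rate, which mirrors the sensitive and insensitive regimes characterized in Section 4.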
By the same token, we do not consider the effects that perturbations of input spikes have on the dynamics of the system.

3 Simulations and results

A typical column in the cortex contains on the order of 10^5 neurons, approximately 80% of which are excitatory and the rest inhibitory. Each neuron receives around 10^4 synapses, approximately half of which are from neurons in the same column and the rest from excitatory neurons in other columns and the thalamus. These estimates indicate that even at background rates as low as 0.1 Hz, a column generates on average 10 spikes every millisecond. Since perturbations are propagated from spikes to generated spikes, divergence and/or convergence of spike trajectories could occur extremely rapidly. We test this hypothesis in this section through model simulations. All experiments reported here were conducted on a system containing 1000 internal neurons (set to model a cortical column) and 800 excitatory input neurons (set to model the input into the column). Of the 1000 internal neurons, 80% were chosen to be excitatory and the rest inhibitory. Each internal neuron received 100 synapses from other (internal as well as input) neurons in the system. The input neurons were set to generate random uncorrelated Poisson spike trains at a fixed rate of 5 Hz. The membrane potential function P_i(·) for each internal neuron was modeled as the sum of excitatory and inhibitory PSP's triggered by the arrival of spikes at synapses, and afterhyperpolarization potentials triggered by the spikes generated by the neuron. PSP's were modeled using a kernel function whose shape parameters ν, ε, and τ were set to mimic four kinds of synapses: NMDA, AMPA, GABA_A, and GABA_B. The synaptic weight ω was set for excitatory and inhibitory synapses so as to generate a mean spike rate of 5 Hz for excitatory and 15 Hz for inhibitory internal neurons.
The parameters were then held constant over the entire system, leaving the network connectivity and axonal delays as the only free parameters. After the generation of a spike, an absolute refractory period of 1 msec was introduced, during which the neuron was prohibited from generating a spike. There was no voltage reset. However, each spike triggered an afterhyperpolarization potential with a decay constant of 30 msec that led to a relative refractory period. Simulations were performed in 0.1 msec time steps, and the time bound on the state description, as related in Section 2, was set at 200 msec. The issue of correlated inputs was addressed by simulating networks of disparate architectures. On one extreme was an ordered two-layer ring network with input neurons forming the lower layer and internal neurons (with the inhibitory neurons placed evenly among the excitatory neurons) forming the upper layer. Each internal neuron received inputs from a sector of internal and input neurons centered on that neuron. As a result, any two neighboring internal neurons shared 96 of their 100 inputs (albeit with different axonal delays of 0.5-1.1 msec). This had the effect that output spike trains from neighboring internal neurons were highly correlated, with sectors of internal neurons producing synchronized bursts of spikes. On the other extreme was a network where each internal neuron received inputs from 100 randomly chosen neurons from the entire population of internal and input neurons. Several other networks, in which neighboring internal neurons shared an intermediate percentage of their inputs, were also simulated. Here, we present results from the two extreme architectures; the results from all the other networks were similar. Figure 2(a) displays sample output spike trains from 100 neighboring internal neurons over a period of 450 msec for both architectures.
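The two extreme wiring schemes can be sketched as follows. The scale, the exact sector layout, and the single-layer simplification are illustrative; the paper's ring also spans a separate input layer and uses distinct axonal delays, omitted here:

```python
# Sketch of the two extreme connectivity schemes: a ring where each neuron
# draws its fan-in from a centered sector (neighbors share almost all
# sources), versus uniform random fan-in (neighbors share almost none).
# Sizes are illustrative; input-layer neurons and delays are omitted.
import random

def ring_inputs(n_neurons, fan_in):
    half = fan_in // 2
    return [[(i + d) % n_neurons for d in range(-half, half)]
            for i in range(n_neurons)]

def random_inputs(n_neurons, fan_in, seed=0):
    rng = random.Random(seed)
    return [rng.sample(range(n_neurons), fan_in) for _ in range(n_neurons)]

def shared(conn, i, j):
    """Number of presynaptic sources common to neurons i and j."""
    return len(set(conn[i]) & set(conn[j]))

ring = ring_inputs(1000, 100)
rand = random_inputs(1000, 100)
# Neighboring ring neurons share 99 of their 100 sources in this sketch;
# random neighbors share only about fan_in**2 / n_neurons = 10 on average.
```

The shared-input count is the knob that interpolates between the two extremes: sliding the sector from full overlap to none reproduces the intermediate architectures mentioned in the text.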
In the first set of experiments, pairs of identical systems, driven by identical inputs and initialized at identical states except for one randomly chosen spike that was perturbed by 1 msec, were simulated. In all cases, the spike trajectories diverged very rapidly. Figure 2(b) presents spike trains generated by the same 100 neighboring internal neurons from the two simulations from 200 to 400 msec after initialization, for both architectures. To further explore the sensitivity of the spike trajectories, we partitioned each trajectory into segments of 500 spike generations each. For each such segment, we then extracted the spectral norm ||B · A_T · C|| after every 100 spike generations. Figure 2(c) presents the outcome of this analysis for both architectures. Although successive segments of 500 spike generations were found to be quite variable in their absolute sensitivity, each such segment was nevertheless found to be sensitive. We also simulated several other architectures (results not shown), such as systems with fixed axonal delays and ones with bursty behavior, with similar outcomes.

Figure 2: (a) Spike trains of 100 neighboring neurons for 450 msec from the ring and the random networks, respectively. (b) Spike trains from the same 100 neighboring neurons (above and below) 200 msec after initialization. Note that the trains have already diverged at 200 msec. (c) Spectral norm of sensitivity matrices of 14 successive segments of 500 spike generations each, computed in steps of 100 spike generations, for both architectures.
4 Analysis and further simulations

The reasons behind the divergence of the spike trajectories presented in Section 3 can be found by considering how perturbations are propagated from the set of spikes in the current state description to a newly generated spike. As shown in [3], the perturbation in the new spike can be represented as a weighted sum of the perturbations of those spikes in the state description that contribute to the generation of the new spike. The weight assigned to a spike x_i is proportional to the slope of the PSP or that of the hyperpolarization triggered by that spike (∂P/∂x_i in the general case), at the instant of the generation of the new spike. Intuitively, the larger the slope is, the greater is the effect that a perturbation of that spike can have on the total potential at the soma, and hence, the larger is the perturbation on the new spike. The proportionality constant is set so that the weights sum to 1. This constraint reflects the fact that if all spikes were to be perturbed by a fixed quantity, this would amount to a rigid displacement in time, causing the new spike to be perturbed by the same quantity. We denote the slopes by p_i and the weights by a_i. Then a_i = p_i / Σ_j p_j, where j ranges over all contributing spikes. We now assume that at the generation of each new spike, the p_i's are drawn independently from a stationary distribution (for both internal and input contributing spikes), and that the ratio of the number of internal to the total (internal plus input) spikes in any state description remains close to a fixed quantity μ at all times. Note that this amounts to an assumed probability distribution on the likelihood of particular spike trajectories rather than one on possible network architectures and synaptic weights. The iterative construction of the matrix A_T, based on these conditions, was described in detail in [3].
It was also shown that the statistic ⟨Σ_{i=1}^{m} a_i²⟩ plays a central role in the determination of the sensitivity of the resultant spike trajectories. In a minor modification to the analysis in [3], we assume that A_T represents the full perturbation (internal plus input) at each step of the process. While this merely entails the introduction of additional rows with zero entries to account for input spikes in each state, it alters the effect that B has on ||B · A_T · C|| in a way that allows for a simpler as well as bidirectional bound on the norm. Since the analysis is identical to that in [3] and does not introduce any new techniques, we only report the result: if ⟨Σ_{i=1}^{m} a_i²⟩ exceeds an upper threshold determined by μ and m (resp. falls below a corresponding lower threshold), then the spike trajectories are almost surely sensitive (resp. insensitive) to initial conditions. Here m denotes the number of internal spikes in the state description. If we make the liberal assumption that input spikes account for as much as half the total number of spikes in state descriptions, then, noting that m is a very large quantity (greater than 10^3 in all our simulations), the above constraint requires ⟨Σ a_i²⟩ > 3 for spike trajectories to be almost surely sensitive to initial conditions. From our earlier simulations, we extracted the value of Σ a_i² whenever a spike was generated, and computed the sample mean ⟨Σ a_i²⟩ over all spike generations. The mean was larger than 3 in all cases (it was 69.6 for the ring and 11.3 for the random network). The above criterion enables us to peer into the nature of the spike dynamics of real cortical columns: although simulating an entire column remains intractable, a single neuron can be simulated under various input scenarios, and the resultant statistic applied to infer the nature of the spike dynamics of a cortical column most of whose neurons operate under those conditions.
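The statistic itself is straightforward to estimate by sampling. In the sketch below the slope distributions are hypothetical stand-ins for the two input regimes discussed in the text, and outliers above 10^4 are discarded as described there:

```python
# Sketch of estimating the sensitivity statistic <sum_i a_i^2>: slopes p_i
# are sampled from an assumed stationary distribution, weights
# a_i = p_i / sum_j p_j are formed, and the statistic is averaged over
# simulated spike generations. The slope distributions are illustrative.
import random

def mean_sum_a_sq(slope_sampler, n_contrib=100, generations=5000, seed=3):
    rng = random.Random(seed)
    vals = []
    for _ in range(generations):
        p = [slope_sampler(rng) for _ in range(n_contrib)]
        total = sum(p)
        if abs(total) < 1e-9:
            continue  # degenerate draw: weights undefined
        ssq = sum((pi / total) ** 2 for pi in p)
        if ssq <= 1e4:  # conservative screening, as in the text
            vals.append(ssq)
    return sum(vals) / len(vals)

# All-positive slopes (as after a synchronized volley, every PSP on its
# rising phase) keep the statistic far below the sensitivity threshold ...
rising = mean_sum_a_sq(lambda r: r.uniform(0.1, 1.0))
# ... while mixed-sign slopes (desynchronized input, many PSPs on their
# falling phase) inflate it, because the weight denominator can be small.
mixed = mean_sum_a_sq(lambda r: r.uniform(-1.0, 1.2))
```

The mechanism is visible in the formula: negative p_i's shrink the denominator Σ_j p_j while leaving the numerators large, so individual weights, and hence Σ a_i², blow up, which is the intuition developed in the next paragraph.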
An examination of the mathematical nature of Σ a_i² reveals that its value rises as the size of the subset of p_i's that are negative grows larger. The criterion for sensitivity is therefore more likely to be met when a substantial portion of the excitatory PSP's are on their falling phase (and inhibitory PSP's on their rising phase) at the instant of the generation of each new spike. This corresponds to a case where the inputs into the neurons of a system are not strongly synchronized. Conversely, if spikes are generated soon after the arrival of a synchronized burst of spikes (all of whose excitatory PSP's are presumably on their rising phase), the criterion for sensitivity is less likely to be met. We simulated several combinations of the two input scenarios to identify cases where the corresponding spike trajectories in the system were not likely to be sensitive to initial conditions. We constructed a model pyramidal neuron with 10,000 synapses, 85% of which were chosen to be excitatory and the rest inhibitory. The threshold of the neuron was set at 15 mV above resting potential. PSP's were modeled using the function described earlier, with values for the parameters set to fit the data reported in [7]. For excitatory PSP's, the peak amplitudes ranged between 0.045 and 1.2 mV with the median around 0.15 mV, 10-90 rise times ranged from 0.75 to 3.35 msec, and widths at half amplitude ranged from 8.15 to 18.5 msec. For inhibitory PSP's, the peak amplitudes were on average twice as large, and the 10-90 rise times and widths at half amplitude were slightly larger. Whenever the neuron generated a new spike, the values of the p_i's were recorded and Σ a_i² was computed. The mean ⟨Σ a_i²⟩ was then computed over the set of all spike generations. In order to generate conservative estimates, samples with value above 10^4 were discarded (they comprised about 0.1% of the data). The data sets ranged in size from 3000 to 15,000.
Three experiments simulating various levels of uncorrelated input/output activity were conducted. In particular, excitatory Poisson inputs at 2, 20 and 40 Hz were balanced by inhibitory Poisson inputs at 6.3, 63 and 124 Hz to generate output rates of approximately 2, 20 and 40 Hz, respectively. We confirmed that the output in all three cases was Poisson-like (CV = 0.77, 0.74, and 0.89, respectively). The mean ⟨Σ a_i²⟩ for the three experiments was 4.37, 5.66, and 9.52, respectively. Next, two sets of experiments simulating the arrival of regularly spaced synfire chains were conducted. In the first set the random background activity was set at 2 Hz, and in the second, at 20 Hz. The synfire chains comprised spike volleys that arrived every 50 msec. Four experiments were conducted within each set: volleys were composed of either 100 or 200 spikes (producing jolts of around 10 and 20 mV, respectively) that were either fully synchronized or dispersed over a Gaussian distribution with σ = 1 msec. The mean ⟨Σ a_i²⟩ for the experiments was as follows. At 2 Hz background activity, it was 0.49 (200 spikes/volley, synchronized), 0.60 (200 spikes/volley, dispersed), 2.46 (100 spikes/volley, synchronized), and 2.16 (100 spikes/volley, dispersed). At 20 Hz background activity, it was 4.39 (200 spikes/volley, synchronized), 8.32 (200 spikes/volley, dispersed), 6.77 (100 spikes/volley, synchronized), and 6.78 (100 spikes/volley, dispersed). Finally, two sets of experiments simulating the arrival of randomly spaced synfire chains were conducted. In the first set the random background activity was set at 2 Hz, and in the second, at 20 Hz. The synfire chains comprised a sequence of spike volleys that arrived randomly at a rate of 20 Hz. Two experiments were conducted within each set: volleys were composed of either 100 or 200 synchronized spikes. The mean ⟨Σ a_i²⟩ for the experiments was as follows.
At 2 Hz background activity, it was 4.30 (200 spikes/volley) and 4.64 (100 spikes/volley). At 20 Hz background activity, it was 5.24 (200 spikes/volley) and 6.28 (100 spikes/volley).

5 Conclusion

As was demonstrated in Section 3, sensitivity to initial conditions transcends unstructured connectivity in systems of spiking neurons. Indeed, our simulations indicate that sensitivity is more the rule than the exception in systems modeling cortical networks operating at low to moderate levels of activity. Since perturbations are propagated from spike to spike, trajectories that are sensitive can diverge very rapidly in systems that generate a large number of spikes within a short period of time. Sensitivity is therefore an issue even for schemes based on precise sequences of spike timing with computation occurring over short (hundreds of msec) intervals. Within the limits set by our model of the neuron, we have found that spike trajectories are likely to be sensitive to initial conditions in all scenarios except where large (100-200 spike) synchronized bursts occur in the presence of sparse background activity (2 Hz) with a sufficient but not too large interval between successive bursts (50 msec). This severely restricts the possible use of precise spike sequences for reliable computation in cortical networks, for at least two reasons. First, unsynchronized activity can rise well above 2 Hz in the cortex, and second, the highly constrained nature of this dynamics would show up in in vivo recordings. Although cortical neurons can have vastly more complex responses than that modeled in this paper, our conclusions are based largely on the simplicity and the generality of the constraints identified (the analysis assumes a general membrane potential function P_i(·)).
Although a more refined model of the cortical neuron could lead to different values of the statistic computed, we believe that the results are unlikely to cross the noted bounds and therefore change our overall conclusions. We are, however, not arguing that computation with spike timing is impossible in general. There are neural structures, such as the nucleus laminaris in the barn owl and the electrosensory array in the electric fish, which have been shown to perform exquisitely precise computations using spike timing. Interestingly, these structures have very specialized neurons and network architectures. To conclude, computation using precise spike sequences does not appear to be likely in the cortex in the presence of Poisson-like activity at levels typically found there.
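The coefficient of variation (CV) quoted in the experiments above is the ratio of the standard deviation to the mean of the inter-spike intervals (ISIs); for an ideal Poisson process the ISIs are exponential and the CV equals 1. A minimal illustrative sketch (not the authors' simulation code; the rate is arbitrary):

```python
import numpy as np

# For a homogeneous Poisson spike train, inter-spike intervals are
# exponentially distributed, so the CV of the ISIs is close to 1.
rng = np.random.default_rng(0)
rate = 20.0                                  # Hz, illustrative
isis = rng.exponential(1.0 / rate, size=20000)
cv = isis.std() / isis.mean()                # CV = std(ISI) / mean(ISI)
```

Output rates reported as "Poisson-like" above correspond to CV values near 1; strongly regular firing would push the CV toward 0.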
|
2002
|
114
|
2,119
|
Bayesian Estimation of Time-Frequency Coefficients for Audio Signal Enhancement Patrick J. Wolfe Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK pjw47@eng.cam.ac.uk Simon J. Godsill Department of Engineering University of Cambridge Cambridge CB2 1PZ, UK sjg@eng.cam.ac.uk Abstract The Bayesian paradigm provides a natural and effective means of exploiting prior knowledge concerning the time-frequency structure of sound signals such as speech and music—something which has often been overlooked in traditional audio signal processing approaches. Here, after constructing a Bayesian model and prior distributions capable of taking into account the time-frequency characteristics of typical audio waveforms, we apply Markov chain Monte Carlo methods in order to sample from the resultant posterior distribution of interest. We present speech enhancement results which compare favourably in objective terms with standard time-varying filtering techniques (and in several cases yield superior performance, both objectively and subjectively); moreover, in contrast to such methods, our results are obtained without an assumption of prior knowledge of the noise power. 1 Introduction Natural sounds can be meaningfully represented as a superposition of translated and frequency-modulated versions of simple functions (atoms). As a result, so-called time-frequency representations are ubiquitous in audio signal processing. The focus of this paper is on signal enhancement via a regression in which time-frequency atoms form the regressors. This choice is motivated by the notion that an atomic time-frequency decomposition is the most natural way to split an audio waveform into its constituent parts—such as note attacks and steady pitches for music, voiced and unvoiced speech, and so on. Moreover, these features, along with prior knowledge concerning their generative mechanisms, are most easily described jointly in time and frequency through the use of Gabor frames. 
1.1 Gabor Frames We begin by briefly reviewing the concept of Gabor systems; detailed results and proofs may be found in, for example, [1]. (Audio examples described in this paper, as well as Matlab code allowing for their reproduction, may be found at the author’s web page: http://www-sigproc.eng.cam.ac.uk/ pjw47.) Consider a window function g whose time-frequency support is centred about the origin, and let g_{m,n}(t) = e^{2\pi i m b t} g(t - na) denote a time-shifted (translation by na) and frequency-shifted (modulation by mb) version thereof; such a collection of shifts defines a sampling grid over the time-frequency plane. Then (roughly speaking) if g is reasonably well-behaved and the lattice is sufficiently dense, the Gabor system
provides a (possibly non-orthogonal, or even redundant) series expansion of any function in a Hilbert space, and is thus said to generate a frame. More formally, a Gabor frame
is a dictionary of time-frequency shifted versions of a single basic window function g, having the additional property that there exist constants 0 < A \le B < \infty (frame bounds) such that

A \|f\|^2 \le \sum_{m,n} |\langle f, g_{m,n} \rangle|^2 \le B \|f\|^2 \quad \forall f \in H,

where H is the Hilbert space of functions of interest and \langle \cdot, \cdot \rangle denotes the inner product. This property can be understood as an approximate Plancherel formula, guaranteeing completeness of the set of building blocks in the function space. That is, any signal f \in H can be represented as an absolutely convergent infinite series of the g_{m,n}, or in the finite case, a linear combination thereof. Such a representation is given by the following formula:

f = \sum_{m,n} \langle f, \tilde{g}_{m,n} \rangle g_{m,n}, (1)

where \{\tilde{g}_{m,n}\} is a dual frame for \{g_{m,n}\}. Dual frames exist for any frame; however, the canonical dual frame, guaranteeing minimal (two-)norm coefficients in the expansion of (1), is given by \tilde{g}_{m,n} = S^{-1} g_{m,n}, where S is the frame operator, defined by Sf = \sum_{m,n} \langle f, g_{m,n} \rangle g_{m,n}. The notion of a frame thus incorporates bases as well as certain redundant representations; for example, an orthonormal basis is a tight frame (A = B) with A = B = 1; the union of two orthonormal bases yields a tight frame with frame bounds A = B = 2. Importantly, a key result in time-frequency theory (the Balian-Low Theorem) implies that redundancy is a necessary consequence of good time-frequency localisation.1 However, even with redundancy, the frame operator may, in certain special cases, be diagonalised. If, furthermore, the g_{m,n} are normalised in such a case, then analysis and synthesis can take place using the same window and inversion of the frame operator is avoided completely. Accordingly, Daubechies et al. [2] term such cases ‘painless nonorthogonal expansions’. 1.2 Short-Time Spectral Attenuation The standard noise reduction method in engineering applications is actually such an expansion in disguise (see, e.g., [3]). 
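The ‘painless’ case can be illustrated with a small numerical sketch (our own illustration, not the paper's code; the window length, hop and signal length are arbitrary choices). With a hop of half the window length and a periodic Hann window, the frame operator is diagonal: inverting it reduces to pointwise division by the summed squared window.

```python
import numpy as np

rng = np.random.default_rng(0)
L, hop = 64, 32                     # window length and time shift (redundancy two)
N = 1024                            # signal length, a multiple of hop (treated circularly)
w = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(L) / L)   # periodic Hann window

x = rng.standard_normal(N)

# Analysis: windowed FFTs on a regular time-frequency lattice
frames = [np.fft.fft(np.roll(x, -m * hop)[:L] * w) for m in range(N // hop)]

# Diagonal frame operator: accumulate w^2 over all circular shifts
S = np.zeros(N)
for m in range(N // hop):
    S[(m * hop + np.arange(L)) % N] += w ** 2

# Synthesis: overlap-add of inverse FFTs times the same window, then apply S^{-1}
y = np.zeros(N)
for m in range(N // hop):
    idx = (m * hop + np.arange(L)) % N
    y[idx] += np.real(np.fft.ifft(frames[m])) * w
y /= S
```

Because each sample is covered by exactly two windows, S(n) = 0.5(1 + cos^2(2*pi*n/L)) is bounded away from zero, and the round trip recovers x to machine precision.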
In this method, known as short-time spectral attenuation, a time-varying filter is applied to the frequency-domain transform of a noisy signal, using the overlap-add method of short-time Fourier analysis and synthesis. The observed signal y is first divided into overlapping segments through multiplication by a smooth, ‘sliding’ window function, which is non-zero only for a duration on the order of tens of milliseconds. The Fourier transform is then taken on each length-L interval (possibly zero-padded to length M), and the resultant vectors of spectral values
Y_t \in \mathbb{C}^M can be plotted side by side to yield a time-frequency representation known as the Gabor transform, or sub-sampled short-time Fourier transform, the modulus of which is the well-known spectrogram. The coefficients of this transform are attenuated to some degree in order to reduce the noise; as shown in Fig. 1, individual short-time intervals Y_t are then inverse-transformed, multiplied by a smoothing window, and added together in an appropriate manner to form a time-domain signal reconstruction \hat{x}. (1There is, however, an exception for real signals, which will be explored in more detail in §3.2.) Figure 1: Short-time spectral attenuation. This method of noise reduction, while being relatively fast and easily understood, exhibits several shortcomings: in its most basic form it ignores dependencies between the time-domain data in adjacent short-time blocks, and it assumes knowledge of the noise variance. Moreover, previous approaches in this vein have relied (either explicitly or implicitly) on independence assumptions amongst the time-frequency coefficients; see, e.g., [4]. Thus, with the aim of improving upon this popular class of audio noise reduction techniques, we have used these approaches as a starting point from which to proceed with a fully Bayesian analysis. As a step in this direction, we propose a Gabor regression model as follows. 2 Coefficient Shrinkage for Audio Signal Enhancement 2.1 Gabor Regression Let x \in \mathbb{R}^N denote a sampled audio waveform, the observation of which has been corrupted by additive white Gaussian noise of variance \sigma^2, yielding the simple additive model y = x + d. We consider regression in this case using a design matrix obtained from a Gabor frame.2 In our particular case, this choice of regressors is motivated by a desire for constant absolute bandwidth, as opposed to, e.g., the constant relative bandwidth of wavelets. 
We do not attempt to address here the relative merits of Gabor and wavelet frames per se; rather, we simply note that the changing frequency content of natural sound signals carries much of their information, and thus a time-frequency representation may well be more appropriate than a time-scale one. Moreover, audio signal enhancement results with wavelets have been for the most part disappointing (witness the dearth of literature in this area), whereas standard engineering practice has evolved to use time-varying filtering—which is inherently Gabor analysis. Although space does not permit a discussion of the relevance of Gabor-type transforms to auditory perception (see, e.g., [5]), as a final consideration it is interesting to note that Gabor’s original formulations [6]–[7] were motivated by psychoacoustic as well as information theoretic considerations. (2Technically, we consider the ring of integers mod N, under the assumption (without loss of generality) that the vector of sampled observations y has been extended to length N in a proper way at its boundary before being periodically extended.) 2.2 Bayesian Model By the completeness property of Gabor frames, any x can be represented as a linear combination of the elements of the frame. Thus, one has the model y = Gc + d, where the columns of G form the Gabor synthesis atoms, and elements of c represent the respective synthesis coefficients. To complete this model we assume an independent, identically distributed Gaussian noise vector, conditionally Gaussian coefficients, and inverted-Gamma conjugate priors:

d | \sigma^2 \sim N(0, \sigma^2 I),
c | \sigma_c^2 \sim N(0, \mathrm{diag}(\sigma_c^2)),
\sigma_{c,k}^2 \sim IG(\alpha_k, \beta_k), (2)

where \mathrm{diag}(\sigma_c^2) denotes a diagonal matrix, the individual elements of which are assumed to be distributed as in (2) above, and \alpha and \beta are hyperparameters. We note that it is possible to obtain vague priors through the choice of these hyperparameters; alternatively, one may wish to incorporate genuine prior knowledge about audio signal behaviour through them. In §3.2, we consider the case in which frequency-dependent coefficient priors are specified in order to exploit the time-frequency structure of natural sound signals. The choice of an inverted-Gamma prior for \sigma^2 is justified by its flexibility; for instance, in many audio enhancement applications one may be able to obtain a good estimate of the noise variance, which may in turn be reflected in the choice of hyperparameters. However, in order to demonstrate the performance of our model in the ‘worst-case’ scenario of little prior information, we assume here a diffuse prior
p(\sigma^2) \propto 1/\sigma^2 for \sigma^2. 2.3 Implementation As a means of obtaining samples from the posterior distribution and hence the corresponding point estimates, we propose to sample from the posterior using Markov chain Monte Carlo (MCMC) methods [8]. By design, all model parameters may be easily sampled from their respective full conditional distributions, thus allowing the straightforward employment of a Gibbs sampler [9]. In all of the experiments described herein, a tight, normalised Hanning window was employed as the Gabor window function, and a regular time-frequency lattice was constructed to yield a redundancy of two (corresponding to the common practice of a 50% window overlap in the overlap-add method). The arithmetic mean of the signal reconstructions from 1000 iterations (following 1000 iterations of ‘burn-in’, by which time the sampler appeared to have reached a stationary regime in each case) was taken to be the final result. As a further note, colour plots and representative audio examples may be found at the URL specified on the title page of this paper. While here we show results from random initialisations, with no attempt made to optimise parameters, we note that in practice it may be most efficient to initialise the sampler with the Gabor expansion of the noisy observation vector (such an initialisation will indeed be possible without inversion of the frame operator in the cases we consider here, which correspond to the overlap-add method described in §1.2). It can also be expected that, where possible, convergence may be speeded by starting the sampler in regions of likely high posterior probability, via use of a preliminary noise reduction method to obtain a robust coefficient initialisation. 
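A Gibbs sampler for a conjugate model of this form might look as follows (a sketch under simplifying assumptions: a small random design matrix stands in for the Gabor synthesis matrix G, the hyperparameter values are arbitrary, and the iteration counts are reduced for brevity). Each parameter is drawn in turn from its full conditional distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 120, 40
G = rng.standard_normal((N, M)) / np.sqrt(M)             # stand-in design matrix
c_true = rng.standard_normal(M) * (rng.random(M) < 0.3)  # sparse "coefficients"
sigma2_true = 0.05
y = G @ c_true + np.sqrt(sigma2_true) * rng.standard_normal(N)

alpha, beta = 2.0, 0.1          # illustrative IG hyperparameters
c, sig_c2, sigma2 = np.zeros(M), np.ones(M), 1.0
keep = []
for it in range(600):
    # c | rest ~ N(mu, Sigma) with Sigma^{-1} = G'G/sigma2 + diag(1/sig_c2)
    Prec = G.T @ G / sigma2 + np.diag(1.0 / sig_c2)
    Sigma = np.linalg.inv(Prec)
    mu = Sigma @ G.T @ y / sigma2
    c = mu + np.linalg.cholesky((Sigma + Sigma.T) / 2) @ rng.standard_normal(M)
    # sig_c2[k] | c[k] ~ IG(alpha + 1/2, beta + c[k]^2 / 2)
    sig_c2 = 1.0 / rng.gamma(alpha + 0.5, 1.0 / (beta + 0.5 * c ** 2))
    # sigma2 | y, c ~ IG(N/2, ||y - Gc||^2 / 2) under p(sigma2) ∝ 1/sigma2
    r = y - G @ c
    sigma2 = 1.0 / rng.gamma(0.5 * N, 2.0 / (r @ r))
    if it >= 300:               # discard burn-in
        keep.append(sigma2)
post_mean = float(np.mean(keep))
```

The posterior mean of `sigma2` collected after burn-in plays the role of the noise-variance estimate reported in the experiments; no knowledge of the true noise power enters the sampler.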
3 Simulations 3.1 Coefficient Shrinkage in the Overcomplete Case To test the noise reduction capabilities of the Gabor regression model, a speech signal of the short utterance ‘sound check’, sampled at 11.025 kHz, was artificially degraded with white Gaussian noise to yield signal-to-noise ratios (SNR) between 0 and 20 dB. At each SNR, ten runs of the sampler, at different random initialisations and using different pseudo-random number sequences, were performed as specified above. By way of comparison, three standard methods of short-time spectral attenuation (the Wiener filter, magnitude spectral subtraction, and the Ephraim and Malah suppression rule (EMSR) [4]) were also tested on the same data (noise variances were estimated from 5 seconds of the noise realisation in these cases); the results are shown in Fig. 2, along with estimates of the noise variance averaged over each of the ten runs. Figure 2: Noise reduction results for the Gabor regression experiment of §3.1. (a) Output SNR versus input SNR for the Wiener filter rule, Gabor regression, magnitude spectral subtraction, and the Ephraim and Malah rule; individual realisations corresponding to the ten sampler runs are so closely spaced as to be indistinguishable. (b) True and estimated noise variances (each averaged over ten runs of the sampler). As it is able to outperform many of the short-time methods over a wide range of SNR (despite its relative disadvantage of not being given the estimated noise variance), and is also able to accurately estimate the noise variance over this range, the results of Fig. 2 would seem to indicate the appropriateness of the Gabor regression scheme for audio signal enhancement. 
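The input/output SNR figures quoted above follow the usual definition (our own helper, not code from the paper): the ratio, in dB, of clean-signal power to the power of the residual error.

```python
import numpy as np

def snr_db(clean, estimate):
    """SNR in dB: clean-signal power over residual-error power."""
    clean = np.asarray(clean, dtype=float)
    err = clean - np.asarray(estimate, dtype=float)
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```

Computing `snr_db(clean, noisy)` gives the input SNR and `snr_db(clean, restored)` the output SNR; the gain is their difference.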
However, listening tests reveal that the algorithm, while improving upon the shortcomings of standard approaches discussed in §1.2, still suffers from the same ‘musical’ residual noise. The EMSR, on the other hand, is known for its more colourless residual noise (although, as can be seen from Fig. 2, it tends to exhibit severe over-smoothing at higher SNR); we address this issue in the following section. 3.2 Coefficient Shrinkage Using Wilson Bases In the case of a real signal, it is still possible to obtain good time-frequency localisation without incurring the penalty of redundancy through the use of Wilson bases (also known in the engineering literature as lapped transforms; see, e.g., [1]). As an example of incorporating basic prior knowledge about audio signal structure in a relatively simple and straightforward manner, now consider letting the scale factor \beta of (2) become an inverse function of frequency, so that the elements of the inverted-Gamma-distributed coefficient variance vector \sigma_c^2, although independent, are no longer identically distributed. To test the effects of such a frequency-dependent prior in the context of a Wilson regression model (in comparison with the diffuse priors employed in §3.1), the speech signal of the previous example was degraded with white Gaussian noise to yield an SNR of 10 dB. Once again, posterior mean estimates over the last 1000 iterations of a 2000-iteration Gibbs sampler run were taken as the final result. Figure 3 shows samples of the noise variance parameter in this case. 
Figure 3: Noise variance samples for the two Wilson regression schemes of §3.2 (identical-prior case, frequency-dependent-prior case, and the true value). While both the diffuse and frequency-dependent prior schemes yield an estimate close to the true noise variance, and indeed give similar SNR gains of 3.07 and 2.85 dB, respectively, the corresponding restorations differ greatly in their perceptual quality. Figure 4 shows spectrograms of the clean and noisy test signal, as well as the resultant restorations; whereas Fig. 5 shows waveform and spectrogram plots of the corresponding residuals (for greater clarity, colour plots are provided on-line). It may be seen from Figs. 4 and 5 that the residual noise in the case of the frequency-dependent priors appears less coloured, and in fact this restoration suffers much less from the so-called ‘musical noise’ artefact common to audio signal enhancement methods. It is well-known that a ‘whiter-sounding’ residual is perceptually preferable; in fact, some noise reduction methods have attempted this explicitly [10]. 4 Discussion Here we have presented a model for regression of audio signals, using elements of a Gabor frame as a design matrix. Note that in alternative contexts, others have also considered scale mixtures of normals as we do here (see, e.g., [11]–[12]); in fact, the priors discussed in [13] constitute special cases of those employed in the Gabor regression model. This model may also be extended to include indicator variables, thus allowing one to perform Bayesian model averaging [8]–[9]. 
In this case it may be desirable to employ an even larger ‘dictionary’ of regressors, in order to obtain the most parsimonious representation possible.3 Figure 4: Spectrograms for the two Wilson regression schemes of §3.2 in the case of diffuse vs. frequency-dependent priors (grey scale is proportional to log-amplitude): original speech signal, degraded speech signal, and the posterior mean reconstructions for the identical-prior and frequency-dependent-prior cases. Multi-resolution wavelet-like schemes are one of many possibilities; for an example application in this vein we refer the reader to [14]. The strength of such a fully Bayesian approach lies largely in its extensibility to allow for more accurate signal and noise models; in this vein work is continuing on the development of appropriate conditional prior structures for audio signals, including the formulation of Markov random field models. The main weakness of this method at present lies in the computational intensity inherent in the sampling scheme; a comparison to more recent and sophisticated probabilistic methods (e.g., [15]–[16]) is now in order to determine whether the benefits to be gained from such an approach outweigh its computational drawbacks. References [1] Gröchenig, K. (2001). Foundations of Time-Frequency Analysis. Boston: Birkhäuser. [2] Daubechies, I., Grossmann, A., and Meyer, Y. (1986). Painless nonorthogonal expansions. J. Math. Phys. 27, 1271–1283. [3] Dörfler, M. (2001). Time-frequency analysis for music signals: a mathematical approach. J. New Mus. Res. 30, 3–12. [4] Ephraim, Y. and Malah, D. (1984). Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator. IEEE Trans. Acoust., Speech, Signal Processing ASSP-32, 1109–1121. 
3It remains an open question as to whether the resultant variable selection problem would be amenable to approaches other than MCMC—for instance, a perfect sampling scheme. Figure 5: Waveform and spectrogram plots of the Wilson regression residuals (identical-prior and frequency-dependent-prior cases). [5] Wolfe, P. J. and Godsill, S. J. (2001). Perceptually motivated approaches to music restoration. J. New Mus. Res. 30, 83–92. [6] Gabor, D. (1946). Theory of communication. J. IEE 93, 429–457. [7] Gabor, D. (1947). Acoustical quanta and the theory of hearing. Nature 159, 591–594. [8] Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. New York: Springer. [9] Gilks, W. R., Richardson, S., and Spiegelhalter, D. J. (1996). Markov Chain Monte Carlo in Practice. London: Chapman & Hall. [10] Ephraim, Y. and Van Trees, H. L. (1995). A signal subspace approach for speech enhancement. IEEE Trans. Speech Audio Processing 3, 251–266. [11] Shepard, N. (1994). Partial non-Gaussian state space. Biometrika 81, 115–131. [12] Godsill, S. J. and Rayner, P. J. W. (1998). Digital Audio Restoration: A Statistical Model Based Approach. Berlin: Springer-Verlag. [13] Figueiredo, M. A. T. (2002). Adaptive sparseness using Jeffreys prior. In T. G. Dietterich, S. Becker, and Z. Ghahramani (eds.), Advances in Neural Information Processing Systems 14, pp. 697–704. Cambridge, MA: MIT Press. [14] Wolfe, P. J., Dörfler, M., and Godsill, S. J. (2001). Multi-Gabor dictionaries for audio time-frequency analysis. In Proc. IEEE Worksh. App. Signal Processing Audio Acoust., pp. 43–46. [15] Attias, H., Deng, L., Acero, A., and Platt, J. C. (2001). 
A new method for speech denoising and robust speech recognition using probabilistic models for clean speech and for noise. In Proc. Eurospeech 2001, vol. 3, pp. 1903–1906. [16] Attias, H., Platt, J. C., Acero, A., and Deng, L. (2001). Speech denoising and dereverberation using probabilistic models. In T. Leen (ed.), Advances in Neural Information Processing Systems 13, pp. 758–764. Cambridge, MA: MIT Press.
|
2002
|
115
|
2,120
|
Real-Time Monitoring of Complex Industrial Processes with Particle Filters Rubén Morales-Menéndez Dept. of Mechatronics and Automation ITESM campus Monterrey Monterrey, NL México rmm@itesm.mx Nando de Freitas and David Poole Dept. of Computer Science University of British Columbia Vancouver, BC, V6T 1Z4, Canada {nando,poole}@cs.ubc.ca Abstract This paper discusses the application of particle filtering algorithms to fault diagnosis in complex industrial processes. We consider two ubiquitous processes: an industrial dryer and a level tank. For these applications, we compared three particle filtering variants: standard particle filtering, Rao-Blackwellised particle filtering and a version of Rao-Blackwellised particle filtering that does one-step look-ahead to select good sampling regions. We show that the overhead of the extra processing per particle of the more sophisticated methods is more than compensated by the decrease in error and variance. 1 Introduction Real-time monitoring is important in many areas such as robot navigation or diagnosis of complex systems [1, 2]. This paper considers online monitoring of complex industrial processes. The processes have a number of discrete states, corresponding to different combinations of faults or regions of qualitatively different dynamics. The dynamics can be very different based on the discrete states. Even if there are very few discrete states, exact monitoring is computationally infeasible as the state of the system depends on the history of the discrete states. However, there is a need to monitor these systems in real time to determine what faults could have occurred. This paper investigates the feasibility of using particle filtering (PF) for online monitoring. It also proposes some powerful variants of PF. These variants involve doing more computation per particle for each time step. 
We wanted to investigate whether we could do real-time monitoring and whether the extra cost of the more sophisticated methods was worthwhile in these real-world domains. (Visiting Scholar (2000-2003) at The University of British Columbia.) 2 Classical approaches to fault diagnosis in dynamic systems Most existing model-based fault diagnosis methods use a technique called analytical redundancy [3]. Real measurements of a process variable are compared to analytically calculated values. The resulting differences, named residuals, are indicative of faults in the process. Many of these methods rely on simplifications and heuristics [4, 5, 6, 7]. Here, we propose a principled probabilistic approach to this problem. 3 Processes monitored We analyzed two industrial processes: an industrial dryer and a level-tank. In each of these, we physically inserted a sequence of faults into the system and made appropriate measurements. The nonlinear models that we used in the stochastic simulation were obtained through open-loop step responses for each discrete state [8]. The parametric identification procedure was guided by the minimum squares error algorithm [9] and validated with the “Control Station” software [10]. The discrete-time state space representation was obtained by a standard procedure in control engineering [8]. 3.1 Industrial dryer An industrial dryer is a thermal process that converts electricity to heat. As shown in Figure 1, we measure the temperature of the output air-flow. Figure 1: Industrial dryer. Normal operation corresponds to low fan speed, open air-flow grill and clean temperature sensor (we denote this the normal discrete state). We induced 3 types of fault: faulty fan, faulty grill (the grill is closed), and faulty fan and grill. 3.2 Level tank Many continuous industrial processes need to control the amount of accumulated material using level measurement, such as evaporators, distillation columns or boilers. 
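The analytical-redundancy idea of Section 2 can be sketched generically (an illustration of the residual concept only; `fault_flags`, its threshold rule and its parameters are our own, not the paper's residual evaluation):

```python
import numpy as np

def fault_flags(measured, predicted, noise_std, k=3.0):
    """Flag samples whose residual (measured minus model-predicted value)
    exceeds k standard deviations of the expected measurement noise."""
    residual = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    return np.abs(residual) > k * noise_std
```

A small residual is consistent with normal operation; a persistently large one indicates a mismatch between the plant and its nominal model, i.e. a candidate fault. The probabilistic approach developed below replaces this hard threshold with a posterior distribution over discrete fault states.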
We worked with a level-tank system that exhibits the dynamic behaviour of these important processes, see Figure 2. A by-pass pipe and two manual valves were used to induce typical faulty states. Figure 2: Level-Tank.

4 Mathematical model We adopted the following jump Markov linear Gaussian model:

z_t = A(y_t) z_{t-1} + B(y_t) w_t + F(y_t) u_t
x_t = C(y_t) z_t + D(y_t) v_t + G(y_t) u_t,

where x_t denotes the measurements, z_t denotes the unknown continuous states, u_t is a known control signal, and y_t \in \{1, \ldots, n_y\} denotes the unknown discrete states (normal operations and faulty conditions). The noise processes are i.i.d. Gaussian: w_t \sim N(0, I) and v_t \sim N(0, I). The parameters (A, B, C, D, F, G) are identified matrices with D(y_t) D(y_t)' > 0 for any y_t. The initial states are z_0 \sim N(\mu_0, \Sigma_0) and y_0 \sim p(y_0). The important thing to notice is that for each realization of y_{1:t}, we have a single linear-Gaussian model. If we knew y_{1:t}, we could solve for z_{1:t} exactly using the Kalman filter algorithm. The aim of the analysis is to compute the marginal posterior distribution1 of the discrete states p(y_{1:t} | x_{1:t}). This distribution can be derived from the posterior distribution p(y_{1:t}, z_{1:t} | x_{1:t}) by standard marginalisation. The posterior density satisfies the following recursion:

p(y_{1:t}, z_{1:t} | x_{1:t}) = p(y_{1:t-1}, z_{1:t-1} | x_{1:t-1}) \frac{p(x_t | z_t, y_t) \, p(z_t | z_{t-1}, y_t) \, p(y_t | y_{t-1})}{p(x_t | x_{1:t-1})} (1)

This recursion involves intractable integrals. One, therefore, has to resort to some form of numerical approximation scheme.

1NOTATION: For a generic vector x, we adopt the notation x_{1:t} = (x_1, x_2, \ldots, x_t) to denote all the entries of this vector up to time t. For simplicity, we use x_t to denote both the random variable and its realisation. Consequently, we express continuous probability distributions using p(dx_t) instead of Pr(X_t \in dx_t) and discrete distributions using p(y_t) instead of Pr(Y_t = y_t). If these distributions admit densities with respect to an underlying measure (counting or Lebesgue), we denote these densities by p(x_t); for example, on a Euclidean space we use the Lebesgue measure, so that p(dx_t) = p(x_t) dx_t.

5 Particle filtering In the PF setting, we use a weighted set of samples (particles) \{(y_{1:t}^{(i)}, z_{1:t}^{(i)}), w_t^{(i)}\}_{i=1}^{N} to approximate the posterior with the following point-mass distribution:

\hat{p}_N(y_{1:t}, z_{1:t} | x_{1:t}) = \sum_{i=1}^{N} w_t^{(i)} \delta_{(y_{1:t}^{(i)}, z_{1:t}^{(i)})}(y_{1:t}, z_{1:t}),

where \delta denotes the Dirac-delta function. Given N particles \{(y_{1:t-1}^{(i)}, z_{1:t-1}^{(i)})\}_{i=1}^{N} at time t-1, approximately distributed according to p(y_{1:t-1}, z_{1:t-1} | x_{1:t-1}), PF enables us to compute N particles \{(y_{1:t}^{(i)}, z_{1:t}^{(i)})\}_{i=1}^{N} approximately distributed according to p(y_{1:t}, z_{1:t} | x_{1:t}) at time t. Since we cannot sample from the posterior directly, the PF update is accomplished by introducing an appropriate importance proposal distribution q(y_{1:t}, z_{1:t}) from which we can obtain samples. The basic algorithm, Figure 3, consists of two steps: sequential importance sampling and selection (see [11] for a detailed derivation). This algorithm uses the transition priors as proposal distributions: q(y_{1:t}, z_{1:t} | x_{1:t}) = p(y_t | y_{t-1}) p(z_t | z_{t-1}, y_t) q(y_{1:t-1}, z_{1:t-1} | x_{1:t-1}). For the selection step, we used a state-of-the-art minimum variance resampling algorithm [12].

Sequential importance sampling step: For i = 1, \ldots, N, sample from the transition priors y_t^{(i)} \sim p(y_t | y_{t-1}^{(i)}) and z_t^{(i)} \sim p(dz_t | z_{t-1}^{(i)}, y_t^{(i)}), and set (y_{1:t}^{(i)}, z_{1:t}^{(i)}) = (y_t^{(i)}, z_t^{(i)}, y_{1:t-1}^{(i)}, z_{1:t-1}^{(i)}). For i = 1, \ldots, N, evaluate and normalize the importance weights w_t^{(i)} \propto p(x_t | y_t^{(i)}, z_t^{(i)}). Selection step: Multiply/discard particles \{(y_{1:t}^{(i)}, z_{1:t}^{(i)})\}_{i=1}^{N} with respect to high/low importance weights w_t^{(i)} to obtain N new particles. Figure 3: PF algorithm at time t.

6 Improved Rao-Blackwellised particle filtering By considering the factorisation p(y_{1:t}, z_{1:t} | x_{1:t}) = p(z_{1:t} | y_{1:t}, x_{1:t}) p(y_{1:t} | x_{1:t}), it is possible to design more efficient algorithms. The density p(z_{1:t} | y_{1:t}, x_{1:t}) is Gaussian and can be computed analytically if we know the marginal posterior density p(y_{1:t} | x_{1:t}). This density satisfies the alternative recursion:

p(y_{1:t} | x_{1:t}) = p(y_{1:t-1} | x_{1:t-1}) \frac{p(x_t | x_{1:t-1}, y_{1:t}) \, p(y_t | y_{t-1})}{p(x_t | x_{1:t-1})} (2)

If equation (1) does not admit a closed-form expression, then equation (2) does not admit one either and sampling-based methods are still required. (Also note that the term p(x_t | x_{1:t-1}, y_{1:t}) in equation (2) does not simplify to p(x_t | y_t) because there is a dependency on past values through z_{1:t}.) Now assuming that we can use a weighted set of samples \{y_{1:t}^{(i)}, w_t^{(i)}\}_{i=1}^{N} to represent the marginal posterior distribution

\hat{p}_N(y_{1:t} | x_{1:t}) = \sum_{i=1}^{N} w_t^{(i)} \delta_{y_{1:t}^{(i)}}(y_{1:t}),

the marginal density of z_{1:t} is a Gaussian mixture

\hat{p}_N(z_{1:t} | x_{1:t}) = \int p(z_{1:t} | y_{1:t}, x_{1:t}) \hat{p}_N(y_{1:t} | x_{1:t}) \, dy_{1:t} = \sum_{i=1}^{N} w_t^{(i)} p(z_{1:t} | x_{1:t}, y_{1:t}^{(i)})

that can be computed efficiently with a stochastic bank of Kalman filters. That is, we use PF to estimate the distribution of y_{1:t} and exact computations (Kalman filter) to estimate the mean and variance of z_{1:t}. In particular, we sample y_t^{(i)} and then propagate the mean \mu_t^{(i)} and covariance \Sigma_t^{(i)} of z_t with a Kalman filter:

\mu_{t|t-1} = A(y_t) \mu_{t-1} + F(y_t) u_t
\Sigma_{t|t-1} = A(y_t) \Sigma_{t-1} A(y_t)' + B(y_t) B(y_t)'
S_t = C(y_t) \Sigma_{t|t-1} C(y_t)' + D(y_t) D(y_t)'
\hat{x}_{t|t-1} = C(y_t) \mu_{t|t-1} + G(y_t) u_t
\mu_t = \mu_{t|t-1} + \Sigma_{t|t-1} C(y_t)' S_t^{-1} (x_t - \hat{x}_{t|t-1})
\Sigma_t = \Sigma_{t|t-1} - \Sigma_{t|t-1} C(y_t)' S_t^{-1} C(y_t) \Sigma_{t|t-1},

where \mu_{t|t-1} = E(z_t | x_{1:t-1}, y_{1:t}), \Sigma_{t|t-1} = cov(z_t | x_{1:t-1}, y_{1:t}), \hat{x}_{t|t-1} = E(x_t | x_{1:t-1}, y_{1:t}), S_t = cov(x_t | x_{1:t-1}, y_{1:t}), \mu_t = E(z_t | x_{1:t}, y_{1:t}) and \Sigma_t = cov(z_t | x_{1:t}, y_{1:t}). This is the basis of the RBPF algorithm that was adopted in [13]. Here, we introduce an extra improvement. Let us expand the expression for the importance weights:

w_t \propto \frac{p(y_{1:t} | x_{1:t})}{q(y_t | y_{1:t-1}, x_{1:t}) \, p(y_{1:t-1} | x_{1:t-1})} = \frac{p(y_t | y_{1:t-1}, x_{1:t})}{q(y_t | y_{1:t-1}, x_{1:t})} \cdot \frac{p(y_{1:t-1} | x_{1:t})}{p(y_{1:t-1} | x_{1:t-1})} (3)
= \frac{p(x_t | x_{1:t-1}, y_{1:t}) \, p(y_t | y_{t-1})}{q(y_t | y_{1:t-1}, x_{1:t}) \, p(x_t | x_{1:t-1})} (4)

The proposal choice, q(y_{1:t} | x_{1:t}) = q(y_t | y_{1:t-1}, x_{1:t}) p(y_{1:t-1} | x_{1:t-1}), states that we are not sampling past trajectories. Sampling past trajectories requires solving an intractable integral [14]. We could use the transition prior as proposal distribution: q(y_t | y_{1:t-1}, x_{1:t}) = p(y_t | y_{t-1}). Then, according to equation (4), the importance weights simplify to the predictive density

w_t \propto p(x_t | x_{1:t-1}, y_{1:t}) = N(\hat{x}_{t|t-1}, S_t) (5)

However, we can do better by noticing that, according to equation (3), the optimal proposal distribution corresponds to the choice q(y_t | y_{1:t-1}, x_{1:t}) = p(y_t | y_{1:t-1}, x_{1:t}). This distribution satisfies Bayes rule:

p(y_t | y_{1:t-1}, x_{1:t}) = \frac{p(x_t | x_{1:t-1}, y_{1:t}) \, p(y_t | y_{t-1})}{p(x_t | x_{1:t-1}, y_{1:t-1})} (6)

and, hence, the importance weights simplify to

w_t \propto p(x_t | x_{1:t-1}, y_{1:t-1}) = \sum_{y_t = 1}^{n_y} p(x_t | x_{1:t-1}, y_{1:t}) \, p(y_t | y_{t-1}) (7)

When the number of discrete states is small, say 10 or 100, we can compute the distributions in equations (6) and (7) analytically. In addition to Rao-Blackwellisation, this leads to substantial improvements over standard particle filters. Yet, a further improvement can still be attained. Even when using the optimal importance distribution, there is a discrepancy arising from the ratio p(y_{1:t-1} | x_{1:t}) / p(y_{1:t-1} | x_{1:t-1}) in equation (3). This discrepancy is what causes the well-known problem of sample impoverishment in all particle filters [11, 15]. To circumvent it to a significant extent, we note that the importance weights (7) do not depend on y_t (we are marginalising over this variable). It is therefore possible to select particles before the sampling step. That is, one chooses the fittest particles at time t-1 using the information at time t. This observation leads to an efficient algorithm (look-ahead RBPF), whose pseudocode is shown in Figure 4. Note that for standard PF, Figure 3, the importance weights depend on the sample y_t^{(i)}, thus not permitting selection before sampling. Selecting particles before sampling results in a richer sample set at the end of each time step. Kalman prediction step: For i = 1, \ldots, N, and for y_t = 1, \ldots, n_y, compute the one-step-ahead quantities \mu_{t|t-1}^{(i)}(y_t), \Sigma_{t|t-1}^{(i)}(y_t), \hat{x}_{t|t-1}^{(i)}(y_t) and S_t^{(i)}(y_t) from the Kalman recursions above.
N T
N N K Q where "
,
& 2
, " ,
& , ,
, ,
&
, $
0 ,
& 2
, $! 0 ,
& & and 0 , ,
&
. This is the basis of the RBPF algorithm that was adopted in [13]. Here, we introduce an extra improvement. Let us expand the expression for the importance weights:

w_t ∝ p(z_{0:t} | y_{1:t}) / [ q(z_t | y_{1:t}, z_{0:t-1}) p(z_{0:t-1} | y_{1:t-1}) ]
= [ p(z_t | y_{1:t}, z_{0:t-1}) / q(z_t | y_{1:t}, z_{0:t-1}) ] · [ p(z_{0:t-1} | y_{1:t}) / p(z_{0:t-1} | y_{1:t-1}) ]   (3)
∝ p(y_t | y_{1:t-1}, z_{0:t}) p(z_t | z_{t-1}) / q(z_t | y_{1:t}, z_{0:t-1})   (4)

The proposal choice, q(z_{0:t} | y_{1:t}) = q(z_t | y_{1:t}, z_{0:t-1}) p(z_{0:t-1} | y_{1:t-1}), states that we are not sampling past trajectories. Sampling past trajectories requires solving an intractable integral [14]. We could use the transition prior as proposal distribution: q(z_t | y_{1:t}, z_{0:t-1}) = p(z_t | z_{t-1}). Then, according to equation (4), the importance weights simplify to the predictive density

w_t ∝ p(y_t | y_{1:t-1}, z_{0:t}) = N(y_t; y_{t|t-1}, S_t).   (5)

However, we can do better by noticing that, according to equation (3), the optimal proposal distribution corresponds to the choice q(z_t | y_{1:t}, z_{0:t-1}) = p(z_t | y_{1:t}, z_{0:t-1}). This distribution satisfies Bayes rule:

p(z_t | y_{1:t}, z_{0:t-1}) = p(y_t | y_{1:t-1}, z_{0:t}) p(z_t | z_{t-1}) / p(y_t | y_{1:t-1}, z_{0:t-1})   (6)

and, hence, the importance weights simplify to

w_t ∝ p(y_t | y_{1:t-1}, z_{0:t-1}) = Σ_{z_t=1}^{n_z} p(y_t | y_{1:t-1}, z_{0:t}) p(z_t | z_{t-1}).   (7)

When the number of discrete states is small, say 10 or 100, we can compute the distributions in equations (6) and (7) analytically. In addition to Rao-Blackwellisation, this leads to substantial improvements over standard particle filters. Yet a further improvement can still be attained. Even when using the optimal importance distribution, there is a discrepancy arising from the ratio p(z_{0:t-1} | y_{1:t}) / p(z_{0:t-1} | y_{1:t-1})
in equation (3). This discrepancy is what causes the well-known problem of sample impoverishment in all particle filters [11, 15]. To circumvent it to a significant extent, we note that the importance weights do not depend on z_t (we are marginalising over this variable). It is therefore possible to select particles before the sampling step. That is, one chooses the fittest particles at time t-1 using the information at time t. This observation leads to an efficient algorithm (look-ahead RBPF), whose pseudocode is shown in Figure 4. Note that for standard PF, Figure 3, the importance weights depend on the sample z_t^(i), thus not permitting selection before sampling. Selecting particles before sampling results in a richer sample set at the end of each time step.

Kalman prediction step
For i = 1, ..., N, and for z_t = 1, ..., n_z, compute μ_{t|t-1}^(i)(z_t), y_{t|t-1}^(i)(z_t), Σ_{t|t-1}^(i)(z_t), S_t^(i)(z_t).
For i = 1, ..., N, evaluate and normalize the importance weights
w_t^(i) = p(y_t | y_{1:t-1}, z_{0:t-1}^(i)) = Σ_{z_t=1}^{n_z} N(y_t; y_{t|t-1}^(i)(z_t), S_t^(i)(z_t)) p(z_t | z_{t-1}^(i)).
Selection step
Multiply/discard particles {μ_{t-1}^(i), Σ_{t-1}^(i), z_{0:t-1}^(i)}_{i=1}^N with respect to high/low importance weights w_t^(i) to obtain N particles.
Sequential importance sampling step
Kalman prediction: for i = 1, ..., N, and for z_t = 1, ..., n_z, using the resampled information, re-compute μ_{t|t-1}^(i)(z_t), y_{t|t-1}^(i)(z_t), Σ_{t|t-1}^(i)(z_t), S_t^(i)(z_t).
For i = 1, ..., N, sample z_t^(i) ~ p(z_t | y_{1:t}, z_{0:t-1}^(i)) ∝ N(y_t; y_{t|t-1}^(i)(z_t), S_t^(i)(z_t)) p(z_t | z_{t-1}^(i)).
Updating step
For i = 1, ..., N, use one step of the Kalman recursion to compute the sufficient statistics {μ_t^(i), Σ_t^(i)} given {μ_{t|t-1}^(i)(z_t^(i)), Σ_{t|t-1}^(i)(z_t^(i))}.

Figure 4: look-ahead RBPF algorithm at time t. The algorithm uses an optimal proposal distribution. It also selects particles from time t-1 using the information at time t.

7 Results

The results are shown in Figures 5 and 6. The left graphs show error detection versus computing time per time-step (the signal sampling time was 1 second). The right graphs show error detection versus number of particles. The error detection represents how many discrete states were not identified properly, and was calculated over 25 independent runs (1,000 time steps each). The graphs show that look-ahead RBPF works significantly better (low error rate and very low variance). This is essential for real-time operation with low error rates.

Figure 5: Industrial dryer: error detection vs computing time and number of particles.

The graphs also show that even for 1 particle, look-ahead RBPF is able to track the discrete state. The reason for this is that the sensors are very accurate (variance = 0.01). Consequently, the distributions are very peaked and we are simply tracking the mode. Note that look-ahead RBPF is the only filter that uses the most recent information in the proposal distribution. Since the measurements are very accurate, it finds the mode easily. We repeated the level-tank experiments with noisier sensors (variance = 0.08) and obtained the results shown in Figure 7. Noisier sensors, as expected, reduce the accuracy of look-ahead RBPF with a small number of particles. However, it is still possible to achieve low error rates in real time. Since modern industrial and robotic sensors tend to be very accurate, we conclude that look-ahead RBPF has great potential.
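The look-ahead weight, the per-particle predictive likelihood summed over the candidate discrete states, is the key computation of the algorithm in Figure 4. A minimal sketch, where `y_pred[z]` and `S[z]` stand for the per-state Kalman predictions and `P` for the discrete transition matrix (names illustrative, not from the paper's code):

```python
import numpy as np

def lookahead_weight(y, y_pred, S, P, z_prev):
    """Importance weight p(y_t | y_{1:t-1}, z_{0:t-1}) for one particle:
    a mixture of Gaussian predictive densities over the discrete state z_t."""
    d = len(y)
    total = 0.0
    for z in range(len(y_pred)):
        resid = y - y_pred[z]
        dens = np.exp(-0.5 * resid @ np.linalg.solve(S[z], resid)) / \
               np.sqrt((2 * np.pi) ** d * np.linalg.det(S[z]))
        total += dens * P[z_prev, z]  # weight by transition probability
    return total
```

Because this weight does not involve the sampled z_t, it can be evaluated, and the particles resampled, before the sampling step, which is exactly what distinguishes look-ahead RBPF from the standard PF of Figure 3.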
Acknowledgments

Ruben Morales-Menéndez was partly supported by the Government of Canada (ICCS) and the UBC CS department. David Poole and Nando de Freitas are supported by NSERC.

Figure 6: Level-tank (accurate sensors): error detection vs computing time and number of particles.

Figure 7: Level-tank (noisy sensors): error detection vs computing time and number of particles.

References

[1] J Chen and J Howell. A self-validating control system based approach to plant fault detection and diagnosis. Computers and Chemical Engineering, 25:337–358, 2001.
[2] S Thrun, J Langford, and V Verma. Risk sensitive particle filters. In S Becker, T G Dietterich, and Z Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[3] J Gertler. Fault detection and diagnosis in engineering systems. Marcel Dekker, Inc., 1998.
[4] P Frank and X Ding. Survey of robust residual generation and evaluation methods in observer-based fault detection systems. J. Proc. Cont., 7(6):403–424, 1997.
[5] P Frank, E Alcorta García, and B Köppen-Seliger. Modelling for fault detection and isolation versus modelling for control. Mathematics and Computers in Simulation, 53:259–271, 2000.
[6] P Frank. Fault diagnosis in dynamic systems using analytical and knowledge-based redundancy – a survey and some new results. Automatica, 26:459–474, 1990.
[7] R Isermann. Supervision, fault-detection and fault-diagnosis methods – an introduction. Control Engineering Practice, 5(5):639–652, 1997.
[8] K Ogata. Discrete-Time Control Systems. Prentice Hall, second edition, 1995.
[9] L Ljung. System Identification: Theory for the User. Prentice-Hall, 1987.
[10] D Cooper. Control Station. University of Connecticut, third edition, 2001.
[11] A Doucet, N de Freitas, and N J Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
[12] G Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5:1–25, 1996.
[13] N de Freitas. Rao-Blackwellised particle filtering for fault diagnosis. In IEEE Aerospace Conference, 2001.
[14] C Andrieu, A Doucet, and E Punskaya. Sequential Monte Carlo methods for optimal filtering. In A Doucet, N de Freitas, and N J Gordon, editors, Sequential Monte Carlo Methods in Practice. Springer-Verlag, 2001.
[15] M Pitt and N Shephard. Filtering via simulation: auxiliary particle filters. Journal of the American Statistical Association, 94(446):590–599, 1999.
|
2002
|
116
|
2,121
|
Retinal Processing Emulation in a Programmable 2-Layer Analog Array Processor CMOS Chip R. Carmona, F. Jiménez-Garrido, R. Domínguez-Castro, S. Espejo, A. Rodríguez-Vázquez Instituto de Microelectrónica de Sevilla-CNM-CSIC Avda. Reina Mercedes s/n 41012 Sevilla (SPAIN) rcarmona@imse.cnm.es Abstract A bio-inspired model for an analog programmable array processor (APAP), based on studies of the vertebrate retina, has permitted the realization of complex programmable spatio-temporal dynamics in VLSI. This model mimics the way in which images are processed in the visual pathway, rendering a feasible alternative for the implementation of early vision applications in standard technologies. A prototype chip has been designed and fabricated in a 0.5µm standard CMOS process. Its computing power per unit area and per unit power consumption is amongst the highest reported for a single chip. Design challenges, trade-offs and some experimental results are presented in this paper. 1 Introduction The conventional role of analog circuits in mixed-signal VLSI is providing the I/O interface to the digital core of the chip, which realizes all the signal processing. However, this approach may not be optimal for the processing of multi-dimensional sensory signals, such as those found in vision applications. When massive information flows have to be treated in parallel, it may be advantageous to realize some preprocessing in the analog domain, at the plane where signals are captured. In recent years, different authors have focused on the realization of parallel preprocessing of multi-dimensional signals, using either purely digital techniques [1] or mixed-signal techniques, as in [2]. The data in Table 1 can help us to compare these two approaches. Here, the peak computing power (expressed as operations per second: XPS) per unit area and power is shown.
This estimate is obtained by counting the number of arithmetic analog operations that take place per unit time, in the analog case, or digital instructions per unit time, in the digital case. It can be seen that the computing power per area featured by chips based on Analog Programmable Array Processors (APAPs) is much higher than that exhibited by digital array processors. It can be argued that digital processors feature a larger accuracy, but accuracy requirements for vision applications are often below 6 bits. Also, taking full advantage of the full digital resolution requires highly accurate A/D converters, which creates additional area and power overhead.

Table 1: Parallel processors comparison

Reference          CMOS process  No. of cells  Cells/mm2  XPS/mm2  XPS/mW
Liñán et al. [2]   0.5µm         4096          81.0       7.93G    0.33G
Gealow et al. [1]  0.6µm         4096          66.7       4.00M    1.00M
This chip          0.5µm         1024          29.2       6.01G    1.56G

The third row in Table 1 corresponds to the chip presented here. This chip outperforms the one in [2] in terms of functionality, as it implements a reduced model of the biological retina [3]. It is capable of generating complex spatio-temporal dynamic processes, in a fully programmable way and with the possibility of storing intermediate processing results. 2 APAP chip architecture 2.1 Bio-inspired APAP model The vertebrate retina has a layered structure [3], consisting, roughly, of a layer of photodetectors at the top, bipolar cells carrying signals across the retina, affected by the operation of horizontal and amacrine cells, and ganglion cells at the other end. There are, in this description, some interesting aspects that markedly resemble the characteristics of Cellular Neural Networks (CNNs) [4]: 2D aggregations of continuous signals, local connectivity between elementary nonlinear processors, and analog weighted interactions between them.
Motivated by these coincidences, a model consisting of 2 layers of processors coupled by some inter-layer weights, and an additional layer incorporating analog arithmetics, has been developed [5]. Complex dynamics can be programmed via the intra- and inter-layer coupling strengths and the relation between the time constants of the layers. The evolution of each cell, C(i, j), is described by two coupled differential equations, one for each CNN node:

τ_n dx_{n,ij}/dt = −g[x_{n,ij}(t)] + Σ_{k=−r1}^{r1} Σ_{l=−r1}^{r1} a_{nn,kl} · y_{n,(i+k)(j+l)} + b_{nn,00} · u_{nn,ij} + z_{n,ij} + a_{no} · y_{no,ij}   (1)

where n and o stand for the node in question and the other node respectively. The nonlinear losses term and the output function in each layer are those described for the full-signal range (FSR) model of the CNN [7], in which the state voltage is also limited and can be identified with the output voltage:

g(x_{n,ij}) = lim_{m→∞} { m(x_{n,ij} − 1) + 1 if x_{n,ij} > 1;  x_{n,ij} if |x_{n,ij}| ≤ 1;  −m(x_{n,ij} + 1) − 1 if x_{n,ij} < −1 }   (2)

and:

y_{n,ij} = f(x_{n,ij}) = ½(|x_{n,ij} + 1| − |x_{n,ij} − 1|)   (3)

The proposed chip consists of an APAP of 32 × 32 identical 2nd-order CNN cells (Fig. 3), surrounded by the circuits implementing the boundary conditions. Figure 1: (a) Conceptual diagram of the basic cell and (b) internal structure of each CNN layer node 2.2 Basic processing cell architecture Each elementary processor includes two coupled continuous-time CNN cores (Fig. 1(a)). The synaptic connections between processing elements of the same or different layer are represented by arrows in the diagram. The basic processor contains also a programmable local logic unit (LLU) and local analog and logic memories (LAMs and LLMs) to store intermediate results. The blocks in the cell communicate via an intra-cell data bus, multiplexed to the array interface. Control bits and switch configuration are passed to the cell from a global programming unit. The internal structure of each CNN core is depicted in the diagram of Fig. 1(b).
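The two-layer dynamics of equations (1)-(3) can be integrated numerically with forward Euler. The sketch below is illustrative only: the 3×3 templates and inter-layer gains are arbitrary values, not the chip's programmed weights; the hard limiter of equation (2) is implemented by clipping the state to [−1, 1] (the FSR property); and the feedforward input term b·u is folded into the bias z for brevity.

```python
import numpy as np

def f(x):  # output nonlinearity, eq. (3)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def cnn_step(x1, x2, A1, A2, a12, a21, z1, z2, tau1, tau2, dt):
    """One forward-Euler step of a 2-layer FSR CNN (eq. 1).
    A1, A2: 3x3 intra-layer feedback templates; a12, a21: inter-layer gains."""
    y1, y2 = f(x1), f(x2)
    def conv(y, A):  # 3x3 template with zero (Dirichlet) boundary conditions
        yp = np.pad(y, 1)
        out = np.zeros_like(y)
        for k in range(3):
            for l in range(3):
                out += A[k, l] * yp[k:k + y.shape[0], l:l + y.shape[1]]
        return out
    dx1 = (-x1 + conv(y1, A1) + a12 * y2 + z1) / tau1
    dx2 = (-x2 + conv(y2, A2) + a21 * y1 + z2) / tau2
    # FSR model: the state is hard-limited to [-1, 1] and equals the output
    return np.clip(x1 + dt * dx1, -1, 1), np.clip(x2 + dt * dx2, -1, 1)
```

Making tau1 programmable relative to tau2 reproduces, in simulation, the scalable-time-constant asymmetry between the two layers described above.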
Each core receives contributions from the rest of the processing nodes in the neighbourhood, which are summed and integrated in the state capacitor. The two layers differ in that the first layer has a scalable time constant, controlled by the appropriate binary code, while the second layer has a fixed time constant. The evolution of the state variable is also driven by self-feedback and by the feedforward action of the stored input and bias patterns. There is a voltage limiter for implementing the FSR CNN model. Forcing the state voltage to remain between these limits allows for using it as the output voltage. Then the state variable, which is now the output, is transmitted in voltage form to the synaptic blocks, in the periphery of the cell, where weighted contributions to the neighbours are generated. There is also a current memory that will be employed for cancellation of the offset of the synaptic blocks. Initialization of the state, input and/or bias voltages is done through a mesh of multiplexing analog switches that connect to the cell's internal data bus. 3 Analog building blocks for the basic cell 3.1 Single-transistor synapse The synapse is a four-quadrant analog multiplier. Its inputs will be the cell state, or input, and the weight voltages, while the output will be the cell's current contribution to a neighbouring cell. It can be realized by a single transistor biased in the ohmic region [6]. For a PMOS with gate voltage V_X = V_x0 + V_x, and the p-diffusion terminals at V_W = V_w0 + V_w and V_w0, the drain-to-source current is:

I_o ≈ −β_p V_w V_x − β_p V_w (V_x0 + |V_Tp| − V_w0 − V_w/2)   (4)

which is a four-quadrant multiplier with an offset term that is time-invariant (at least during the evolution of the network) and does not depend on the state. This offset is eliminated in a calibration step, with a current memory. For the synapse to operate properly, the input node of the CNN core, L in Fig. 2, must be kept at a constant voltage. This is achieved by a current conveyor.
Any difference between the voltage at node L and the reference V_w0 is amplified, and the negative feedback corrects the deviation. Notice that a voltage offset in the amplifier results in an error of the same order. An offset cancellation mechanism is provided (Fig. 2). 3.2 S3I current memory As mentioned above, the offset term of the synapse current must be removed for its output current to represent the result of a four-quadrant multiplication. For this purpose all the synapses are reset to V_X = V_x0. Then the resulting current, which is the sum of the offset currents of all the synapses concurrently connected to the same node, is memorized. This value will be subtracted on-line from the input current when the CNN loop is closed, resulting in a one-step cancellation of the errors of all the synapses. The validity of this method relies on the accuracy of the current memory. For instance, in this chip, the sum of all the contributions will range, for the applications for which it has been designed, from 18µA to 46µA. On the other hand, the maximum signal to be handled is 1µA. If a signal resolution of 8 bits is intended, then 0.5LSB = 2nA. Thus, our current memory must be able to distinguish 2nA out of 46µA. This represents an equivalent resolution of 14.5b. In order to achieve such an accuracy level, an S3I current memory is used. It is composed of three stages (Fig. 2), each consisting of a switch, a capacitor and a transistor. IB is the current to be memorized. After memorization the only error left corresponds to the last stage. 3.3 Time-constant scaling The differential equation that governs the evolution of the network (1) can be written as a sum of current contributions injected into the state capacitor. Scaling up/down this sum of currents is equivalent to scaling the capacitor and, thus, speeding up/down the network dynamics.
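The 14.5-bit equivalent resolution quoted in Section 3.2 follows directly from the ratio between the worst-case summed offset current and half an LSB of the 1µA signal at 8-bit resolution; a quick check:

```python
import math

# Sum of concurrent synapse offset currents: up to 46 uA (worst case).
full_scale = 46e-6
# Signal range 1 uA at 8-bit resolution: 0.5 LSB = 0.5 * 1 uA / 2**8 ≈ 2 nA.
half_lsb = 0.5 * 1e-6 / 2 ** 8
equivalent_bits = math.log2(full_scale / half_lsb)
print(round(equivalent_bits, 1))  # → 14.5
```

This is why the memory is built as a three-stage S3I structure: a single-stage current memory cannot reach that dynamic range.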
Therefore, scaling the input current with the help of a current mirror, for instance, will have the effect of scaling the time constant. A circuit for continuously adjusting the current gain of a mirror can be designed based on a regulated-cascode current mirror in the ohmic region. But the strong dependence of the ohmic-region biased transistors on the power rail voltage causes mismatches in τ between cells in the same layer. An alternative to this is a digitally programmable current mirror. It trades resolution in τ for robustness; hence, the mismatch between the time constants of the different cells is now fairly attenuated. Figure 2: Input block with current scaling, S3I memory and offset-corrected OTA schematic A new problem arises, though, because of current scaling. If the input current can be reshaped to a 16-times smaller waveform, then the current memory has to operate over a larger dynamic range. But, if designed to operate on large currents, the current memory will not work for the tiny currents of the scaled version of the input. If it is designed to run on small input currents, long transistors will be needed, and the operation will be unreliable for the larger currents. One way of avoiding this situation is to make the S3I memory work on the original, unscaled version of the input current. Therefore, the adjustable-time-constant CNN core will be a current conveyor, followed by the S3I current memory and then the binary-weighted current mirror. The problem now is that the offsets introduced by the scaling block add up to the signal and the required accuracy levels can be lost. Our proposal is depicted in Fig. 2. It consists of placing the scaling block (programmable mirror) between the current conveyor and the current memory. In this way, any offset error will be cancelled in the auto-zeroing phase.
In the picture, the voltage reference generated with the current conveyor, the regulated-cascode current mirrors and the S3I memory can be easily identified. The inverter, Ai, driving the gates of the transistors of the current memory is required for stability. 4 Chip data and experimental results A prototype chip has been designed and fabricated in a 0.5µm single-poly, triple-metal CMOS technology. Its dimensions are 9.27 × 8.45mm2 (microphotograph in Fig. 3). The cell density achieved is 29.24 cells/mm2, once the overhead circuitry is discounted from the total chip area (given that it does not scale linearly with the number of cells). The power consumption of the whole chip is around 300mW. Data I/O rates are nominally 10MS/s. Equivalent resolution for the analog images handled by the chip is 7.5 bit (measured). The time constant of the fastest layer (fixed time constant) is intended to be under 100ns. The peak computing power of this chip is, therefore, 470GXPS, which corresponds to 6.01GXPS/mm2 and 1.56GXPS/mW. Figure 3: Prototype chip photograph The programmable dynamics of the chip permit the observation of different phenomena, such as propagation of active waves, pattern generation, etc. By tuning the coefficients that control the interactions between the cells in the array (i.e. the weights of the synaptic blocks, which are common to every elementary processor), different dynamics are manifested. Fig. 4 displays the evolution of the state variables of the two coupled layers when the chip is programmed to show different propagative behaviors. In picture (a), the chip is programmed to resemble the so-called wide-field erasure effect observed in the retina. Markers in the fastest layer (bottom row) trigger wavefronts in this layer and induce slower waves in the other layer (upper row). These induced spots are fed back, inhibiting the waves propagating in the fast layer, and generating a trailing edge for each wavefront.
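The per-area and per-watt figures quoted in Section 4 follow directly from the peak computing power, the die size and the power consumption; small differences against the quoted 6.01GXPS/mm2 and 1.56GXPS/mW come from rounding of the 470GXPS figure:

```python
die_area_mm2 = 9.27 * 8.45   # chip dimensions, ~78.3 mm^2
peak_xps = 470e9             # peak computing power, 470 GXPS
power_mw = 300.0             # total power consumption, ~300 mW

print(round(peak_xps / die_area_mm2 / 1e9, 2))  # GXPS per mm^2, ~6.0
print(round(peak_xps / power_mw / 1e9, 2))      # GXPS per mW, ~1.57
```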
In picture (b), a solitary traveling wave is triggered from each corner of the fast layer. This kind of behavior is characteristic of waves in active media. Finally, in picture (c), edge detection is computed by subtracting the low-frequency components of the image, obtained by a diffusion in the slower layer, from the original one. The remaining information is that of the higher frequency components of the image. These phenomena have been widely observed in measurements of the vertebrate retina [3]. They constitute the patterns of activity generated by the presence of visual stimuli. By controlling the network dynamics and combining the results with the help of the built-in local logic and arithmetic operators, rather involved image processing tasks, like active-contour detection, object-tracking, etc., can be programmed. 5 Conclusions From the figures obtained, we can state that the proposed approach constitutes a promising alternative to conventional digital image processing for applications related to early vision and low-level focal-plane image processing. Based on a simple but precise model of part of the real biological system, a feasible, efficient implementation of an artificial vision device has been designed. The peak operation speed of the chip outperforms its digital counterparts due to the fully parallel nature of the processing. This is especially so when comparing the computing power per silicon area unit and per watt. Acknowledgments This work has been partially supported by ONR/NICOP Project N00014-00-1-0429, ESPRIT V Project IST-1999-19007, and by the Spanish CICYT Project TIC-19990826. References [1] Gealow, J.C. & Sodini, C.G. (1999) A Pixel Parallel Image Processor Using Logic Pitch-Matched to Dynamic Memory. IEEE Journal of Solid-State Circuits, Vol. 34, No. 6, pp. 831-839. [2] Liñán, G., Espejo, S., Domínguez-Castro, R., Roca, E. and Rodríguez-Vázquez, A. (1998) A 64 × 64 CNN with Analog and Digital I/O. Proceedings of the IEEE Int. Conf.
on Electronics, Circuits and Systems, pp. 203-206, Lisbon, Portugal. [3] Werblin, F. (1991) Synaptic Connections, Receptive Fields and Patterns of Activity in the Tiger Salamander Retina. Investigative Ophthalmology and Visual Science, Vol. 32, No. 3, pp. 459-483. [4] Werblin, F., Roska, T. and Chua, L.O. (1995) The Analogic Cellular Neural Network as a Bionic Eye. International Journal of Circuit Theory and Applications, Vol. 23, No. 6, pp. 541-69. [5] Rekeczky, Cs., Serrano-Gotarredona, T., Roska, T. and Rodríguez-Vázquez, A. (2000) A Stored Program 2nd Order/3-Layer Complex Cell CNN-UM. Proc. of the Sixth IEEE International Workshop on Cellular Neural Networks and their Applications, pp. 219-224, Catania, Italy. [6] Domínguez-Castro, R., Rodríguez-Vázquez, A., Espejo, S. and Carmona, R. (1998) Four-Quadrant One-Transistor Synapse for High Density CNN Implementations. Proc. of the Fifth IEEE International Workshop on Cellular Neural Networks and their Applications, pp. 243-248, London, UK. [7] Espejo, S., Carmona, R., Domínguez-Castro, R. and Rodríguez-Vázquez, A. (1996) A VLSI Oriented Continuous-Time CNN Model. International Journal of Circuit Theory and Applications, Vol. 24, No. 3, pp. 341-356, John Wiley and Sons Ed. Figure 4: Examples of the different dynamics that can be programmed on the chip: (a) wide-field erasure effect, (b) traveling wave across the layers, and (c) edge detection.
|
2002
|
117
|
2,122
|
Bayesian Image Super-Resolution Michael E. Tipping and Christopher M. Bishop Microsoft Research, Cambridge, CB3 0FB, U.K. {mtipping,cmbishop}@microsoft.com http://research.microsoft.com/~{mtipping,cmbishop} Abstract The extraction of a single high-quality image from a set of low-resolution images is an important problem which arises in fields such as remote sensing, surveillance, medical imaging and the extraction of still images from video. Typical approaches are based on the use of cross-correlation to register the images followed by the inversion of the transformation from the unknown high resolution image to the observed low resolution images, using regularization to resolve the ill-posed nature of the inversion process. In this paper we develop a Bayesian treatment of the super-resolution problem in which the likelihood function for the image registration parameters is based on a marginalization over the unknown high-resolution image. This approach allows us to estimate the unknown point spread function, and is rendered tractable through the introduction of a Gaussian process prior over images. Results indicate a significant improvement over techniques based on MAP (maximum a-posteriori) point optimization of the high resolution image and associated registration parameters. 1 Introduction The task in super-resolution is to combine a set of low resolution images of the same scene in order to obtain a single image of higher resolution. Provided the individual low resolution images have sub-pixel displacements relative to each other, it is possible to extract high frequency details of the scene well beyond the Nyquist limit of the individual source images. Ideally the low resolution images would differ only through small (sub-pixel) translations, and would be otherwise identical. In practice, the transformations may be more substantial and involve rotations or more complex geometric distortions.
In addition the scene itself may change, for instance if the source images are successive frames in a video sequence. Here we focus attention on static scenes in which the transformations relating the source images correspond to translations and rotations, such as can be obtained by taking several images in succession using a hand held digital camera. Our approach is readily extended to more general projective transformations if desired. Larger changes in camera position or orientation can be handled using techniques of robust feature matching, constrained by the epipolar geometry, but such sophistication is unnecessary in the present context. Most previous approaches, for example [1, 2, 3], perform an initial registration of the low resolution images with respect to each other, and then keep this registration fixed. They then formulate probabilistic models of the image generation process, and use maximum likelihood to determine the pixel intensities in the high resolution image. A more convincing approach [4] is to determine simultaneously both the low resolution image registration parameters and the pixel values of the high resolution image, again through maximum likelihood. An obvious difficulty of these techniques is that if the high resolution image has too few pixels then not all of the available high frequency information is extracted from the observed images, whereas if it has too many pixels the maximum likelihood solution becomes ill conditioned. This is typically resolved by the introduction of penalty terms to regularize the maximum likelihood solution, where the regularization coefficients may be set by cross-validation. The regularization terms are often motivated in terms of a prior distribution over the high resolution image, in which case the solution can be interpreted as a MAP (maximum a-posteriori) optimization. 
Baker and Kanade [5] have tried to improve the performance of super-resolution algorithms by developing domain-specific image priors, applicable to faces or text for example, which are learned from data. In this case the algorithm is effectively hallucinating perceptually plausible high frequency features. Here we focus on general purpose algorithms applicable to any natural image, for which the prior encodes only high level information such as the correlation of nearby pixels. The key development in this paper, which distinguishes it from previous approaches, is the use of Bayesian, rather than simply MAP, techniques by marginalizing over the unknown high resolution image in order to determine the low resolution image registration parameters. Our formulation also allows the choice of continuous values for the up-sampling process, as well as the shift and rotation parameters governing the image registration. The generative process by which the high resolution image is smoothed to obtain a low resolution image is described by a point spread function (PSF). It has often been assumed that the point spread function is known in advance, which is unrealistic. Some authors [3] have estimated the PSF in advance using only the low resolution image data, and then kept this estimate fixed while extracting the high resolution image. A key advantage of our Bayesian marginalization is that it allows us to determine the point spread function alongside both the registration parameters and the high resolution image in a single, coherent inference framework. As we show later, if we attempt to determine the PSF as well as the registration parameters and the high resolution image by joint optimization, we obtain highly biased (over-fitted) results. By marginalizing over the unknown high resolution image we are able to determine the PSF and the registration parameters accurately, and thereby reconstruct the high resolution image with subjectively very good quality.
2 Bayesian Super-resolution Suppose we are given K low-resolution intensity images (the extension to 3-colour images is straightforward). We shall find it convenient notationally to represent the images as vectors y(k) of length M , where k = 1, ... , K, obtained by raster scanning the pixels of the images. Each image is shifted and rotated relative to a reference image which we shall arbitrarily take to be y(1). The shifts are described by 2-dimensional vectors Sk, and the rotations are described by angles Ok. The goal is to infer the underlying scene from which the low resolution images are generated. We represent this scene by a single high-resolution image, which we again denote by a raster-scan vector x whose length is N » M. Our approach is based on a generative model for the observed low resolution images, comprising a prior over the high resolution image together with an observation model describing the process by which a low resolution image is obtained from the high resolution one. It should be emphasized that the real scene which we are trying to infer has effectively an infinite resolution, and that its description as a pixellated image is a computational artefact. In particular if we take the number N of pixels in this image to be large the inference algorithm should remain well behaved. This is not the case with maximum likelihood approaches in which the value of N must be limited to avoid ill-conditioning. In our approach, if N is large the correlation of neighbouring pixels is determined primarily by the prior, and the value of N is limited only by the computational cost of working with large numbers of high resolution pixels. We represent the prior over the high resolution image by a Gaussian process p(x) = N(xIO, Zx) (1) where the covariance matrix Zx is chosen to be of the form Zx(i , j) = Aexp {_llvi ~2VjI12}. 
(2)

Here vi denotes the spatial position in the 2-dimensional image space of pixel i, the coefficient A measures the 'strength' of the prior, and r defines the correlation length scale. Since we take Zx to be a fixed matrix, it is straightforward to use a different functional form for Zx if desired. It should be noted that in our image representation the pixel intensity values lie in the range (−0.5, 0.5), and so in principle a Gaussian process prior is inappropriate¹. In practice we have found that this causes little difficulty, and in Section 4 we discuss how a more appropriate distribution could be used.

The low resolution images are assumed to be generated from the high resolution image by first applying a shift and a rotation, then convolving with some point spread function, and finally down-sampling to the lower resolution. This is expressed through the transformation equation

y(k) = W(k) x + ε(k)    (3)

where ε(k) is a vector of independent Gaussian random variables εi ∼ N(0, β⁻¹), with zero mean and precision (inverse variance) β, representing noise terms intended to model the camera noise as well as to capture any discrepancy between our generative model and the observed data. The transformation matrix W(k) in (3) is given by a point spread function which captures the down-sampling process and which we again take to have a 'Gaussian' form

W(k)ji = w(k)ji / ∑i′ w(k)ji′    (4)

with

w(k)ji = exp{ −‖vi − uj(k)‖² / γ² }    (5)

where j = 1, ..., M and i = 1, ..., N. Here γ represents the 'width' of the point spread function, and we shall treat γ as an unknown parameter to be determined from the data. Note that our approach generalizes readily to any other form of point spread function, possibly containing several unknown parameters, provided it is differentiable with respect to those parameters.

¹Note that the established work we have referenced, where a Gaussian prior or quadratic regularizer is utilised, also overlooks the bounded nature of the pixel space.
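At toy scale, the prior covariance of equation (2) and the Gaussian PSF rows of W(k) can be built directly. The sketch below (Python/NumPy, not from the paper; the grid layout and the interpretation of the row-normalisation constant in (4) are our assumptions):

```python
import numpy as np

def prior_covariance(positions, A=0.04, r=1.0):
    """Equation (2): Z_x(i, j) = A * exp(-||v_i - v_j||^2 / r^2)."""
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    return A * np.exp(-d2 / r ** 2)

def psf_row(positions, centre, gamma=2.0):
    """One row of W^(k): a Gaussian PSF of width gamma centred on the
    shifted, rotated low-resolution pixel centre u_j^(k); normalising
    the row to sum to one is our reading of the constant in (4)."""
    d2 = ((positions - centre) ** 2).sum(-1)
    w = np.exp(-d2 / gamma ** 2)
    return w / w.sum()

# a 5x5 grid of high-resolution pixel positions v_i
v = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), axis=-1).reshape(-1, 2)
Zx = prior_covariance(v)
row = psf_row(v, centre=np.array([2.0, 2.0]))
```

Because the kernel in (2) is a radial-basis function, Zx is symmetric positive semi-definite, so it is a valid Gaussian-process covariance for any grid of pixel positions.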
In (5) the vector uj(k) is the centre of the PSF and is dependent on the shift and rotation of the low resolution image. We choose a parameterization in which the centre of rotation coincides with the centre v̄ of the image, so that

uj(k) = R(k)(vj − v̄) + v̄ + sk    (6)

where R(k) is the rotation matrix

R(k) = [ cos θk  sin θk ; −sin θk  cos θk ].    (7)

We can now write down the likelihood function in the form

p(y(k) | x, sk, θk, γ) = (β/2π)^{M/2} exp{ −(β/2) ‖y(k) − W(k) x‖² }.    (8)

Assuming the images are generated independently from the model, we can then write the posterior distribution over the high resolution image in the form

p(x | y(1), ..., y(K)) = p(x) ∏k p(y(k) | x, sk, θk, γ) / p(y(1), ..., y(K))    (9)

= N(x | µ, Σ)    (10)

with

Σ = [ Zx⁻¹ + β ∑k W(k)⊤ W(k) ]⁻¹    (11)

µ = β Σ ( ∑k W(k)⊤ y(k) ).    (12)

Thus the posterior distribution over the high resolution image is again a Gaussian process. If we knew the registration parameters {sk, θk}, as well as the PSF width parameter γ, then we could simply take the mean µ (which is also the maximum) of the posterior distribution to be our super-resolved image. However, the registration parameters are unknown. Previous approaches have either performed a preliminary registration of the low resolution images against each other and then fixed the registration while determining the high resolution image, or else have maximized the posterior distribution (9) jointly with respect to the high resolution image x and the registration parameters (which we refer to as the 'MAP' approach). Neither approach takes account of the uncertainty in determining the high resolution image and the consequential effects on the optimization of the registration parameters. Here we adopt a Bayesian approach by marginalizing out the unknown high resolution image. This gives the marginal likelihood function for the low resolution images in the form

p(y | {sk, θk}, γ) = N(y | 0, Zy)    (13)

where

Zy = β⁻¹ I + W Zx W⊤    (14)

and y and W are the vector and matrix of stacked y(k) and W(k) respectively. Using some standard matrix manipulations we can rewrite the marginal likelihood in the form

log p(y | {sk, θk}, γ) = −(1/2) [ β ∑k=1..K ‖y(k) − W(k) µ‖² + µ⊤ Zx⁻¹ µ + log|Zx| − log|Σ| − KM log β ].    (15)
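Computing the posterior mean and covariance and evaluating the marginal log-likelihood of equation (15) requires only dense linear algebra at small sizes. A self-contained sketch (Python/NumPy; the toy dimensions, the 1D covariance kernel, and the noise level are our choices, not the paper's):

```python
import numpy as np

def posterior_and_loglik(ys, Ws, Zx, beta):
    """Posterior N(x | mu, Sigma) over the high-resolution image, and the
    marginal log-likelihood of equation (15) up to an additive constant."""
    Zx_inv = np.linalg.inv(Zx)
    Sigma = np.linalg.inv(Zx_inv + beta * sum(W.T @ W for W in Ws))
    mu = beta * Sigma @ sum(W.T @ y for y, W in zip(ys, Ws))
    resid = sum(((y - W @ mu) ** 2).sum() for y, W in zip(ys, Ws))
    K, M = len(ys), len(ys[0])
    sld = lambda S: np.linalg.slogdet(S)[1]       # log-determinant
    loglik = -0.5 * (beta * resid + mu @ Zx_inv @ mu
                     + sld(Zx) - sld(Sigma) - K * M * np.log(beta))
    return mu, Sigma, loglik

# toy problem: N = 9 high-resolution pixels, K = 2 images of M = 4 pixels each
rng = np.random.default_rng(0)
idx = np.arange(9)
Zx = 0.04 * np.exp(-(idx[:, None] - idx[None, :]) ** 2)   # 1D analogue of (2)
Ws = [rng.random((4, 9)) for _ in range(2)]
Ws = [W / W.sum(axis=1, keepdims=True) for W in Ws]       # PSF rows sum to one
x_true = 0.2 * rng.standard_normal(9)
ys = [W @ x_true + 0.01 * rng.standard_normal(4) for W in Ws]
mu, Sigma, ll = posterior_and_loglik(ys, Ws, Zx, beta=1e4)
```

The returned mu is the mode of the Gaussian posterior, so it satisfies the stationarity condition (Zx⁻¹ + β ∑k W(k)⊤W(k)) µ = β ∑k W(k)⊤ y(k), which provides a direct numerical check of the implementation.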
We now wish to optimize this marginal likelihood with respect to the parameters {sk, θk} and γ, and to do this we have compared two approaches. The first is to use the expectation-maximization (EM) algorithm. In the E-step we evaluate the posterior distribution over the high resolution image given by (10). In the M-step we maximize the expectation over x of the log of the complete-data likelihood p(y, x | {sk, θk}, γ) obtained from the product of the prior (1) and the likelihood (8). This maximization is done using the scaled conjugate gradients (SCG) algorithm [6]. The second approach is to maximize the marginal likelihood (15) directly using SCG. Empirically we find that direct optimization is faster than EM, and so it has been used to obtain the results reported in this paper. Since in (15) we must compute Σ, which is N × N, in practice we optimize the shift, rotation and PSF width parameters based on an appropriately-sized subset of the image only. The complete high resolution image is then found as the mode of the full posterior distribution, obtained iteratively by maximizing the numerator in (9), again using SCG optimization.

3 Results

In order to evaluate our approach we first apply it to a set of low resolution images synthetically down-sampled (by a linear scaling of 4 to 1, or 16 pixels to 1) from a known high-resolution image as follows. For each image we wish to generate, we first apply a shift drawn from a uniform distribution over the interval (−2, 2) in units of high resolution pixels (larger shifts could in principle be reduced to this level by pre-registering the low resolution images against each other), and then apply a rotation drawn uniformly over the interval (−4, 4) in units of degrees. Finally we determine the value at each pixel of the low resolution image by convolution of the original image with the point spread function (centred on the low resolution pixel), with width parameter γ = 2.0.
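The synthetic-data pipeline just described (shift, rotate, PSF-convolve, down-sample) is easy to reproduce at toy scale. A sketch (Python/NumPy; the dense W matrix, the block-centre grid geometry, and the row normalisation are our simplifications, not the paper's code):

```python
import numpy as np

def make_low_res(hi, shift, theta_deg, gamma=2.0, factor=4):
    """Generate one low-resolution image from a high-resolution one via the
    shift -> rotate -> Gaussian-PSF -> down-sample pipeline of equation (3),
    noise-free.  Returns the low-resolution image and the operator W."""
    H, Wd = hi.shape
    vy, vx = np.mgrid[0:H, 0:Wd]
    v = np.stack([vy.ravel(), vx.ravel()], 1).astype(float)   # positions v_i
    vbar = v.mean(0)                                          # image centre
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    ly, lx = np.mgrid[0:H // factor, 0:Wd // factor]
    # each low-res pixel sits at the centre of a factor x factor block
    vj = np.stack([ly.ravel(), lx.ravel()], 1) * factor + (factor - 1) / 2.0
    uj = (vj - vbar) @ R.T + vbar + np.asarray(shift)         # PSF centres
    d2 = ((uj[:, None, :] - v[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / gamma ** 2)
    W /= W.sum(1, keepdims=True)                              # rows sum to one
    return (W @ hi.ravel()).reshape(H // factor, Wd // factor), W

rng = np.random.default_rng(2)
hi = rng.random((16, 16)) - 0.5           # intensities in (-0.5, 0.5)
lo, W = make_low_res(hi, shift=(0.5, -1.0), theta_deg=2.0)
```

Since each row of W is a convex combination of high-resolution pixels, every low-resolution value lies within the intensity range of the original image.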
From a high-resolution image of 384 × 256 we chose to use a set of 16 images of resolution 96 × 64. In order to limit the computational cost we use patches from the centre of the low resolution image of size 9 × 9 in order to determine the values of the shift, rotation and PSF width parameters. We set the resolution of the super-resolved image to have 16 times as many pixels as the low resolution images which, allowing for shifts and the support of the point spread function, gives N = 50 × 50. The Gaussian process prior is chosen to have width parameter r = 1.0 and variance parameter A = 0.04, and the noise process is given a standard deviation of 0.05. Note that these values can be set sensibly a priori and need not be tuned to the data. The scaled conjugate gradient optimization is initialized by setting the shift and rotation parameters equal to zero, while the PSF width γ is initialized to 4.0 since this is the up-sampling factor we have chosen between low resolution and super-resolved images. We first optimize only the shifts, then we optimize both shifts and rotations, and finally we optimize shifts, rotations and PSF width, in each case running until a suitable convergence tolerance is reached. In Figure 1(a) we show the original image, together with an example low resolution image in Figure 1(b). Figure 1(c) shows the super-resolved image obtained using our Bayesian approach. We see that the super-resolved image is of dramatically better quality than the low resolution images from which it is inferred. The converged value for the PSF width parameter is γ = 1.94, close to the true value 2.0.

Figure 1: Example using synthetically generated data showing (top left) the original image, (top right) an example low resolution image and (bottom left) the inferred super-resolved image.
Also shown, in (bottom right), is a comparison super-resolved image obtained by joint optimization with respect to the super-resolved image and the parameters, demonstrating the significantly poorer result.

Notice that there are some small edge effects in the super-resolved image arising from the fact that these pixels only receive evidence from a subset of the low resolution images due to the image shifts. Thus pixels near the edge of the high resolution image are determined primarily by the prior. For comparison we show, in Figure 1(d), the corresponding super-resolved image obtained by performing a MAP optimization with respect to the high resolution image. This is of significantly poorer quality than that obtained from our Bayesian approach. The converged value for the PSF width in this case is γ = 0.43, indicating severe over-fitting. In Figure 2 we show plots of the true and estimated values for the shift and rotation parameters using our Bayesian approach and also using MAP optimization. Again we see the severe over-fitting resulting from joint optimization, and the significantly better results obtained from the Bayesian approach.

Figure 2: (a) Plots of the true shifts for the synthetic data, together with the estimated values obtained by optimization of the marginal likelihood in our Bayesian framework and, for comparison, the corresponding estimates obtained by joint optimization with respect to registration parameters and the high resolution image.
(b) Comparison of the errors in determining the rotation parameters for both Bayesian and MAP approaches.

Finally, we apply our technique to a set of images obtained by taking 16 frames using a hand-held digital camera in 'multi-shot' mode (press and hold the shutter release), which takes about 12 seconds. An example image, together with the super-resolved image obtained using our Bayesian algorithm, is shown in Figure 3.

4 Discussion

In this paper we have proposed a new approach to the problem of image super-resolution, based on a marginalization over the unknown high resolution image using a Gaussian process prior. Our results demonstrate a worthwhile improvement over previous approaches based on MAP estimation, including the ability to estimate parameters of the point spread function. One potential application of our technique is the extraction of high resolution images from video sequences. In this case it will be necessary to take account of motion blur, as well as the registration, for example by tracking moving objects through the successive frames [7].

Figure 3: Application to real data showing in (a) one of the 16 images, captured in succession using a hand-held camera, of a doorway with a nearby printed sign, and in (b) the final 4x super-resolved image obtained from our Bayesian super-resolution algorithm.

Finally, having seen the advantages of marginalizing with respect to the high resolution image, we can extend this approach to a fully Bayesian one based on Markov chain Monte Carlo sampling over all unknown parameters in the model. Since our model is differentiable with respect to these parameters, this can be done efficiently using the hybrid Monte Carlo algorithm. This approach would allow the use of a prior distribution over high resolution pixel intensities which was confined to a bounded interval, instead of the Gaussian assumed in this paper.
Whether the additional improvements in performance will justify the extra computational complexity remains to be seen.

References

[1] N. Nguyen, P. Milanfar, and G. Golub. A computationally efficient superresolution image reconstruction algorithm. IEEE Transactions on Image Processing, 10(4):573–583, 2001.
[2] V. N. Smelyanskiy, P. Cheeseman, D. Maluf, and R. Morris. Bayesian super-resolved surface reconstruction from images. In Proceedings CVPR, volume 1, pages 375–382, 2000.
[3] D. P. Capel and A. Zisserman. Super-resolution enhancement of text image sequences. In International Conference on Pattern Recognition, pages 600–605, Barcelona, 2000.
[4] R. C. Hardie, K. J. Barnard, and E. A. Armstrong. Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Transactions on Image Processing, 6(12):1621–1633, 1997.
[5] S. Baker and T. Kanade. Limits on super-resolution and how to break them. Technical report, Carnegie Mellon University, 2002. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence.
[6] I. T. Nabney. Netlab: Algorithms for Pattern Recognition. Springer, London, 2002. http://www.ncrg.aston.ac.uk/netlab/
[7] B. Bascle, A. Blake, and A. Zisserman. Motion deblurring and super-resolution from an image sequence. In Proceedings of the Fourth European Conference on Computer Vision, pages 573–581, Cambridge, England, 1996.
Charting a Manifold Matthew Brand Mitsubishi Electric Research Labs 201 Broadway, Cambridge MA 02139 USA www.merl.com/people/brand/ Abstract We construct a nonlinear mapping from a high-dimensional sample space to a low-dimensional vector space, effectively recovering a Cartesian coordinate system for the manifold from which the data is sampled. The mapping preserves local geometric relations in the manifold and is pseudo-invertible. We show how to estimate the intrinsic dimensionality of the manifold from samples, decompose the sample data into locally linear low-dimensional patches, merge these patches into a single low-dimensional coordinate system, and compute forward and reverse mappings between the sample and coordinate spaces. The objective functions are convex and their solutions are given in closed form. 1 Nonlinear dimensionality reduction (NLDR) by charting Charting is the problem of assigning a low-dimensional coordinate system to data points in a high-dimensional sample space. It is presumed that the data lies on or near a low-dimensional manifold embedded in the sample space, and that there exists a 1-to-1 smooth nonlinear transform between the manifold and a low-dimensional vector space. The data modeler’s goal is to estimate smooth continuous mappings between the sample and coordinate spaces. Often this analysis will shed light on the intrinsic variables of the data-generating phenomenon, for example, revealing perceptual or configuration spaces. Our goal is to find a mapping—expressed as a kernel-based mixture of linear projections— that minimizes information loss about the density and relative locations of sample points. This constraint is expressed in a posterior that combines a standard gaussian mixture model (GMM) likelihood function with a prior that penalizes uncertainty due to inconsistent projections in the mixture.
Section 3 develops a special case where this posterior is unimodal and maximizable in closed form, yielding a GMM whose covariances reveal a patchwork of overlapping locally linear subspaces that cover the manifold. Section 4 shows that for this (or any) GMM and a choice of reduced dimension d, there is a unique, closed-form solution for a minimally distorting merger of the subspaces into a d-dimensional coordinate space, as well as a reverse mapping defining the surface of the manifold in the sample space. The intrinsic dimensionality d of the data manifold can be estimated from the growth process of point-to-point distances. In analogy to differential geometry, we call the subspaces “charts” and their merger the “connection.” Section 5 considers example problems where these methods are used to untie knots, unroll and untwist sheets, and visualize video data. 1.1 Background Topology-neutral NLDR algorithms can be divided into those that compute mappings, and those that directly compute low-dimensional embeddings. The field has its roots in mapping algorithms: DeMers and Cottrell [3] proposed using auto-encoding neural networks with a hidden layer “bottleneck,” effectively casting dimensionality reduction as a compression problem. Hastie defined principal curves [5] as nonparametric 1D curves that pass through the center of “nearby” data points. A rich literature has grown up around properly regularizing this approach and extending it to surfaces. Smola and colleagues [10] analyzed the NLDR problem in the broader framework of regularized quantization methods. More recent advances aim for embeddings: Gomes and Mojsilovic [4] treat manifold completion as an anisotropic diffusion problem, iteratively expanding points until they connect to their neighbors.
The ISOMAP algorithm [12] represents remote distances as sums of a trusted set of distances between immediate neighbors, then uses multidimensional scaling to compute a low-dimensional embedding that minimally distorts all distances. The locally linear embedding algorithm (LLE) [9] represents each point as a weighted combination of a trusted set of nearest neighbors, then computes a minimally distorting low-dimensional barycentric embedding. They have complementary strengths: ISOMAP handles holes well but can fail if the data hull is nonconvex [12]; and vice versa for LLE [9]. Both offer embeddings without mappings. It has been noted that trusted-set methods are vulnerable to noise because they consider the subset of point-to-point relationships that has the lowest signal-to-noise ratio; small changes to the trusted set can induce large changes in the set of constraints on the embedding, making solutions unstable [1]. In a return to mapping, Roweis and colleagues [8] proposed global coordination—learning a mixture of locally linear projections from sample to coordinate space. They constructed a posterior that penalizes distortions in the mapping, and gave an expectation-maximization (EM) training rule. Innovative use of variational methods highlighted the difficulty of even hill-climbing their multimodal posterior. Like [2, 7, 6, 8], the method we develop below is a decomposition of the manifold into locally linear neighborhoods. It bears closest relation to global coordination [8], although by a different construction of the problem, we avoid hill-climbing a spiky posterior and instead develop a closed-form solution.

2 Estimating locally linear scale and intrinsic dimensionality

We begin with a matrix of sample points Y .= [y1,···,yN], yn ∈ RD populating a D-dimensional sample space, and a conjecture that these points are samples from a manifold M of intrinsic dimensionality d < D.
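Such a conjecture can be probed numerically from how neighbour counts grow with radius, which is the point-count growth process developed in this section. A minimal sketch in Python/NumPy (our simplification: we fit one slope to log n(r) versus log r over a trusted range of k-nearest-neighbour scales, rather than tracking the pointwise slope c(r) and taking its peak as the paper does):

```python
import numpy as np

def growth_dimension(Y, k_lo=3, k_hi=30):
    """Estimate intrinsic dimensionality d from the point-count growth
    process: over locally linear scales n(r) grows as r^d, so the slope
    of log n(r) against log r estimates d."""
    D = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    knn = np.sort(D, axis=1)                 # column k: distance to k-th neighbour
    radii = knn[:, k_lo:k_hi].mean(axis=0)   # average k-NN ball radius; n(r) = k there
    counts = np.arange(k_lo, k_hi)
    return np.polyfit(np.log(radii), np.log(counts), 1)[0]

rng = np.random.default_rng(1)
sheet = np.c_[rng.random((1200, 2)), np.zeros(1200)]   # flat 2D sheet in 3-space
d_est = growth_dimension(sheet)                        # close to 2
```

For a noise-free flat sheet there is no noise scale or curvature scale, so the fitted slope sits near the true d; with noisy curved data one would restrict the fit to the locally linear range of scales, as the analysis below explains.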
We seek a mapping onto a vector space G(Y) → X .= [x1,···,xN], xn ∈ Rd and a 1-to-1 reverse mapping G−1(X) → Y such that local relations between nearby points are preserved (this will be formalized below). The map G should be non-catastrophic, that is, without folds: Parallel lines on the manifold in RD should map to continuous smooth non-intersecting curves in Rd. This guarantees that linear operations on X such as interpolation will have reasonable analogues on Y. Smoothness means that at some scale r the mapping from a neighborhood on M to Rd is effectively linear. Consider a ball of radius r centered on a data point and containing n(r) data points. The count n(r) grows as r^d, but only at the locally linear scale; the growth rate is inflated by isotropic noise at smaller scales and by embedding curvature at larger scales. To estimate r, we look at how the r-ball grows as points are added to it, tracking c(r) .= d log r / d log n(r). At noise scales, c(r) ≈ 1/D < 1/d, because noise has distributed points in all directions with equal probability. At the scale at which curvature becomes significant, c(r) < 1/d, because the manifold is no longer perpendicular to the surface of the ball, so the ball does not have to grow as fast to accommodate new points. At the locally linear scale, the process peaks at c(r) = 1/d, because points are distributed only in the directions of the manifold’s local tangent space. The maximum of c(r) therefore gives an estimate of both the scale and the local dimensionality of the manifold (see figure 1), provided that the ball hasn’t expanded to a manifold boundary; boundaries have lower dimension than the manifold. For low-dimensional manifolds such as sheets, the boundary submanifolds (edges and corners) are very small relative to the full manifold, so the boundary effect is typically limited to a small rise in c(r) as r approaches the scale of the entire data set. In practice, our code simply expands an r-ball at every point and looks for the first peak in c(r), averaged over many nearby r-balls. One can estimate d and r globally or per-point.

Figure 1: Point growth processes. LEFT: Scale behavior of a 1D manifold in 2-space: at the locally linear scale, the number of points in an r-ball grows as r^d; at noise and curvature scales it grows faster. RIGHT: Using the point-count growth process to find the intrinsic dimensionality of a 2D manifold nonlinearly embedded in 3-space (see figure 2). Lines of slope 1/3, 1/2, and 1 are fitted to sections of the log r / log n(r) curve. For neighborhoods of radius r ≈ 1 with roughly n ≈ 10 points, the slope peaks at 1/2, indicating a dimensionality of d = 2. Below that, the data appears 3D because it is dominated by noise (except for n ≤ D points); above, the data appears >2D because of manifold curvature. As the r-ball expands to cover the entire data-set the dimensionality appears to drop to 1 as the process begins to track the 1D edges of the 2D sheet.

3 Charting the data

In the charting step we find a soft partitioning of the data into locally linear low-dimensional neighborhoods, as a prelude to computing the connection that gives the global low-dimensional embedding. To minimize information loss in the connection, we require that the data points project into a subspace associated with each neighborhood with (1) minimal loss of local variance and (2) maximal agreement of the projections of nearby points into nearby neighborhoods. Criterion (1) is served by maximizing the likelihood function of a Gaussian mixture model (GMM) density fitted to the data:

p(yi | µ, Σ) .= ∑j p(yi | µj, Σj) pj = ∑j N(yi; µj, Σj) pj.
(1)

Each gaussian component defines a local neighborhood centered around µj with axes defined by the eigenvectors of Σj. The amount of data variance along each axis is indicated by the eigenvalues of Σj; if the data manifold is locally linear in the vicinity of the µj, all but the d dominant eigenvalues will be near-zero, implying that the associated eigenvectors constitute the optimal variance-preserving local coordinate system. To some degree likelihood maximization will naturally realize this property: It requires that the GMM components shrink in volume to fit the data as tightly as possible, which is best achieved by positioning the components so that they “pancake” onto locally flat collections of datapoints. However, this state of affairs is easily violated by degenerate (zero-variance) GMM components or components fitted to overly small locales where the data density off the manifold is comparable to density on the manifold (e.g., at the noise scale). Consequently a prior is needed. Criterion (2) implies that neighboring partitions should have dominant axes that span similar subspaces, since disagreement (large subspace angles) would lead to inconsistent projections of a point and therefore uncertainty about its location in a low-dimensional coordinate space. The principal insight is that criterion (2) is exactly the cost of coding the location of a point in one neighborhood when it is generated by another neighborhood—the cross-entropy between the gaussian models defining the two neighborhoods:

D(N1∥N2) = ∫ dy N(y; µ1, Σ1) log [ N(y; µ1, Σ1) / N(y; µ2, Σ2) ]
         = ( log|Σ1⁻¹Σ2| + trace(Σ2⁻¹Σ1) + (µ2−µ1)⊤Σ2⁻¹(µ2−µ1) − D ) / 2.    (2)

Roughly speaking, the terms in (2) measure differences in size, orientation, and position, respectively, of two coordinate frames located at the means µ1, µ2 with axes specified by the eigenvectors of Σ1, Σ2. All three terms decline to zero as the overlap between the two frames is maximized.
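The consistency cost of equation (2) is the standard Gaussian KL divergence; a direct transcription (Python/NumPy) keeps its three terms explicit:

```python
import numpy as np

def chart_divergence(mu1, S1, mu2, S2):
    """Equation (2): D(N1 || N2) between two Gaussian charts.  The three
    terms compare the size, orientation, and position of the frames."""
    D = len(mu1)
    S2_inv = np.linalg.inv(S2)
    size = np.linalg.slogdet(np.linalg.inv(S1) @ S2)[1]   # log|S1^-1 S2|
    orient = np.trace(S2_inv @ S1)                        # trace(S2^-1 S1)
    dm = mu2 - mu1
    position = dm @ S2_inv @ dm                           # Mahalanobis shift
    return 0.5 * (size + orient + position - D)
```

As the text notes, the cost vanishes exactly when the two frames coincide, and as a KL divergence it is never negative.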
To maximize consistency between adjacent neighborhoods, we form the prior p(µ,Σ) .= exp[−∑i≠j mi(µj) D(Ni∥Nj)], where mi(µj) is a measure of co-locality. Unlike global coordination [8], we are not asking that the dominant axes in neighboring charts are aligned—only that they span nearly the same subspace. This is a much easier objective to satisfy, and it contains a useful special case where the posterior p(µ,Σ|Y) ∝ ∏i p(yi|µ,Σ) p(µ,Σ) is unimodal and can be maximized in closed form: Let us associate a gaussian neighborhood with each data-point, setting µi = yi; take all neighborhoods to be a priori equally probable, setting pi = 1/N; and let the co-locality measure be determined from some local kernel. For example, in this paper we use mi(µj) ∝ N(µj; µi, σ²), with the scale parameter σ specifying the expected size of a neighborhood on the manifold in sample space. A reasonable choice is σ = r/2, so that 2erf(2) > 99.5% of the density of mi(µj) is contained in the area around yi where the manifold is expected to be locally linear. With uniform pi, and with µi and mi(µj) fixed, the MAP estimates of the GMM covariances are

Σi = [ ∑j mi(µj) ( (yj − µi)(yj − µi)⊤ + (µj − µi)(µj − µi)⊤ + Σj ) ] / ∑j mi(µj).    (3)

Note that each covariance Σi is dependent on all other Σj. The MAP estimators for all covariances can be arranged into a set of fully constrained linear equations and solved exactly for their mutually optimal values. This key step brings nonlocal information about the manifold’s shape into the local description of each neighborhood, ensuring that adjoining neighborhoods have similar covariances and small angles between their respective subspaces. Even if a local subset of data points are dense in a direction perpendicular to the manifold, the prior encourages the local chart to orient parallel to the manifold as part of a globally optimal solution, protecting against a pathology noted in [8].
Equation (3) is easily adapted to give a reduced number of charts and/or charts centered on local centroids.

4 Connecting the charts

We now build a connection for a set of charts specified as an arbitrary nondegenerate GMM. A GMM gives a soft partitioning of the dataset into neighborhoods of mean µk and covariance Σk. The optimal variance-preserving low-dimensional coordinate system for each neighborhood derives from its weighted principal component analysis, which is exactly specified by the eigenvectors of its covariance matrix: Eigendecompose VkΛkVk⊤ ← Σk with eigenvalues in descending order on the diagonal of Λk and let Wk .= [Id, 0]Vk⊤ be the operator projecting points into the kth local chart, such that local chart coordinate uki .= Wk(yi − µk) and Uk .= [uk1,···,ukN] holds the local coordinates of all points. Our goal is to sew together all charts into a globally consistent low-dimensional coordinate system. For each chart there will be a low-dimensional affine transform Gk ∈ R^{(d+1)×d} that projects Uk into the global coordinate space. Summing over all charts, the weighted average of the projections of point y into the low-dimensional vector space is (writing [u; 1] for u in homogeneous coordinates)

x̂|y .= ∑j Gj [Wj(y − µj); 1] pj|y(y)   ⇒   x̂i|yi .= ∑j Gj [uji; 1] pj|y(yi),    (4)

where pk|y(y) ∝ pk N(y; µk, Σk), ∑k pk|y(y) = 1 is the probability that chart k generates point y. As pointed out in [8], if a point has nonzero probabilities in two charts, then there should be affine transforms of those two charts that map the point to the same place in a global coordinate space. We set this up as a weighted least-squares problem:

G .= [G1,···,GK] = arg min_{Gk,Gj} ∑i pk|y(yi) pj|y(yi) ‖ Gk [uki; 1] − Gj [uji; 1] ‖²_F .    (5)

Equation (5) generates a homogeneous set of equations that determines a solution up to an affine transform of G. There are two solution methods. First, let us temporarily anchor one neighborhood at the origin to fix this indeterminacy. This adds the constraint G1 = [I, 0]⊤. To solve, define indicator matrix Fk .= [0,···,0, I, 0,···,0]⊤ with the identity matrix occupying the kth block, such that Gk = GFk. Let the diagonal of Pk .= diag([pk|y(y1),···,pk|y(yN)]) record the per-point posteriors of chart k. The squared error of the connection is then a sum of all patch-to-anchor and patch-to-patch inconsistencies:

E .= ∑k [ ‖ (G Uk − [U1; 0]) Pk P1 ‖²_F + ∑_{j≠k} ‖ (G Uj − G Uk) Pj Pk ‖²_F ] ;   Uk .= Fk [Uk; 1].    (6)

Setting dE/dG = 0 and solving to minimize convex E gives

G⊤ = ( ∑k Uk Pk² (∑_{j≠k} Pj²) Uk⊤ − ∑k ∑_{j≠k} Uk Pk² Pj² Uj⊤ )⁻¹ ( ∑k Uk Pk² P1² [U1; 0]⊤ ).    (7)

We now remove the dependence on a reference neighborhood G1 by rewriting equation (5),

G = arg min_G ∑_{j≠k} ‖ (G Uj − G Uk) Pj Pk ‖²_F = ‖G Q‖²_F = trace(G Q Q⊤ G⊤),    (8)

where Q .= [···, (Uj − Uk) Pj Pk, ···]_{j≠k} collects the weighted pairwise difference blocks. If we require that GG⊤ = I to prevent degenerate solutions, then equation (8) is solved (up to rotation in coordinate space) by setting G⊤ to the eigenvectors associated with the smallest eigenvalues of QQ⊤. The eigenvectors can be computed efficiently without explicitly forming QQ⊤; other numerical efficiencies obtain by zeroing any vanishingly small probabilities in each Pk, yielding a sparse eigenproblem. A more interesting strategy is to numerically condition the problem by calculating the trailing eigenvectors of QQ⊤ + 1. It can be shown that this maximizes the posterior p(G|Q) ∝ p(Q|G) p(G) ∝ e^{−‖GQ‖²_F} e^{−‖G1‖}, where the prior p(G) favors a mapping G whose unit-norm rows are also zero-mean. This maximizes variance in each row of G and thereby spreads the projected points broadly and evenly over coordinate space. The solutions for MAP charts (equation (3)) and connection (equation (8)) can be applied to any well-fitted mixture of gaussians/factors¹/PCAs density model; thus large eigenproblems can be avoided by connecting just a small number of charts that cover the data.

¹We thank reviewers for calling our attention to Teh & Roweis ([11]—in this volume), which shows how to connect a set of given local dimensionality reducers in a generalized eigenvalue problem that is related to equation (8).
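A minimal connection solve in the style of equation (8) can be written in a few lines. The sketch below (Python/NumPy, not the paper's code) embeds the homogeneous chart coordinates in blocks via Fk, stacks the weighted pairwise differences into Q, and takes trailing eigenvectors; we read the "1" in "QQ⊤ + 1" as the all-ones matrix (matching the e^{−‖G1‖} prior), and the dense K × N responsibility array and the toy two-chart example are our choices:

```python
import numpy as np

def solve_connection(local_coords, post, d):
    """Equation (8): stack blocks (U_j - U_k) P_j P_k into Q and take the
    d trailing eigenvectors of Q Q^T + ones (conditioning) as the rows
    of G.  local_coords: list of (d x N) chart coordinates u_k;
    post: (K x N) responsibilities p_{k|y}(y_i)."""
    K, N = len(local_coords), local_coords[0].shape[1]
    Us = []
    for k, U in enumerate(local_coords):       # U_k := F_k [U_k; 1]
        tall = np.zeros(((d + 1) * K, N))
        tall[k * (d + 1):(k + 1) * (d + 1)] = np.vstack([U, np.ones((1, N))])
        Us.append(tall)
    Q = np.hstack([(Us[j] - Us[k]) * (post[j] * post[k])
                   for j in range(K) for k in range(K) if j != k])
    _, vecs = np.linalg.eigh(Q @ Q.T + np.ones((Q.shape[0], Q.shape[0])))
    return vecs[:, :d].T                       # trailing eigenvectors

# two 1D charts of the same 1D manifold, related by known affine maps
x = np.linspace(0.0, 1.0, 30)
u0, u1 = (2 * x + 1)[None, :], (-x + 0.5)[None, :]
post = np.full((2, 30), 0.5)
G = solve_connection([u0, u1], post, d=1)
g0, g1 = G[:, :2], G[:, 2:]                    # per-chart blocks G_k
p0 = g0 @ np.vstack([u0, np.ones((1, 30))])    # chart-0 projection of each point
p1 = g1 @ np.vstack([u1, np.ones((1, 30))])    # chart-1 projection should agree
```

Since both charts view the same underlying 1D coordinate, the recovered G maps them to matching (non-constant) global coordinates, while the ones-matrix conditioning suppresses the trivial constant solution.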
Figure 2: The twisted curl problem. LEFT: Comparison of charting, ISOMAP, & LLE. 400 points are randomly sampled from the manifold with noise. Charting is the only method that recovers the original space without catastrophes (folding), albeit with some shear. RIGHT: The manifold is regularly sampled (with noise) to illustrate the forward and backward projections. Samples are shown linked into lines to help visualize the manifold structure. Coordinate axes of a random selection of charts are shown as bold lines. Connecting subsets of charts such as this will also give good mappings. The upper right quadrant shows various LLE results (n = 5 through 10). At bottom we show the charting solution and the reconstructed (back-projected) manifold, which smooths out the noise.

Once the connection is solved, equation (4) gives the forward projection of any point y down into coordinate space. There are several numerically distinct candidates for the back-projection: posterior mean, mode, or exact inverse. In general, there may not be a unique posterior mode and the exact inverse is not solvable in closed form (this is also true of [8]). Note that chart-wise projection defines a complementary density in coordinate space

px|k(x) = N( x; Gk [0; 1], Gk [ [Id, 0]Λk[Id, 0]⊤, 0 ; 0, 0 ] Gk⊤ ).    (9)

Let p(y|x, k), used to map x into subspace k on the surface of the manifold, be a Dirac delta function whose mean is a linear function of x. Then the posterior mean back-projection is obtained by integrating out uncertainty over which chart generates x:

ŷ|x = ∑k pk|x(x) ( µk + Wk⊤ (Gk [I; 0])⁺ (x − Gk [0; 1]) ),    (10)

where (·)⁺ denotes pseudo-inverse. In general, a back-projecting map should not reconstruct the original points.
Instead, equation (10) generates a surface that passes through the weighted average of the μ_i of all the neighborhoods in which y_i has nonzero probability, much like a principal curve passes through the center of each local group of points.

5 Experiments

Synthetic examples: 400 2D points were randomly sampled from a 2D square and embedded in 3D via a curl and twist, then contaminated with gaussian noise. Even if noiselessly sampled, this manifold cannot be "unrolled" without distortion. In addition, the outer curl is sampled much less densely than the inner curl. With an order of magnitude fewer points, higher noise levels, no possibility of an isometric mapping, and uneven sampling, this is arguably a much more challenging problem than the "swiss roll" and "s-curve" problems featured in [12, 9, 8, 1]. Figure 2 LEFT contrasts the (unique) output of charting and the best outputs obtained from ISOMAP and LLE (considering all neighborhood sizes between 2 and 20 points). ISOMAP and LLE show catastrophic folding; we had to change LLE's regularization in order to coax out nondegenerate (>1D) solutions. Although charting is not designed for isometry, after affine transform the forward-projected points disagree with the original points with an RMS error of only 1.0429, lower than the best LLE (3.1423) or best ISOMAP (1.1424, not shown). Figure 2 RIGHT shows the same problem where points are sampled regularly from a grid, with noise added before and after embedding. Figure 3 shows a similar treatment of a 1D line that was threaded into a 3D trefoil knot, contaminated with gaussian noise, and then "untied" via charting.

Figure 3: Untying a trefoil knot by charting. 900 noisy samples from a 3D-embedded 1D manifold are shown as connected dots in front (a) and side (b) views. A subset of charts is shown in (c). Solving for the 2D connection gives the "unknot" in (d). After removing some points to cut the knot, charting gives a 1D embedding which we plot against true manifold arc length in (e); monotonicity (modulo noise) indicates correctness.

Figure 4: Modeling the manifold of facial images from raw video. Three principal degrees of freedom—expression, pose, and scale—are recovered from the raw jittered images. Each row contains images synthesized by back-projecting an axis-parallel straight line in coordinate space onto the manifold in image space. Blurry images correspond to points on the manifold whose neighborhoods contain few if any nearby data points.

Video: We obtained a 1965-frame video sequence (courtesy S. Roweis and B. Frey) of 20 × 28-pixel images in which B.F. strikes a variety of poses and expressions. The video is heavily contaminated with synthetic camera jitters. We used raw images, though image processing could have removed this and other uninteresting sources of variation. We took a 500-frame subsequence and left-right mirrored it to obtain 1000 points in 20 × 28 = 560D image space. The point-growth process peaked just above d = 3 dimensions. We solved for 25 charts, each centered on a random point, and a 3D connection. The recovered degrees of freedom—recognizable as pose, scale, and expression—are visualized in figure 4.

Figure 5: Flattening a fishbowl. From the left: original 2000 × 2D points; their stereographic mapping to a 3D fishbowl; its 2D embedding recovered using 500 charts; and the stereographic map. Fewer charts lead to isometric mappings that fold the bowl (not shown).

Conformality: Some manifolds can be flattened conformally (preserving local angles) but not isometrically. Figure 5 shows that if the data is finely charted, the connection behaves more conformally than isometrically. This problem was suggested by J. Tenenbaum.
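The affine-aligned RMS comparison used in the experiments above can be computed as follows (a sketch of ours: fit the best affine map by ordinary least squares in homogeneous coordinates, then measure the residual):

```python
import numpy as np

def affine_rms(X_rec, X_true):
    """RMS disagreement between recovered points X_rec (N x d) and
    originals X_true (N x d) after the best least-squares affine
    transform mapping X_rec onto X_true."""
    N = X_rec.shape[0]
    A = np.hstack([X_rec, np.ones((N, 1))])        # homogeneous coordinates
    T, *_ = np.linalg.lstsq(A, X_true, rcond=None) # best affine map
    resid = A @ T - X_true
    return np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))

# a perfectly affine-related pair has (near-)zero RMS
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))
Y = X @ np.array([[2.0, 0.3], [-0.5, 1.0]]) + np.array([1.0, -2.0])
assert affine_rms(X, Y) < 1e-8
```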
6 Discussion Charting breaks kernel-based NLDR into two subproblems: (1) finding a set of data-covering, locally linear neighborhoods ("charts") such that adjoining neighborhoods span maximally similar subspaces, and (2) computing a minimal-distortion merger ("connection") of all charts. The solution to (1) is optimal w.r.t. the estimated scale of local linearity r; the solution to (2) is optimal w.r.t. the solution to (1) and the desired dimensionality d. Both problems have Bayesian settings. By offloading the nonlinearity onto the kernels, we obtain least-squares problems and closed-form solutions. This scheme is also attractive because large eigenproblems can be avoided by using a reduced set of charts. The dependence on r is, as in trusted-set methods, a potential source of solution instability. In practice the point-growth estimate seems fairly robust to data perturbations (to be expected if the data density changes slowly over a manifold of integral Hausdorff dimension), while the use of a soft neighborhood partitioning appears to make charting solutions reasonably stable to variations in r. Eigenvalue stability analyses may prove useful here. Ultimately, we would prefer to integrate r out. In contrast, the use of d appears to be a virtue: unlike other eigenvector-based methods, the best d-dimensional embedding is not merely a linear projection of the best (d+1)-dimensional embedding; a unique distortion is found for each value of d that maximizes the information content of its embedding. Why does charting perform well on datasets where the signal-to-noise ratio confounds recent state-of-the-art methods? Two reasons may be adduced: (1) Nonlocal information is used to construct both the system of local charts and their global connection. (2) The mapping only preserves the component of local point-to-point distances that projects onto the manifold; relationships perpendicular to the manifold are discarded.
Thus charting uses global shape information to suppress noise in the constraints that determine the mapping. Acknowledgments Thanks to J. Buhmann, S. Makar, S. Roweis, J. Tenenbaum, and anonymous reviewers for insightful comments and suggested “challenge” problems. References [1] M. Balasubramanian and E. L. Schwartz. The IsoMap algorithm and topological stability. Science, 295(5552):7, January 2002. [2] C. Bregler and S. Omohundro. Nonlinear image interpolation using manifold learning. In NIPS–7, 1995. [3] D. DeMers and G. Cottrell. Nonlinear dimensionality reduction. In NIPS–5, 1993. [4] J. Gomes and A. Mojsilovic. A variational approach to recovering a manifold from sample points. In ECCV, 2002. [5] T. Hastie and W. Stuetzle. Principal curves. J. Am. Statistical Assoc, 84(406):502–516, 1989. [6] G. Hinton, P. Dayan, and M. Revow. Modeling the manifolds of handwritten digits. IEEE Trans. Neural Networks, 8, 1997. [7] N. Kambhatla and T. Leen. Dimensionality reduction by local principal component analysis. Neural Computation, 9, 1997. [8] S. Roweis, L. Saul, and G. Hinton. Global coordination of linear models. In NIPS–13, 2002. [9] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, December 22 2000. [10] A. Smola, S. Mika, B. Schölkopf, and R. Williamson. Regularized principal manifolds. Machine Learning, 1999. [11] Y. W. Teh and S. T. Roweis. Automatic alignment of hidden representations. In NIPS–15, 2003. [12] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, December 22 2000.
Spikernels: Embedding Spiking Neurons in Inner-Product Spaces Lavi Shpigelman Yoram Singer Rony Paz Eilon Vaadia School of Computer Science and Engineering Interdisciplinary Center for Neural Computation Dept. of Physiology, Hadassah Medical School The Hebrew University Jerusalem, 91904, Israel {shpigi,singer}@cs.huji.ac.il {ronyp,eilon}@hbf.huji.ac.il Abstract Inner-product operators, often referred to as kernels in statistical learning, define a mapping from some input space into a feature space. The focus of this paper is the construction of biologically-motivated kernels for cortical activities. The kernels we derive, termed Spikernels, map spike count sequences into an abstract vector space in which we can perform various prediction tasks. We discuss in detail the derivation of Spikernels and describe an efficient algorithm for computing their value on any two sequences of neural population spike counts. We demonstrate the merits of our modeling approach using the Spikernel and various standard kernels for the task of predicting hand movement velocities from cortical recordings. In all of our experiments, the kernels we tested outperform the standard scalar product used in regression, with the Spikernel consistently achieving the best performance. 1 Introduction Neuronal activity in primary motor cortex (MI) during multi-joint arm reaching movements in 2-D and 3-D [1, 2] and drawing movements [3] has been used extensively as a test bed for gaining understanding of neural computations in the brain. Most approaches assume that information is coded by firing rates, measured on various time scales. The tuning curve approach models the average firing rate of a cortical unit as a function of some external variable, like the frequency of an auditory stimulus or the direction of a planned movement.
Many studies of motor cortical areas [4, 2, 5, 3, 6] showed that while single units are broadly tuned to movement direction, a relatively small population of cells (tens to hundreds) carries enough information to allow for accurate prediction. Such broad tuning can be found in many parts of the nervous system, suggesting that computation by distributed populations of cells is a general cortical feature. The population-vector method [4, 2] describes each cell's firing rate as the dot product between that cell's preferred direction and the direction of hand movement. The vector sum of preferred directions, weighted by the measured firing rates, is used both as a way of understanding what the cortical units encode and as a means for estimating the velocity vector. Several recent studies [7, 8, 9] propose that neurons can represent or process multiple parameters simultaneously, suggesting that it is the dynamic organization of the activity in neuronal populations that may represent temporal properties of behavior, such as the computation of the transformation from 'desired action' in external coordinates to muscle activation patterns. Some studies [10, 11, 12] support the notion that neurons can associate and dissociate rapidly into functional groups in the process of performing a computational task. The concepts of simultaneous encoding of multiple parameters and dynamic representation in neuronal populations could together explain some of the conundrums in motor system physiology. These concepts also invite usage of increasingly complex models for relating neural activity to behavior. Advances in computing power and recent developments of physiological recording methods allow recording of ever-growing numbers of cortical units that can be used for real-time analysis and modeling.
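The population-vector computation just described can be sketched as follows (illustrative code of ours; variable names and the toy tuning data are not from the paper):

```python
import numpy as np

def population_vector(rates, preferred_dirs, baselines=None):
    """Classic population-vector estimate: each cell 'votes' with its
    preferred direction, weighted by its (optionally baseline-subtracted)
    firing rate; the normalized vector sum estimates movement direction."""
    if baselines is not None:
        rates = rates - baselines
    v = rates @ preferred_dirs            # (q,) @ (q, 2) -> (2,)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

# four cells tuned to 0, 90, 180, 270 degrees; firing is strongest at 0 deg
pd = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], float)
rates = np.array([10.0, 5.0, 1.0, 5.0])
est = population_vector(rates, pd)
assert np.allclose(est, [1.0, 0.0])
```

The opposing cells' contributions cancel, so the estimate points along the dominant preferred direction, which is the intuition behind the method.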
These developments and new understandings have recently been used to reconstruct movements on the basis of neuronal activity in real time, in an effort to facilitate the development of hybrid brain-machine interfaces that allow interaction between living brain tissue and artificial electronic or mechanical devices to produce brain-controlled movements [13, 6, 14, 15, 11, 16, 17]. Current attempts at predicting movement from cortical activity rely on modeling techniques such as cosine-tuning estimation (population vector) [18], linear regression [15, 19] and artificial neural nets [15] (though this study reports getting better results by linear regression). A major deficiency of standard approaches is a poor ability to extract the relevant information from monitored brain activity in an efficient manner that would allow reducing the number of recorded channels and the recording time. The paper is organized as follows. In Sec. 2 we describe the problem setting that this paper is concerned with. In Sec. 3 we introduce and explain the main mathematical tool that we use, namely, the kernel operator. In Sec. 4 we discuss the design and implementation of a biologically-motivated kernel for neural activities. We report experimental results in Sec. 5 and give conclusions in Sec. 6.

2 Problem setting

Consider the case where we monitor instantaneous spike rates from q cortical units during physical motor behavior of a subject. Our goal is to learn a predictive model of some behavior parameter with the cortical activity as the input. Formally speaking, let s be a sequence of instantaneous firing rates from q cortical units, consisting of T samples altogether. We use s and t to denote sequences of firing rates, and denote by |s| the length of a sequence s. Let s_i be the i-th sample (i.e., the vector of instantaneous firing rates) of a sequence s. We also use s·x to denote the concatenation of s with one more sample x, and refer to the instantaneous firing rate of a unit j by s_i^j. We also need to employ a notation for sub-sequences: the p-long prefix of s is denoted s_{1:p}. Finally, throughout the work we need to examine substrings of sequences. We denote by i a vector of indices into the sequence s, i = (i_1, ..., i_n), with 1 ≤ i_1 < i_2 < ... < i_n ≤ |s|. We also need to introduce some notation for the target variables we would like to predict. Let y_t denote some parameter of the movement that we would like to predict at time t (e.g., the movement velocity in the x direction, v_x). Our goal is to learn an approximation ŷ_t of the form f : S → R from neural firing rates to the movement parameter. In general, information about movement can be found in neural activity both before and after the time of the movement itself. Our plan, though, is to design a model that can be used for controlling a neural prosthesis. We will therefore confine ourselves to causal predictors that use s_{1:t} to predict y_t, and we would like to make ŷ_t = f(s_{1:t}) as close as possible (in a sense that is explained in the sequel) to y_t.

3 Kernel methods for regression

A major mathematical notion employed in this paper is the kernel operator. Kernel operators allow algorithms whose interface to the data is limited to scalar products to employ complicated pre-mappings of the data into feature spaces. Formally, a kernel is an inner-product operator K : X × X → R, where X is some arbitrary vector space. An explicit way to describe K is via a mapping φ : X → H from X to an inner-product space H such that K(s, t) = ⟨φ(s), φ(t)⟩. Given a kernel operator we can use it to perform various statistical learning tasks. One such task is support vector regression (SVR) [20], which attempts to find a regression function for target values that is linear if observed in the (typically very high-dimensional) feature space mapped by the kernel. We give here a brief description of SVR for the sake of clarity. Support vector regression minimizes Vapnik's [21] ε-insensitive loss,

ℓ_ε(y, f(s)) = max(0, |y − f(s)| − ε) ,

which defines a tube of width ε around the estimate: examples that fall within its boundaries are considered well estimated and do not contribute to the error, while examples outside the tube contribute linearly to the loss. Say φ(s) is the feature vector implemented by the kernel K(·,·). To estimate a regression f(s) = w·φ(s) + b (linear in feature space) with precision ε, one minimizes

(1/2)‖w‖² + C ∑_{i=1}^m (ξ_i + ξ_i*)

subject to

(w·φ(s_i) + b) − y_i ≤ ε + ξ_i ,  y_i − (w·φ(s_i) + b) ≤ ε + ξ_i* ,  ξ_i, ξ_i* ≥ 0 .

By switching to the dual of this optimization problem, it is possible to incorporate the kernel function, achieving a mapping that may not be feasible by calculating (possibly infinite) feature vectors φ(s). For C, ε ≥ 0 chosen a priori, the dual problem is

maximize over α, α*:  −ε ∑_{i=1}^m (α_i + α_i*) + ∑_{i=1}^m y_i (α_i − α_i*) − (1/2) ∑_{i,j=1}^m (α_i − α_i*)(α_j − α_j*) K(s_i, s_j)

subject to  0 ≤ α_i, α_i* ≤ C for i = 1, ..., m  and  ∑_{i=1}^m (α_i − α_i*) = 0 .

The solution of the regression estimate takes the form f(s) = ∑_{i=1}^m (α_i − α_i*) K(s_i, s) + b. In summary, SVM regression solves a quadratic optimization problem to find a hyperplane in the kernel-induced feature space that best estimates the data for an ε-insensitive linear loss function.

4 Spikernels

The quality of SVM learning is highly dependent on how the data is embedded in the feature space via the kernel operator. For this reason, several studies have been devoted lately to developing new kernels [22, 23, 24]. In fact, for classification problems, a good kernel would render the work of the classification algorithm trivial. With this in mind, we develop a kernel for neural spiking activity.

4.1 Motivation

Our goal in developing a kernel for spike trains is to map similar patterns to nearby areas of the feature space. Current methods for predicting response variables from neural activities use standard linear regression techniques (see for instance [15]) or even replace the time pattern with mean firing rates; a notable example is the population vector method [18]. Other approaches use off-the-shelf learning algorithms intended for general purposes. In the description of our kernel we attempt to capture some well accepted notions of similarity between spike trains. We make the following assumptions regarding similarities between spike patterns:

Figure 1: Illustrative examples of pattern similarities. Left: bin-by-bin comparison yields small differences. Middle: patterns with large bin-by-bin differences that can be eliminated with some time warping. Right: patterns whose suffix (time of interest) is similar and prefix is different.
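The SVR machinery of Sec. 3 interacts with the data only through the Gram matrix, so a custom kernel can be supplied as a precomputed matrix. A sketch using scikit-learn's SVR (our tooling choice for illustration, not the paper's implementation):

```python
import numpy as np
from sklearn.svm import SVR

def fit_svr_precomputed(K_train, y_train, C=1.0, epsilon=0.1):
    """Fit epsilon-insensitive SVR on a precomputed Gram matrix,
    as one would with a Spikernel-style kernel."""
    model = SVR(kernel="precomputed", C=C, epsilon=epsilon)
    model.fit(K_train, y_train)
    return model

# toy data: a hand-computed linear Gram matrix reproduces linear SVR
rng = np.random.default_rng(2)
X = rng.standard_normal((60, 3))
y = X @ np.array([0.5, -1.0, 2.0])
K = X @ X.T                              # Gram matrix K[i, j] = <x_i, x_j>
model = fit_svr_precomputed(K, y, C=10.0, epsilon=0.01)
pred = model.predict(K)                  # at test time: K[test, train]
assert np.corrcoef(pred, y)[0, 1] > 0.9
```

At prediction time the matrix passed to `predict` must hold kernel values between test and training sequences, which is all the dual solution f(s) = ∑_i (α_i − α_i*) K(s_i, s) + b requires.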
The most commonly made assumption is that similar firing patterns may have small differences in a bin-by-bin comparison. This type of variation is due to the inherent noise of any physical system, but also to responses to external factors that were not recorded and are not directly related to the task performed. On the left-hand side of Fig. 1 we show an example of two patterns that are bin-wise similar though clearly not identical.

A cortical population may display highly specific patterns to represent specific information. It is conceivable that some features of external stimuli are represented by population dynamics that would be best described as 'temporal' coding. Two patterns may be quite different in a simple bin-wise comparison, but if they are aligned by some non-linear time distortion or shifting, the similarity becomes apparent. An illustration of such patterns is given in the middle plots of Fig. 1. In comparing patterns we would like to induce a higher score when the time-shifts are small.

Patterns that are associated with identical values of an external stimulus at time t may be similar at that time but very different at earlier times, when the values of the external stimulus for these patterns are no longer similar (as illustrated on the right-hand side of Fig. 1).

4.2 Kernel definition

We describe the kernel by specifying the features that make up the feature space. Our construction of the feature space builds on the work of Lodhi et al. [24]. First, we need to introduce a few more notations. Let s be a sequence of length |s|. The set of all possible n-long index vectors defining a sub-sequence of s is I_n(s) = { i = (i_1, ..., i_n) : 1 ≤ i_1 < ... < i_n ≤ |s| }, and for an index vector i we write s_i for the sub-sequence (s_{i_1}, ..., s_{i_n}). Also, let d(x, y) denote a bin-wise distance over a pair of samples (firing rates). We overload notation and denote by d(u, v) = ∑_{j=1}^n d(u_j, v_j) a distance between equal-length sequences; the sequence distance is the sum over the samples constituting the two sequences. Let 0 < μ, λ < 1. The u component of our (infinite) feature vector φ^n is defined as

φ^n_u(s) = (1/c^n) ∑_{i ∈ I_n(s)} μ^{d(s_i, u)} λ^{|s| − i_1 + 1} ,   (1)

where c is a normalization constant that simplifies the calculation and i_1 is the first index of i. In words, φ^n_u(s) is a sum over all n-long sub-sequences of s. Each sub-sequence is compared to u (the feature coordinate) and is weighted according to its similarity to u. In particular, part of the weight of each sub-sequence of s reflects how concentrated the sub-sequence is toward the end of s. Put another way, the entry indexed by u measures how close u is to the time series s near its end. This definition seems to fit our assumptions on neural coding for the following reasons: It allows for complex patterns: small values of μ and λ (or concentrated measures) mean that each coordinate tends toward being either 0 or 1, depending on whether u is almost identical to a suffix of s or not. Patterns that are piece-wise similar to u contribute to the feature coordinate with a weight that decays as the sample-by-sample comparison between the sequences grows large. We allow gaps in the indexes defining sub-sequences, thus allowing for time warping. Patterns that begin further from the required prediction time are penalized by an exponentially decaying weight.

4.3 Efficient kernel calculation

The definition of φ^n given by Eq. (1) requires the manipulation of an infinite feature space. Straightforward calculation of the feature values and of the induced inner product K_n(s, t) = ∫_u φ^n_u(s) φ^n_u(t) du is clearly impossible. Based on ideas from [24] we developed an indirect method for evaluating the kernel through a recursion which can be performed efficiently using dynamic programming. We now describe the recursion. Denote by x the last sample of the sequence s·x. We describe two recursive equations for φ^n, with respect to the length of the time series and the sub-sequence length. Due to lack of space we skip some of the algebraic manipulations that are needed to derive the recursions. The first equation is

φ^n_u(s·x) = λ φ^n_u(s) + (λ/c) μ^{d(x, u_n)} φ^{n−1}_{u_{1:n−1}}(s) .   (2)

Eq. (2) simply separates the sum over sub-sequences of s·x into two subsets: one where x is not specified by the index vectors, and the other where i_n specifies x. The second recursive equation for φ^n is, again, with respect to both the length of the sub-sequence (n) and the length of the sequence s:

φ^n_u(s) = (1/c) ∑_{p=n}^{|s|} λ^{|s| − p + 1} μ^{d(s_p, u_n)} φ^{n−1}_{u_{1:n−1}}(s_{1:p−1}) .   (3)

The last equation simply states that the feature is a sum over all possible values of i_n. Note that for |s| < n, I_n(s) is empty and φ^n_u(s) = 0. Eqs. (2) and (3) are now used for computing the recursion equation for K_n(s·x, t) = ∫_u φ^n_u(s·x) φ^n_u(t) du. We plug Eq. (2) into φ^n_u(s·x) and plug Eq. (3) into φ^n_u(t). Using algebraic manipulations we replace integrals over scalar products of φ by the proper kernels and get the following recursive function:

K_n(s·x, t) = λ K_n(s, t) + (λ/c²) ∑_{p=n}^{|t|} λ^{|t| − p + 1} K_{n−1}(s, t_{1:p−1}) ∫_u μ^{d(x, u)} μ^{d(t_p, u)} du .   (4)

The initial conditions are: K_0(s, t) = 1 for all s, t, and K_n(s, t) = 0 whenever min(|s|, |t|) < n. Assuming that the computation time of the integral in Eq. (4) is a constant, computing the entire recursion requires O(n|s||t|²) time. We can achieve a speed-up by a factor of |t| if we cache the sum on the right-hand side of Eq. (4) as follows. Define

K″_n(s·x, t) ≐ (λ/c²) ∑_{p=n}^{|t|} λ^{|t| − p + 1} K_{n−1}(s, t_{1:p−1}) ∫_u μ^{d(x, u)} μ^{d(t_p, u)} du .   (5)

Separating the above sum into its two parts (one for p = |t| and one for the rest), and using the definition of K″ from Eq. (5), we get the following recursive equation for K″:

K″_n(s·x, t·y) = λ K″_n(s·x, t) + (λ²/c²) K_{n−1}(s, t) ∫_u μ^{d(x, u)} μ^{d(y, u)} du ,   (6)

with K″_n(s, t) = 0 whenever min(|s|, |t|) < n. Finally, the recursive equation for K is

K_n(s·x, t) = λ K_n(s, t) + K″_n(s·x, t) ,

yielding an O(n|s||t|) dynamic-programming solution for K_n(s, t).

4.4 Spikernel variants

The kernels defined by Eq. (1) consider only patterns of a fixed length n. It makes sense to look at sub-sequences of various lengths. Since a linear combination of kernels is also a kernel, we can define our kernel to be K(s, t) = ∑_{n=1}^N w_n K_n(s, t). The kernel summation can be interpreted as a concatenation of the feature vectors that these kernels represent; weighted summation is concatenation of the feature vectors after first multiplying them by the square roots of the weights. Different choices of the bin-wise distance d(·,·) result in kernels that differ in the way two rate values are compared. Say we assign d to be the squared ℓ2 norm, d(x, y) = ∑_{j=1}^q (x^j − y^j)²; the integral in the kernel recursion Eq. (6) then becomes

∫_u μ^{d(x, u)} μ^{d(y, u)} du = ( π / (−2 ln μ) )^{q/2} μ^{‖x − y‖²/2} .

Note that the constant (π/(−2 ln μ))^{q/2}, which has an n-fold gain effect on K_n, goes to infinity as μ goes to 1. This gain results in a kernel whose computation is numerically unstable. However, we can easily cancel it with the constant c, choosing c² = (π/(−2 ln μ))^{q/2}. Substituting this result back into Eq. (6) we get

K″_n(s·x, t·y) = λ K″_n(s·x, t) + λ² μ^{‖x − y‖²/2} K_{n−1}(s, t) .

We show results for the squared ℓ2 norm.

5 Experimental results

Data collection: The data used in this work was recorded from the primary motor cortex of a rhesus (Macaca mulatta) monkey (~4.5 kg). The animal's care and surgical procedures accorded with The NIH Guide for the Care and Use of Laboratory Animals (rev. 1996) and with the Hebrew University guidelines supervised by the institutional committee for animal care and use. The monkey sat in a dark chamber, and 8 electrodes were introduced into each hemisphere. The electrode signals were amplified, filtered and sorted (MCP-PLUS, MSD, Alpha-Omega, Nazareth, Israel). The data used in this report includes 31 single units and 16 multi-unit channels (MUA) that were recorded in one session by 16 microelectrodes. The monkey used two planar-movement manipulanda to control 2 cursors (X and + shapes) on the screen to perform a center-out task. Each trial began when the monkey centered both cursors on a central circle for 1.0-1.5 s. Either cursor could turn green, indicating the hand to be used in the trial (X for the right arm and + for the left). Then, after an additional hold period of 1.0-1.5 s, one of eight targets appeared at a distance of 4 cm from the origin and the monkey had to move and reach the target in less than 2 s to receive a liquid reward. At the end of each session, we examined the activity of neurons evoked by passive manipulation of the limbs and applied intracortical microstimulation (ICMS) to evoke movements. The data presented here was recorded in penetration sites where ICMS evoked shoulder and elbow movements. Penetration locations were verified by MRI (Biospec Bruker 4.7 Tesla).
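The dynamic program of Sec. 4.3, under the squared-ℓ2 simplification of Sec. 4.4 (where the Gaussian integral collapses to μ^(‖x−y‖²/2) after canceling c), can be sketched as follows. This is our reconstruction of Eqs. (4)-(6), not the authors' code; array and parameter names are ours:

```python
import numpy as np

def spikernel(S, T, n_max=3, mu=0.9, lam=0.7):
    """Evaluate K_n(S, T) for n = 1..n_max via the cached recursions
    (4)-(6) with the squared-l2 bin distance, so each matched pair of
    samples contributes mu^(||x - y||^2 / 2).
    S, T: arrays of shape (length, q).  O(n |S| |T|) per n."""
    I, J = len(S), len(T)
    # pairwise sample-similarity term mu^{d(x, y)/2}
    M = mu ** ((((S[:, None, :] - T[None, :, :]) ** 2).sum(-1)) / 2.0)
    K_prev = np.ones((I + 1, J + 1))       # K_0 = 1 on all prefixes
    out = []
    for n in range(1, n_max + 1):
        K = np.zeros((I + 1, J + 1))
        for i in range(n, I + 1):          # prefix s_{1:i}; its last sample is s_i
            Kpp = 0.0                      # running K''_n(s_{1:i}, t_{1:j})
            for j in range(1, J + 1):
                # eq. (6): extend t by one sample
                Kpp = lam * Kpp + lam ** 2 * M[i - 1, j - 1] * K_prev[i - 1, j - 1]
                # cached form of eq. (4): extend s by one sample
                K[i, j] = lam * K[i - 1, j] + Kpp
        out.append(K[I, J])
        K_prev = K
    return out
```

A variable-length Spikernel K(s, t) = ∑_n w_n K_n(s, t), as in Sec. 4.4, can then be assembled by a weighted sum over the returned list.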
Data preprocessing and modeling: The movements and spike data were preprocessed to create a labeled corpus. We used only the data from trials on which the monkey succeeded in the movement task, and examined only the right-hand movements. We partitioned the movement and spike trains into 200 ms-long bins to get the spike counts and average hand-movement velocities in each segment. We then normalized the spike counts to achieve a zero mean and a unit variance for each cortical unit. A labeled example (s_{1:t}, y_t) for time t consisted of the v_x or v_y velocity as the target label and the preceding 2 seconds (i.e., 10 segments) of spike counts from all cortical units as the input sequence. In our experiments the number of cortical units was q = 47 (31 single units and 16 MUA channels); hence each input is a matrix of spike counts of size 47 × 10. Each kernel employs a few parameters (μ, λ, N, γ, ...) and the SVM regression setup requires setting two more parameters (C and ε). Therefore, the learning task is performed in two stages. First, we used cross-validation to choose the best parameters on a validation set. Then, we learned to predict the response variable using SVR. We used the first half of our clean cortical recordings as the validation set for tuning the parameters; the second half was used for training and testing. The kernels that we tested are the exponential kernel K(s, t) = exp(−γ‖s − t‖²), the homogeneous polynomial kernels K(s, t) = (s·t)^p for p = 2, 3, the standard scalar-product kernel K(s, t) = s·t (which boils down to linear regression), and the Spikernel. Accuracy results were obtained by performing 5-fold cross-validation for each kernel. The 5 folds were produced by randomly splitting the data into 5 groups: four of the groups were used for training and the rest of the data was used for evaluation. This process was repeated 5 times, using each fifth of the data once as a test set. We computed the correlation coefficient per fold for each kernel. The per-fold results are shown in Fig.
2A as a scatter plot. Each point compares the Spikernel score versus one of the adversaries. The Spikernel out-performed the rest in every single test set. We found that predicting the v_y signal was more difficult than predicting the v_x signal. This may be the result of sampling a population of cortical units that are tuned more to the left-right directions. The mean results are summarized in Fig. 2B. The linear regression method (scalar-product kernel) came in last. It seems that re-mapping the data both by standard kernels and by the Spikernel allows for better prediction models. The ordering of the kernels by their mean score is consistent when looking at per-test results, except for the exponential kernel, which is out-performed by linear regression in some of the tests.
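The preprocessing and per-fold scoring described above can be sketched as follows (illustrative code of ours; the function names and the synthetic data are not from the paper):

```python
import numpy as np

def make_examples(counts, velocity, window=10):
    """Z-score each unit's binned spike counts, then pair each time t
    with the preceding `window` bins as the causal input sequence.
    counts: (T, q) binned spike counts; velocity: (T,) target."""
    z = (counts - counts.mean(0)) / counts.std(0)
    X = np.stack([z[t - window:t] for t in range(window, len(z))])
    return X, velocity[window:]          # X: (T - window, window, q)

def fold_correlations(y_true_folds, y_pred_folds):
    """Correlation coefficient per cross-validation fold, the score
    used to compare the kernels."""
    return [np.corrcoef(yt, yp)[0, 1]
            for yt, yp in zip(y_true_folds, y_pred_folds)]

# toy run: 50 bins of Poisson counts from 47 units
rng = np.random.default_rng(3)
X, y = make_examples(rng.poisson(4.0, (50, 47)).astype(float),
                     rng.standard_normal(50))
assert X.shape == (40, 10, 47) and y.shape == (40,)
```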
Kernel            Parameters             Mean r (v_x)   Mean r (v_y)
Spikernel         mu=0.99, lambda=0.7, N=5    0.70           0.49
(s·t)^2           C=0.01                 0.62           0.36
(s·t)^3           C=10                   0.56           0.29
exp(−γ(s−t)^2)    γ=10^−6, C=1           0.47           0.25
Lin. (s·t)        C=0.01                 0.44           0.21

Figure 2: The Spikernel is compared to (color & shape coded) standard kernels. A – Scatter plot of correlation coefficient results in all cross-validation folds; the Spikernel out-performs in all folds. B – Mean correlation coefficient values for each kernel type.

6 Summary

In this paper we described an approach based on recent advances in kernel-based learning for predicting response variables from neural activities. On the data we collected, all the kernels we devised outperform the standard scalar product that is used in linear regression. Furthermore, the Spikernel, a biologically motivated kernel operator for spike counts, outperforms all the other kernels. Our current research is focused in two directions. First, we are investigating adaptations of the Spikernel to other neural activities such as local field potentials (LFP). Our second and more challenging goal is to devise statistical learning algorithms that use the Spikernel as part of a dynamical system that may incorporate bio-feedback. We believe that such extensions are important and necessary steps toward operational neural prostheses.

Acknowledgments: Supported in part by the German-Israeli Foundation for Scientific Research and Development (GIF) and by the German-Israeli Project Cooperation (DIP) established by the BMBF.

References [1] Georgopoulos AP, Schwartz AB, and Kettner RE. Neuronal population coding of movement direction. Science, 233:1416–1419, 1986. [2] Apostolos P. Georgopoulos, Ronald E. Kettner, and Andrew B. Schwartz.
Primate motor cortex and free arm movements to visual targets in three-dimensional space. The Journal of Neuroscience, 8, August 1988. [3] Schwartz AB. Direct cortical representation of drawing. Science, 265:540–542, 1994. [4] A. P. Georgopoulos, J.F. Kalaska, and J.T. Massey. Spatial coding of movements: A hypothesis concerning the coding of movement direction by motor cortical populations. Experimental Brain Research (Supp), 7:327–336, 1983. [5] Daniel W. Moran and Andrew B. Schwartz. Motor cortical representation of speed and direction during reaching. Journal of Neurophysiology, 82:2676–2692, 1999. [6] Mark Laubach, Johan Wessberg, and Miguel A. L. Nicolelis. Cortical ensemble activity increasingly predicts behavior outcomes during learning of a motor task. Nature, 405(1), June 2000. [7] Fu QG, Flament D, Coltz JD, and Ebner TJ. Relationship of cerebellar purkinje cell simple spike discharge to movement kinematics in the monkey. Journal of Neurophysiology, 78, 1997. [8] Donchin O, Gribova A, Steinberg O, Bergman H, and Vaadia E. Primary motor cortex is involved in bimanual coordination. Nature, 1998. [9] Anthony G. Reina, Daniel W. Moran, and Andrew B. Schwartz. On the relationship between joint angular velocity and motor cortical discharge during reaching. Journal of Neurophysiology, 85:2576–2589, 2001. [10] E. Vaadia, I. Haalman, M. Abeles, H. Bergman, Y. Prut, H. Slovin, and A. Aertsen. Dynamics of neuronal interactions in monkey cortex in relation to behavioral events. Nature, 373:515–518, February 1995. [11] Laubach M, Shuler M, and Nicolelis MA. Independent component analyses for quantifying neuronal ensemble interactions. J Neurosci Methods, 1999. [12] A. Riehle, S. Grun, M. Diesmann, and A. M. H. J. Aertsen. Spike synchronization and rate modulation differentially involved in motor cortical function. Science, 278:1950–1952, 1997.
Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. Nature Neuroscience, 2:664–670, 1999. [14] Miguel A. L. Nicolelis. Actions from thoughts. Nature, 409(18), January 2001. [15] Johan Wessberg, Christopher R. Stambaugh, Jerald D. Kralik, Pamela D. Beck, Mark Laubach, John K. Chapin, Jung Kim, James Biggs, Mandayam A. Srinivasan, and Miguel A. L. Nicolelis. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates. Nature, 408(16), November 2000. [16] Nicolelis MA, Ghazanfar AA, Faggin BM, Votaw S, and Oliveira LM. Reconstructing the engram: simultaneous, multisite, many single neuron recordings. Neuron, 18:529–537, 1997. [17] Isaacs RE, Weber DJ, and Schwartz A. Work toward real-time control of a cortical neural prosthesis. IEEE Trans Rehabil Eng, 8:196–198, 2000. [18] Dawn M. Taylor, Stephen I. Helms Tillery, and Andrew B. Schwartz. Direct cortical control of 3d neuroprosthetic devices. Science, 2002. [19] Mijail D. Serruya, Nicholas G. Hatsopoulos, Liam Paninski, Matthew R. Fellows, and John P. Donoghue. Instant neural control of a movement signal. Nature, 416:141–142, March 2002. [20] A. Smola and B. Schölkopf. A tutorial on support vector regression, 1998. [21] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995. [22] Tommi S. Jaakkola and David Haussler. Exploiting generative models in discriminative classifiers. In NIPS, 1998. [23] Marc G. Genton. Classes of kernels for machine learning: A statistical perspective. Journal of Machine Learning Research, 2:299–312, January 2001. [24] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. Text classification using string kernels. In NIPS, pages 563–569, 2000.
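The standard-kernel baselines in the comparison above (linear, polynomial, RBF) can be sketched with plain kernel ridge regression on synthetic data. This is an illustrative stand-in, not the paper's SVR setup: the target function, hyperparameters, data, and train/test split below are invented for the demo, and the Spikernel itself is not reproduced.

```python
import numpy as np

def kernel_ridge_predict(K_train, K_test, y, lam=1e-3):
    """Fit kernel ridge regression: alpha = (K + lam*I)^-1 y, predictions = K_test @ alpha."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return K_test @ alpha

def k_lin(S, T):   return S @ T.T           # linear kernel (s.t)
def k_poly2(S, T): return (S @ T.T) ** 2    # quadratic kernel (s.t)^2
def k_rbf(S, T, g=0.1):                     # RBF kernel exp(-g*||s-t||^2)
    d2 = ((S[:, None, :] - T[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)   # a quadratic target (invented)
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

results = {}
for name, k in [("lin", k_lin), ("poly2", k_poly2), ("rbf", k_rbf)]:
    pred = kernel_ridge_predict(k(Xtr, Xtr), k(Xte, Xtr), ytr)
    results[name] = np.corrcoef(pred, yte)[0, 1]     # Pearson r, as in the paper's metric
print({n: round(r, 2) for n, r in results.items()})
```

On this synthetic quadratic target, the (s·t)² kernel tracks the signal far better than the linear kernel, mirroring the ordering in the table above (the numbers themselves are unrelated to the paper's data).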
A Minimal Intervention Principle for Coordinated Movement Emanuel Todorov Department of Cognitive Science University of California, San Diego todorov@cogsci.ucsd.edu Michael I. Jordan Computer Science and Statistics University of California, Berkeley jordan@cs.berkeley.edu Abstract Behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Here we offer an explanation: we show that not only are variability and goal achievement compatible, but indeed that allowing variability in redundant dimensions is the optimal control strategy in the face of uncertainty. The optimal feedback control laws for typical motor tasks obey a “minimal intervention” principle: deviations from the average trajectory are only corrected when they interfere with the task goals. The resulting behavior exhibits task-constrained variability, as well as synergetic coupling among actuators—which is another unexplained empirical phenomenon. 1 Introduction Both the difficulty and the fascination of the motor coordination problem lie in the apparent conflict between two fundamental properties of the motor system: the ability to accomplish its goal reliably and repeatedly, and the fact that it does so with variable movements [1]. More precisely, trial-to-trial fluctuations in individual degrees of freedom are on average larger than fluctuations in task-relevant movement parameters—motor variability is constrained to a redundant or “uncontrolled” manifold [16] rather than being suppressed altogether. This pattern has now been observed in a long list of behaviors [1, 6, 16, 14]. In concordance with such naturally occurring variability, experimentally induced perturbations [1, 3, 12] are compensated in a way that maintains task performance rather than a specific stereotypical movement pattern. 
This body of evidence is fundamentally incompatible with standard models of motor coordination that enforce a strict separation between trajectory planning and trajectory execution [2, 8, 17, 10]. In such serial planning/execution models, the role of the planning stage is to resolve the redundancy inherent in the musculo-skeletal system, by replacing the behavioral goal (achievable via infinitely many movement trajectories) with a specific “desired trajectory.” Accurate execution of the desired trajectory guarantees achievement of the goal, and can be implemented with relatively simple trajectory-tracking algorithms. While this approach is computationally viable (and often used in engineering), the numerous observations of task-constrained variability and goal-directed corrections indicate that the online execution mechanisms are able to distinguish, and selectively enforce, the details that are crucial for the achievement of the goal. This would be impossible if the behavioral goal were replaced with a specific trajectory. Instead, these observations imply a very different control scheme, one which pursues the behavioral goal more directly. Efforts to delineate such a control scheme have led to the idea of motor synergies, or high-level “control knobs,” that have invariant and predictable effects on the task-relevant movement parameters despite variability in individual degrees of freedom [9, 11]. But the computational underpinnings of such an approach—how the synergies appropriate for a given task and plant can be constructed, what control scheme is capable of utilizing them, and why the motor system should prefer such a control scheme in the first place—remain unclear. This general form of hierarchical control implies correlations among the control signals sent to multiple actuators (i.e., synergetic coupling) and a corresponding reduction in control space dimensionality.
Such phenomena have indeed been observed [4, 18], but the relationship to the hypothetical functional synergies remains to be established. In this paper we aim to resolve the apparent conflict at the heart of the motor coordination problem, and clarify the relationship between variability, task goals, and motor synergies. We treat motor coordination within the framework of stochastic optimal control, and postulate that the motor system approximates the best possible control scheme for a given task. Such a control scheme will generally take the form of a feedback control law. Whenever the task allows redundant solutions, the initial state of the plant is uncertain, the consequences of the control signals are uncertain, and the movement duration exceeds the shortest sensory-motor delay, optimal performance is achieved by a feedback control law that resolves redundancy moment-by-moment—using all available information to choose the most advantageous course of action under the present circumstances. By postponing all decisions regarding movement details until the last possible moment, this control law takes advantage of the opportunities for more successful task completion that are constantly being created by unpredictable fluctuations away from the average trajectory. Such exploitation of redundancy not only results in higher performance, but also gives rise to task-constrained variability and motor synergies—the phenomena we seek to explain. The present paper is related to a recent publication targeted at a neuroscience audience [14]. Here we provide a number of technical results missing from [14], and emphasize the aspects of our work that are most likely to be of interest to the computational modeling community.
2 The Minimal Intervention principle

Our general explanation of the above phenomena follows from an intuitive property of optimal feedback controllers which we call the “minimal intervention” principle: deviations from the average trajectory are corrected only when they interfere with task performance. If this principle holds, and the noise perturbs the system in all directions, the interplay of the noise and control processes will result in variability which is larger in task-irrelevant directions. At the same time, the fact that certain deviations are not being corrected implies that the corresponding control subspace is not being used—which is the phenomenon typically interpreted as evidence for motor synergies [4, 18]. Why should the minimum intervention principle hold? An optimal feedback controller has nothing to gain from correcting task-irrelevant deviations, because its only concern is task performance and by definition such deviations do not interfere with performance. On the other hand, generating a corrective control signal can be detrimental, because: 1) the noise in the motor system is known to be multiplicative [13] and therefore could increase; 2) the cost being minimized most likely includes a control-dependent effort penalty which could also increase. We now formalize the notions of “redundancy” and “correction,” and show that for a surprisingly general class of systems they are indeed related—as our intuition suggests.

2.1 Local analysis of a general class of optimal control problems

Redundancy is not easy to define. Consider the task of reaching, which requires the fingertip to be at a specified target at some point in time $T$. At time $T$, all arm configurations for which the fingertip is at the target are redundant. But at times different from $T$ this geometric approach is insufficient to define redundancy. Therefore we follow a more general approach. Consider a system with state $x \in \mathbb{R}^{n_x}$, control $u \in \mathbb{R}^{n_u}$, instantaneous scalar cost $\ell(t,x,u)$, and dynamics
$$dx = f(t,x,u)\,dt + F(t,x,u)\,d\omega(t)$$
where $\omega(t)$ is multidimensional standard Brownian motion. Control signals are generated by a feedback control law, which can be any mapping of the form $u = \pi(t,x)$. The analysis below heavily relies on properties of the optimal cost-to-go function, defined as
$$v(t,x) \triangleq \min_{\pi} E\!\left[\int_t^T \ell\big(s, x(s), \pi(s, x(s))\big)\,ds\right]$$
where the minimum is achieved by the optimal control law $\pi^*$. Suppose that in a given task the system of interest (driven by the optimal control law) generates an average trajectory $\bar{x}(t)$. On a given trial, let $\Delta x(t)$ be the deviation from the average trajectory at time $t$. Let $\Delta v$ be the change in the optimal cost-to-go due to the deviation; i.e., $\Delta v(\Delta x) \triangleq v(t, \bar{x}(t) + \Delta x) - v(t, \bar{x}(t))$. Now we are ready to define redundancy: the deviation $\Delta x$ is redundant iff $\Delta v(\Delta x) = 0$. Note that our definition reduces to the intuitive geometric definition at the end of the movement, where the cost function and optimal cost-to-go are identical. To define the notion of “correction,” we need to separate the passive and active dynamics: $f(t,x,u) = a(t,x) + B(t,x)\,u$. The (infinitesimal) expected change in $x$ due to the control $u = \pi^*(t, \bar{x} + \Delta x)$ can now be identified: $\phi \triangleq B(t, \bar{x} + \Delta x)\,\pi^*(t, \bar{x} + \Delta x)$. The corrective action of the control signal is naturally defined as $\mathrm{corr}(\Delta x) \triangleq -\langle \phi, \Delta x \rangle$. In order to relate the quantities $\Delta v(\Delta x)$ and $\mathrm{corr}(\Delta x)$, we obviously need to know something about the optimal control law $\pi^*$. For problems in the above general form, the optimal control law $\pi^*$ is given [7] by the minimum
$$\min_u \left[\, \ell(t,x,u) + f(t,x,u)^\top v_x(t,x) + \tfrac{1}{2}\,\mathrm{trace}\!\left( F(t,x,u)^\top v_{xx}(t,x)\, F(t,x,u) \right) \right]$$
where $v_x(t,x)$ and $v_{xx}(t,x)$ are the gradient and Hessian of the optimal cost-to-go function $v(t,x)$. To be able to minimize this expression explicitly, we will restrict the class of problems to
$$F(t,x,u) = \big[\, C_1(t,x)\,u \;\; \cdots \;\; C_p(t,x)\,u \,\big], \qquad \ell(t,x,u) = q(t,x) + \tfrac{1}{2}\, u^\top R(t,x)\, u.$$
The matrix notation means that the $i^{\text{th}}$ column of $F$ is $C_i(t,x)\,u$. Note that the latter formulation is still very general, and can represent realistic musculo-skeletal dynamics and motor tasks. Using the fact¹ that $F F^\top = \sum_i C_i\, u\, u^\top C_i^\top$ and $\mathrm{trace}(AB) = \mathrm{trace}(BA)$, and eliminating terms that do not depend on $u$, the expression that has to be minimized w.r.t. $u$ becomes
$$u^\top B(t,x)^\top v_x(t,x) + \tfrac{1}{2}\, u^\top M(t,x)\, u, \qquad M(t,x) \triangleq R(t,x) + \sum\nolimits_i C_i(t,x)^\top v_{xx}(t,x)\, C_i(t,x).$$
Therefore the optimal control law is
$$\pi^*(t,x) = -M(t,x)^{-1} B(t,x)^\top v_x(t,x).$$
We now return to the relationship between “redundancy” and “correction.” The time index $t$ will be suppressed for clarity. We expand the optimal cost-to-go to second order: $v(\bar{x} + \Delta x) \approx v(\bar{x}) + \Delta x^\top v_x(\bar{x}) + \tfrac{1}{2}\,\Delta x^\top v_{xx}(\bar{x})\,\Delta x$, also expand its gradient to first order: $v_x(\bar{x} + \Delta x) \approx v_x(\bar{x}) + v_{xx}(\bar{x})\,\Delta x$, and approximate all other quantities as being constant in a small neighborhood of $\bar{x}$. The effect of the control signal becomes $\phi \approx -B(\bar{x})\, M(\bar{x})^{-1} B(\bar{x})^\top \big( v_x(\bar{x}) + v_{xx}(\bar{x})\,\Delta x \big)$. Substituting in the above definitions yields
$$\Delta v(\Delta x) = \big\langle \Delta x,\; v_x(\bar{x}) + \tfrac{1}{2}\, v_{xx}(\bar{x})\,\Delta x \big\rangle, \qquad \mathrm{corr}(\Delta x) = \big\langle \Delta x,\; v_x(\bar{x}) + v_{xx}(\bar{x})\,\Delta x \big\rangle_{B(\bar{x})\, M(\bar{x})^{-1} B(\bar{x})^\top}$$
where the weighted dot-product notation $\langle a, b \rangle_S$ stands for $a^\top S\, b$. Thus both $\Delta v(\Delta x)$ and $\mathrm{corr}(\Delta x)$ are dot-products of the same two vectors. When $v_x(\bar{x}) + v_{xx}(\bar{x})\,\Delta x = 0$—which can happen for infinitely many $\Delta x$ when the Hessian $v_{xx}(\bar{x})$ is singular—the deviation is redundant and the optimal controller takes no corrective action. Furthermore, $\Delta v(\Delta x)$ and $\mathrm{corr}(\Delta x)$ are positively correlated because $B(\bar{x})\, M(\bar{x})^{-1} B(\bar{x})^\top$ is a positive semi-definite matrix². Thus the optimal controller resists single-trial deviations that take the system to more costly states, and magnifies deviations to less costly states. This analysis confirms the minimal intervention principle to be a very general property of optimal feedback controllers, explaining why variability patterns elongated in task-irrelevant dimensions (as well as synergetic actuator coupling) have been observed in such a wide range of experiments involving different actuators and behavioral goals.

2.2 Linear-Quadratic-Gaussian (LQG) simulations

The local analysis above is very general, but it leaves a few questions open: i) what happens when the deviation $\Delta x$ is not small; ii) how does the optimal cost-to-go (which defines redundancy) relate to the cost function (which defines the task); iii) what is the distribution of states resulting from the sequence of optimal control signals? To address such questions (and also build models of specific motor control experiments) we need to focus on a class of control problems for which the optimal control law can actually be found.
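The local analysis can be checked numerically. In the sketch below every matrix is a random stand-in (nothing is fitted to a motor task): we fix a gradient v_x, a positive semi-definite Hessian v_xx, matrices B and C_i, build M = R + Σᵢ Cᵢᵀ v_xx Cᵢ with R = I, and verify that the two quantities of the local analysis, Δv(Δx) and corr(Δx), are positively correlated over random small deviations Δx.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
v_x  = rng.normal(size=n)                       # gradient of cost-to-go at the average state
G    = rng.normal(size=(n, n)); v_xx = G.T @ G  # positive semi-definite Hessian
B    = rng.normal(size=(n, m))                  # active-dynamics matrix
C    = [rng.normal(size=(n, m)) for _ in range(2)]    # multiplicative-noise matrices
M    = np.eye(m) + sum(Ci.T @ v_xx @ Ci for Ci in C)  # M = R + sum_i Ci' v_xx Ci, with R = I
W    = B @ np.linalg.inv(M) @ B.T               # weighting of the corrective action

dv, corr = [], []
for _ in range(2000):
    dx = 0.01 * rng.normal(size=n)              # small deviation from the average trajectory
    g  = v_x + v_xx @ dx                        # first-order expansion of the gradient
    dv.append(dx @ (v_x + 0.5 * v_xx @ dx))     # change in cost-to-go, dv(dx)
    corr.append(dx @ W @ g)                     # corrective action, corr(dx)
rho = np.corrcoef(dv, corr)[0, 1]
print(round(rho, 2))
```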
To that end, we have modified [15] the extensively studied LQG framework to include the multiplicative control noise characteristic of the motor system. The control problems studied here and in the next section are in the form
$$\text{Dynamics:}\quad x_{t+1} = A\,x_t + B\,u_t + \big[\, C_1 u_t \;\cdots\; C_p u_t \,\big]\,\varepsilon_t$$
$$\text{Feedback:}\quad y_t = H\,x_t + \omega_t$$
$$\text{Cost:}\quad x_t^\top Q_t\, x_t + u_t^\top R\, u_t$$
Note that the system state $x_t$ is now partially observable, through noisy sensor readings $y_t$. When the noise is additive instead of being multiplicative, the optimal control problem has the well-known solution [5]
$$u_t = -L_t\,\hat{x}_t, \qquad \hat{x}_{t+1} = A\,\hat{x}_t + B\,u_t + K_t\,(y_t - H\,\hat{x}_t)$$
where $\hat{x}_t$ is an internal estimate of the system state, updated recursively by a Kalman filter. The sequences of matrices $L_t$ and $K_t$ are computed from the associated discrete-time Riccati equations [5]. Multiplicative noise complicates matters, but we have found [15] that for systems with stable passive dynamics a similar control strategy is very close to optimal. The modified equations for $L_t$ and $K_t$ are given in [15]. The optimal cost-to-go function is
$$v_t(x) = x^\top S_t\, x + \text{const}.$$
The Hessian $S_t$ of the optimal cost-to-go is closely related to the task cost $Q_t$, but also includes future task costs weighted by the passive $A$ and closed-loop $A - B L_t$ dynamics. Specific motor control tasks are considered below. Here we generate 100 random problems in the above form, compute the optimal control law in each case, and correlate the quantities $\Delta v$ and corr. As the “dv : corr” curve in Figure 1 shows, they are positively correlated at all times. We also show in Figure 1 that the Hessian of the optimal cost-to-go has similar shape to the task cost (“dv : dq” curve), and that the state covariance is smaller along dimensions where the task cost is larger; i.e., the correlation “dcov : dq” is negative. See the figure legend for details.

Figure 1: [Plot: correlation vs. time step (0–50); curves labeled “dv : dq”, “dv : corr”, “dcov : dq”.] $A$, $B$, $C_i$, $H$ were generated randomly, with the restriction that $A$ has singular values less than 1 (i.e. the passive dynamics is stable); the last component of the state is 1 (for similarity with motor control tasks); $Q_t$ and $R$ are positive semi-definite. For each problem and each point in time $t$, we generated 100 random unit vectors $\Delta x$ and scaled them by mean(sqrt(svd(cov($x_t$)))); the quantities being correlated were then computed for each scaled $\Delta x$ from the definitions in Section 2.1. The notation “dv : dq” stands for the correlation between the $\Delta v$ and $\Delta q$ values, etc.

¹Defining the unit vector $e_i$ as having a 1 in position $i$ and 0 in all other positions, we can write $F = \sum_i C_i\, u\, e_i^\top$. Then $F F^\top = \sum_{i,j} C_i\, u\, e_i^\top e_j\, u^\top C_j^\top = \sum_i C_i\, u\, u^\top C_i^\top$, since $e_i^\top e_j = \delta_{ij}$.

²$R(t,x)$ has to be positive semi-definite—or else we could find a control signal that makes the instantaneous cost negative, and that is impossible by definition. Therefore $B\,M^{-1}B^\top$ is also positive semi-definite.
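The additive-noise special case (u_t = −L_t x̂_t with gains from a discrete-time Riccati recursion) can be sketched as follows. The plant here is a toy double integrator with invented matrices, the state is taken as fully observed (so no Kalman filter), and the multiplicative-noise modification of [15] is omitted:

```python
import numpy as np

def lqr_gains(A, B, Q, R, T):
    """Backward Riccati recursion for the finite-horizon gains of u_t = -L_t x_t."""
    S = Q.copy()                        # terminal cost-to-go Hessian S_T = Q
    gains = []
    for _ in range(T):
        L = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A - B @ L)   # cost-to-go Hessian one step earlier
        gains.append(L)
    return gains[::-1], S               # gains ordered t = 0 .. T-1

# toy double-integrator plant (invented numbers)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.diag([1.0, 0.0])                 # penalize position only
R = np.array([[0.1]])
gains, S0 = lqr_gains(A, B, Q, R, T=50)

# closed-loop rollout from x0 = [1, 0]: position should be driven toward zero
x = np.array([1.0, 0.0])
for L in gains:
    x = A @ x - B @ (L @ x)
print(x)
```

The Hessian S0 returned by the recursion plays the role of S_t above: it mixes the per-step task cost Q with future costs propagated through the closed-loop dynamics A − B L_t.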
Figure 2: Simulations of motor control tasks – see text.

3 Applications to motor coordination

We have used the modified LQG framework to model a wide range of specific motor control tasks [14, 15], and always found that optimal feedback controllers generate variability that is elongated in redundant dimensions. Here we illustrate two such models. The first model (Figure 2, Bimanual Tasks) includes two 1D point masses with positions X1 and X2, each driven with a force actuator whose output is a noisy second-order low-pass filtered version of the corresponding control signal. The feedback contains noisy position, velocity, and force information—delayed by 50 msec (by augmenting the system state with a sequence of recent sensor readings). The “Difference” task requires the two points to start moving 20cm apart, and stop at identical but unspecified locations. The covariance of the final state is elongated in the task-irrelevant dimension: the two points always stop close to each other, but the final location can vary substantially from trial to trial. A related phenomenon has been observed in the more complex bimanual task of inserting a pointer in a cup [6]. We now modify the task: in “Sum,” the two points start at the same location and have to stop so that the midpoint between them is at zero. Note that the state covariance is reoriented accordingly. We also illustrate a Via Point task, where a 2D point mass has to pass through a sequence of two intermediate targets and stop at a final target (tracing an S-shaped curve). Variability is minimal at the via points. Furthermore, when one via point is made smaller (i.e., the weight of the corresponding positional constraint is increased), the variability decreases at that point. Due to space limitations, we refer the reader to [14] for details of the models.
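The variability signature of the “Difference” task can be caricatured without the full LQG machinery: a hand-written feedback law that corrects only the task-relevant coordinate x1 − x2 and ignores the common mode reproduces the elongated covariance. This is an illustration of the minimal intervention idea with invented gains and noise levels, not the optimal controller of [14, 15]:

```python
import numpy as np

rng = np.random.default_rng(2)
finals = []
for _ in range(500):                        # 500 simulated trials
    x1, x2 = -0.1, 0.1                      # start 20 cm apart (units: m)
    for _ in range(100):
        diff = x1 - x2                      # task-relevant coordinate
        u1, u2 = -0.2 * diff, 0.2 * diff    # correct only the difference, never the mean
        x1 += u1 + 0.01 * rng.normal()      # motor noise perturbs both masses
        x2 += u2 + 0.01 * rng.normal()
    finals.append((x1, x2))
finals = np.array(finals)
sd_diff = np.std(finals[:, 0] - finals[:, 1])   # task-relevant spread (small)
sd_mean = np.std(finals[:, 0] + finals[:, 1])   # task-irrelevant spread (large)
print(round(sd_diff, 3), round(sd_mean, 3))
```

Because the common mode is never corrected, it random-walks across trials while the difference is held near zero: the final-state covariance is elongated exactly in the task-irrelevant dimension.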
In [14] we also report a via point experiment that closely matches the predicted effect. 4 Multi-attribute costs and desired trajectory tracking As we stated earlier, replacing the task goal with a desired trajectory (which achieves the goal if executed precisely) is generally suboptimal. A number of examples of such suboptimality are provided in [14]. Here we present a more general view of desired trajectory tracking which clarifies its relationship to optimal control. Desired trajectory tracking can be incorporated in the present framework by using a modified cost, one that specifies a desired state at each point in time, and penalizes the deviations from that state. Such a modified cost would normally include the original task cost (e.g., the terms that specify the desired terminal state), but also a large number of additional terms that do not need to be minimized in order to accomplish the actual task. This raises the question: what happens to the expected values of the terms in the original cost, when we attempt to minimize other costs simultaneously? Intuitively, one would expect the original costs to increase (relative to the costs obtained by the task-optimal controller). The geometric argument below formalizes these ideas, and confirms our intuition.
Consider a family of optimal control problems parameterized by the vector $w$, with cost functions $\ell(t,x,u;w) = \sum_{k=1}^{n} w_k\, \ell_k(t,x,u)$. Here $\ell_k$ are $n$ different component costs, and $w_k$ are the corresponding non-negative weights. Without loss of generality we can assume that $\|w\| = 1$, i.e., the weight vector lies in the positive quadrant $W$ of the unit sphere. Let $\pi_w$ be an optimal control law³, and $c(w) = [c_1(w), \ldots, c_n(w)]$ be the vector of expected component costs achieved by $\pi_w$; i.e., $c_k(w) \triangleq E\big[\int \ell_k(t, x(t), \pi_w(t, x(t)))\,dt\big]$. Consider a weight vector $w$ and its corresponding $c = c(w)$, such that the mapping $c(w)$ is locally smooth and invertible. Then we can define the inverse mapping $w(c)$ from the expected component cost manifold $C$ to the weight manifold $W$, as illustrated in Figure 3. From the definitions of $\ell$ and $c$, the total expected cost achieved by $\pi_w$ is $\langle w(c), c \rangle$. Since $\pi_w$ is an optimal control law for the problem defined by the weight vector $w(c)$, no other control law can achieve a smaller total expected cost, and so $\langle w(c), c \rangle \le \langle w(c), c' \rangle$ for all $c' \in C$. Therefore, if we construct the $(n-1)$-dimensional hyperplane $h(c)$ that contains $c$ and is orthogonal to $w(c)$, the entire manifold $C$ has to lie in the half-space not containing the origin. Thus $h(c)$ is tangent to the manifold $C$ at point $c$, $C$ has non-negative curvature, and the unit vector $n(c)$ which is normal to $C$ at $c$ satisfies⁴ $n(c) = w(c)$. Let $c(s) \in C$, $s \in \mathbb{R}$ be a parametric curve that passes through the point of interest $c$: $c = c(0)$. Define $n(s) \triangleq n(c(s))$ and $w(s) \triangleq w(c(s))$. By differentiating $c(s)$ at $s = 0$ we obtain the tangent $c'$ to the curve $c(s)$ at $c$. Since $n$ is normal to $C$, we have $\langle n, c' \rangle = 0$. Differentiating the latter equality once again yields $\langle n, c'' \rangle + \langle n', c' \rangle = 0$. The non-negative curvature of $C$ implies $\langle n, c'' \rangle \ge 0$; i.e., the tangent $c'$ cannot turn away from the normal $n$ without $C$ crossing the hyperplane $h$. Therefore $\langle n', c' \rangle \le 0$, and since $n = w$, we have $\langle w', c' \rangle \le 0$.

³If we assume that the optimal control law is unique, all inequalities below become strict.
⁴For a general 2D manifold $C$ embedded in $\mathbb{R}^3$, the mapping from $C$ onto the unit sphere that assigns to each point of $C$ its unit normal is known as the Gauss map, and plays an important role in surface differential geometry.

The above result means that whenever we change the weight vector $w$, the corresponding vector $c(w)$ of expected component costs achieved by the (new) optimal control law will change in an “opposite” direction. More precisely, suppose we vary $w$ along a great circle that passes through one of the corners of $W$, say $[1, 0, \ldots, 0]$, so that $w_1$ decreases and all other $w_k$ increase. Then the component cost $c_1(w)$ will increase.

References
[1] Bernstein, N.I. The Coordination and Regulation of Movements. Pergamon Press, (1967). [2] Bizzi, E., Accornero, N., Chapple, W. & Hogan, N. Posture control and trajectory formation during arm movement. J Neurosci 4, 2738-44 (1984). [3] Cole, K.J. & Abbs, J.H. Kinematic and electromyographic responses to perturbation of a rapid grasp. J Neurophysiol 57, 1498-510 (1987). [4] D’Avella, A. & Bizzi, E. Low dimensionality of supraspinally induced force fields. PNAS 95, 7711-7714 (1998). [5] Davis, M.H.A. & Vinter, R. Stochastic Modelling and Control. Chapman and Hall, (1985). [6] Domkin D., Laczko, J., Jaric, S., Johansson, H., & Latash, M. Structure of joint variability in bimanual pointing tasks. Exp Brain Res 143, 11-23 (2002). [7] Fleming, W. and Soner, H. (1993). Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics, Springer-Verlag, Berlin. [8] Flash, T. & Hogan, N. The coordination of arm movements: an experimentally confirmed mathematical model. J Neuroscience 5, 1688-1703 (1985). [9] Gelfand, I., Gurfinkel, V., Tsetlin, M. & Shik, M. In Models of the structural-functional organization of certain biological systems. Gelfand, I., Gurfinkel, V., Fomin, S. & Tsetlin, M. (eds.) MIT Press, 1971. [10] Harris, C.M. & Wolpert, D.M. Signal-dependent noise determines motor planning. Nature 394, 780-784 (1998). [11] Hinton, G.E.
Parallel computations for controlling an arm. Journal of Motor Behavior 16, 171-194 (1984). [12] Robertson, E.M. & Miall, R.C. Multi-joint limbs permit a flexible response to unpredictable events. Exp Brain Res 117, 148-52 (1997). [13] Sutton, G.G. & Sykes, K. The variation of hand tremor with force in healthy subjects. Journal of Physiology 191(3), 699-711 (1967). [14] Todorov, E. & Jordan, M. Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11), 1226-1235 (2002). [15] Todorov, E. Optimal feedback control under signal-dependent noise: Methodology for modeling biological movement. Neural Computation, under review. Available at http://cogsci.ucsd.edu/˜todorov. (2002). [16] Scholz, J.P. & Schoner, G. The uncontrolled manifold concept: Identifying control variables for a functional task. Exp Brain Res 126, 289-306 (1999). [17] Uno, Y., Kawato, M. & Suzuki, R. Formation and control of optimal trajectory in human multijoint arm movement: Minimum torque-change model. Biological Cybernetics 61, 89-101 (1989). [18] Santello, M. & Soechting, J.F. Force synergies for multifingered grasping. Exp Brain Res 133, 457-67 (2000).
A Probabilistic Approach to Single Channel Blind Signal Separation Gil-Jin Jang Spoken Language Laboratory KAIST, Daejon 305-701, South Korea jangbal@bawi.org http://speech.kaist.ac.kr/˜jangbal Te-Won Lee Institute for Neural Computation University of California, San Diego La Jolla, CA 92093, U.S.A. tewon@inc.ucsd.edu Abstract We present a new technique for achieving source separation when given only a single channel recording. The main idea is based on exploiting the inherent time structure of sound sources by learning a priori sets of basis filters in time domain that encode the sources in a statistically efficient manner. We derive a learning algorithm using a maximum likelihood approach given the observed single channel data and sets of basis filters. For each time point we infer the source signals and their contribution factors. This inference is possible due to the prior knowledge of the basis filters and the associated coefficient densities. A flexible model for density estimation allows accurate modeling of the observation and our experimental results exhibit a high level of separation performance for mixtures of two music signals as well as the separation of two voice signals. 1 Introduction Extracting individual sound sources from an additive mixture of different signals has been attractive to many researchers in computational auditory scene analysis (CASA) [1] and independent component analysis (ICA) [2]. In order to formulate the problem, we assume that the observed signal is an addition of independent source signals
$$y^t = \sum_{i=1}^{N} \lambda_i\, x_i^t \qquad (1)$$
where $x_i^t$ is the $t^{\text{th}}$ sampled value of the $i^{\text{th}}$ source signal, and $\lambda_i$ is the gain of each source, which is fixed over time. Note that superscripts indicate sample indices of time-varying signals and subscripts indicate the source identification. The gain constants are affected by several factors, such as powers, locations, directions and many other characteristics of the source generators as well as sensitivities of the sensors. It is convenient to assume all the sources to have zero mean and unit variance. The goal is to recover all $x_i^t$ given only a single sensor input $y^t$. The problem is too ill-conditioned to be mathematically tractable since the number of unknowns is $NT + N$ given only $T$ observations. Several earlier attempts [3, 4, 5, 6] to this problem have been proposed based on the presumed properties of the individual sounds in the frequency domain. ICA is a data driven method which relaxes the strong characteristical frequency structure assumptions. However, ICA algorithms perform best when the number of the observed signals is greater than or equal to the number of sources [2]. Although some recent overcomplete representations may relax this assumption, the problem of separating sources from a single channel observation remains difficult. ICA has been shown to be highly effective in other aspects such as encoding speech signals [7] and natural sounds [8]. The basis functions and the coefficients learned by ICA constitute an efficient representation of the given time-ordered sequences of a sound source by estimating the maximum likelihood densities, thus reflecting the statistical structures of the sources. The method presented in this paper aims at exploiting the ICA basis functions for separating mixed sources from a single channel observation. Sets of basis functions are learned a priori from a training data set and these sets are used to separate the unknown test sound sources.

Figure 1: Generative models for the observed mixture and original source signals. (A) A single channel observation is generated by a weighted sum of two source signals with different characteristics. (B) Individual source signals are generated by weighted ($s_{ik}$) linear superpositions of basis functions ($\mathbf{a}_{ik}$). (C) Examples of the actual coefficient distributions, shown for exponents $q = 0.99, 0.52, 0.26, 0.12$. They generally have more sharpened summits and longer tails than a Gaussian distribution, and would be classified as super-Gaussian. The distributions are modeled by generalized Gaussian density functions in the form of $p(s_{ik}) \propto \exp(-|s_{ik}|^q)$, which provide good matches to the non-Gaussian distributions by varying exponents. From left to right, the exponent decreases, and the distribution becomes more super-Gaussian.
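The role of the exponent q in Figure 1-C can be quantified analytically: for the generalized Gaussian p(s) ∝ exp(−|s|^q), the even moments are gamma-function ratios, E[s^{2k}] = Γ((2k+1)/q)/Γ(1/q), so the excess kurtosis follows in closed form and grows as q decreases. A small check (q = 2 recovers the Gaussian, with zero excess kurtosis):

```python
import math

def excess_kurtosis(q):
    """Excess kurtosis of the generalized Gaussian p(s) proportional to exp(-|s|^q)."""
    m2 = math.gamma(3.0 / q) / math.gamma(1.0 / q)   # E[s^2]
    m4 = math.gamma(5.0 / q) / math.gamma(1.0 / q)   # E[s^4]
    return m4 / (m2 * m2) - 3.0

for q in (2.0, 0.99, 0.52, 0.26, 0.12):
    print(q, round(excess_kurtosis(q), 2))
```

For q = 1 (Laplace) the excess kurtosis is exactly 3, and for the small exponents of Figure 1-C it is far larger, matching the "sharpened summit, long tail" description of the coefficient histograms.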
The algorithm recovers the original auditory streams in a number of gradient-ascent adaptation steps maximizing the log-likelihood of the separated signals, calculated using the basis functions and the probability density functions (pdf’s) of their coefficients—the output of the ICA basis filters. The objective function not only makes use of the ICA basis functions as a strong prior for the source characteristics, but also their associated coefficient pdf’s modeled by generalized Gaussian distributions [9]. Experiments show that the separation of the two different sources was quite successful in simulated mixtures of rock and jazz music, and of male and female speech signals.

2 Generative Models for Mixture and Source Signals

The algorithm first involves the learning of the time-domain basis functions of the sound sources that we are interested in separating from a given training database. This corresponds to the prior information necessary to successfully separate the signals. We assume two different types of generative models in the observed single channel mixture as well as in the original sources. The first one is depicted in Figure 1-A. As described in Equation 1, at every $t \in [1, T]$ the observed instance is assumed to be a weighted sum of different sources. In our approach only the case of $N = 2$ is regarded. This corresponds to the situation defined in Section 1 in that two different signals are mixed and observed in a single sensor. For the individual source signals, we adopt a decomposition-based approach as another generative model. This approach was employed formerly in analyzing sound sources [7, 8] by expressing a fixed-length segment drawn from a time-varying signal as a linear superposition of a number of elementary patterns, called basis functions, with scalar multiples (Figure 1-B). Continuous samples of length $N$ are chopped out of a source, from $t$ to $t + N - 1$, and the subsequent segment is denoted as an $N$-dimensional column vector in a boldface letter, $\mathbf{x}_i^t = [x_i^t \; x_i^{t+1} \; \cdots \; x_i^{t+N-1}]^\top$, attaching the lead-off sample index for the superscript and representing the transpose operator with $\top$. The constructed column vector is then expressed as a linear combination of the basis functions such that
$$\mathbf{x}_i^t = \sum_{k=1}^{M} \mathbf{a}_{ik}\, s_{ik}^t = \mathbf{A}_i\, \mathbf{s}_i^t \qquad (2)$$
where $M$ is the number of basis functions, $\mathbf{a}_{ik}$ is the $k^{\text{th}}$ basis function of source $i$ in the form of an $N$-dimensional column vector, $s_{ik}^t$ its coefficient (weight) and $\mathbf{s}_i^t = [s_{i1}^t \; s_{i2}^t \; \cdots \; s_{iM}^t]^\top$. The r.h.s. is the matrix-vector notation. The second subscript followed by the source index in $s_{ik}^t$ represents the component number of the coefficient vector $\mathbf{s}_i^t$. We assume that $M = N$ and $\mathbf{A}_i$ has full rank so that the transforms between $\mathbf{x}_i^t$ and $\mathbf{s}_i^t$ be reversible in both directions. The inverse of the basis matrix, $\mathbf{W}_i = \mathbf{A}_i^{-1}$, refers to the ICA filters that generate the coefficient vector: $\mathbf{s}_i^t = \mathbf{W}_i\, \mathbf{x}_i^t$. The purpose of this decomposition is to model the multivariate distribution of $\mathbf{x}_i^t$ in a statistically efficient manner. The ICA learning algorithm is equivalent to searching for the linear transformation $\mathbf{W}$ that makes the components as statistically independent as possible, as well as maximizing the marginal densities of the transformed coordinates for the given training data [10],
$$\mathbf{W}^* = \arg\max_{\mathbf{W}} \prod_t p(\mathbf{x}^t) = \arg\max_{\mathbf{W}} \prod_t \prod_k p(s_k^t) \qquad (3)$$
where $p(\cdot)$ denotes the probability of the value of a variable. Independence between the components and over time samples factorizes the joint probabilities of the coefficients into the product of marginal ones. What matters is therefore how well matched the model distribution is to the true underlying distribution of $p(s_{ik}^t)$. The coefficient histogram of real data reveals that the distribution has a highly sharpened point at the peak with a long tail (Figure 1-C). Therefore we use a generalized Gaussian prior [9] that provides an accurate estimate for symmetric non-Gaussian distributions by fitting the exponent $q$ in the set of parameters $\theta = \{q, \mu, \sigma\}$, in its simplest form
$$\tilde{p}(s \mid \theta) \propto \exp\!\left( -\left| \frac{s - \mu}{\sigma} \right|^{q} \right) \qquad (4)$$
where $\mu = E[s]$, $\sigma = \sqrt{E[(s - \mu)^2]}$, and $\tilde{p}(s)$ is a realized pdf of variable $s$ and should be noted distinctively from $p(s)$.
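The pieces above combine into a concrete log-likelihood computation for a segment: transform by the ICA filters W = A⁻¹, sum the generalized Gaussian log densities of the coefficients, and add the log-Jacobian log|det W|. In the sketch below the basis matrix is a random stand-in (in the paper it is learned from training sound), and μ = 0, σ = 1 are assumed for brevity:

```python
import math
import numpy as np

def gg_logpdf(s, q):
    """Log of the normalized generalized Gaussian p(s) = q/(2*Gamma(1/q)) * exp(-|s|^q)."""
    return math.log(q / (2.0 * math.gamma(1.0 / q))) - abs(s) ** q

def segment_loglik(x, A, q=0.8):
    """log p(x) = sum_k log p(s_k) + log|det W|, with s = W x and W = A^{-1}."""
    W = np.linalg.inv(A)
    s = W @ x
    return sum(gg_logpdf(sk, q) for sk in s) + math.log(abs(np.linalg.det(W)))

rng = np.random.default_rng(3)
N = 8
A = rng.normal(size=(N, N))       # stand-in basis matrix (learned by ICA in the paper)
x = A @ rng.laplace(size=N)       # segment generated from sparse (super-Gaussian) coefficients
print(segment_loglik(x, A))
```

This is exactly the quantity the separation algorithm of Section 3 climbs by gradient ascent, summed over all segments and both sources.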
With the generalized Gaussian ICA learning algorithm [9], the basis functions and their individual parameter sets θ_ik are obtained beforehand and used as prior information for the following source separation algorithm.

3 Separation Algorithm

The method is motivated by the pdf approximation property of the ICA transformation (Equation 3). The probability of the source signals is computed from the generalized Gaussian parameters in the transformed domain, and the method performs maximum a posteriori (MAP) estimation in a number of adaptation steps on the source signals so as to maximize the data likelihood. The scaling factors of the generative model are learned as well.

3.1 MAP estimation of source signals

We have demonstrated that the learned basis filters maximize the likelihood of the given data. Suppose we know what kinds of sound sources have been mixed and we are given the sets of basis filters obtained from a training set. Could we infer the learning data? The answer is generally "no" when there are more unknowns than observations and no other information is given. In our problem of single channel separation, half of the solution is already given by the constraint y(t) = λ1 x1(t) + λ2 x2(t) (Equation 1), where the x_i constitute the basis learning data (Figure 1-B). Essentially, the goal of the source inference algorithm presented in this paper is to complement the remaining half with the statistical information given by the sets of coefficient density parameters θ_ik. If the model parameters are given, we can perform MAP estimation simply by optimizing the data likelihood computed from them. At every time point t, a segment x_i^t generates the independent coefficient vector s_i^t = W_i x_i^t. The likelihood of x_i^t is

    p(x_i^t | θ_i) = Π_{k=1}^{N} p(s_ik^t | θ_ik) ,   (5)

where p(s | θ) is the generalized Gaussian density function of Equation 4 and θ_i = {θ_i1, ..., θ_iN} is the parameter group of all the coefficients of source i. Assuming independence over time, the probability of the whole signal x_i is obtained from the marginals of all the possible segments:

    p(x_i | θ_i) = Π_t p(x_i^t | θ_i) = Π_t Π_k p(s_ik^t | θ_ik) ,   (6)

where, for convenience, the segments are taken at every lead-off index t = 1, ..., T - N + 1. The objective function to be maximized is the product of the data likelihoods of both sound sources, and we denote its log by L:

    L = log [ p(x_1 | θ_1) p(x_2 | θ_2) ] = Σ_t Σ_k log p(s_1k^t | θ_1k) + Σ_t Σ_k log p(s_2k^t | θ_2k) .   (7)

Our interest is in adapting x1 and x2, over all samples t ∈ [1, T], toward the maximum of L. We introduce a new variable z_i = λ_i x_i, the source scaled by its contribution factor. The adaptation is done on the values of z_i instead of x_i, in order to infer the sound sources and their contribution factors simultaneously. The learning rule is derived in a gradient-ascent manner by summing up the gradients over all the segments in which the sample lies:
    Δz1(t) ∝ ∂L/∂z1(t) = Σ_τ [ (1/λ1) Σ_k φ(s_1k^τ | θ_1k) w_1k(t-τ) - (1/λ2) Σ_k φ(s_2k^τ | θ_2k) w_2k(t-τ) ] ,   (8)

where τ ranges over the lead-off indices of the segments containing sample t, φ(s | θ) = ∂ log p(s | θ)/∂s is the score function of the generalized Gaussian prior, and w_ik(t-τ) denotes the component of the k-th filter of source i that multiplies sample t of the segment starting at τ. The rule follows from ∂s_ik^τ/∂x_i(t) = w_ik(t-τ), ∂x_i(t)/∂z_i(t) = 1/λ_i, and, from the constraint z1(t) + z2(t) = y(t), ∂z2(t)/∂z1(t) = -1. Note that the gradient with respect to z2 satisfies ∂L/∂z2(t) = -∂L/∂z1(t), so the condition z1 + z2 = y always remains satisfied, and the learning rule on either z1 or z2 subsumes its counterpart. The overall process of the proposed method is summarized as 4 steps in Figure 2. The figure shows one iteration of the adaptation of each sample.
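The four steps of Figure 2 can be sketched as follows. This is a simplified, single-segment version (the hypothetical helper `adaptation_step`), assuming zero-mean, unit-variance generalized Gaussian priors and omitting the overlapping-segment sums of the full learning rule.

```python
import numpy as np

def adaptation_step(x1, x2, W1, W2, lam1, lam2, q1, q2, eta=0.01):
    """One simplified adaptation step in the spirit of Figure 2.

    W1, W2 are the ICA filter matrices (inverse bases); q1, q2 are the
    generalized Gaussian exponents of each source's coefficients.
    """
    def score(s, q, eps=1e-8):
        # phi(s) = d/ds log p(s) for p(s) proportional to exp(-|s|^q)
        return -q * np.sign(s) * (np.abs(s) + eps) ** (q - 1.0)

    # (A) pass the current estimates through the basis filters -> sparse codes
    s1, s2 = W1 @ x1, W2 @ x2
    # (B) stochastic gradient of the log-likelihood of each code
    g1, g2 = score(s1, q1), score(s2, q2)
    # (C) transform the gradients back to the source domain
    dx1, dx2 = W1.T @ g1, W2.T @ g2
    # (D) combine: moving along d = lam2*dx1 - lam1*dx2 leaves
    # lam1*x1 + lam2*x2 unchanged, so the mixture constraint holds
    d = lam2 * dx1 - lam1 * dx2
    x1 = x1 + eta * lam2 * d
    x2 = x2 - eta * lam1 * d
    return x1, x2
```

The step direction is projected so that the weighted sum of the two estimates, and hence the observed mixture, is preserved exactly at every iteration.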
Figure 2: The overall structure of the proposed method. We are given the single channel data y, and we have the estimates of the source signals, x̂_i, at every adaptation step. (A) x̂_i → s_ik: at each timepoint, the current estimates of the source signals are passed through the basis filters, generating sparse codes s_ik that are statistically independent. (B) s_ik → Δs_ik: the stochastic gradient for each code is obtained by taking the derivative of the log-likelihood. (C) Δs_ik → Δx̂_i: the gradient is transformed back to the source domain. (D) The individual gradients are combined and added to the current estimates of the source signals.

3.2 Estimating the contribution factors

Updating the contribution factors can be accomplished by simply finding their maximum a posteriori values. To simplify the inference steps, we force the sum of the factors to be constant, e.g. λ1 + λ2 = 1. Then λ2 is completely dependent on λ1 as λ2 = 1 - λ1, and we need to consider λ1 only. Given the basis functions and the current estimates of the sources x̂_i, the posterior probability of λ1 is
A . Then is completely dependent on as A 9 , and we need to consider only. Given the basis functions and the current estimate of the sources , the posterior probability of is ' :
0 2 ' : 0 '
: 0 , E 0 (9) where , E - 0 is the prior density function of . The value of maximizing the posterior probability also maximizes its log, ! 6 E H 8
, E 0 : (10) where is the log-likelihood of the estimated sources defined in Equation 7. Assuming that is uniformly distributed, ! 8
) , E 0 :U ! ! WU ! , which is calculated as ! ! 9 d
d where d
) 2( ( ) 0e )gf (11) derived by the chain rule ! ) ,.-/( *) 0 ! ! ) ,.-/( ) 0 ! ( *) ! ( ) ! ( ( ) 0 e ) f ih 9 A kj (12) Solving equation ! WU ! ml subject to
A and = ? l A C gives > : d : > : d :
> : d : > : d : > : d :
> : d : (13) These values guarantee the local maxima of w.r.t. the current estimates of source signals. The algorithm updates the contribution factors periodically during the learning steps. (a) Rock music (b) Jazz music (c) Male speech (d) Female speech -2 0 2 0 10 20 30 q=0.29 -2 0 2 0 5 10 15 q=0.34 -2 0 2 0 5 10 q=0.36 -2 0 2 0 5 10 q=0.36 -5 0 5 0 2 4 6 q=0.41 -2 0 2 0 20 40 60 q=0.26 -5 0 5 0 10 20 30 40 q=0.26 -2 0 2 0 5 10 15 20 q=0.30 -2 0 2 0 10 20 30 q=0.29 -2 0 2 0 10 20 30 q=0.29 -2 0 2 0 0.5 1 1.5 2 q=0.61 -2 0 2 0 0.5 1 q=0.82 -5 0 5 0 0.5 1 q=0.80 -5 0 5 0 1 2 3 4 q=0.47 -5 0 5 0 1 2 3 q=0.53 -5 0 5 0 2 4 6 q=0.43 -5 0 5 0 0.5 1 1.5 q=0.64 -5 0 5 0 0.2 0.4 0.6 0.8 q=1.19 -5 0 5 0 5 10 15 q=0.34 -5 0 5 0 0.5 1 1.5 q=0.78 Signal Basis Functions Coef’s PDF Signal Basis Functions Coef’s PDF Figure 3: Waveforms of four sound sources, examples of the learned basis functions (5 were chosen out of 64), and the corresponding coefficient distributions modeled by generalized Gaussians. The full set of basis functions is available at the website also. 0 1000 2000 3000 4000 0 10 20 Average Powerspectrum Frequency (Hz) Rock Jazz Male Female Figure 4: Average powerspectra of the 4 sound sources. Frequency scale ranges in 0 4kHz ( -axis), since all the signals are sampled at 8kHz. The powerspectra are averaged and represented in the -axis. 4 Experiments and Discussion We have tested the performance of the proposed method on the single channel mixtures of four different sound types. They were monaural signals of rock and jazz music, male and female speech. We used different sets of speech signals for learning basis functions and for generating the mixtures. For the mixture generation, two sentences of the target speakers ‘mcpm0’ and ‘fdaw0’, one for each, were selected from the TIMIT speech database. The training set consisted of 21 sentences for each gender, 3 for each of randomly chosen 7 males (or females) from the same database excluding the 2 target speakers. 
Rock music was composed mainly of guitar and drum sounds, and the jazz was generated by a wind instrument. The vocal parts of both music sounds were excluded. All signals were downsampled to 8kHz from the original 44.1kHz (music) and 16kHz (speech) data. The training data were segmented into 64 samples (8ms) starting at every sample. Audio files for all the experiments are accessible at the website.¹

Figure 3 displays the actual sources, the adapted basis functions, and their coefficient distributions. The music basis functions exhibit consistent amplitudes with harmonics, and the speech basis functions are similar to Gabor wavelets. Figure 4 compares the 4 sources by their average spectra. Each covers all the frequency bands, although they differ in amplitude; one would not expect simple filtering or masking to separate the mixed sources cleanly.

Before the actual separation, the source signal estimates were initialized to the value of the mixture signal, x̂_1 = x̂_2 = y, and the initial contribution factors were set to λ1 = λ2 = 0.5 to satisfy Equation 1. The adaptation was repeated for more than 300 steps on each sample, and the scaling factors were updated every 10 steps. Table 1 reports the signal-to-noise ratios (SNRs) of the mixed signal (y) and of the recovered results (x̂_i) with respect to the original sources (x_i). In terms of total SNR increase, the mixtures containing music were recovered more cleanly than the male-female mixture. Separation of jazz music and male speech was the best; the waveforms are illustrated in Figure 5.

¹ http://speech.kaist.ac.kr/~jangbal/ch1bss/

Figure 5: Separation result for the mixture of jazz music and male speech. In vertical order: the original sources (x1 and x2), the mixed signal (x1 + x2), and the recovered signals (x̂1 and x̂2); time axis 2.5-4 sec.

We conjecture from the average spectra of the sources in Figure 4 that, although there is plenty of overlap between jazz and speech, their structures are dissimilar (the frequency components of jazz change less over time), so we were able to obtain relatively good SNR results. Rock music, by contrast, exhibits a scattered spectrum and less characteristic structure, which explains the relatively poorer performance on the rock mixtures.

It is very difficult to compare a separation method with other CASA techniques, because their approaches differ in so many ways that an optimal tuning of their parameters would be beyond the scope of this paper. However, we compared our method with Wiener filtering [4], which provides the optimal masking filters in the frequency domain if the true spectrogram is given; that is, we assumed that the other source was completely known. The filters were computed over blocks of 8 ms (64 samples), 0.5 sec, and 1.0 sec. Our blind results were comparable in SNR with the results obtained when the Wiener filters were computed at 0.5 sec.

In summary, our method has several advantages over traditional approaches to signal separation, which involve either spectral techniques [5, 6] or time-domain nonlinear filtering techniques [3, 4]. Spectral techniques assume that the sources are disjoint in the spectrogram, which frequently results in audible distortions of the signal in the regions where the assumption fails. Recent time-domain filtering techniques are based on splitting the whole signal space into several disjoint subspaces. Although they overcome the limits of the spectral representation, they consider second-order statistics only, such as autocorrelation, which restricts the separable cases to orthogonal subspaces [4]. Our method avoids these strong assumptions by utilizing a prior set of basis functions that captures the inherent statistical structures of the source signals.
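The oracle Wiener baseline used for comparison can be sketched as follows; this is a simplification (block-wise FFT masking with no windowing or overlap), and the helper name `wiener_masks` is hypothetical.

```python
import numpy as np

def wiener_masks(x1, x2, block=64):
    """Oracle Wiener masking baseline: recover x1 from the mixture x1+x2,
    assuming both true sources are known; the filter is recomputed every
    `block` samples, mirroring the block sizes compared in Section 4."""
    n = (len(x1) // block) * block
    y = (x1 + x2)[:n]
    est1 = np.empty(n)
    for start in range(0, n, block):
        sl = slice(start, start + block)
        S1 = np.fft.rfft(x1[sl])
        S2 = np.fft.rfft(x2[sl])
        # MSE-optimal frequency-domain mask given the true spectra
        mask = np.abs(S1) ** 2 / (np.abs(S1) ** 2 + np.abs(S2) ** 2 + 1e-12)
        est1[sl] = np.fft.irfft(mask * np.fft.rfft(y[sl]), block)
    return est1
```

Because the mask requires the true spectra of both sources, this baseline is an upper-bound reference rather than a competing blind method.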
This generative model therefore makes use of spectral and temporal structures at the same time. The constraints are dictated by the ICA algorithm, which forces the basis functions to yield an efficient representation, i.e. linearly independent source coefficients; both the basis functions and their corresponding pdfs are key to obtaining a faithful MAP-based inference algorithm. An important question is how well the training data has to match the test data. We also performed experiments with the set of basis functions learned from the test sounds, and the SNR decreased on average by 1dB.

Table 1: SNR results. R, J, M, F stand for rock music, jazz music, male speech, and female speech. All values are measured in dB. The 'Mix' column lists the two sources mixed into y; for each source i, snr_y,i is the SNR of the mixed signal y and snr_x̂,i the SNR of the recovered source x̂_i, both measured against the original source x_i; 'Total inc.' is the total SNR increase.

    Mix     snr_y,1   snr_x̂,1   snr_y,2   snr_x̂,2   Total inc.
    R + J    -3.7       3.3       3.7       7.0       10.3
    R + M    -3.7       3.1       3.7       6.8        9.9
    R + F    -3.9       2.2       3.9       6.1        8.3
    J + M     0.1       5.6      -0.1       5.5       11.1
    J + F    -0.1       5.1       0.1       5.3       10.4
    M + F    -0.2       2.5       0.2       2.7        5.2

5 Conclusions

We presented a technique for single channel source separation utilizing time-domain ICA basis functions. Instead of traditional prior knowledge of the sources, we exploited the statistical structures of the sources that are inherently captured by the basis functions and their coefficients learned from a training set. The algorithm recovers the original sound streams through gradient-ascent adaptation steps pursuing the maximum likelihood estimate, constrained by the parameters of the basis filters and the generalized Gaussian distributions of the filter coefficients. The separation results demonstrate that the proposed method is applicable to real world problems such as blind source separation, denoising, and the restoration of corrupted or lost data. Our current research includes the extension of this framework to model comparison, i.e. estimating which set of basis functions to use given a dictionary of basis functions. This is achieved by applying a variational Bayes method to compare different basis function models and select the most likely source; this method also allows us to cope with other unknown parameters, such as the number of sources. Future work will address the optimization of the learning rules towards real-time processing and the evaluation of this methodology on speech recognition tasks in noisy environments, such as the AURORA database.

References

[1] G. J. Brown and M.
Cooke, "Computational auditory scene analysis," Computer Speech and Language, vol. 8, no. 4, pp. 297-336, 1994.
[2] P. Comon, "Independent component analysis, a new concept?," Signal Processing, vol. 36, pp. 287-314, 1994.
[3] E. Wan and A. T. Nelson, "Neural dual extended Kalman filtering: Applications in speech enhancement and monaural blind signal separation," in Proc. of IEEE Workshop on Neural Networks and Signal Processing, 1997.
[4] J. Hopgood and P. Rayner, "Single channel signal separation using linear time-varying filters: Separability of non-stationary stochastic signals," in Proc. ICASSP, vol. 3, (Phoenix, Arizona), pp. 1449-1452, March 1999.
[5] S. T. Roweis, "One microphone source separation," Advances in Neural Information Processing Systems, vol. 13, pp. 793-799, 2001.
[6] S. Rickard, R. Balan, and J. Rosca, "Real-time time-frequency based blind source separation," in Proc. of International Conference on Independent Component Analysis and Signal Separation (ICA2001), (San Diego, CA), pp. 651-656, December 2001.
[7] T.-W. Lee and G.-J. Jang, "The statistical structures of male and female speech signals," in Proc. ICASSP, (Salt Lake City, Utah), May 2001.
[8] A. J. Bell and T. J. Sejnowski, "Learning the higher-order structures of a natural sound," Network: Computation in Neural Systems, vol. 7, pp. 261-266, July 1996.
[9] T.-W. Lee and M. S. Lewicki, "The generalized Gaussian mixture model using ICA," in International Workshop on Independent Component Analysis (ICA'00), (Helsinki, Finland), pp. 239-244, June 2000.
[10] B. Pearlmutter and L. Parra, "A context-sensitive generalization of ICA," in Proc. ICONIP, (Hong Kong), pp. 151-157, September 1996.
A Digital Antennal Lobe for Pattern Equalization: Analysis and Design

Alex Holub, Gilles Laurent and Pietro Perona
Computation and Neural Systems, California Institute of Technology
holub@caltech.edu, laurentg@caltech.edu, perona@caltech.edu

Abstract

Re-mapping patterns in order to equalize their distribution may greatly simplify both the structure and the training of classifiers. Here, the properties of one such map, obtained by running a few steps of a discrete-time dynamical system, are explored. The system is called a 'Digital Antennal Lobe' (DAL) because it is inspired by recent studies of the antennal lobe, a structure in the olfactory system of the grasshopper. The pattern-spreading properties of the DAL, as well as its average behavior as a function of its (few) design parameters, are analyzed by extending previous results of Van Vreeswijk and Sompolinsky. Furthermore, a technique for adapting the parameters of the initial design in order to obtain opportune noise-rejection behavior is suggested. Our results are demonstrated with a number of simulations.

1 Introduction

The complexity of classifiers and the difficulty of learning their parameters are affected by the distribution of the input patterns. It is easier to obtain simple and accurate classifiers when the patterns associated with different classes are spaced far apart and evenly in the input space. Distributions which are lumpy, with classes bunched up in some regions of space while leaving other regions empty, may be more difficult to classify. This problem is particularly evident in sensory processing. In olfaction, numerous odors which we wish to discriminate are chemically very similar, for example the citrus family (orange, lemon, lime, ...), while many odors that are in principle possible never occur in practice. The uneven chemical spacing of the odors of interest is expensive: in biological systems there is a premium on the simplicity of the classifiers that will recognize each individual odor.
When the dimension of the pattern space is large (e.g. D > 100) and the number of classes to be discriminated is relatively small (e.g. N < 1000), one may transform an uneven distribution of patterns into an evenly distributed one by means of a map that 'randomizes' the position of each pattern, i.e. that takes (small) neighborhoods of the input space and remaps them to random locations. In large-dimensional spaces it is exceedingly likely that two contiguous regions will be remapped to locations whose distance is comparable with the diameter of the space, and thus the distribution of patterns is equalized. We explore a simple dynamical system which realizes one such map for spreading patterns in a high-dimensional space. The input space is the analog D-dimensional hypercube (0,1)^D and the output space the digital hypercube {0,1}^D. The map is implemented by iterating a discrete-time first-order dynamical system consisting of two steps at each iteration: a first-order linear dynamical system followed by memoryless thresholding. The interest of the map is that it makes very parsimonious use of computational hardware (e.g. on the order of D neurons or transistors) and yet achieves good equalization in a few time steps. The ideas that we present are inspired by a computation that may take place in the olfactory system, as suggested in Friedrich and Laurent [1] and Laurent [2, 3]. In insects, the anatomical structure where this computation is presumed to take place is called the 'Antennal Lobe'. Because of this we call the map a 'Digital Antennal Lobe' (DAL).

2 The digital antennal lobe

The dynamical system we propose is inspired by the overall architecture of the antennal lobe and is designed to explore its computational capabilities.
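The equalization claim above, that independently randomized points in {0,1}^D end up at mutual distances comparable to the diameter of the space, is easy to check numerically; a small sketch (helper name hypothetical):

```python
import numpy as np

def mean_pairwise_hamming(D=1024, n=20, seed=0):
    """Mean normalized Hamming distance between n random points of the
    binary hypercube {0,1}^D; concentrates near 0.5, i.e. a distance
    comparable to the diameter of the space."""
    rng = np.random.default_rng(seed)
    pts = rng.integers(0, 2, size=(n, D))
    dists = [np.sum(pts[i] != pts[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists)) / D
```

In low dimension the pairwise distances fluctuate widely, but for D in the hundreds they concentrate tightly around D/2, which is what makes the randomizing remap an effective equalizer.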
We apply two key simplifications: we discretize time into equally spaced 'epochs', updating the state of all the neurons in the network synchronously at each epoch, and we discretize the value of the state of each unit to the binary set {0, 1}. The physiological justification for these simplifications goes beyond the scope of this paper. Consider a collection of N binary neurons which are randomly connected and updated synchronously. The network is initially quiescent (i.e. all the neurons have constant state zero). At some time an input is applied, causing the network to take values that are different from zero. The state of the network evolves in time. The state of the network after a given constant number of time-steps (e.g. 10-20 time-steps) is the desired output of the system. Let us introduce the following notation:

    N_E, N_I, N_U     Number of excitatory, inhibitory, and external input units.
    N                 Total number of excitatory and inhibitory units (N = N_E + N_I).
    i                 Neuron index: i ∈ {1, ..., N_E} for excitatory and i ∈ {N_E + 1, ..., N} for inhibitory units.
    x_i^t ∈ {0, 1}    Value of unit i at time t.
    x^t               Vector of values of all excitatory and inhibitory units at time t.
    c                 Connectivity: cN is the number of inputs to a given neuron.
    K_E, K_I, K_U     Excitatory, inhibitory, and external input counts (i.e. K_E = cN_E).
    A                 Matrix of connections. A has cN^2 nonzero entries.
    A_ij              Connection weight of unit j to unit i.
    a_E, a_I, a_U     Excitatory, inhibitory, and input weights (A_ij ∈ {a_I, 0, a_E}).
    T                 Activation thresholds for all the neurons.
    u^t               Vector of pattern inputs.
    B                 Matrix of excitatory connections from the pattern inputs to the units.
    y^t               Vector of neuronal input currents, i.e. y^{t+1} = A x^t + B u^t - T.
    x^t = 1(y^t)      Update equation for x. 1(·) is the Heaviside function.
    m_t               Mean activity in the network at time t, i.e. m_t = Σ_i x_i^t / N.
    m_U               Fraction of the external inputs which are active.

A DAL may be generated once the values of 5 parameters are chosen. Assume excitatory connection weight a_E = a_U = 1 (this is a normalization constant).
Choose values for a_I, c, T, N_I, N_E. Generate random connection matrices A and B with average connectivity c and connection weights a_E, a_I. Solve the following dynamical system forward in time from a zero initial condition:

    y^t = A x^{t-1} + B u^t - T,  t > 0     (neuronal input)
    x^t = 1(y^t)                            (state update)
    x^0 = 0                                 (zero initial condition)

for some (constant) input pattern u^t.

Figure 1: Example of pattern spreading by a DAL. (Left) Response of a DAL to 10 uniformly distributed random olfactory input patterns applied at time epoch t = 3. Each vertical panel represents the state of the excitatory units at a given time epoch (epochs 2, 4, 8, 10 and excitatory units 1-200 are shown) in response to all stimuli. Within a panel the row index refers to a given excitatory unit and the column index to a given input pattern (200 of 1024 excitatory units shown and 10 input patterns). A white dot represents a state of '1' and a dark dot a state of '0'. Around 10% of the neurons are active (i.e. state = '1') by the 8th time-epoch. The salt-and-pepper pattern present in each panel indicates that the excitatory units respond differently to each input pattern. (Center) Activity of the DAL in response to 10 stimuli that differ only in one out of 1024 input dimensions, i.e. 0.1%. The horizontal streaks in the panels corresponding to early epochs (t = 4 and t = 6) indicate that the excitatory units respond equally or similarly to all input patterns. The salt-and-pepper pattern in later epochs indicates that the time course of each excitatory unit's state becomes increasingly different in time. (Right) Time-course of the normalized average distance between the patterns corresponding to different families of input patterns: the red curve corresponds to input patterns that are very different (average difference 20%), while the green and blue curves correspond to families of similar input patterns: 0.1% average difference for the green curve and 0.2% for the blue curve. The parameters used in this network were a_I = 10, c = .05, T = 10, N_E = 1024, N_I = 256.

The overall behavior of the DAL in response to different olfactory inputs is illustrated in Figure 1. Notice the main features of the DAL. (1) In response to an input, each unit exhibits a complex temporal pattern of activity. (2) The pattern is different for different inputs. (3) The average activity rate of the neurons is approximately independent of the input pattern. (4) When very different input patterns are applied, the average normalized Hamming distance between excitatory unit states is almost maximal immediately after the onset of the input stimulus. (5) When very similar input patterns are applied (e.g. 0.1% average difference), the average normalized Hamming distance between excitatory unit patterns is initially very small, i.e. initially the excitatory units respond similarly to similar inputs. The difference increases with time and reaches an almost maximal value within 8-9 time-epochs. The 'chaotic' properties of sparsely connected networks of neurons were noticed and studied by Van Vreeswijk and Sompolinsky [5] in the limit of infinitely many neurons. In this paper we study networks with a small number of neurons, comparable to the number observed within the antennal lobe.
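A minimal NumPy sketch of the update x^t = 1(A x^{t-1} + B u^t - T) with Figure-1-style parameters; the particular random sparse connectivity, the all-excitatory external connections, and the helper name `run_dal` are assumptions for illustration.

```python
import numpy as np

def run_dal(u, a_I=10.0, c=0.05, T=10.0, NE=1024, NI=256, steps=10, seed=0):
    """Simulate the DAL for `steps` epochs and return the excitatory state.

    Weights are +1 from excitatory units and -a_I from inhibitory units;
    each entry of A and B is nonzero with probability c.
    """
    rng = np.random.default_rng(seed)
    N = NE + NI
    # signed weights by presynaptic population: columns index the sender
    weights = np.concatenate([np.ones(NE), -a_I * np.ones(NI)])
    A = (rng.random((N, N)) < c) * weights[None, :]
    B = (rng.random((N, len(u))) < c).astype(float)  # external inputs excitatory
    x = np.zeros(N)
    for _ in range(steps):
        x = ((A @ x + B @ u - T) > 0).astype(float)  # Heaviside threshold
    return x[:NE]
```

Feeding two inputs that differ in a single dimension and comparing the resulting states epoch by epoch reproduces the qualitative divergence behavior described in Figure 1, for suitable parameter choices.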
Additionally, we propose a technique for the design of such networks, and demonstrate the possibility of 'stabilizing' some trajectories by parameter learning.

2.1 Analytic solution and equilibrium of the network

The use of simplified neural elements, namely McCulloch-Pitts units [4], allows us to represent the system as a simple discrete time dynamical system. Furthermore, we are able to derive expressions for various network properties. Several distributions can be used to approximate the number of active units in the populations of excitatory, inhibitory, and external units, including: (1) the binomial distribution, (2) the Poisson distribution, and (3) the Gaussian distribution. An approximation common to all three is that the activities of all units are uncorrelated. The Gaussian approximation yields Van Vreeswijk and Sompolinsky's analysis [5]. Given the population activity at time t, m_t, we can calculate the expected value of the population activity at the next time step, m_{t+1}:

    E(m_{t+1}) = Σ_{e=0}^{K_E} Σ_{i=0}^{K_I} Σ_{u=0}^{K_U} p(e) p(i) p(u) · 1(a_E e + a_I i + a_U u - T)

where p(e), p(i), and p(u) are the probabilities of e excitatory, i inhibitory, and u external inputs being active. Both e and i are binomially distributed with mean activity m = m_t, while the external input is binomially distributed with mean activity m = m_U. The Poisson distribution can be used to approximate the binomial distribution for reasonable values of λ, where for instance λ_e = K_E m_t; under this approximation the probability of j units being active is p(j) = λ^j e^{-λ} / j!. In the limit as N → ∞, the distributions of the numbers of active excitatory, inhibitory, and external units approach normal distributions. Since the sum of Gaussian random variables is itself a Gaussian random variable, we can model the net input to a unit as the sum of the excitatory, inhibitory, and external inputs shifted by a constant representing the threshold.
The mean μ and variance σ² of the Gaussian representing the input to an individual unit are then:

    μ = a_E m_t K_E + a_I m_t K_I + a_U m_U K_U - T
    σ² = N_E [a_E² m_t c - a_E² c² m_t] + N_I [a_I² m_t c - a_I² c² m_t] + N_U [a_U² m_U c - a_U² c² m_U]

The fraction of active units can then be determined from the area under the Gaussian corresponding to positive cumulative input, E(m_{t+1}) = P(y > 0) = Φ(μ/σ), where Φ is the standard normal cdf. The predicted population mean activity is calculated by imposing that the system is at equilibrium; the equilibrium condition is satisfied when m_t = m_{t+1}.

Figure 2: Design of a DAL. (Left) Behavior of the system for a given connectivity value. Light gray indicates inhibition-threshold values that yield a stable dynamical system; that is, small perturbations of firing activity do not result in large fluctuations in activity later in time. The dark blue line indicates equilibria, i.e. inhibition-threshold values for which the dynamical system rests at a constant mean firing rate. (Center) The stable portions of the equilibrium curves for a number of connectivity values. Using this chart one may design an antennal lobe: for any given connectivity, choose inhibition and threshold values that produce a desired mean firing rate. (Right) The design procedure produces networks that behave as desired. The arrows indicate parameter sets for which Monte Carlo simulations were performed in order to test the accuracy of the predictions. The values indexing the arrows give the absolute difference between the predicted activity (.15), obtained with the binomial approximation, and the mean simulated activity across 10 random inputs to 10 different networks with the specified parameter sets.

We found the binomial approximation to yield the most accurate predictions in the parameter ranges of interest to us, namely 500-4000 total units and connectivities ranging from .05 to .15 (see Figure 2). The binomial approximation was always within 1 standard deviation of the Monte Carlo means.
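Under the binomial approximation, the triple sum for E[m_{t+1}] can be evaluated directly. A sketch, with the hypothetical helper name `expected_activity`; here a_I is passed as a positive magnitude and subtracted, matching the convention that inhibitory weights are negative.

```python
import math

def expected_activity(m_t, m_u, KE, KI, KU, a_I, T, a_E=1.0, a_U=1.0):
    """E[m_{t+1}]: probability that a unit's net input exceeds threshold,
    summing over counts of active excitatory (e), inhibitory (i), and
    external (u) inputs, each binomially distributed."""
    def binom_pmf(k, n, p):
        return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

    total = 0.0
    for e in range(KE + 1):
        pe = binom_pmf(e, KE, m_t)
        for i in range(KI + 1):
            pi = binom_pmf(i, KI, m_t)
            for u in range(KU + 1):
                # Heaviside of net input (inhibition enters with minus sign)
                if a_E * e - a_I * i + a_U * u > T:
                    total += pe * pi * binom_pmf(u, KU, m_u)
    return total
```

Iterating m ← expected_activity(m, ...) to a fixed point gives the equilibrium condition m_t = m_{t+1} used in the design charts of Figure 2; the triple loop is O(K_E·K_I·K_U), which is cheap for the in-degrees considered here.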
The Gaussian approximation yielded slightly less accurate predictions but required a fraction of the time to compute.

3 Design of the Antennal Lobe

The analysis described above allows us to design well behaved DALs. Specifically, we can predict which subsets of parameters in a given parameter range yield good network behavior. These predictions are made by solving the update equation for multiple sets of parameters and then determining which parameter ranges yield networks that are both stable and at equilibrium. Figure 2 outlines the design technique for a network of 512 excitatory and 512 inhibitory units and a population mean activity of .15. The predicted activity of the network for different parameter sets corresponds well with that observed in Monte Carlo simulations; there is an average difference of .0061 between the predicted mean activity and that found in the simulations (see Figure 2, right plot).

4 Learning for trajectory stabilization

Consider a 'physical' implementation of the DAL, either by means of neurons in a biological system or by transistors in an electronic circuit. The inevitable presence of noise points to a fatal flaw of the DAL as we have seen it so far. The key property of the DAL is input decorrelation: in the presence of noise, the same input applied multiple times to the same network will produce divergent trajectories, hence different final conditions, making the use of DALs for pattern classification problematic. Suppose noise is present in the system, as a result of fluctuations in the level of the input u^t, fluctuations in the biophysical properties of the neurons, etc. We may represent this noise as an additional term ν^t in the dynamical system:

    y^t = A x^{t-1} + B u^t - T
    x^t = 1(y^t + ν^t)

Whatever the statistics of the noise, it is clear that it may influence the trajectory x of the dynamical system.
Indeed, if y_i^t, the nominal input to a neuron, is sufficiently close to zero, then even a small amount of noise may change the state x_i^t of that neuron. As we saw in earlier sections, this implies that the ensuing trajectory will diverge from the trajectory of the same system with the same inputs and no noise, or with the same inputs and a different realization of the same noise process. This is shown in the left panel of Figure 3. On the other hand, if y_i^t is far from zero, then x_i^t will not change even with large amounts of noise. This raises the possibility that, if a DAL is appropriately designed, it may exhibit a high degree of robustness to noise. Ideally, for any given initial condition and input, and for any ε, there would exist a constant y_0 > 0 such that any initial condition and input in a y_0-ball around the original ones produce trajectories that differ at most by ε. Clearly, if ε = 0 (i.e. the trajectory is required to be identical to that of the noiseless system) then all trajectories of the system must coincide, which is not very useful. Similarly, if ε ≪ y_0 the map will not spread different inputs. Therefore, this formulation of the problem does not have a satisfactory solution. One may, however, consider a weaker requirement. If the total number of patterns to be discriminated is not too large (probably 10-1000 in the case of olfaction), one could think of requiring noise robustness only for the trajectories x that are specific to those patterns. We therefore explored whether it was in principle possible to stabilize the trajectories corresponding to different odor presentations rather than all trajectories. We wish to change the connection weights A, B and thresholds T so that the network is robust with respect to noise around a given trajectory x(u). In order to achieve this we wish to ensure that at no time t does neuron i have an input close to the threshold. If neuron i is not firing at time t (i.e.
x_i^t = 0) then its input must be comfortably less than zero (i.e. for some constant y_0 > 0, y_i^t < -y_0) and vice versa for x_i^t = 1. We do so by minimizing an appropriate cost function: call g(.) an appropriate penalty function, e.g. g(y) = exp(y/y_0); then the cost of neuron i at time t if x_i^t = 0 is C_i^t = g(y_i^t), and if x_i^t = 1 then C_i^t = g(-y_i^t). Therefore: C_i^t = g((1 - 2x_i^t) y_i^t), C(A,B,T) = Σ_t Σ_i C_i^t. The minimization may proceed by gradient descent. The equations for the gradient follow from the chain rule: ∂C_i^t/∂A_ij = g'((1 - 2x_i^t) y_i^t) (1 - 2x_i^t) ∂y_i^t/∂A_ij, and similarly with ∂y_i^t/∂B_ij for the input weights. Figure 3: Robustness of trajectories to noise resulting from network learning. (Left) Pattern spreading in a DAL before learning. Each curve corresponds to the divergence rate between 10 identical trajectories in the presence of 5% gaussian synaptic noise added to each active presynaptic synapse. All patterns achieve maximum spreading in 9-10 steps, as also shown in Figure 1. (Right) The divergence rate of the same trajectories after learning the first 10 steps of each trajectory. Each trajectory was learned sequentially, with the trajectory labelled 1 learned first. Note that trajectories learned later, for instance trajectory 20, diverge more slowly than earlier learned trajectories. Thus, the trajectories learned earlier are forgotten while more recently acquired trajectories are maintained. Furthermore, the trajectories maintain their stereotyped ability to decorrelate both after they are forgotten (e.g. trajectory 8) and after the 10-step learning period is over (e.g. trajectory 20). Untrained trajectories behave the same as trajectories in the left panel. In Figure 3 the results of one learning experiment are shown. Before learning, all trajectories are susceptible to synaptic noise. After learning, those trajectories learned last exhibit robustness to noise, while trajectories learned earlier are slowly forgotten.
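The cost C(A, B, T) and its gradient can be sketched in a few lines. The following is a hypothetical illustration (our own, not the authors' code; the network size, weight scales, input, and learning rate are all made-up assumptions) of one gradient-descent step for a single time step of a clamped trajectory, with g(y) = exp(y/y_0):

```python
import numpy as np

rng = np.random.default_rng(1)
n, y0, lr = 8, 0.5, 0.01
A = rng.normal(scale=0.3, size=(n, n))          # recurrent weights (assumed)
B = rng.normal(scale=0.3, size=(n, n))          # input weights (assumed)
T = np.full(n, 0.1)                             # thresholds (assumed)
x_prev = (rng.random(n) < 0.3).astype(float)    # previous binary state
inp = rng.random(n)                             # external input i

y = A @ x_prev + B @ inp - T                    # net inputs y_i^t
x = (y > 0).astype(float)                       # clamped states x_i^t
s = 1.0 - 2.0 * x                               # +1 where silent, -1 where firing

def cost(A, B):
    # C = sum_i g((1 - 2 x_i) y_i) with g(y) = exp(y / y0)
    return np.exp(s * (A @ x_prev + B @ inp - T) / y0).sum()

c_before = cost(A, B)
g = np.exp(s * y / y0) * s / y0                 # dC/dy_i by the chain rule
A -= lr * g[:, None] * x_prev[None, :]          # dy_i/dA_ij = x_prev_j
B -= lr * g[:, None] * inp[None, :]             # dy_i/dB_ij = inp_j
print(cost(A, B) < c_before)                    # True
```

Because the update moves each net input y_i away from threshold in the direction dictated by the clamped state, a single step lowers this cost.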
We can compare each learned trajectory to a curve in multi-dimensional space with a 'robustness pipe' surrounding it. Any points lying within this pipe will be part of trajectories that remain within the pipe. In the case of olfactory processing, different odors correspond to unique trajectories, while trajectories lying within a common pipe correspond to the same input odor presentation. A few details on the experiment: the network contained 2048 neurons, half of which were excitatory and the other half inhibitory. The values of the constants were: c = 0.08, a_E = 1, a_I = 1.5, T = 7.2, and the mean firing rate was set at about .05. The optimization took 60 gradient-descent steps. 5 Discussion and Conclusions Sparsely connected networks of neurons have 'chaotic' properties which may be used for equalizing a set of patterns in order to make their classification easier. In studying the properties of such networks we extend previous results of van Vreeswijk and Sompolinsky on networks with infinitely many neurons to the case of a small number of neurons. We also provide techniques for designing networks that have desired average properties. Moreover, we propose a learning technique to make the network immune to noise around chosen trajectories while preserving the equalization property elsewhere. A number of issues are left open. A precise characterization of the effects of the DAL on the distribution of the input parameters, and the consequent improvement in the ease of pattern classification, is still missing. The geometry of the map implemented by the DAL is also unclear. Finally, it would be useful to obtain a quantitative estimate of the 'capacity' of the DAL, i.e. the number of trajectories which can be learned in any given network before older trajectories are forgotten. Acknowledgements We would like to thank Or Neeman for useful suggestions and feedback.
This work was supported in part by the Engineering Research Centers Program of the National Science Foundation under Award Number EEC-9402726. References [1] Friedrich R. & Laurent, G. (2001) Dynamical optimization of odor representations by slow temporal patterning of mitral cell activity. Science 291:889-894. [2] Laurent G, Stopfer M, Friedrich RW, Rabinovich MI, Volkovskii A, Abarbanel HD. (2001) Odor encoding as an active, dynamical process: experiments, computation, and theory. Ann Rev Neurosci. 24:263-97. [3] Laurent G. (2002) Olfactory network dynamics and the encoding of multidimensional signals. Nat Rev Neurosci 3(11):884-95. [4] McCulloch WS, Pitts W. (1943). A logical calculus of ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5: 115-133. [5] van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Computation. 10(6): 1321-71.
|
2002
|
122
|
2,128
|
Knowledge-Based Support Vector Machine Classifiers Glenn M. Fung, Olvi L. Mangasarian and Jude W. Shavlik Computer Sciences Department, University of Wisconsin Madison, WI 53706 gfung, olvi, shavlik@cs.wisc.edu Abstract Prior knowledge in the form of multiple polyhedral sets, each belonging to one of two categories, is introduced into a reformulation of a linear support vector machine classifier. The resulting formulation leads to a linear program that can be solved efficiently. Real world examples, from DNA sequencing and breast cancer prognosis, demonstrate the effectiveness of the proposed method. Numerical results show improvement in test set accuracy after the incorporation of prior knowledge into ordinary, data-based linear support vector machine classifiers. One experiment also shows that a linear classifier, based solely on prior knowledge, far outperforms the direct application of prior knowledge rules to classify data. Keywords: use and refinement of prior knowledge, support vector machines, linear programming 1 Introduction Support vector machines (SVMs) have played a major role in classification problems [18,3, 11]. However unlike other classification tools such as knowledge-based neural networks [16, 17, 7], little work [15] has gone into incorporating prior knowledge into support vector machines. In this work we present a novel approach to incorporating prior knowledge in the form of polyhedral knowledge sets in the input space of the given data. These knowledge sets, which can be as simple as cubes, are supposed to belong to one of two categories into which all the data is divided. Thus, a single knowledge set can be interpreted as a generalization of a training example, which typically consists of a single point in input space. In contrast, each of our knowledge sets consists of a region in the same space. 
By using a powerful tool from mathematical programming, theorems of the alternative [9, Chapter 2], we are able to embed such prior data into a linear program that can be efficiently solved by any of the publicly available solvers. We briefly summarize the contents of the paper now. In Section 2 we describe the linear support vector machine classifier and give a linear program for it. We then describe how prior knowledge, in the form of polyhedral knowledge sets belonging to one of two classes, can be characterized. In Section 3 we incorporate these polyhedral sets into our linear programming formulation, which results in our knowledge-based support vector machine (KSVM) formulation (19). This formulation is capable of generating a linear classifier based on real data and/or prior knowledge. Section 4 gives a brief summary of numerical results that compare various linear and nonlinear classifiers with and without the incorporation of prior knowledge. Section 5 concludes the paper. We now describe our notation. All vectors will be column vectors unless transposed to a row vector by a prime '. The scalar (inner) product of two vectors x and y in the n-dimensional real space R^n will be denoted by x'y. For a vector x in R^n, the sign function sign(x) is defined as sign(x)_i = 1 if x_i > 0, else sign(x)_i = -1 if x_i ≤ 0, for i = 1, ..., n. For x ∈ R^n, ||x||_p denotes the p-norm, p = 1, 2, ∞. The notation A ∈ R^{m×n} will signify a real m × n matrix. For such a matrix, A' will denote the transpose of A and A_i will denote the i-th row of A. A vector of ones in a real space of arbitrary dimension will be denoted by e. Thus for e ∈ R^m and y ∈ R^m the notation e'y will denote the sum of the components of y. A vector of zeros in a real space of arbitrary dimension will be denoted by 0. The identity matrix of arbitrary dimension will be denoted by I.
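As a small, hypothetical illustration of these conventions (not part of the paper), the componentwise sign function and the 1-, 2-, and ∞-norms can be computed as follows:

```python
import numpy as np

def sign(x):
    # the paper's convention: sign(x)_i = 1 if x_i > 0, else -1 (including x_i = 0)
    return np.where(np.asarray(x) > 0, 1, -1)

x = np.array([3.0, 0.0, -4.0])
print(sign(x))                        # [ 1 -1 -1]
print(np.linalg.norm(x, 1))           # 7.0
print(np.linalg.norm(x, 2))           # 5.0
print(np.linalg.norm(x, np.inf))      # 4.0
```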
A separating plane, with respect to two given point sets A and B in R^n, is a plane that attempts to separate R^n into two halfspaces such that each open halfspace contains points mostly of A or B. A bounding plane to the set A is a plane that places A in one of the two closed halfspaces that the plane generates. The symbol ∧ will denote the logical "and". The abbreviation "s.t." stands for "such that". 2 Linear Support Vector Machines and Prior Knowledge We consider the problem, depicted in Figure 1(a), of classifying m points in the n-dimensional input space R^n, represented by the m × n matrix A, according to membership of each point A_i in the class A+ or A- as specified by a given m × m diagonal matrix D with plus ones or minus ones along its diagonal. For this problem, the linear programming support vector machine [11, 2] with a linear kernel, which is a variant of the standard support vector machine [18, 3], is given by the following linear program with parameter ν > 0: min_{(w,γ,y) ∈ R^{n+1+m}} { νe'y + ||w||_1 | D(Aw - eγ) + y ≥ e, y ≥ 0 }, (1) where || · ||_1 denotes the 1-norm as defined in the Introduction, y is a vector of slack variables measuring empirical error, and (w, γ) characterize a separating plane depicted in Figure 1. That this problem is indeed a linear program can easily be seen from the equivalent formulation: min_{(w,γ,y,t) ∈ R^{n+1+m+n}} { νe'y + e't | D(Aw - eγ) + y ≥ e, t ≥ w ≥ -t, y ≥ 0 }, (2) where e is a vector of ones of appropriate dimension. For economy of notation we shall use the first formulation (1) with the understanding that computational implementation is via (2). As depicted in Figure 1(a), w is the normal to the bounding planes: x'w = γ + 1, x'w = γ - 1, (3) that bound the points belonging to the sets A+ and A- respectively. The constant γ determines their location relative to the origin.
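Formulation (2) is an ordinary linear program and can be handed to any LP solver. The following sketch (our own illustration, not the authors' code; the synthetic data, the value of ν, and the use of scipy's linprog are assumptions) solves (2) on a small 2-dimensional problem, with variables ordered as [w, γ, y, t]:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, nu = 40, 2, 1.0
A = np.vstack([rng.normal(2.0, 1.0, (m // 2, n)),       # class A+ points
               rng.normal(-2.0, 1.0, (m // 2, n))])     # class A- points
d = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])  # diagonal of D

# objective: nu * e'y + e't   over variables [w(n), gamma(1), y(m), t(n)]
c = np.concatenate([np.zeros(n + 1), nu * np.ones(m), np.ones(n)])

# D(Aw - e*gamma) + y >= e   rewritten as   -(DA)w + d*gamma - y <= -e
G1 = np.hstack([-(d[:, None] * A), d[:, None], -np.eye(m), np.zeros((m, n))])
# -t <= w <= t   rewritten as   w - t <= 0  and  -w - t <= 0
G2 = np.hstack([np.eye(n), np.zeros((n, 1 + m)), -np.eye(n)])
G3 = np.hstack([-np.eye(n), np.zeros((n, 1 + m)), -np.eye(n)])
G = np.vstack([G1, G2, G3])
h = np.concatenate([-np.ones(m), np.zeros(2 * n)])

bounds = [(None, None)] * (n + 1) + [(0, None)] * m + [(None, None)] * n
res = linprog(c, A_ub=G, b_ub=h, bounds=bounds)
w, gamma = res.x[:n], res.x[n]
acc = np.mean(np.sign(A @ w - gamma) == d)
print(res.status, acc)   # status 0 and (near-)perfect training accuracy
```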
When the two classes are strictly linearly separable, that is when the error variable y = 0 in (1) (which is the case shown in Figure 1(a)), the plane x'w = γ + 1 bounds all of the class A+ points, while the plane x'w = γ - 1 bounds all of the class A- points as follows: A_iw ≥ γ + 1, for D_ii = 1; A_iw ≤ γ - 1, for D_ii = -1. (4) Consequently, the plane: x'w = γ, (5) midway between the bounding planes (3), is a separating plane that separates points belonging to A+ from those belonging to A- completely if y = 0, else only approximately. The 1-norm term ||w||_1 in (1) is minimized in order to maximize the distance 2/||w||_1, measured using the ∞-norm distance [10], between the two bounding planes of (3) (see Figure 1(a)); this distance is often called the "margin". Maximizing the margin enhances the generalization capability of a support vector machine [18, 3]. If the classes are linearly inseparable, then the two planes bound the two classes with a "soft margin" (i.e. bound approximately with some error) determined by the nonnegative error variable y, that is: A_iw + y_i ≥ γ + 1, for D_ii = 1; A_iw - y_i ≤ γ - 1, for D_ii = -1. (6) The 1-norm of the error variable y is minimized parametrically with weight ν in (1), resulting in an approximate separating plane (5) which classifies as follows: x ∈ A+ if sign(x'w - γ) = 1, x ∈ A- if sign(x'w - γ) = -1. (7) Suppose now that we have prior information of the following type. All points x lying in the polyhedral set determined by the linear inequalities: Bx ≤ b, (8) belong to class A+. Such inequalities generalize simple box constraints such as a ≤ x ≤ d. Looking at Figure 1(a) or at the inequalities (4) we conclude that the following implication must hold: Bx ≤ b ⟹ x'w ≥ γ + 1. (9) That is, the knowledge set {x | Bx ≤ b} lies on the A+ side of the bounding plane x'w = γ + 1.
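The containment asserted by implication (9) can be checked numerically. The sketch below (hypothetical; the box and plane are made up) verifies for a small example that a knowledge set lies in the halfspace both directly, by minimizing x'w over the polyhedron, and via the dual certificate B'u + w = 0, b'u + γ + 1 ≤ 0, u ≥ 0 of Proposition 2.1 below:

```python
import numpy as np
from scipy.optimize import linprog

# knowledge set: the box 3 <= x1 <= 4, 3 <= x2 <= 4, written as Bx <= b
B = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([4., -3., 4., -3.])
w, gamma = np.array([1., 1.]), 4.0    # halfspace x'w >= gamma + 1 = 5

# direct check: the box is contained iff min_{Bx <= b} x'w >= gamma + 1
direct = linprog(w, A_ub=B, b_ub=b, bounds=[(None, None)] * 2)
contained = direct.fun >= gamma + 1   # the minimum is 3 + 3 = 6, so True

# dual certificate: feasibility of B'u = -w, b'u <= -(gamma + 1), u >= 0
res = linprog(np.zeros(4), A_ub=b[None, :], b_ub=[-(gamma + 1)],
              A_eq=B.T, b_eq=-w, bounds=[(0, None)] * 4)
print(contained, res.status == 0)     # True True
```

Both checks agree, as the proposition guarantees for a nonempty knowledge set.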
Later, in (19), we will accommodate the case when the implication (9) cannot be satisfied exactly by the introduction of slack error variables. For now, assuming that the implication (9) holds for a given (w, γ), it follows that (9) is equivalent to: Bx ≤ b, x'w < γ + 1, has no solution x. (10) This statement in turn is implied by the following statement: B'u + w = 0, b'u + γ + 1 ≤ 0, u ≥ 0, has a solution (u, w). (11) To see this simple backward implication, (10) ⟸ (11), we suppose the contrary, that there exists an x satisfying (10), and obtain the contradiction b'u > b'u as follows: b'u ≥ u'Bx = -w'x > -γ - 1 ≥ b'u, (12) where the first inequality follows by premultiplying Bx ≤ b by u ≥ 0. In fact, under the natural assumption that the prior knowledge set {x | Bx ≤ b} is nonempty, the forward implication (10) ⟹ (11) is also true, as a direct consequence of the nonhomogeneous Farkas theorem of the alternative [9, Theorem 2.4.8]. We state this equivalence as the following key proposition of our knowledge-based approach. Proposition 2.1 Knowledge Set Classification. Let the set {x | Bx ≤ b} be nonempty. Then for a given (w, γ), the implication (9) is equivalent to the statement (11). In other words, the set {x | Bx ≤ b} lies in the halfspace {x | w'x ≥ γ + 1} if and only if there exists u such that B'u + w = 0, b'u + γ + 1 ≤ 0 and u ≥ 0. Proof. We establish the equivalence of (9) and (11) by showing the equivalence of (10) and (11). By the nonhomogeneous Farkas theorem [9, Theorem 2.4.8] we have that (10) is equivalent to either: B'u + w = 0, b'u + γ + 1 ≤ 0, u ≥ 0, having solution (u, w), (13) or: B'u = 0, b'u < 0, u ≥ 0, having solution u. (14) However, the second alternative (14) contradicts the nonemptiness of the knowledge set {x | Bx ≤ b}, because for x in this set and u solving (14) we obtain the contradiction: 0 ≥ u'(Bx - b) = x'B'u - b'u = -b'u > 0. (15) Hence (14) is ruled out and we have that (10) is equivalent to (13), which is (11).
□ This proposition will play a key role in incorporating knowledge sets, such as {x | Bx ≤ b}, into one of two categories in a support vector classifier formulation, as demonstrated in the next section. Figure 1: (a) A linear SVM separation for 200 points in R^2 using the linear programming formulation (1). (b) A linear SVM separation for the same 200 points in R^2 as those in Figure 1(a), but using the linear programming formulation (19), which incorporates three knowledge sets: {x | B^1 x ≤ b^1} into the halfspace of A+, and {x | C^1 x ≤ c^1}, {x | C^2 x ≤ c^2} into the halfspace of A-, as depicted above. Note the substantial difference between the linear classifiers x'w = γ of both figures. 3 Knowledge-Based SVM Classification We describe now how to incorporate prior knowledge in the form of polyhedral sets into our linear programming SVM classifier formulation (1). We assume that we are given the following knowledge sets: k sets belonging to A+: {x | B^i x ≤ b^i}, i = 1, ..., k; ℓ sets belonging to A-: {x | C^i x ≤ c^i}, i = 1, ..., ℓ. (16) It follows by Proposition 2.1 that, relative to the bounding planes (3), there exist u^i, i = 1, ..., k, and v^j, j = 1, ..., ℓ, such that: B^i'u^i + w = 0, b^i'u^i + γ + 1 ≤ 0, u^i ≥ 0, i = 1, ..., k; C^j'v^j - w = 0, c^j'v^j - γ + 1 ≤ 0, v^j ≥ 0, j = 1, ..., ℓ. (17) We now incorporate the knowledge sets (16) into the SVM linear programming formulation (1) classifier by adding the conditions (17) as constraints to it as follows: min_{w,γ,(y,u^i,v^j) ≥ 0} νe'y + ||w||_1 s.t. D(Aw - eγ) + y ≥ e; B^i'u^i + w = 0, b^i'u^i + γ + 1 ≤ 0, i = 1, ..., k; C^j'v^j - w = 0, c^j'v^j - γ + 1 ≤ 0, j = 1, ..., ℓ. (18) This linear programming formulation will ensure that each of the knowledge sets {x | B^i x ≤ b^i}, i = 1, ..., k and {x | C^i x ≤ c^i}, i = 1, ...
, ℓ lie on the appropriate side of the bounding planes (3). However, there is no guarantee that such bounding planes exist that will precisely separate these two classes of knowledge sets, just as there is no a priori guarantee that the original points belonging to the sets A+ and A- are linearly separable. We therefore add error variables r^i, ρ^i, i = 1, ..., k, and s^j, σ^j, j = 1, ..., ℓ, just like the slack error variable y of the SVM formulation (1), and attempt to drive these error variables to zero by modifying our last formulation above as follows: min_{w,γ,(y,u^i,r^i,ρ^i,v^j,s^j,σ^j) ≥ 0} νe'y + μ(Σ_{i=1}^{k}(e'r^i + ρ^i) + Σ_{j=1}^{ℓ}(e's^j + σ^j)) + ||w||_1 s.t. D(Aw - eγ) + y ≥ e; -r^i ≤ B^i'u^i + w ≤ r^i, b^i'u^i + γ + 1 ≤ ρ^i, i = 1, ..., k; -s^j ≤ C^j'v^j - w ≤ s^j, c^j'v^j - γ + 1 ≤ σ^j, j = 1, ..., ℓ. (19) This is our final knowledge-based linear programming formulation, which incorporates the knowledge sets (16) into the linear classifier with weight μ, while the (empirical) error term e'y is given weight ν. As usual, the values of these two parameters, ν and μ, are chosen by means of a tuning set extracted from the training set. If we set μ = 0 then the linear program (19) degenerates to (1), the linear program associated with an ordinary linear SVM. However, if we set ν = 0, then the linear program (19) generates a linear SVM that is strictly based on knowledge sets, but not on any specific training data. This might be a useful paradigm for situations where training datasets are not easily available, but expert knowledge, such as doctors' experience in diagnosing certain diseases, is readily available. This will be demonstrated on the breast cancer dataset of Section 4. Note that the 1-norm term ||w||_1 can be replaced by one half the 2-norm squared, (1/2)||w||_2^2, which is the usual margin maximization term for ordinary support vector machine classifiers [18, 3]. However, this changes the linear program (19) to a quadratic program, which typically takes a longer time to solve.
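To make the construction concrete, here is a hedged sketch (our own, not the authors' implementation) of the knowledge-only special case ν = 0 of formulation (19): a linear classifier built purely from one polyhedral knowledge box per class, with the slacks r, ρ, s, σ penalized by weight μ. The boxes, the value of μ, and the variable layout are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

# knowledge boxes: 2 <= x <= 4 (class A+) and -4 <= x <= -2 (class A-)
B = np.array([[1., 0], [-1, 0], [0, 1], [0, -1]]); b = np.array([4., -2, 4, -2])
C = np.array([[1., 0], [-1, 0], [0, 1], [0, -1]]); c = np.array([-2., 4, -2, 4])
n, kB, kC, mu = 2, len(b), len(c), 100.0

# variable layout: [w(n), gamma, t(n), u(kB), v(kC), r(n), rho, s(n), sigma]
idx = np.cumsum([n, 1, n, kB, kC, n, 1, n, 1])
N = int(idx[-1])

def blk(i, M):
    """Place matrix M into the columns of variable block i."""
    M = np.atleast_2d(np.asarray(M, dtype=float))
    out = np.zeros((M.shape[0], N))
    out[:, (0 if i == 0 else idx[i - 1]):idx[i]] = M
    return out

I = np.eye(n)
G = np.vstack([
    blk(3, B.T) + blk(0, I) - blk(5, I),                   #  B'u + w <= r
    -blk(3, B.T) - blk(0, I) - blk(5, I),                  # -(B'u + w) <= r
    blk(3, b[None, :]) + blk(1, [[1.]]) - blk(6, [[1.]]),  # b'u + gamma + 1 <= rho
    blk(4, C.T) - blk(0, I) - blk(7, I),                   #  C'v - w <= s
    -blk(4, C.T) + blk(0, I) - blk(7, I),                  # -(C'v - w) <= s
    blk(4, c[None, :]) - blk(1, [[1.]]) - blk(8, [[1.]]),  # c'v - gamma + 1 <= sigma
    blk(0, I) - blk(2, I),                                 #  w <= t
    -blk(0, I) - blk(2, I),                                # -w <= t
])
h = np.concatenate([np.zeros(2 * n), [-1.], np.zeros(2 * n), [-1.], np.zeros(2 * n)])

obj = np.zeros(N); obj[idx[1]:idx[2]] = 1.0; obj[idx[4]:] = mu  # e't + mu*(slacks)
bounds = [(None, None)] * int(idx[2]) + [(0, None)] * (N - int(idx[2]))
res = linprog(obj, A_ub=G, b_ub=h, bounds=bounds)
w, gamma = res.x[:n], res.x[n]
print(np.sign(w @ [3, 3] - gamma), np.sign(w @ [-3, -3] - gamma))  # 1.0 -1.0
```

With μ large, the slacks are driven to zero and the resulting plane classifies the box centers correctly, without any training points.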
For standard SVMs, support vectors consist of all data points which are the complement of the data points that can be dropped from the problem without changing the separating plane (5) [18, 11]. Thus for our knowledge-based linear programming formulation (19), support vectors correspond to data points (rows of the matrix A) for which the Lagrange multipliers are nonzero, because solving (19) with these data points only will give the same answer as solving (19) with the entire matrix A. The concept of support vectors has to be modified as follows for our knowledge sets. Since each knowledge set in (16) is represented by a matrix B^i or C^j, each row of these matrices can be thought of as characterizing a boundary plane of the knowledge set. In our formulation (19) above, such rows are wiped out if the corresponding components of the variables u^i or v^j are zero at an optimal solution. We call the complement of these components of the knowledge sets (16) support constraints. Deleting constraints (rows of B^i or C^j) for which the corresponding components of u^i or v^j are zero will not alter the solution of the knowledge-based linear program (19). This in fact is corroborated by numerical tests that were carried out. Deletion of non-support constraints can be considered a refinement of prior knowledge [17]. Another type of refinement of prior knowledge may occur when the separating plane x'w = γ intersects one of the knowledge sets. In such a case the plane x'w = γ can be added as an inequality to the knowledge set it intersects. This is illustrated in the following example. We demonstrate the geometry of incorporating knowledge sets by considering a synthetic example in R^2 with m = 200 points, 100 of which are in A+ and the other 100 in A-. Figure 1(a) depicts ordinary linear separation using the linear SVM formulation (1).
We now incorporate three knowledge sets into the problem: {x | B^1 x ≤ b^1} belonging to A+, and {x | C^1 x ≤ c^1} and {x | C^2 x ≤ c^2} belonging to A-, and solve our linear program (19) with μ = 100 and ν = 1. We depict the new linear separation in Figure 1(b) and note the substantial change generated in the linear separation by the incorporation of these three knowledge sets. Also note that since the plane x'w = γ intersects the knowledge set {x | B^1 x ≤ b^1}, this knowledge set can be refined to the following: {x | B^1 x ≤ b^1, x'w ≥ γ}. 4 Numerical Testing Numerical tests, which are described in detail in [6], were carried out on the DNA promoter recognition dataset [17] and the Wisconsin prognostic breast cancer dataset WPBC (ftp://ftp.cs.wisc.edu/math-prog/cpo-dataset/machinelearn/cancer/WPBC/). We briefly summarize these results here. Our first dataset, the promoter recognition dataset, is from the domain of DNA sequence analysis. A promoter, which is a short DNA sequence that precedes a gene sequence, is to be distinguished from a nonpromoter. Promoters are important in identifying starting locations of genes in long uncharacterized sequences of DNA. The prior knowledge for this dataset, which consists of a set of 14 prior rules, matches none of the examples of the training set. Hence these rules by themselves cannot serve as a classifier. However, they do capture significant information about promoters, and it is known that incorporating them into a classifier results in a more accurate classifier [17]. These 14 prior rules were converted in a straightforward manner [6] into 64 knowledge sets. Following the methodology used in prior work [17], we tested our algorithm on this dataset together with the knowledge sets, using a "leave-one-out" cross-validation methodology in which the entire training set of 106 elements is repeatedly divided into a training set of size 105 and a test set of size 1.
The values of ν and μ associated with both KSVM and SVM_1 [2] were obtained by a tuning procedure which consisted of varying them on a square grid: {2^{-6}, 2^{-5}, ..., 2^6} × {2^{-6}, 2^{-5}, ..., 2^6}. After expressing the prior knowledge in the form of polyhedral sets and applying KSVM, we obtained 5 errors out of 106 (5/106). KSVM gave a much better performance than five other different methods that do not use prior knowledge: the standard 1-norm support vector machine [2] (9/106), Quinlan's decision tree builder [13] (19/106), the PEBLS nearest-neighbor algorithm [4] with k = 3 (13/106), an empirical method suggested by a biologist based on a collection of "filters" to be used for promoter recognition, known as O'Neill's method [12] (12/106), and neural networks with a simple connected layer of hidden units trained using back-propagation [14] (8/106). Except for KSVM and SVM_1, all of these results are taken from an earlier report [17]. KSVM was also compared with [16], where a hybrid learning system maps problem-specific prior knowledge, represented in propositional logic, into neural networks and then refines this reformulated knowledge using back-propagation. This method is known as Knowledge-Based Artificial Neural Networks (KBANN). KBANN was the only approach that performed slightly better than our algorithm and obtained 4 misclassifications compared to our 5. However, it is important to note that our classifier is a much simpler linear classifier, sign(x'w - γ), while the neural network classifier of KBANN is a considerably more complex nonlinear classifier. Furthermore, we note that KSVM is simpler to implement than KBANN and requires merely a commonly available linear programming solver. In addition, KSVM, which is a linear support vector machine classifier, improves by 44.4% the error of an ordinary linear 1-norm SVM classifier that does not utilize prior knowledge sets.
The second dataset used in our numerical tests was the Wisconsin breast cancer prognosis dataset WPBC, using a 60-month cutoff for predicting recurrence or nonrecurrence of the disease [2]. The prior knowledge utilized in this experiment consisted of the prognosis rules used by doctors [8], which depended on two features from the dataset: tumor size (T) (feature 31), that is, the diameter of the excised tumor in centimeters, and lymph node status (L) (feature 32), which refers to the number of metastasized axillary lymph nodes. The rules are: (L ≥ 5) ∧ (T ≥ 4) ⟹ RECUR and (L = 0) ∧ (T ≤ 1.9) ⟹ NONRECUR. It is important to note that the rules described above can be applied directly to classify only 32 of the 110 given points of the training dataset and correctly classify 22 of these 32 points. The remaining 78 points are not classifiable by the above rules. Hence, if the rules are applied as a classifier by themselves, the classification accuracy would be 20%. As such, these rules are not very useful by themselves and doctors use them in conjunction with other rules [8]. However, using our approach the rules were converted to linear inequalities and used in our KSVM algorithm without any use of the data, i.e. ν = 0 in the linear program (19). The resulting linear classifier in the 2-dimensional space of L(ymph) and T(umor) achieved 66.4% accuracy. The ten-fold cross-validated test set correctness achieved by a standard SVM using all the data is 66.2% [2]. This result is remarkable because our knowledge-based formulation can be applied to problems where training data may not be available whereas expert knowledge may be readily available in the form of knowledge sets. This fact makes this method considerably different from previous hybrid methods like KBANN, where training examples are needed in order to refine prior knowledge. If training data are added to this knowledge-based formulation, no noticeable improvement is obtained.
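For illustration, the two prognosis rules can be written as polyhedral knowledge sets {x | Bx ≤ b} over x = (L, T); the encoding below is our reading, not necessarily the paper's exact one:

```python
import numpy as np

# (L >= 5) and (T >= 4)  =>  RECUR:   encoded as  -L <= -5, -T <= -4
B_recur = np.array([[-1., 0.], [0., -1.]])
b_recur = np.array([-5., -4.])

# (L == 0) and (T <= 1.9)  =>  NONRECUR:  L <= 0, -L <= 0, T <= 1.9
B_non = np.array([[1., 0.], [-1., 0.], [0., 1.]])
b_non = np.array([0., 0., 1.9])

def in_set(B, b, x):
    """True iff x satisfies every inequality of the polyhedral set Bx <= b."""
    return bool(np.all(B @ np.asarray(x, dtype=float) <= b))

print(in_set(B_recur, b_recur, (6, 5)))   # True: first rule fires -> RECUR
print(in_set(B_non, b_non, (0, 1.0)))     # True: second rule fires -> NONRECUR
print(in_set(B_recur, b_recur, (2, 3)))   # False: neither rule applies
```

Points outside both sets are exactly the 78 unclassifiable points mentioned above; KSVM places such sets in the appropriate halfspaces instead of applying the rules pointwise.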
5 Conclusion & Future Directions We have proposed an efficient procedure for incorporating prior knowledge in the form of knowledge sets into a linear support vector machine classifier, either in combination with a given dataset or based solely on the knowledge sets. This novel and promising approach of handling prior knowledge is worthy of further study, especially ways to handle and simplify the combinatorial nature of incorporating prior knowledge into linear inequalities. A class of possible future applications might be to problems where training data may not be easily available whereas expert knowledge may be readily available in the form of knowledge sets. This would correspond to solving our knowledge-based linear program (19) with ν = 0. A typical example of this type was breast cancer prognosis [8], where knowledge sets by themselves generated a linear classifier as good as any classifier based on data points. This is a new way of incorporating prior knowledge into powerful support vector machine classifiers. Also, the concept of support constraints, as discussed at the end of Section 3, warrants further study that may lead to a systematic simplification of prior knowledge sets. Other avenues of research include knowledge sets characterized by nonpolyhedral convex sets, as well as nonlinear kernels [18, 11] which are capable of handling more complex classification problems, and the incorporation of prior knowledge into multiple instance learning [1, 5], which might lead to improved classifiers in that field. Acknowledgments Research in this UW Data Mining Institute Report 01-09, November 2001, was supported by NSF Grants CCR-9729842, IRI-9502990 and CDA-9623632, by AFOSR Grant F49620-00-1-0085, by NLM Grant 1 R01 LM07050-01, and by Microsoft. References [1] P. Auer. On learning from multi-instance examples: Empirical evaluation of a theoretical approach. Pages 21-29, 1997. [2] P. S. Bradley and O. L. Mangasarian.
Feature selection via concave minimization and support vector machines. In J. Shavlik, editor, Machine Learning Proceedings of the Fifteenth International Conference (ICML '98), pages 82-90, San Francisco, California, 1998. Morgan Kaufmann. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-03.ps. [3] V. Cherkassky and F. Mulier. Learning from Data - Concepts, Theory and Methods. John Wiley & Sons, New York, 1998. [4] S. Cost and S. Salzberg. A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning, 10:57-78, 1993. [5] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Perez. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31-71, 1998. [6] G. Fung, O. L. Mangasarian, and J. Shavlik. Knowledge-based support vector machine classifiers. Technical Report 01-09, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, November 2001. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-09.ps. [7] F. Girosi and N. Chan. Prior knowledge and the creation of "virtual" examples for RBF networks. In Neural Networks for Signal Processing, Proceedings of the 1995 IEEE-SP Workshop, pages 201-210, New York, 1995. IEEE Signal Processing Society. [8] Y.-J. Lee, O. L. Mangasarian, and W. H. Wolberg. Survival-time classification of breast cancer patients. Technical Report 01-03, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, March 2001. Computational Optimization and Applications, to appear. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/01-03.ps. [9] O. L. Mangasarian. Nonlinear Programming. SIAM, Philadelphia, PA, 1994. [10] O. L. Mangasarian. Arbitrary-norm separating plane. Operations Research Letters, 24:15-23, 1999. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/97-07r.ps. [11] O. L. Mangasarian. Generalized support vector machines. In A. Smola, P. Bartlett, B. Scholkopf, and D.
Schuurmans, editors, Advances in Large Margin Classifiers, pages 135-146, Cambridge, MA, 2000. MIT Press. ftp://ftp.cs.wisc.edu/math-prog/tech-reports/98-14.ps. [12] M. C. O'Neill. Escherichia coli promoters: I. Consensus as it relates to spacing class, specificity, repeat substructure, and three-dimensional organization. Journal of Biological Chemistry, 264:5522-5530, 1989. [13] J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986. [14] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, pages 318-362, Cambridge, Massachusetts, 1986. MIT Press. [15] B. Scholkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. In M. Jordan, M. Kearns, and S. Solla, editors, Advances in Neural Information Processing Systems 10, pages 640-646, Cambridge, MA, 1998. MIT Press. [16] G. G. Towell and J. W. Shavlik. Knowledge-based artificial neural networks. Artificial Intelligence, 70:119-165, 1994. [17] G. G. Towell, J. W. Shavlik, and M. Noordewier. Refinement of approximate domain theories by knowledge-based artificial neural networks. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pages 861-866, 1990. [18] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, second edition, 2000.
|
2002
|
123
|
2,129
|
Conditional Models on the Ranking Poset Guy Lebanon School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 lebanon@cs.cmu.edu John Lafferty School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 lafferty@cs.cmu.edu Abstract A distance-based conditional model on the ranking poset is presented for use in classification and ranking. The model is an extension of the Mallows model, and generalizes the classifier combination methods used by several ensemble learning algorithms, including error correcting output codes, discrete AdaBoost, logistic regression and cranking. The algebraic structure of the ranking poset leads to a simple Bayesian interpretation of the conditional model and its special cases. In addition to a unifying view, the framework suggests a probabilistic interpretation for error correcting output codes and an extension beyond the binary coding scheme. 1 Introduction Classification is the task of associating a single label with a covariate. A generalization of this problem is conditional ranking, the task of assigning to each covariate a full or partial ranking of the items in the label set. This paper studies the algebraic structure of this problem, and proposes a combinatorial structure called the ranking poset for building probability models for conditional ranking. In ensemble approaches to classification and ranking, several base models are combined to produce a single ranker or classifier. An important distinction between different ensemble methods is whether they use discrete inputs, ranked inputs, or confidence-rated predictions. In the case of discrete inputs, the base models provide a single item of the label set, and no preference for a second or third choice is given. In the case of ranked input, the base classifiers output a full or partial ranking over the label set. Of course, discrete input is a special case of ranked input, where the partial ranking consists of the single topmost item.
In the case of confidence-rated predictions, the base models again output full or partial rankings, but in addition provide a confidence score, indicating how much one class should be preferred to another. While confidence-rated predictions are sometimes preferable as input to an ensemble method, such confidence scores are often not available (as is typically the case in metasearch), and even when they are available, the scores may not be well calibrated. This paper investigates a unifying algebraic framework for ensemble methods for classification and conditional ranking, focusing on the cases of discrete and ranked inputs. Our approach is based on the ranking poset on n items, denoted W_n, which consists of the collection of all full and partial rankings equipped with the partial order given by refinement of rankings. The structure of the poset of partial rankings over Y gives rise to natural invariant distance functions that generalize Kendall's Tau and the Hamming distance. Using these distance functions we define a conditional model

p(π | θ, x̄) = (1 / Z(θ, x̄)) q(π) exp( Σ_j θ_j d(π, x_j) )

where θ = (θ_1, ..., θ_k) ∈ ℝ^k and x̄ = (x_1, ..., x_k) are the input rankings. This conditional model generalizes several existing models for classification and ranking, and includes as a special case the Mallows model [11]. In addition, the model represents algebraically the way in which input classifiers are combined in certain ensemble methods, including error correcting output codes [4], several versions of AdaBoost [7, 1], and cranking [10]. In Section 2 we review some basic algebraic concepts and in Section 3 we define the ranking poset. The new model and its Bayesian interpretation are described in Section 4. A derivation of some special cases is given in Section 5, and we conclude with a summary in Section 6. 2 Permutations and Cosets We begin by reviewing some basic concepts from algebra, with some of the notation and definitions borrowed from Critchlow [2].
Identifying the items to be ranked with the numbers {1, ..., n}, if π denotes a permutation of {1, ..., n}, then π(i) denotes the rank given to item i and π⁻¹(j) denotes the item assigned to rank j. The collection of all permutations of n items forms the nonabelian symmetric group of order n!, denoted S_n. The multiplicative notation πσ(i) = π(σ(i)) is used to denote function composition. The subgroup of S_n consisting of all permutations that fix the top k positions is denoted S_{n−k}; thus,

S_{n−k} = { π ∈ S_n : π(i) = i, i = 1, ..., k }.  (1)

The right coset

S_{n−k}π = { σπ : σ ∈ S_{n−k} }  (2)

is equivalent to a partial ranking, where there is a full ordering of the k top-ranked items. The set of all such partial rankings forms the quotient space S_n/S_{n−k}. An ordered partition of n is a sequence γ = (n_1, ..., n_r) of positive integers that sum to n. Such an ordered partition corresponds to a partial ranking of type γ with n_1 items in the first position, n_2 items in the second position and so on. No further information is conveyed about orderings within each position. A partial ranking of the top k items is a special case with r = k + 1, n_1 = n_2 = ⋯ = n_k = 1, and n_{k+1} = n − k. More formally, let N_1 = {1, ..., n_1}, N_2 = {n_1 + 1, ..., n_1 + n_2}, ..., N_r = {n_1 + ⋯ + n_{r−1} + 1, ..., n}. Then the subgroup S_γ = S_{N_1} × ⋯ × S_{N_r} contains all permutations π ∈ S_n for which the set equality π(N_i) = N_i holds for each i; that is, all permutations that only permute within each N_i. A partial ranking of type γ is equivalent to a coset S_γπ, and the set of such partial rankings forms the quotient space S_n/S_γ. We now describe a convenient notation for permutations and cosets. In the following, we list items separated by vertical lines, indicating that the items on the left side of the line are preferred to (ranked higher than) the items on the right side of the line. For example, the permutation π(3) = 1, π(1) = 2, π(2) = 3, π(4) = 4 is denoted by 3|1|2|4. A partial ranking S_{n−3}π where the top 3 items are 4, 2, 1 is denoted by 4|2|1|3,5. A classification may thus be denoted by 4|1,2,3,5. A partial ranking S_γπ where γ = (3, 2), with items 1, 4, 5 ranked in the first position, is denoted by 1,4,5|2,3. A distance function d on S_n is a function d : S_n × S_n → ℝ that satisfies the usual properties: d(π, π) = 0; d(π, σ) > 0 when π ≠ σ; d(π, σ) = d(σ, π); and the triangle inequality d(π, σ) ≤ d(π, τ) + d(τ, σ) for all π, σ, τ ∈ S_n.

[Figure 1: The Hasse diagram of W_3 (left) and a partial Hasse diagram of W_4 (right). Some of the lines are dotted for easier visualization.]

In addition, since the indexing of the items 1, ..., n is arbitrary, it is appropriate to require invariance to relabeling of the items. Formally, this amounts to right invariance, d(π, σ) = d(πτ, στ) for all π, σ, τ ∈ S_n. A popular right invariant distance on S_n is Kendall's Tau T(π, σ), given by

T(π, σ) = Σ_{i<j} I( (π(j) − π(i)) (σ(i) − σ(j)) )  (3)

where I(x) = 1 for x > 0 and I(x) = 0 otherwise [8]. Kendall's Tau T(π, σ) can be interpreted as the number of discordant pairs of items between π and σ, or the minimum number of adjacent transpositions needed to bring π⁻¹ to σ⁻¹. An adjacent transposition flips a pair of items that have adjacent ranks. Critchlow [2] derives extensions of Kendall's Tau and other distances on S_n to distances on partial rankings.
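Kendall's Tau in (3) and its right invariance can be sketched numerically (a minimal illustration; the encoding of a permutation as a 0-indexed rank array is an assumption of this sketch):

```python
from itertools import combinations, permutations

def kendall_tau(pi, sigma):
    # pi[i] = rank that permutation pi assigns to item i (0-indexed).
    # Count discordant item pairs, as in Eq. (3).
    return sum(1 for i, j in combinations(range(len(pi)), 2)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

def compose(pi, tau):
    # (pi tau)(i) = pi(tau(i)): relabel items by tau before ranking
    return tuple(pi[tau[i]] for i in range(len(pi)))

pi, sigma = (0, 1, 2), (1, 2, 0)      # sigma moves item 2 to the top
print(kendall_tau(pi, sigma))         # 2 discordant pairs

# right invariance: d(pi tau, sigma tau) = d(pi, sigma) for every tau
for tau in permutations(range(3)):
    assert kendall_tau(compose(pi, tau), compose(sigma, tau)) == 2
```

The loop checks the relabeling invariance exhaustively for n = 3.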
3 The Ranking Poset We first define partially ordered sets and then proceed to define the ranking poset. Some of the definitions below are taken from [12], where a thorough introduction to posets can be found. A partially ordered set or poset is a pair (W, ⪯), where W is a set and ⪯ is a binary relation that satisfies (1) x ⪯ x, (2) if x ⪯ y and y ⪯ x then x = y, and (3) if x ⪯ y and y ⪯ z then x ⪯ z, for all x, y, z ∈ W. We write x ≺ y when x ⪯ y and x ≠ y. We say that y covers x, and write x ⋖ y, when x ≺ y and there is no z ∈ W such that x ≺ z and z ≺ y. A finite poset is completely described by the covering relation. The planar Hasse diagram of (W, ⪯) is the graph for which the elements of W are the nodes and the edges are given by the covering relation. In addition, we require that if x ⋖ y then y is drawn higher than x. The ranking poset W_n is the poset in which the elements are all possible cosets S_γπ, where γ is an ordered partition of n and π ∈ S_n. The partial order of W_n is defined by refinement; that is, x ≺ y if we can get from x to y by adding vertical lines. Note that W_n is different from the poset of all set partitions of {1, ..., n} ordered by partition refinement, since in W_n the order of the partition elements matters. Figure 1 shows the Hasse diagram of W_3 and a portion of the Hasse diagram of W_4. A subposet (W′, ⪯′) of (W, ⪯) is defined by W′ ⊆ W, with x ⪯′ y if and only if x ⪯ y. A chain is a poset in which every two elements are comparable. A saturated chain of length k is a sequence of elements x_0, ..., x_k ∈ W that satisfy x_0 ⋖ x_1 ⋖ ⋯ ⋖ x_k. A chain of W is a maximal chain if there is no other saturated chain of W that contains it. A graded poset of rank m is a poset in which every maximal chain has length m. In a graded poset, there is a rank or grade function r : W → {0, ..., m} such that r(x) = 0 if x is a minimal element and r(y) = r(x) + 1 if x ⋖ y. It is easy to see that W_n is a graded poset of rank n − 1, and the rank of every element is the number of vertical lines in its denotation. We use W_n^k to denote the subposet of W_n consisting of the elements in the k-th grade, all of which are incomparable. Full orderings occupy the topmost grade W_n^{n−1}. Classifications y | Y∖{y} reside in W_n^1. Other elements of W_n^1 are multilabel classifications A | Y∖A where A ⊆ {1, ..., n}.
4 Conditional Models on the Ranking Poset We now present a family of conditional models defined in terms of the ranking poset. To begin, suppose that d is a right invariant function on W_n. That is, d(x, y) = d(xτ, yτ) for all x, y ∈ W_n and τ ∈ S_n. Here right invariance is defined with respect to the natural action of S_n on W_n, given by

(A_1 | A_2 | ⋯ | A_r) τ = τ⁻¹(A_1) | τ⁻¹(A_2) | ⋯ | τ⁻¹(A_r).  (4)

The function d may or may not be a metric; its interpretation as a measure of dissimilarity, however, remains. We will examine several distances that are based on the covering relation ⋖ of W_n. Down and up moves on the Hasse diagram will be denoted by ↓ and ↑ respectively. A distance d defined in terms of ↓ and ↑ moves is easily shown to be right invariant because the group action of S_n does not change the covering relation between any two elements; that is, the group action of S_n on W_n commutes with ↓ and ↑ moves:

x ⋖ y  if and only if  xτ ⋖ yτ,  for every τ ∈ S_n.  (5)

While the metric properties of d are not required in our model, the right invariance property is essential since we want to treat all items in the same manner. We are now ready to give the general form of a conditional model on W_n. Let d be an invariant function, as above. The model takes as input k rankings x_1, ..., x_k contained in some subsets U_j ⊆ W_n of the ranking poset. For example, each x_j could be an element of W_n^{n−1}. Let q be a probability mass function on W_n, which will be the "carrier density" or default model. Then d and q specify an exponential model p(π | θ, x̄) given by

p(π | θ, x̄) = (1 / Z(θ, x̄)) q(π) exp( Σ_j θ_j d(π, x_j) )  (6)

where θ ∈ Θ ⊆ ℝ^k, π ∈ V ⊆ W_n, and x_j ∈ U_j ⊆ W_n. The term Z(θ, x̄) is the normalizing constant

Z(θ, x̄) = Σ_{π ∈ V} q(π) exp( Σ_j θ_j d(π, x_j) ).  (7)

Thus, conditional on x̄ = (x_1, ..., x_k), p(· | θ, x̄) forms a probability distribution over V. Given a data set (π_i, x̄_i), the parameters θ ∈ Θ will typically be selected by maximizing the conditional loglikelihood ℓ(θ) = Σ_i log p(π_i | θ, x̄_i), a marginal likelihood or posterior. Under mild regularity conditions, ℓ will be concave and have a unique global maximum. 4.1 A Bayesian interpretation We now derive a Bayesian interpretation for the model given by (6). Our result parallels the interpretation of multistage ranking models given by Fligner and Verducci [6]. The key fact is that, under appropriate assumptions, the normalizing term does not depend on the partial ordering in the one-dimensional case. Proposition 4.1. Suppose that d is right invariant and that V is invariant under the action of S_n. If S_n acts transitively on U then

Σ_{π ∈ V} e^{θ d(π, x)} = Σ_{π ∈ V} e^{θ d(π, x′)}  (8)

for all x, x′ ∈ U and θ ∈ ℝ. Proof. First, note that since V is invariant under the action of S_n, it follows that Vτ = V for each τ ∈ S_n. Indeed, Vτ ⊆ V by the invariance assumption, and V ⊆ Vτ since for x = A_1 | ⋯ | A_r ∈ V we have x′ = τ(A_1) | ⋯ | τ(A_r) ∈ V such that x′τ = x. Now, since S_n acts transitively on U, for all x, x′ ∈ U there is τ ∈ S_n such that xτ = x′. We thus have that

Σ_{π ∈ V} e^{θ d(π, x′)}  (9)
= Σ_{π ∈ V} e^{θ d(πτ⁻¹, x)}  (by right invariance of d)  (10)
= Σ_{π ∈ Vτ⁻¹} e^{θ d(π, x)}  (11)
= Σ_{π ∈ V} e^{θ d(π, x)}  (by invariance of V).  (12)

Thus, we can write Z(θ, x) = Z(θ), since the normalizing constant for x ∈ U does not in fact depend on x. The underlying generative model is given as follows. Assume that π ∈ V is drawn from the prior q(π) and that x_1, ..., x_k are independently drawn from generalized Mallows models

p_j(x_j | π) = e^{θ_j d(π, x_j)} / Z_j(θ_j)  (13)

where x_j ∈ U_j. Then under the conditions of Proposition 4.1, we have from Bayes' rule that the posterior distribution over π is given by

q(π) Π_j p_j(x_j | π) / Σ_{π′ ∈ V} q(π′) Π_j p_j(x_j | π′) = q(π) Π_j e^{θ_j d(π, x_j)} / Σ_{π′ ∈ V} q(π′) Π_j e^{θ_j d(π′, x_j)}  (14)
= p(π | θ, x̄).  (15)

We thus have the following characterization of p(π | θ, x̄). Proposition 4.2. If d is right invariant, V is invariant under the action of S_n, and S_n acts transitively on each U_j, then the model p(π | θ, x̄) defined in equation (6) is the posterior under independent sampling of generalized Mallows models, x_j ∼ p_j(· | π), with prior π ∼ q. The conditions of this proposition are satisfied, for example, when U_j = S_n/S_γ and V = S_n/S_{γ′}, as is assumed in the special cases of the next section. 5 Special Cases This section derives several special cases of model (6), corresponding to existing ensemble methods. The special cases correspond to different choices of V, U_j, Θ and d in the definition of the model. In each case q is taken to be uniform, though the extension to non-uniform q is immediate. Following [9], the unnormalized versions of all the models may be easily derived, corresponding to the exponential loss used in boosting.
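As a concrete sketch of model (6), the following enumerates p(π | θ, x̄) over full rankings of three items, with Kendall's Tau as the distance and uniform q (this is the fully-ranked special case discussed in Section 5.1; the choice n = 3 and the θ values here are arbitrary):

```python
import math
from itertools import combinations, permutations

def kendall_tau(pi, sigma):
    # number of discordant item pairs, Eq. (3); pi[i] = rank of item i
    return sum(1 for i, j in combinations(range(len(pi)), 2)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

def conditional_model(inputs, thetas):
    # Eq. (6) with V = U_j = S_3 and uniform carrier density q:
    # p(pi | theta, x) proportional to exp(sum_j theta_j * d(pi, x_j))
    V = list(permutations(range(3)))
    scores = {pi: math.exp(sum(t * kendall_tau(pi, x)
                               for t, x in zip(thetas, inputs)))
              for pi in V}
    Z = sum(scores.values())               # normalizer, Eq. (7)
    return {pi: s / Z for pi, s in scores.items()}

x1 = x2 = (0, 1, 2)                        # two base rankers that agree
p = conditional_model([x1, x2], thetas=[-1.0, -0.5])
assert abs(sum(p.values()) - 1.0) < 1e-12  # proper distribution over V
assert max(p, key=p.get) == (0, 1, 2)      # consensus ranking is the mode
```

With negative θ_j, rankings close to the inputs receive higher probability, which is why the consensus of the two agreeing rankers is the mode.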
Let Θ = ℝ^k, V = U_1 = ⋯ = U_k = W_n^{n−1} ≅ S_n, and let d(π, σ) be the minimum number of down-up (↓↑) moves on the Hasse diagram of W_n needed to bring π to σ. Since adjacent transpositions of permutations may be identified with a down move followed by an up move over the Hasse diagram, d(π, σ) is equal to Kendall's Tau T(π, σ). For example, T(1|2|3, 2|3|1) = 2, and the corresponding path in Figure 1 is

1|2|3 → 1,2|3 → 2|1|3 → 2|1,3 → 2|3|1.

In this case model (6) becomes the cranking model [10]

p(π | θ, σ̄) = (1 / Z(θ, σ̄)) exp( Σ_j θ_j T(π, σ_j) ),  θ ∈ ℝ^k,  π, σ_j ∈ S_n.  (16)

The Bayesian interpretation in this case is well known, and is derived in [6]. The generative model is independent sampling of σ_j from a Mallows model whose location parameter is π and whose scale parameter is θ_j. Other special cases that fall into this category are the models of Feigin [5] and Critchlow and Verducci [3]. 5.2 Logistic models Let Θ = ℝ^k, V = U_1 = ⋯ = U_k = S_n/S_{n−1}, and let d(π, σ)
be the minimum number of up-down (↑↓) moves in the Hasse diagram. Since V = U_j = S_n/S_{n−1},

d( S_{n−1}σ, S_{n−1}τ ) = 0 if σ⁻¹(1) = τ⁻¹(1), and 2 otherwise.  (17)

In this case model (6) becomes equivalent to the multiclass generalization of logistic regression. If the normalization constraints in the corresponding convex primal problem are removed, the model becomes discrete AdaBoost.M2; that is, d(π, x_j)/2 becomes the (discrete) multiclass weak learner h_j(x) ∈ {0, 1} in the usual boosting notation. See [9] for details on the correspondence between exponential models and the unnormalized models that correspond to AdaBoost. 5.3 Error correcting output codes A more interesting special case of the algebraic structure described in Sections 3 and 4 is where the ensemble method is error correcting output coding (ECOC) [4]. Here we set V = S_n/S_{n−1}, U_j = W_n^1, and take the parameter space to be

Θ = { θ ∈ ℝ^k : θ_1 = θ_2 = ⋯ = θ_k and θ_j < 0 }.  (18)

As before, d(π, g) is the minimal number of up-down (↑↓) moves in the Hasse diagram needed to bring π to g. Since V = S_n/S_{n−1}, the model computes probabilities of classifications y | Y∖{y}. On input x, the base rankers output g_1(x), ..., g_k(x) ∈ W_n^1, which corresponds to one of the binary classifiers in ECOC for the appropriate column of the binary coding matrix. For example, consider a binary classifier trained on the coding column (0, 1, 0, 1, 0). On an input x, the classifier outputs 0 or 1, corresponding to the partial rankings 1,3,5|2,4 and 2,4|1,3,5, respectively. Since π ∈ S_n/S_{n−1} and g ∈ W_n^1,

d(π, g) = d( y | Y∖{y}, A | Y∖A )  (19)
= 1 if y ∈ A, and 2 otherwise.  (20)

For example, if π = 2|1,3,4,5 and g = 2,4|1,3,5, then d(π, g) = 1, as can be seen from the sequence of moves

2|1,3,4,5 → 2|4|1,3,5 → 2,4|1,3,5.  (21)

If π = 1|2,3,4,5 and g = 2,4|1,3,5, then d(π, g) = 2, with the sequence of moves

1|2,3,4,5 → 1|2,4|3,5 → 1,2,4|3,5 → 2,4|1|3,5 → 2,4|1,3,5.  (22)

Since θ_1 = ⋯ = θ_k = θ, the exponent of the model becomes θ Σ_j d(π, g_j). At test time, the model thus selects the label corresponding to the partial ranking π̂ = arg max_{π ∈ V} p(π | θ, x̄). Now, since θ is strictly negative, p(π | θ, x̄) is a monotonically decreasing function in Σ_j d(π, g_j). Equivalence with the ECOC decision rule thus follows from the fact that Σ_j d(π, g_j) − k is the Hamming distance between the appropriate row of the coding matrix and the concatenation of the bits returned from the binary classifiers. Thus, with the appropriate definitions of V, U_j, Θ and d, the conditional model on the ranking poset is a probabilistic formulation of ECOC that yields the same classification decisions. This suggests ways in which ECOC might be naturally extended. First, relaxing the constraint θ_1 = θ_2 = ⋯ = θ_k results in a more general model that corresponds to ECOC with a weighted Hamming distance, or index sensitive "channel," where the learned weights may adapt to the precision of the various base classifiers. Another simple generalization results from using a nonuniform carrier density q. A further generalization is achieved by considering that for a given coding matrix, the trained classifier for a given column outputs either A | Y∖A or Y∖A | A depending on the input x. Allowing the output of the classifier instead to belong to other grades of W_n results in a model that corresponds to error correcting output codes with nonbinary codes. While this is somewhat antithetic to the original spirit of ECOC (reducing multiclass to binary), the base classifiers in ECOC are often multiclass classifiers such as decision trees in [4]. For such classifiers, the task instead can be viewed as reducing multiclass to partial ranking. Moreover, there need not be an explicit coding matrix. Instead, the input rankers may output different partial rankings for different inputs, which are then combined according to model (6). In this way, a different coding matrix is built for each example in a dynamic manner. Such a scheme may be attractive in bypassing the problem of designing the coding matrix. 6 Summary An algebraic framework has been presented for classification and ranking, leading to conditional models on the ranking poset that are defined in terms of an invariant distance or dissimilarity function. Using the invariance properties of the distances, we derived a generative interpretation of the probabilistic model, which may prove to be useful in model selection and validation. Through different choices of the components V, U_j, Θ and d, the family of models was shown to include as special cases the Mallows model, and the classifier combination methods used by logistic models, boosting, cranking, and error correcting output codes. In the case of ECOC, the poset framework shows how probabilities may be assigned to partial rankings in a way that is consistent with the usual definitions of ECOC, and suggests several natural extensions. Acknowledgments We thank D. Critchlow, G. Hulten and J. Verducci for helpful input on the paper. This work was supported in part by NSF grant CCR-0122581. References [1] M. Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 48, 2002. [2] D. E. Critchlow. Metric Methods for Analyzing Partially Ranked Data. Lecture Notes in Statistics, volume 34, Springer, 1985. [3] D. E. Critchlow and J. S. Verducci. Detecting a trend in paired rankings. Journal of the Royal Statistical Society C, 41(1):17–29, 1992. [4] T. G. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting codes. Journal of Artificial Intelligence Research, 2:263–286, 1995. [5] P. D. Feigin. Modeling and analyzing paired ranking data. In M. A. Fligner and J. S. Verducci, editors, Probability Models and Statistical Analyses for Ranking Data. Springer, 1992. [6] M. A. Fligner and J. S. Verducci. Posterior probabilities for a consensus ordering. Psychometrika, 55:53–63, 1990. [7] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning, 1996. [8] M. G. Kendall. A new measure of rank correlation. Biometrika, 30, 1938. [9] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In Advances in Neural Information Processing Systems, 15, 2001. [10] G. Lebanon and J. Lafferty. Cranking: Combining rankings using conditional probability models on permutations. In International Conference on Machine Learning, 2002. [11] C. L. Mallows.
Non-null ranking models. Biometrika, 44:114–130, 1957. [12] R. P. Stanley. Enumerative Combinatorics, volume 1. Wadsworth & Brooks/Cole Mathematics Series, 1986.
A Formulation for Minimax Probability Machine Regression Thomas Strohmann Department of Computer Science University of Colorado, Boulder strohman@cs.colorado.edu Gregory Z. Grudic Department of Computer Science University of Colorado, Boulder grudic@cs.colorado.edu Abstract We formulate the regression problem as one of maximizing the minimum probability, symbolized by Ω, that future predicted outputs of the regression model will be within some ±ε bound of the true regression function. Our formulation is unique in that we obtain a direct estimate of this lower probability bound Ω. The proposed framework, minimax probability machine regression (MPMR), is based on the recently described minimax probability machine classification algorithm [Lanckriet et al.] and uses Mercer Kernels to obtain nonlinear regression models. MPMR is tested on both toy and real world data, verifying the accuracy of the Ω bound, and the efficacy of the regression models. 1 Introduction The problem of constructing a regression model can be posed as maximizing the minimum probability of future predictions being within some bound of the true regression function. We refer to this regression framework as minimax probability machine regression (MPMR). For MPMR to be useful in practice, it must make minimal assumptions about the distributions underlying the true regression function, since accurate estimation of these distributions is prohibitive on anything but the most trivial regression problems. As with the minimax probability machine classification (MPMC) framework proposed in [1], we avoid the use of detailed distribution knowledge by obtaining a worst case bound on the probability that the regression model is within some ε > 0 of the true regression function.
Our regression formulation closely follows the classification formulation in [1] by making use of the following theorem due to Isii [2] and extended by Bertsimas and Sethuraman [3]:

sup_{E[z]=z̄, Cov[z]=Σ_z} Pr{ aᵀz ≥ b } = 1 / (1 + ω²),   ω² = inf_{aᵀz ≥ b} (z − z̄)ᵀ Σ_z⁻¹ (z − z̄)  (1)

where a and b are constants, z is a random vector, and the supremum is taken over all distributions having mean z̄ and covariance matrix Σ_z. This theorem assumes linear boundaries; however, as shown in [1], Mercer kernels can be used to obtain nonlinear versions of this theorem, giving one the ability to estimate upper and lower bounds on the probability that points generated from any distribution having mean z̄ and covariance Σ_z will be on one side of a nonlinear boundary. In [1], this formulation is used to construct nonlinear classifiers (MPMC) that maximize the minimum probability of correct classification on future data. In this paper we exploit the above theorem (1) for building nonlinear regression functions which maximize the minimum probability that future predictions will be within ±ε of the true regression function. We propose to implement MPMR by using MPMC to construct a classifier that separates two sets of points: the first set is obtained by shifting all of the regression data +ε along the dependent variable axis; and the second set is obtained by shifting all of the regression data −ε along the dependent variable axis. The separating surface (i.e. classification boundary) between these two classes corresponds to a regression surface, which we term the minimax probability machine regression model. The proposed MPMR formulation is unique because it directly computes a bound on the probability that the regression model is within ±ε of the true regression function (see Theorem 1 below). The theoretical foundations of MPMR are formalized in Section 2.
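The bound in (1) can be illustrated numerically: for the halfspace aᵀz ≥ b, the infimum in (1) has the closed form ω² = max(0, b − aᵀz̄)² / (aᵀΣ_z a), and the probability under any particular distribution with the given moments (here a Gaussian, an assumption of this sketch) cannot exceed 1/(1 + ω²):

```python
import numpy as np

def minimax_bound(a, b, z_mean, z_cov):
    # worst-case Pr{a.z >= b} over all distributions with these moments
    omega2 = max(0.0, b - a @ z_mean) ** 2 / (a @ z_cov @ a)
    return 1.0 / (1.0 + omega2)

a, b = np.array([1.0, 0.0]), 2.0
z_mean, z_cov = np.zeros(2), np.eye(2)
bound = minimax_bound(a, b, z_mean, z_cov)   # omega^2 = 4, bound = 0.2

rng = np.random.default_rng(0)
z = rng.multivariate_normal(z_mean, z_cov, size=100_000)
p_gauss = np.mean(z @ a >= b)                # ~0.023 for this Gaussian
assert p_gauss <= bound                      # the bound holds, loosely here
print(bound, p_gauss)
```

The gap between the worst-case bound (0.2) and the Gaussian probability shows that (1) is distribution-free: some distribution with these moments attains 0.2, but the Gaussian does not.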
Experimental results on synthetic and real data are given in Section 3, verifying the accuracy of the minimax probability regression bound and the efficacy of the regression models. Proofs of the two theorems presented in this paper are given in the appendix. Matlab and C source code for generating MPMR models can be downloaded from http://www.cs.colorado.edu/∼grudic/software. 2 Regression Model We assume that learning data is generated from some unknown regression function f : ℜ^d → ℜ that has the form:

y = f(x) + ρ  (2)

where x ∈ ℜ^d are generated according to some bounded distribution Λ, y ∈ ℜ, E[ρ] = 0, Var[ρ] = σ², and σ ∈ ℜ is finite. We are given N learning examples Γ = {(x_1, y_1), ..., (x_N, y_N)}, where ∀i ∈ {1, ..., N}, x_i = (x_{i1}, ..., x_{id}) ∈ ℜ^d is generated from the distribution Λ, and y_i ∈ ℜ. The goal of our formulation is two-fold: first we wish to use Γ to construct an approximation f̂ of f, such that, for any x generated from the distribution Λ, we can approximate y using

ŷ = f̂(x)  (3)

The second goal of our formulation is, for any ε ∈ ℜ, ε > 0, to estimate the bound on the minimum probability, symbolized by Ω, that f̂(x) is within ε of y (defined in (2)):

Ω = inf Pr { |ŷ − y| ≤ ε }  (4)

Our proposed formulation of the regression problem is unique because we obtain direct estimates of Ω. Therefore we can estimate the predictive power of a regression function by a bound on the minimum probability that we are within ε of the true regression function. We refer to a regression function that directly estimates (4) as a minimax probability machine regression (MPMR) model. The proposed MPMR formulation is based on the kernel formulation for minimax probability machine classification (MPMC) presented in [1]. Therefore, the MPMR model has the form:

ŷ = f̂(x) = Σ_{i=1}^{N} β_i K(x_i, x) + b  (5)

where K(x_i, x) = φ(x_i)·φ(x) is a kernel satisfying Mercer's conditions, x_i, ∀i ∈ {1, ..., N}, are obtained from the learning data Γ, and β_i, b ∈ ℜ are outputs of the MPMR learning algorithm.
2.1 Kernel Based MPM Classification Before formalizing the MPMR algorithm for calculating β_i and b from the training data Γ, we first describe the MPMC formulation upon which it is based. In [1], the binary classification problem is posed as one of maximizing the probability of correctly classifying future data. Specifically, two sets of points are considered, here symbolized by {u_1, ..., u_{Nu}}, where ∀i ∈ {1, ..., Nu}, u_i ∈ ℜ^m, belonging to the first class, and {v_1, ..., v_{Nv}}, where ∀i ∈ {1, ..., Nv}, v_i ∈ ℜ^m, belonging to the second class. The points u_i are assumed to be generated from a distribution that has mean ū and a covariance matrix Σ_u, and correspondingly, the points v_i are assumed to be generated from a distribution that has mean v̄ and a covariance matrix Σ_v. For the nonlinear kernel formulation, these points are mapped into a higher dimensional space φ : ℜ^m → ℜ^h as follows: u ↦ φ(u) with corresponding mean and covariance matrix (φ̄(u), Σ_{φ(u)}), and v ↦ φ(v) with corresponding mean and covariance matrix (φ̄(v), Σ_{φ(v)}). The binary classifier derived in [1] has the form (c = −1 for the first class and c = +1 for the second):

c = sign[ Σ_{i=1}^{Nu+Nv} γ_i K_c(z_i, z) + b_c ]  (6)

where K_c(z_i, z) = φ(z_i)·φ(z), z_i = u_i for i = 1, ..., Nu, z_i = v_{i−Nu} for i = Nu + 1, ..., Nu + Nv, and γ = (γ_1, ..., γ_{Nu+Nv}), b_c are obtained by solving the following optimization problem:

min_γ ( ‖ K̃_u γ ‖_2 / √Nu + ‖ K̃_v γ ‖_2 / √Nv )   s.t.  γᵀ( k̃_u − k̃_v ) = 1  (7)

where K̃_u = K_u − 1_{Nu} k̃_uᵀ; where K̃_v = K_v − 1_{Nv} k̃_vᵀ; where k̃_v, k̃_u ∈ ℜ^{Nu+Nv} are defined as: [k̃_v]_i = (1/Nv) Σ_{j=1}^{Nv} K_c(v_j, z_i) and [k̃_u]_i = (1/Nu) Σ_{j=1}^{Nu} K_c(u_j, z_i); where 1_k is a k dimensional column vector of ones; where K_u contains the first Nu rows of the Gram matrix K (i.e. a square matrix consisting of the elements K_{ij} = K_c(z_i, z_j)); and finally K_v contains the last Nv rows of the Gram matrix K. Given that γ solves the minimization problem in (7), b_c can be calculated using:

b_c = γᵀ k̃_u − κ √( (1/Nu) γᵀ K̃_uᵀ K̃_u γ ) = γᵀ k̃_v + κ √( (1/Nv) γᵀ K̃_vᵀ K̃_v γ )  (8)

where

κ = ( √( (1/Nu) γᵀ K̃_uᵀ K̃_u γ ) + √( (1/Nv) γᵀ K̃_vᵀ K̃_v γ ) )⁻¹  (9)

One significant advantage of this framework for binary classification is that, given perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v, the maximum probability of incorrect classification is bounded by 1 − α, where α can be directly calculated from κ as follows:

α = κ² / (1 + κ²)  (10)

This result is used below to formulate a lower bound on the probability that the approximated regression function is within ε of the true regression function. 2.2 Kernel Based MPM Regression In order to use the above MPMC formulation for our proposed MPMR framework, we first take the original learning data Γ and create two classes of points u_i ∈ ℜ^{d+1} and v_i ∈ ℜ^{d+1}, for i = 1, ..., N, as follows:

u_i = (y_i + ε, x_{i1}, x_{i2}, ..., x_{id})
v_i = (y_i − ε, x_{i1}, x_{i2}, ..., x_{id})  (11)

Given these two sets of points, we obtain γ by minimizing equation (7). Then, from (6), the MPM classification boundary between the points u_i and v_i is given by

Σ_{i=1}^{2N} γ_i K_c(z_i, z) + b_c = 0  (12)

We interpret this classification boundary as a regression surface because it acts to separate points which are ε above the y values in the learning set Γ, and ε below the y values in Γ.
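The ±ε construction in (11), and the idea of reading a regression value off the classification boundary (12), can be sketched with any linear binary classifier in place of MPMC (a toy sketch; the least-squares classifier and the example data here are illustrative):

```python
import numpy as np

def make_shifted_classes(X, y, eps):
    # Eq. (11): prepend y +/- eps as an extra first coordinate
    u = np.column_stack([y + eps, X])   # class +1
    v = np.column_stack([y - eps, X])   # class -1
    return u, v

# toy data generated from y = 2x + 1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = 2 * X[:, 0] + 1
u, v = make_shifted_classes(X, y, eps=0.1)

# fit a plain least-squares linear classifier on the two classes
Z = np.vstack([u, v])
labels = np.concatenate([np.ones(len(u)), -np.ones(len(v))])
A = np.column_stack([Z, np.ones(len(Z))])          # affine features
w, *_ = np.linalg.lstsq(A, labels, rcond=None)

def predict(x):
    # Eq. (12) analogue: solve w . (yhat, x, 1) = 0 for yhat
    return -(w[1:-1] @ x + w[-1]) / w[0]

print(predict(np.array([0.5])))   # close to 2*0.5 + 1 = 2.0
```

Because the separating hyperplane sits midway between the +ε and −ε copies of the data, solving the boundary equation for the first coordinate recovers the regression value.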
Furthermore, given any point x = (x_1, ..., x_d) generated from the distribution Λ, calculating ŷ, the regression model output (5), involves finding a ŷ that solves equation (12), where z = (ŷ, x_1, ..., x_d), and, recalling from above, z_i = u_i for i = 1, ..., N, z_i = v_{i−N} for i = N + 1, ..., 2N (note that Nu = Nv = N). If K_c(z_i, z) is nonlinear, solving (12) for ŷ is in general a nonlinear single variable optimization problem, which can be solved using a root finding algorithm (for example the Newton-Raphson method outlined in [4]). However, below we present a specific form of nonlinear K_c(z_i, z) that allows (12) to be solved analytically. It is interesting to note that the above formulation of a regression model can be derived using any binary classification algorithm, and is not limited to the MPMC algorithm. Specifically, if a binary classifier is built to separate the two sets of points (11), then finding a crossing point ŷ where the classifier separates these classes for some input x = (x_1, ..., x_d) is equivalent to finding the output of the regression model for input x. It would be interesting to explore the efficacy of various classification algorithms for this type of regression model formulation. However, as formalized in Theorem 1 below, using the MPM framework gives us one clear advantage over other techniques. We now state the main result of this paper: Theorem 1: For any x = (x_1, ..., x_d) generated according to the distribution Λ, assume that there exists only one ŷ that solves equation (12). Assume also perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v. Then, the minimum probability that ŷ is within ε of y (as defined in (2)) is given by:

Ω = inf Pr { |ŷ − y| ≤ ε } = κ² / (1 + κ²)  (13)

where κ is defined in (9). Proof: See Appendix. Therefore, from the above theorem, the MPMC framework directly computes the lower bound on the probability that the regression model is within ε of the function that generated the learning data Γ (i.e.
the true regression function). However, one key requirement of the theorem is perfect knowledge of the statistics ū, Σ_u, v̄, Σ_v. In the actual implementation of MPMR, these statistics are estimated from Γ, and it is an open question (which we address in Section 3) as to how accurately Ω can be estimated from real data. In order to avoid the use of nonlinear optimization techniques to solve (12) for ŷ, we restrict the form of the kernel K_c(z_i, z) to the following:

K_c(z_i, z) = y′_i ŷ + K(x_i, x)  (14)

where K(x_i, x) = φ(x_i)·φ(x) is a kernel satisfying Mercer's conditions; where z = (ŷ, x_1, ..., x_d); where z_i = u_i, y′_i = y_i + ε for i = 1, ..., N; and where z_i = v_{i−N}, y′_i = y_{i−N} − ε for i = N + 1, ..., 2N. Given this restriction on K_c(z_i, z), we now state our final theorem, which uses the following lemma: Lemma 1:

k̃_u − k̃_v = 2ε y′  (15)

Proof: See Appendix. Theorem 2: Assume that (14) is true. Then all of the following are true: Part 1: Equation (12) has an analytical solution as defined in (5), where

β_i = −2ε (γ_i + γ_{i+N}),   b = −2ε b_c

Part 2: K̃_u = K̃_v. Part 3: The problem of finding an optimal γ in (7) is reduced to solving the following linear least squares problem for t ∈ ℜ^{2N−1}:

min_t ‖ K̃_u ( γ_0 + F t ) ‖_2

where γ = γ_0 + F t, γ_0 = ( k̃_u − k̃_v ) / ‖ k̃_u − k̃_v ‖_2², and F ∈ ℜ^{2N×(2N−1)} is an orthogonal matrix whose columns span the subspace of vectors orthogonal to k̃_u − k̃_v. Proof: See Appendix. Therefore, Theorem 2 establishes that the MPMR formulation proposed in this paper has a closed form analytical solution, and its computational complexity is equivalent to solving a linear system of 2N − 1 equations in 2N − 1 unknowns. 3 Experimental Results For complete implementation details of the MPMR algorithm used in the following experiments, see the Matlab and C source code available at http://www.cs.colorado.edu/∼grudic/software. Toy Sinc Data: Our toy example uses the noisy sinc function y_i = sin(πx_i)/(πx_i) + ν_i, i = 1, ..., N, where ν_i is drawn from a Gaussian distribution with mean 0 and variance σ² [5]. We use a RBF kernel K(a, b) = exp(−|a − b|²) and N = 100 training examples. Figure 1 (a), (b), and (c), and Table 1 show the results for different variances σ² and a constant value of ε = 0.2.

Table 1: Results over 100 random trials for sinc data: mean squared errors and the standard deviation; MPTDε: fraction of test points that are within ε = 0.2 of y; predicted Ω: predicted probability that the model is within ε = 0.2 of y.

                       mean squared error   MPTDε             predicted Ω
σ² = 0    mean (std)   0.0 (0.0)            1.0 (0.0)         1.0 (0.0)
σ² = 0.5  mean (std)   0.0524 (0.0386)      0.6888 (0.1133)   0.1610 (0.0229)
σ² = 1.0  mean (std)   0.2592 (0.3118)      0.3870 (0.1110)   0.0463 (0.0071)

Figure 1 (d) and (e) illustrate how different tube sizes 0.05 ≤ ε ≤ 2 affect the mean squared error (on 100 random test points), the predicted Ω and the measured percentage of test data within ε (here called MPTDε) of the regression model. Each experiment consists of 100 random trials. The average mean squared error in (e) has a small deviation (0.0453) over all tested ε and was always within the range 0.19 to 0.35. This indicates that the accuracy of the regression model is essentially independent of the choice of ε. Also note that the mean predicted Ω is a lower bound on the mean MPTDε. The tightness of this lower bound varies for different amounts of noise (Table 1) and different choices of ε (Figure 1 d). Boston Housing Data: We test MPMR on the widely used Boston housing regression data available from the UCI repository. Following the experiments done in [5], we use the RBF kernel K(a, b) = exp(−‖a − b‖/(2σ²)), where 2σ² = 0.3 · d and d = 13 for this data set. No attempt was made to pick optimal values for σ using cross validation.
The Boston housing data contains 506 examples, which we randomly divided into N = 481 training examples and 25 testing examples for each test run. 100 such random tests were run for each of ε = 0.1, 1.0, 2.0, ..., 10.0. Results are reported in Table 2 for: 1) the average mean squared error and its standard deviation; 2) MPTDε, the fraction of test points that are within ε of y, and its standard deviation; and 3) the predicted Ω, the predicted probability that the model is within ε of y, and its standard deviation. We first note that the results compare favorably to those reported for other state-of-the-art regression algorithms [5], even though

Figure 1: Experimental results on toy sinc data. (a)–(c): learning examples, the true regression function, and the MPMR function with its ±ε tube, for ε = 0.2 and σ² = 0, 0.5, and 1.0 respectively; (d): MPTDε (± std) and predicted Ω (± std) as a function of ε, for σ² = 1.0; (e): average mean squared error on test data (100 runs) as a function of ε, for σ² = 1.0.

Table 2: Results over 100 random trials for the Boston Housing Data for ε = 0.1, 1.0, 2.0, ..., 10.0: mean squared errors and their standard deviation; MPTDε: fraction of test points that are within ε of y, and its standard deviation; predicted Ω: predicted probability that the model is within ε of y, and its standard deviation.
ε       0.1     1.0    2.0    3.0    4.0    5.0    6.0    7.0    8.0    9.0    10.0
MSE     9.9     10.5   10.9   9.5    10.3   9.9    10.5   10.5   9.2    10.1   10.6
STD     5.9     9.5    8.6    5.9    8.1    8.0    8.5    8.1    5.3    6.9    7.6
MPTDε   0.05    0.33   0.58   0.76   0.84   0.89   0.93   0.95   0.97   0.97   0.98
STD     0.04    0.09   0.09   0.08   0.07   0.06   0.05   0.04   0.03   0.03   0.02
Ω       0.002   0.19   0.51   0.69   0.80   0.87   0.90   0.92   0.94   0.95   0.96
STD     0.0005  0.03   0.06   0.05   0.04   0.03   0.01   0.01   0.009  0.009  0.008

no attempt was made to optimize for σ. Second, as with the toy data, the errors are relatively independent of ε. Finally, we note that the mean predicted Ω is lower than the measured average MPTDε, thus validating that the MPMR algorithm does indeed predict an effective lower bound on the probability that the regression model is within ε of the true regression function.

4 Discussion and Conclusion

We formalize the regression problem as one of maximizing the minimum probability, Ω, that the regression model is within ±ε of the true regression function. By estimating mean and covariance matrix statistics of the regression data (and making no other assumptions about the underlying true regression function distribution), the proposed minimax probability machine regression (MPMR) algorithm obtains a direct estimate of Ω. Two theorems are presented proving that, given perfect knowledge of the mean and covariance statistics of the true regression function, the proposed MPMR algorithm directly computes the exact lower probability bound Ω. We are unaware of any other nonlinear regression model formulation that has this property. Experimental results are given showing: 1) the regression models produced are competitive with existing state-of-the-art models; 2) the mean squared error on test data is relatively independent of the choice of ε; and 3) estimating mean and covariance statistics directly from the learning data gives accurate estimates of the lower probability bound Ω that the regression model is within ±ε of the true regression function, thus supporting our theoretical results.
Future research will focus on a theoretical analysis of the conditions under which the accuracy of the regression model is independent of ε. Also, we are analyzing the rate, as a function of sample size, at which estimates of the lower probability bound Ω converge to the true value. Finally, the proposed minimax probability machine regression framework is a new formulation of the regression problem, and therefore its properties can only be fully understood through extensive experimentation. We are currently applying MPMR to a wide variety of regression problems and have made Matlab / C source code available (http://www.cs.colorado.edu/∼grudic/software) for others to do the same.

References

[1] G. R. G. Lanckriet, L. E. Ghaoui, C. Bhattacharyya, and M. I. Jordan. Minimax probability machine. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[2] A. W. Marshall and I. Olkin. Multivariate Chebyshev inequalities. Annals of Mathematical Statistics, 31(4):1001–1014, 1960.
[3] I. Popescu and D. Bertsimas. Optimal inequalities in probability theory: A convex optimization approach. Technical Report TM62, INSEAD, Dept. Math. O.R., Cambridge, Mass, 2001.
[4] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C. Cambridge University Press, New York, NY, 1988.
[5] B. Schölkopf, P. L. Bartlett, A. J. Smola, and R. Williamson. Shrinking the tube: A new support vector regression algorithm. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, Cambridge, MA, 1999. MIT Press.

Appendix: Proofs of Theorems 1 and 2

Proof of Theorem 1: Consider any point x = (x1, ..., xd) generated according to the distribution Λ.
This point will have a corresponding y (defined in (2)), and from (10), the probability that z_{+ε} = (y + ε, x_1, ..., x_d) will be classified correctly (as belonging to class u) by (6) is α. Furthermore, the classification boundary occurs uniquely at the point z = (ŷ, x_1, ..., x_d), where, from the assumptions, ŷ is the unique solution to (12). Similarly, for the same point y, the probability that z_{−ε} = (y − ε, x_1, ..., x_d) will be classified correctly (as belonging to class v) by (6) is also α, and the classification boundary again occurs uniquely at z = (ŷ, x_1, ..., x_d). Therefore, both z_{+ε} and z_{−ε} are, with probability α, on the correct side of the regression surface defined by z = (ŷ, x_1, ..., x_d). Since z_{+ε} differs from z by at most +ε in the first dimension and z_{−ε} differs from z by at most −ε in the first dimension, the minimum bound on the probability that |y − ŷ| ≤ ε is α (defined in (10)), which has the same form as Ω. This completes the proof.

Proof of Lemma 1:

[k̃_u]_i − [k̃_v]_i = (1/N) Σ_{l=1}^N K_c(u_l, z_i) − (1/N) Σ_{l=1}^N K_c(v_l, z_i)
                   = (1/N) Σ_{l=1}^N [ (y_l + ε) y′_i + K(x_l, x_i) − ((y_l − ε) y′_i + K(x_l, x_i)) ]
                   = (1/N) · N · 2ε y′_i
                   = 2ε y′_i

Proof of Theorem 2:

Part 1: Plugging (14) into (12), we get:

0 = Σ_{i=1}^{2N} γ_i [ y′_i ŷ + K(x_i, x) ] + b_c
0 = Σ_{i=1}^{N} γ_i [ (y_i + ε) ŷ + K(x_i, x) ] + Σ_{i=1}^{N} γ_{i+N} [ (y_i − ε) ŷ + K(x_i, x) ] + b_c
0 = Σ_{i=1}^{N} { (γ_i + γ_{i+N}) [ y_i ŷ + K(x_i, x) ] + (γ_i − γ_{i+N}) ε ŷ } + b_c

When we solve analytically for ŷ, giving (5), the coefficients β_i and the offset b have the common denominator

−Σ_{i=1}^{N} [ (γ_i + γ_{i+N}) y_i + (γ_i − γ_{i+N}) ε ] = −γ^T y′.

Applying Lemma 1 and (7) we obtain

1 = γ^T (k̃_u − k̃_v) = γ^T (2ε y′)  ⇔  −γ^T y′ = −1/(2ε)

for the denominator of β_i and b.

Part 2: The values z_i are defined as: z_1 = u_1, ..., z_N = u_N, z_{N+1} = v_1 = u_1 − (2ε, 0, ..., 0)^T, ..., z_{2N} = v_N = u_N − (2ε, 0, ..., 0)^T.
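The identity in Lemma 1 is easy to verify numerically. The sketch below (an illustration, not the paper's code) builds the shifted points u_l = (y_l + ε, x_l) and v_l = (y_l − ε, x_l), uses the augmented kernel K_c((y, x), (y′, x′)) = y y′ + K(x, x′) with an assumed RBF base kernel, and checks that [k̃_u]_i − [k̃_v]_i = 2ε y′_i, where y′_i is the first coordinate of z_i:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, eps = 8, 3, 0.2

X = rng.normal(size=(N, d))
y = rng.normal(size=N)

def K(a, b):
    # assumed base kernel on x; the identity holds for any choice of K
    return np.exp(-np.sum((a - b) ** 2))

# Shifted classification points: u_l = (y_l + eps, x_l), v_l = (y_l - eps, x_l)
U = np.hstack([(y + eps)[:, None], X])
V = np.hstack([(y - eps)[:, None], X])
Z = np.vstack([U, V])                # z_1..z_N = u_l, z_{N+1}..z_{2N} = v_l

def Kc(a, b):
    # K_c((y, x), (y', x')) = y y' + K(x, x')
    return a[0] * b[0] + K(a[1:], b[1:])

k_u = np.array([np.mean([Kc(U[l], Z[i]) for l in range(N)]) for i in range(2 * N)])
k_v = np.array([np.mean([Kc(V[l], Z[i]) for l in range(N)]) for i in range(2 * N)])

yp = Z[:, 0]                         # y'_i: first coordinate of z_i
print("Lemma 1 identity holds:", np.allclose(k_u - k_v, 2 * eps * yp))
```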
Since K̃_u = K_u − 1_N k̃_u, a single matrix entry is

[K̃_u]_{i,j} = K_c(u_i, z_j) − (1/N) Σ_{l=1}^N K_c(u_l, z_j),   i = 1, ..., N,  j = 1, ..., 2N.

Similarly, the entries of K̃_v are

[K̃_v]_{i,j} = K_c(v_i, z_j) − (1/N) Σ_{l=1}^N K_c(v_l, z_j),   i = 1, ..., N,  j = 1, ..., 2N.

We show that these entries are the same for all i and j:

[K̃_u]_{i,j} = K_c(v_i + (2ε, 0, ..., 0)^T, z_j) − (1/N) Σ_{l=1}^N K_c(v_l + (2ε, 0, ..., 0)^T, z_j)
            = K_c(v_i, z_j) + 2ε[z_j]_1 − (1/N) ( Σ_{l=1}^N K_c(v_l, z_j) + N · 2ε[z_j]_1 )
            = K_c(v_i, z_j) + 2ε[z_j]_1 − (1/N) Σ_{l=1}^N K_c(v_l, z_j) − 2ε[z_j]_1
            = K_c(v_i, z_j) − (1/N) Σ_{l=1}^N K_c(v_l, z_j)
            = [K̃_v]_{i,j}.

This completes the proof of Part 2.

Part 3: From Part 2 we know that K̃_u = K̃_v. Therefore, the minimization problem (7) collapses to min_γ ∥K̃_u γ∥²_2 (the factor N is constant and can be removed). Formulating this minimization using the orthogonal matrix F and an initial vector γ_0, it becomes (see [1]): min_{t ∈ ℝ^{2N−1}} ∥K̃_u (γ_0 + F t)∥²_2. We set h(t) = ∥K̃_u (γ_0 + F t)∥²_2. Therefore, to find the minimum we must solve the 2N − 1 linear equations 0 = dh/dt_i, i = 1, ..., 2N − 1. This completes the proof of Part 3.
2002
Multiclass Learning by Probabilistic Embeddings Ofer Dekel and Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel {oferd,singer}@cs.huji.ac.il Abstract We describe a new algorithmic framework for learning multiclass categorization problems. In this framework a multiclass predictor is composed of a pair of embeddings that map both instances and labels into a common space. In this space each instance is assigned the label it is nearest to. We outline and analyze an algorithm, termed Bunching, for learning the pair of embeddings from labeled data. A key construction in the analysis of the algorithm is the notion of probabilistic output codes, a generalization of error correcting output codes (ECOC). Furthermore, the method of multiclass categorization using ECOC is shown to be an instance of Bunching. We demonstrate the advantage of Bunching over ECOC by comparing their performance on numerous categorization problems. 1 Introduction The focus of this paper is supervised learning from multiclass data. In multiclass problems the goal is to learn a classifier that accurately assigns labels to instances where the set of labels is of finite cardinality and contains more than two elements. Many machine learning applications employ a multiclass categorization stage. Notable examples are document classification, spoken dialog categorization, optical character recognition (OCR), and part-of-speech tagging. Dietterich and Bakiri [6] proposed a technique based on error correcting output coding (ECOC) as a means of reducing a multiclass classification problem to several binary classification problems and then solving each binary problem individually to obtain a multiclass classifier. More recent work of Allwein et al. [1] provided analysis of the empirical and generalization errors of ECOC-based classifiers.
In the above papers, as well as in most previous work on ECOC, learning the set of binary classifiers and selecting a particular error correcting code are done independently. An exception is a method based on continuous relaxation of the code [3], in which the code matrix is post-processed once based on the learned binary classifiers. The inherent decoupling of the learning process from the class representation problem employed by ECOC is both a blessing and a curse. On one hand it offers great flexibility and modularity; on the other hand, the resulting binary learning problems might be unnatural and therefore potentially difficult. We instead describe and analyze an approach that ties the learning problem to the class representation problem. The approach we take perceives the set of binary classifiers as an embedding of the instance space and the code matrix as an embedding of the label set into a common space. In this common space each instance is assigned the label from which its divergence is smallest. To construct these embeddings, we introduce the notion of probabilistic output codes. We then describe an algorithm that constructs the label and instance embeddings such that the resulting classifier achieves a small empirical error. The result is a paradigm that includes ECOC as a special case. The algorithm we describe, termed Bunching, alternates between two steps. One step improves the embedding of the instance space into the common space while keeping the embedding of the label set fixed. This step is analogous to the learning stage of the ECOC technique, where a set of binary classifiers are learned with respect to a predefined code. The second step complements the first by updating the label embedding while keeping the instance embedding fixed. The two alternating steps resemble the steps performed by the EM algorithm [5] and by Alternating Minimization [4].
The techniques we use in the design and analysis of the Bunching algorithm also build on recent results in classification learning using Bregman divergences [8, 2]. The paper is organized as follows. In the next section we give a formal description of the multiclass learning problem and of our classification setting. In Sec. 3 we give an alternative view of ECOC which naturally leads to the definition of probabilistic output codes presented in Sec. 4. In Sec. 5 we cast our learning problem as a minimization problem of a continuous objective function, and in Sec. 6 we present the Bunching algorithm. We describe experimental results that demonstrate the merits of our approach in Sec. 7 and conclude in Sec. 8.

2 Problem Setting

Let X be a domain of instance encodings from ℝ^m and let Y be a set of r labels that can be assigned to each instance from X. Given a training set of instance-label pairs S = {(x_j, y_j)}_{j=1}^n such that each x_j is in X and each y_j is in Y, we are faced with the problem of learning a classification function that predicts the labels of instances from X. This problem is often referred to as multiclass learning. In other multiclass problem settings it is common to encode the set Y as a prefix of the integers {1, ..., r}; however, in our setting it will prove useful to assume that the labels are encoded as the set of r standard unit vectors in ℝ^r. That is, the i'th label in Y is encoded by the vector whose i'th component is set to 1 and all of whose other components are set to 0.

Figure 1: An illustration of the embedding model used.

The classification functions we study in this paper are composed of a pair of embeddings from the spaces X and Y into a common space Z, and a measure of divergence between vectors in Z. That is, given an instance x ∈ X, we embed it into Z along with all of the label vectors in Y and predict the label that x is closest to in Z.
The measure of distance between vectors in Z builds upon the definitions given below. The logistic transformation σ : ℝ^s → (0, 1)^s is defined by σ_k(ω) = (1 + e^{−ω_k})^{−1} for all k = 1, ..., s. The entropy of a multivariate Bernoulli random variable with parameter p ∈ [0, 1]^s is

H[p] = − Σ_{k=1}^s [ p_k log(p_k) + (1 − p_k) log(1 − p_k) ].

The Kullback-Leibler (KL) divergence between a pair of multivariate Bernoulli random variables with respective parameters p, q ∈ [0, 1]^s is

D[p ∥ q] = Σ_{k=1}^s [ p_k log(p_k / q_k) + (1 − p_k) log((1 − p_k) / (1 − q_k)) ].   (1)

Returning to our method of classification, let s be some positive integer and let Z denote the space [0, 1]^s. Given any two linear mappings T : ℝ^m → ℝ^s and C : ℝ^r → ℝ^s, where T is given as a matrix in ℝ^{s×m} and C as a matrix in ℝ^{s×r}, instances from X are embedded into Z by σ(Tx) and labels from Y are embedded into Z by σ(Cy). An illustration of the two embeddings is given in Fig. 1. We define the divergence between any two points z1, z2 ∈ Z as the sum of the KL-divergence between them and the entropy of z1, D[z1 ∥ z2] + H[z1]. We now define the loss ℓ of each instance-label pair as the divergence of their respective images,

ℓ(x, y|C, T) = D[σ(Cy) ∥ σ(Tx)] + H[σ(Cy)].   (2)

This loss is clearly non-negative, and it can be zero iff x and y are embedded to the same point in Z and the entropy of this point is zero. ℓ is our means of classifying new instances: given a new instance x, we predict its label to be ŷ if

ŷ = argmin_{y∈Y} ℓ(x, y|C, T).   (3)

For brevity, we restrict ourselves to the case where only a single label attains the minimum loss, and our classifier is thus always well defined. We point out that our analysis is still valid when this constraint is relaxed. We name the loss over the entire training set S the empirical loss and use the notation

L(S|C, T) = Σ_{(x,y)∈S} ℓ(x, y|C, T).   (4)

Our goal is to learn a good multiclass prediction function by finding a pair (C, T) that attains a small empirical loss.
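A minimal sketch of these definitions and of the resulting classification rule of Eq. (3) (an illustration of the model, not the authors' implementation; the `errstate`/`nan_to_num` guards for the 0 log 0 := 0 convention are my additions):

```python
import numpy as np

def sigma(w):
    # componentwise logistic transformation
    return 1.0 / (1.0 + np.exp(-w))

def entropy(p):
    # entropy of a multivariate Bernoulli with parameter p (0 log 0 := 0)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(np.nan_to_num(t).sum())

def kl(p, q):
    # KL divergence between multivariate Bernoulli parameters, Eq. (1)
    with np.errstate(divide="ignore", invalid="ignore"):
        t = p * np.log(p / q) + (1.0 - p) * np.log((1.0 - p) / (1.0 - q))
    return float(np.nan_to_num(t).sum())

def loss(x, y, C, T):
    # l(x, y | C, T) = D[sigma(Cy) || sigma(Tx)] + H[sigma(Cy)], Eq. (2)
    p, q = sigma(C @ y), sigma(T @ x)
    return kl(p, q) + entropy(p)

def predict(x, C, T, labels):
    # Eq. (3): assign x the label with the smallest divergence in Z
    return min(labels, key=lambda y: loss(x, np.asarray(y, float), C, T))
```

Here `labels` would be the set of r standard unit vectors, e.g. `[tuple(row) for row in np.eye(r)]`.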
As we show in the sequel, the rationale behind this choice of empirical loss lies in the fact that it bounds the (discrete) empirical classification error attained by the classification function.

3 An Alternative View of Error Correcting Output Codes

The technique of ECOC uses error correcting codes to reduce an r-class classification problem to multiple binary problems. Each binary problem is then learned independently via an external binary learning algorithm, and the learned binary classifiers are combined into one r-class classifier. We begin by giving a brief overview of ECOC for the case where the binary learning algorithm used is a logistic regressor. A binary output code C is a matrix in {0, 1}^{s×r} where each of C's columns is an s-bit code word that corresponds to a label in Y. Recall that the set of labels Y is assumed to be the standard unit vectors in ℝ^r. Therefore, the code word corresponding to the label y is simply the product of the matrix C and the vector y, Cy. The distance ρ of a code C is defined as the minimal Hamming distance between any two code words; formally,

ρ(C) = min_{i≠j} Σ_{k=1}^s [ C_{k,i}(1 − C_{k,j}) + C_{k,j}(1 − C_{k,i}) ].

For any k ∈ {1, ..., s}, the k'th row of C, denoted henceforth by C_k, defines a partition of the set of labels Y into two disjoint subsets: the labels for which C_k · y = 0 (i.e., the labels in Y which are mapped according to C_k to the binary label 0) and the labels for which C_k · y = 1. Thus, each C_k induces a binary classification problem from the original multiclass problem. Formally, we construct for each k a binary-labeled sample S_k = {(x_j, C_k · y_j)}_{j=1}^n, and for each S_k we learn a binary classification function T_k : X → ℝ using a logistic regression algorithm. That is, for each original instance x_j and induced binary label C_k · y_j we posit a logistic model that estimates the conditional probability that C_k · y_j equals 1 given x_j,

Pr[C_k · y_j = 1 | x_j ; T_k] = σ(T_k · x_j).
(5) Given a predefined code matrix C, the learning task at hand is to find T*_k that maximizes the log-likelihood of the labeling given in S_k,

T*_k = argmax_{T_k ∈ ℝ^m} Σ_{j=1}^n log Pr[C_k · y_j | x_j ; T_k].   (6)

Defining 0 log 0 = 0, we can use the logistic estimate in Eq. (5) and the KL-divergence from Eq. (1) to rewrite Eq. (6) as

T*_k = argmin_{T_k ∈ ℝ^m} Σ_{j=1}^n D[C_k · y_j ∥ σ(T_k · x_j)].

In words, a good set of binary predictors is found by minimizing the sample-averaged KL-divergence between the binary vectors induced by C and the logistic estimates induced by T_1, ..., T_s. Let T* be the matrix in ℝ^{s×m} constructed by concatenating the row vectors {T*_k}_{k=1}^s. For any instance x ∈ X, σ(T*x) is a vector of probability estimates that the label of x is 1 for each of the s induced binary problems. We can summarize the learning task defined by the code C as the task of finding a matrix T* such that

T* = argmin_{T ∈ ℝ^{s×m}} Σ_{j=1}^n D[C y_j ∥ σ(T x_j)].

Given a code matrix C and a transformation T, we classify a new instance as follows:

ŷ = argmin_{y∈Y} D[Cy ∥ σ(Tx)].   (7)

A classification error occurs if the predicted label ŷ is different from the correct label y. Building on Thm. 1 from Allwein et al. [1], it is straightforward to show that the empirical classification error (ŷ ≠ y) is bounded above by the empirical KL-divergence between the correct code word Cy and the estimated probabilities σ(Tx), divided by the code distance:

|{j : ŷ_j ≠ y_j}| ≤ ( Σ_{j=1}^n D[C y_j ∥ σ(T x_j)] ) / ρ(C).   (8)

This bound is a special case of the bound given below in Thm. 1 for general probabilistic output codes. We therefore defer the discussion of this bound to the following section.

4 Probabilistic Output Codes

We now describe a relaxation of binary output codes by defining the notion of probabilistic output codes. We give a bound on the empirical error attained by a classifier that uses probabilistic output codes, which generalizes the bound in Eq. (8).
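The ECOC decoding rule of Eq. (7) can be sketched as follows. The tiny one-vs-rest code and the matrix T below are made-up illustrations (T's rows act as three confident binary classifiers), and the clipping constant guarding log 0 is my addition:

```python
import numpy as np

def sigma(w):
    return 1.0 / (1.0 + np.exp(-w))

def ecoc_predict(x, C, T):
    # Eq. (7): choose the label whose code word Cy minimizes D[Cy || sigma(Tx)]
    q = np.clip(sigma(T @ x), 1e-12, 1.0 - 1e-12)
    def d(p):  # KL divergence to a binary code word p, with 0 log 0 := 0
        with np.errstate(divide="ignore", invalid="ignore"):
            t = np.where(p > 0, p * np.log(p / q), 0.0) + \
                np.where(p < 1, (1.0 - p) * np.log((1.0 - p) / (1.0 - q)), 0.0)
        return t.sum()
    return int(np.argmin([d(C[:, j]) for j in range(C.shape[1])]))

# Made-up example: a one-vs-rest code for r = 3 labels (s = r = 3)
C = np.eye(3)
T = np.array([[ 4.0,  0.0],
              [-4.0,  0.0],
              [ 0.0, -4.0]])
print(ecoc_predict(np.array([1.0, 0.0]), C, T))   # bit 0 fires strongly
```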
The rationale for our construction is that the discrete nature of ECOC can potentially induce difficult binary classification problems. In contrast, probabilistic codes induce real-valued problems that may be easier to learn. Analogous to discrete codes, a probabilistic output code C is a matrix in ℝ^{s×r} used in conjunction with the logistic transformation to produce a set of r probability vectors that correspond to the r labels in Y. Namely, C maps each label y ∈ Y to the probabilistic code word σ(Cy) ∈ [0, 1]^s. As before, we assume that Y is the set of r standard unit vectors in {0, 1}^r, and therefore each probabilistic code word is the image of one of C's columns under the logistic transformation. The natural extension of code distance to probabilistic codes is achieved by replacing the Hamming distance with the expected Hamming distance. If for each y ∈ Y and k ∈ {1, ..., s} we view the k'th component of the code word that corresponds to y as a Bernoulli random variable with parameter p = σ_k(Cy), then the expected Hamming distance between the code words of classes i and j is

Σ_{k=1}^s [ σ_k(C y_i)(1 − σ_k(C y_j)) + σ_k(C y_j)(1 − σ_k(C y_i)) ].

Analogous to discrete codes, we define the distance ρ of a code C as the minimum expected Hamming distance over all pairs of code words in C, that is,

ρ(C) = min_{i≠j} Σ_{k=1}^s [ σ_k(C y_i)(1 − σ_k(C y_j)) + σ_k(C y_j)(1 − σ_k(C y_i)) ].

Put another way, we have relaxed the definition of code words from deterministic vectors to multivariate Bernoulli random variables; the matrix C now defines the distributions of these random variables. When C's entries are all ±∞, the logistic transformation of C's entries defines a deterministic code and the two definitions of ρ coincide. Given a probabilistic code matrix C ∈ ℝ^{s×r} and a transformation T ∈ ℝ^{s×m}, we associate a loss ℓ(x, y|C, T) with each instance-label pair (x, y) using Eq. (2), and we measure the empirical loss over the entire training set S as defined in Eq. (4).
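The probabilistic code distance can be computed directly from the definition. This sketch (mine, not the authors' code) also illustrates the deterministic limit: with entries ±∞, the code words become 0/1 vectors and ρ reduces to the ordinary Hamming distance:

```python
import numpy as np

def sigma(w):
    return 1.0 / (1.0 + np.exp(-w))

def code_distance(C):
    # rho(C): minimum expected Hamming distance between the probabilistic
    # code words sigma(C e_j), taken over all pairs of columns of C
    P = sigma(C)                       # column j is the code word for label j
    r = P.shape[1]
    return min(
        float(np.sum(P[:, i] * (1 - P[:, j]) + P[:, j] * (1 - P[:, i])))
        for i in range(r) for j in range(r) if i < j
    )

# Deterministic limit: +/- inf entries give 0/1 code words, and rho(C)
# equals the minimal Hamming distance (2 for this made-up 3-bit code).
bits = np.array([[1, 0, 1],
                 [0, 1, 1],
                 [1, 1, 0]], dtype=float)
C_det = np.where(bits == 1.0, np.inf, -np.inf)
print(code_distance(C_det))   # 2.0
```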
We classify new instances by finding the label ŷ that attains the smallest loss as defined in Eq. (3). This construction is equivalent to the classification method discussed in Sec. 2 that employs embeddings, except that instead of viewing C and T as abstract embeddings, C is interpreted as a probabilistic output code and the rows of T are viewed as binary classifiers. Note that when all of the entries of C are ±∞, the classification rule from Eq. (3) reduces to the classification rule for ECOC from Eq. (7), since the entropy of σ(Cy) is zero for every y. We now give a theorem that builds on our construction of probabilistic output codes and relates the classification rule from Eq. (3) to the empirical loss defined by Eq. (4). As noted before, the theorem generalizes the bound given in Eq. (8).

Theorem 1 Let Y be a set of r vectors in ℝ^r. Let C ∈ ℝ^{s×r} be a probabilistic output code with distance ρ(C) and let T ∈ ℝ^{s×m} be a transformation matrix. Given a sample S = {(x_j, y_j)}_{j=1}^n of instance-label pairs, where x_j ∈ X and y_j ∈ Y, denote by L the loss on S with respect to C and T as given by Eq. (4), and denote by ŷ_j the predicted label of x_j according to the classification rule given in Eq. (3). Then,

|{j : ŷ_j ≠ y_j}| ≤ L(S|C, T) / ρ(C).

The proof of the theorem is omitted due to lack of space.

5 The Learning Problem

We now discuss how our formalism of probabilistic output codes via embeddings, together with Thm. 1, leads to a learning paradigm in which both T and C are found concurrently. Thm. 1 implies that the empirical error over S can be reduced by minimizing the empirical loss over S while maintaining a large distance ρ(C). A naive modification of C so as to minimize the loss may result in a probabilistic code whose distance is undesirably small. Therefore, we assume that we are initially provided with a fixed reference matrix C0 ∈ ℝ^{s×r} that is known to have a large code distance.
We now require that the learned matrix C remain relatively close to C0 (in a sense defined shortly) throughout the learning procedure. Rather than requiring that C attain a fixed distance to C0, we add a penalty proportional to the distance between C and C0 to the loss defined in Eq. (4). This penalty on C can be viewed as a form of regularization (see for instance [10]). Similar paradigms have been used extensively in the pioneering work of Warmuth and his colleagues on online learning (see for instance [7] and the references therein) and more recently for incorporating prior knowledge into boosting [11]. The regularization factor we employ is the KL-divergence between the images of C and C0 under the logistic transformation,

R(S|C, C0) = Σ_{j=1}^n D[σ(C y_j) ∥ σ(C0 y_j)].

The influence of this penalty term is controlled by a parameter α ∈ [0, ∞]. The resulting objective function that we attempt to minimize is

O(S|C, T) = L(S|C, T) + α R(S|C, C0)   (9)

where α and C0 are fixed parameters. The goal of learning boils down to finding a pair (C*, T*) that minimizes the objective function defined in Eq. (9). We would like to note that this objective function is not convex due to the concave entropic term in the definition of ℓ. Therefore, the learning procedure described in the sequel converges to a local minimum or a saddle point of O.

6 The Learning Algorithm

BUNCH(S, α ∈ ℝ_+, C0 ∈ ℝ^{s×r}, T0 ∈ ℝ^{s×m}):
  For t = 1, 2, ...
    T_t = IMPROVE-T(S, C_{t−1}, T_{t−1})
    C_t = IMPROVE-C(S, α, T_t, C0)

IMPROVE-T(S, C, T):
  For k = 1, 2, ..., s and i = 1, 2, ..., m
    W+_{k,i} = Σ_{(x,y)∈S} σ(C_k y) σ(−T_k x) x_i
    W−_{k,i} = Σ_{(x,y)∈S} σ(−C_k y) σ(T_k x) x_i
    Θ_{k,i} = (1/2) ln( W+_{k,i} / W−_{k,i} )
  Return T + Θ

IMPROVE-C(S, α, T, C0):
  For each y ∈ Y
    S_y = {(x, ȳ) ∈ S : ȳ = y}
    C^(y) = C0^(y) + (1/(α|S_y|)) Σ_{x∈S_y} T x
  Return C = (C^(1), ..., C^(r))

Figure 2: The Bunching Algorithm.

The goal of the learning algorithm is to find C and T that minimize the objective function defined above.
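An illustrative NumPy reimplementation of the two steps in Fig. 2 (not the released code; the constant guarding log(0) and the empty-class check are my additions), followed by a small numerical check on made-up data that the loss does not increase under one IMPROVE-T step:

```python
import numpy as np

def sigma(w):
    return 1.0 / (1.0 + np.exp(-w))

def improve_T(X, Y, C, T):
    # IMPROVE-T (Fig. 2): additive update T <- T + Theta.
    # Requires x_i >= 0 and sum_i x_i <= 1 for every instance.
    P = sigma(Y @ C.T)                 # sigma(C_k . y_j), shape (n, s)
    Q = sigma(X @ T.T)                 # sigma(T_k . x_j), shape (n, s)
    W_plus = (P * (1.0 - Q)).T @ X     # W+_{k,i}; uses sigma(-w) = 1 - sigma(w)
    W_minus = ((1.0 - P) * Q).T @ X    # W-_{k,i}
    tiny = 1e-12                       # guards log(0); not in the paper
    return T + 0.5 * np.log((W_plus + tiny) / (W_minus + tiny))

def improve_C(X, Y, C0, T, alpha):
    # IMPROVE-C (Fig. 2): closed-form optimal code for a fixed T,
    # C(y) = C0(y) + (1/(alpha |S_y|)) sum_{x in S_y} Tx, column by column.
    C, TX = C0.copy(), X @ T.T
    for j in range(C0.shape[1]):
        in_class = Y[:, j] == 1
        if in_class.any():             # empty-class guard: my addition
            C[:, j] = C0[:, j] + TX[in_class].mean(axis=0) / alpha
    return C

def bunch(X, Y, C0, T0, alpha=1.0, iters=20):
    C, T = C0, T0
    for _ in range(iters):
        T = improve_T(X, Y, C, T)
        C = improve_C(X, Y, C0, T, alpha)
    return C, T

# Toy demo: one IMPROVE-T step should not increase the loss L.  The entropy
# term H[sigma(Cy)] is constant in T, so comparing the KL part suffices.
rng = np.random.default_rng(0)
n, m, s, r = 60, 4, 5, 3
X = rng.uniform(size=(n, m))
X /= X.sum(axis=1, keepdims=True) + 1.0      # enforce the constraints on x
Y = np.eye(r)[rng.integers(0, r, n)]
C0, T0 = rng.normal(size=(s, r)), np.zeros((s, m))

def kl_loss(C, T):
    p = np.clip(sigma(Y @ C.T), 1e-9, 1 - 1e-9)
    q = np.clip(sigma(X @ T.T), 1e-9, 1 - 1e-9)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

before, after = kl_loss(C0, T0), kl_loss(C0, improve_T(X, Y, C0, T0))
print(f"KL loss before = {before:.3f}, after one IMPROVE-T step = {after:.3f}")
```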
The algorithm alternates between two complementary steps for decreasing the objective function. The first step, called IMPROVE-T, improves T leaving C unchanged, and the second step, called IMPROVE-C, finds the optimal matrix C for any given matrix T. The algorithm is provided with initial matrices C0 and T0, where C0 is assumed to have a large code distance ρ. The IMPROVE-T step makes the assumption that all of the instances in S satisfy the constraints Σ_{i=1}^m x_i ≤ 1 and x_i ≥ 0 for all i ∈ {1, 2, ..., m}. Any finite training set can easily be shifted and scaled to conform with these constraints, so they do not impose any real limitation. In addition, the IMPROVE-C step is presented for the case where Y is the set of standard unit vectors in ℝ^r. Since the regularization factor R is independent of T, we can restrict our description and analysis of the IMPROVE-T step to the loss term L of the objective function O. The IMPROVE-T step receives the current matrices C and T as input and calculates a matrix Θ that is used for updating the current T additively. Denoting the iteration index by t, the update is of the form T_{t+1} = T_t + Θ. The next theorem states that updating T by the IMPROVE-T step decreases the loss, or otherwise T remains unchanged and is globally optimal with respect to C. Again, the proof is omitted due to space constraints.

Theorem 2 Given matrices C ∈ ℝ^{s×r} and T ∈ ℝ^{s×m}, let W+_{k,i}, W−_{k,i} and Θ be as defined in the IMPROVE-T step of Fig. 2. Then the decrease in the loss L is bounded below by

Σ_{k=1}^s Σ_{i=1}^m ( √(W+_{k,i}) − √(W−_{k,i}) )² ≤ L(S|C, T) − L(S|C, T + Θ).

Based on the theorem above we can derive the following corollary.

Corollary 1 If Θ is generated by a call to IMPROVE-T and L(S|C, T + Θ) = L(S|C, T), then Θ is the zero matrix and T is globally optimal with respect to C.

In the IMPROVE-C step we fix the current matrix T and find a code matrix C that globally minimizes the objective function.
According to the discussion above, the matrix C defines an embedding of the label vectors from Y into Z, and the images of this embedding constitute the classification rule. For each y ∈ Y, denote its image under C and the logistic transformation by p_y = σ(Cy), and let S_y be the subset of S that is labeled y. Note that the objective function can be decomposed into r separate summands according to y,

O(S|C, T) = Σ_{y∈Y} O(S_y|C, T), where O(S_y|C, T) = Σ_{(x,y)∈S_y} [ D[p_y ∥ σ(Tx)] + H[p_y] + α D[p_y ∥ σ(C0 y)] ].

We can therefore find, for each y ∈ Y, the vector p_y that minimizes O(S_y) independently, and then reconstruct the code matrix C that achieves these values. It is straightforward to show that O(S_y) is convex in p_y, and our task is reduced to finding its stationary point. We examine the derivative of O(S_y) with respect to p_{y,k} and get

∂O(S_y)/∂p_{y,k} = − Σ_{(x,y)∈S_y} log [ σ(T_k · x) / (1 − σ(T_k · x)) ] + α|S_y| [ log ( p_{y,k} / (1 − p_{y,k}) ) − C_{0,k} · y ].

We now plug p_y = σ(Cy) into the equation above and set it to zero to get

Cy = C0 y + (1/(α|S_y|)) Σ_{(x,y)∈S_y} T x.

Since Y was assumed to be the set of standard unit vectors, Cy is a column of C and the above is simply a column-wise assignment of C. We have shown that each call to IMPROVE-T followed by IMPROVE-C decreases the objective function until convergence to a pair (C*, T*) such that C* is optimal given T* and T* is optimal given C*. Therefore O(S|C*, T*) is either a minimum or a saddle point.

7 Experiments

Figure 3: The relative performance of Bunching compared to ECOC on various datasets (glass, isolet, letter, mnist, satimage, soybean, vowel), for random and one-vs-rest codes.

To assess the merits of Bunching we compared it to a standard ECOC-based algorithm on numerous multiclass problems. For the ECOC-based algorithm we used a logistic regressor as the binary learning algorithm, trained using the parallel update described in [2].
The two approaches share the same form of classifiers (logistic regressors) and differ solely in the coding matrix they employ: while ECOC uses a fixed code matrix, Bunching adapts its code matrix during the learning process. We selected the following multiclass datasets: glass, isolet, letter, satimage, soybean and vowel from the UCI repository (www.ics.uci.edu/∼mlearn/MLRepository.html), and the mnist dataset available from LeCun's homepage (yann.lecun.com/exdb/mnist/index.html). The only dataset not supplied with a test set is glass, for which we use 5-fold cross validation. For each dataset, we compare the test error rates attained by the ECOC classifier and the Bunching classifier. We conducted the experiments for two families of code matrices. The first family corresponds to the one-vs-rest approach, in which each class is trained against the rest of the classes and the corresponding code is a matrix whose logistic transformation is simply the identity matrix. The second family is the set of random code matrices with r log₂ r rows, where r is the number of different labels. These matrices are used as C0 for Bunching and as the fixed code for ECOC. Throughout all of the experiments with Bunching, we set the regularization parameter α to 1. A summary of the results is depicted in Fig. 3. The height of each bar is proportional to (e_E − e_B)/e_E, where e_E is the test error attained by the ECOC classifier and e_B is the test error attained by the Bunching classifier. As shown in the figure, Bunching outperforms standard ECOC in almost all of the experiments conducted. The improvement is more significant when using random code matrices. This can be explained by the fact that random code matrices tend to induce unnatural and rather difficult binary partitions of the set of labels. Since Bunching modifies the code matrix C along its run, it can relax difficult binary problems.
This suggests that Bunching can improve the classification accuracy in problems where, for instance, the one-vs-rest approach fails to give good results, or when there is a need to add error-correction properties to the code matrix.

8 A Brief Discussion

In this paper we described a framework for solving multiclass problems via pairs of embeddings. The proposed framework can be viewed as a generalization of ECOC with logistic regressors. It is possible to extend our framework in a few ways. First, the probabilistic embeddings can be replaced with non-negative embeddings by replacing the logistic transformation with the exponential function. In this case, the KL divergence is replaced by its unnormalized version [2, 9]. The resulting generalized Bunching algorithm is somewhat more involved and less intuitive to understand. Second, while our work focuses on linear embeddings, our algorithm and analysis can be adapted to more complex mappings by employing kernel operators. This can be achieved by replacing the k'th scalar product T_k · x with an abstract inner product κ(T_k, x). Last, we would like to note that it is possible to devise an alternative objective function to the one given in Eq. (9) which is jointly convex in (T, σ(C)) and for which we can state a bound of a form similar to the bound in Thm. 1.

References

[1] E. L. Allwein, R. E. Schapire, and Y. Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[2] M. Collins, R. E. Schapire, and Y. Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 47(2/3):253–285, 2002.
[3] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In Proc. of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[4] I. Csiszár and G. Tusnády. Information geometry and alternating minimization procedures.
Statistics and Decisions, Supplement Issue, 1:205–237, 1984.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Ser. B, 39:1–38, 1977.
[6] Thomas G. Dietterich and Ghulum Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, January 1995.
[7] Jyrki Kivinen and Manfred K. Warmuth. Additive versus exponentiated gradient updates for linear prediction. Information and Computation, 132(1):1–64, January 1997.
[8] John D. Lafferty. Additive models, boosting and inference for generalized divergences. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, 1999.
[9] S. Della Pietra, V. Della Pietra, and J. Lafferty. Duality and auxiliary functions for Bregman distances. Technical Report CS-01-10, CMU, 2002.
[10] T. Poggio and F. Girosi. Networks for approximation and learning. Proc. of IEEE, 78(9), 1990.
[11] R.E. Schapire, M. Rochery, M. Rahim, and N. Gupta. Incorporating prior knowledge into boosting. In Machine Learning: Proceedings of the Nineteenth International Conference, 2002.
2002
Improving a Page Classifier with Anchor Extraction and Link Analysis

William W. Cohen
Center for Automated Learning and Discovery, Carnegie-Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213
william@wcohen.com

Abstract

Most text categorization systems use simple models of documents and document collections. In this paper we describe a technique that improves a simple web page classifier's performance on pages from a new, unseen web site, by exploiting link structure within a site as well as page structure within hub pages. On real-world test cases, this technique significantly and substantially improves the accuracy of a bag-of-words classifier, reducing error rate by about half, on average. The system uses a variant of co-training to exploit unlabeled data from a new site. Pages are labeled using the base classifier; the results are used by a restricted wrapper-learner to propose potential "main-category anchor wrappers"; and finally, these wrappers are used as features by a third learner to find a categorization of the site that implies a simple hub structure, but which also largely agrees with the original bag-of-words classifier.

1 Introduction

Most text categorization systems use simple models of documents and document collections. For instance, it is common to model documents as "bags of words", and to model a collection as a set of documents drawn from some fixed distribution. An interesting question is how to exploit more detailed information about the structure of individual documents, or the structure of a collection of documents. For web page categorization, a frequently-used approach is to use hyperlink information to improve classification accuracy (e.g., [7, 9, 15]). Often hyperlink structure is used to "smooth" the predictions of a learned classifier, so that documents that (say) are pointed to by the same "hub" page will be more likely to have the same classification after smoothing.
This smoothing can be done either explicitly [15] or implicitly (for instance, by representing examples so that the distance between examples depends on hyperlink connectivity [7, 9]). The structure of individual pages, as represented by HTML markup structure or linguistic structure, is less commonly used in web page classification: however, page structure is often used in extracting information from web pages. Page structure seems to be particularly important in finding site-specific extraction rules ("wrappers"), since on a given site, formatting information is frequently an excellent indication of content [6, 10, 12]. This paper is based on two practical observations about web page classification. The first is that for many categories of economic interest (e.g., product pages, job-posting pages, and press releases) many sites contain "hub" or index pages that point to essentially all pages in that category on a site. These hubs rarely link exclusively to pages of a single category—instead the hubs will contain a number of additional links, such as links back to a home page and links to related hubs. However, the page structure of a hub page often gives strong indications of which links are to pages from the "main" category associated with the hub, and which are ancillary links that exist for other (e.g., navigational) purposes. As an example, refer to Figure 1. Links to pages in the main category associated with this hub (previous NIPS conference homepages) are in the left-hand column of the table, and hence can be easily identified by the page structure. The second observation is that it is relatively easy to learn to extract links from hub pages to main-category pages using existing wrapper-learning methods [8, 6]. Wrapper-learning techniques interactively learn to extract data of some type from a single site using user-provided training examples.
Our experience in a number of domains indicates that main-category links on hub pages (like the NIPS-homepage links from Figure 1) can almost always be learned from two or three positive examples. Exploiting these observations, we describe in this paper a web page categorization system that exploits link structure within a site, as well as page structure within hub pages, to improve classification accuracy of a traditional bag-of-words classifier on pages from a previously unseen site. The system uses a variant of co-training [3] to exploit unlabeled data from a new, previously unseen site. Specifically, pages are labeled using a simple bag-of-words classifier, and the results are used by a restricted wrapper-learner to propose potential "main-category link wrappers". These wrappers are then used as features by a decision tree learner to find a categorization of the pages on the site that implies a simple hub structure, but which also largely agrees with the original bag-of-words classifier.

2 One-step co-training and hyperlink structure

Consider a binary bag-of-words classifier f that has been learned from some set of labeled web pages Dℓ. We wish to improve the performance of f on pages from an unknown web site S, by smoothing its predictions in a way that is plausible given the hyperlink structure of S, and the page structure of potential hub pages in S. As background for the algorithm, let us consider first co-training, a well-studied approach for improving classifier performance using unlabeled data [3]. In co-training one assumes a concept learning problem where every instance x can be written as a pair (x1, x2) such that x1 is conditionally independent of x2 given the class y. One also assumes that both x1 and x2 are sufficient for classification, in the sense that the target function f(x) can be written either as a function of x1 or x2, i.e., that there exist functions f1(x1) = f(x) and f2(x2) = f(x).
Finally one assumes that both f1 and f2 are learnable, i.e., that f1 ∈ H1 and f2 ∈ H2 and noise-tolerant learning algorithms A1 and A2 exist for H1 and H2.

Figure 1: Part of a "hub" page. Links to pages in the main category associated with this hub are in the left-hand column of the table. [The figure reproduces a page of links to recent NIPS conference homepages and proceedings volumes.]

In this setting, a large amount of unlabeled data Du can be used to improve the accuracy of a small set of labeled data Dℓ, as follows. First, use A1 to learn an approximation f'1 to f1 using Dℓ. Then, use f'1 to label the examples in Du, and use A2 to learn from this training set. Given the assumptions above, f'1's errors on Du will appear to A2 as random, uncorrelated noise, and A2 can in principle learn an arbitrarily good approximation to f, given enough unlabeled data in Du. We call this process one-step co-training using A1, A2, and Du. Now, consider a set DS of unlabeled pages from an unseen web site S.
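The one-step procedure just described can be sketched directly. The fit/predict learner interface and the toy one-dimensional threshold learner below are hypothetical illustrations, not the learners used in the paper:

```python
# One-step co-training: f1' is learned from the labeled set D_l, its
# predictions pseudo-label the unlabeled pool over the second view,
# and A2 learns f2' once from those (possibly noisy) labels.

class ThresholdLearner:
    """Toy learner over real-valued views: splits at the midpoint
    between the largest negative and smallest positive example."""
    def fit(self, data):                       # data: list of (x, y)
        pos = [x for x, y in data if y == 1]
        neg = [x for x, y in data if y == 0]
        t = (min(pos) + max(neg)) / 2.0
        class Model:
            def predict(self, x):
                return 1 if x > t else 0
        return Model()

def one_step_cotrain(A1, A2, labeled_view1, unlabeled_pairs):
    """labeled_view1: [(x1, y)]; unlabeled_pairs: [(x1, x2)]."""
    f1 = A1.fit(labeled_view1)                           # learn f1'
    pseudo = [(x2, f1.predict(x1)) for x1, x2 in unlabeled_pairs]
    return A2.fit(pseudo)                                # learn f2'

# Two redundant views: here x2 is just a rescaled copy of x1.
labeled = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
unlabeled = [(-3.0, -6.0), (3.0, 6.0), (0.5, 1.0), (-0.5, -1.0)]
f2 = one_step_cotrain(ThresholdLearner(), ThresholdLearner(),
                      labeled, unlabeled)
print(f2.predict(4.0), f2.predict(-4.0))   # 1 0
```

Even though f2' never sees a true label, it recovers the correct decision rule over the second view from f1's pseudo-labels.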
It seems not unreasonable to assume that the words x1 on a page x ∈ S and the hub pages x2 ∈ S that hyperlink to x are independent, given the class of x. This suggests that one-step co-training could be used to improve a learned bag-of-words classifier f'1, using the following algorithm:

Algorithm 1 (One-step co-training):
1. Parameters. Let S be a web site, f'1 be a bag-of-words page classifier, and DS be the pages on the site S.
2. Instance generation and labeling. For each page xi ∈ DS, represent xi as a vector of all pages in S that hyperlink to xi. Call this vector x^i_2. Let y^i = f'1(xi).
3. Learning. Use a learner A2 to learn f'2 from the labeled examples D2 = {(x^i_2, y^i)}i.
4. Labeling. Use f'2(x) as the final label for each page x ∈ DS.

This "one-step" use of co-training is consistent with the theoretical results underlying co-training. In experimental studies, co-training is usually done iteratively, alternating between using f'1 and f'2 for tagging the unlabeled data. The one-step version seems more appropriate in this setting, in which there are a limited number of unlabeled examples over which each x2 is defined.

3 Anchor Extraction and Page Classification

3.1 Learning to extract anchors from web pages

Algorithm 1 has some shortcomings. Co-training assumes a large pool of unlabeled data: however, if the informative hubs for pages on S are mostly within S (a very plausible assumption) then the amount of useful unlabeled data is limited by the size of S. With limited amounts of unlabeled data, it is very important that A2 has a strong (and appropriate) statistical bias, and that A2 has some effective method for avoiding overfitting. As suggested by Figure 1, the informativeness of hub features can be improved by using knowledge of the structure of hub pages themselves. To make use of hub page structure, we used a wrapper-learning system called WL2, which has experimentally proven to be effective at learning substructures of web pages [6].
The output of WL2 is an extraction predicate: a binary relation p between pages x and substrings a within x. As an example, WL2 might output p = {(x, a) : x is the page of Figure 1 and a is an anchor appearing in the first column of the table}. (An anchor is a substring of a web page that defines a hyperlink.) This suggests a modification of Algorithm 1, in which one-step co-training is carried out on the problem of extracting anchors rather than the problem of labeling web pages. Specifically, one might map f1's predictions from web pages to anchors, by giving a positive label to anchor a iff a links to a page x such that f'1(x) = 1; then use WL2 algorithm A2 to learn a predicate p'2; and finally, map the predictions of p'2 from anchors back to web pages. One problem with this approach is that WL2 was designed for user-provided data sets, which are small and noise-free. Another problem is that it is unclear how to map class labels from anchors back to web pages, since a page might be pointed to by many different anchors.

3.2 Bridging the gap between anchors and pages

Based on these observations we modified Algorithm 1 as follows. As suggested, we map the predictions about page labels made by f'1 to anchors. Using these anchor labels, we then produce many small training sets that are passed to WL2. The intuition here is that some of these training sets will be noise-free, and hence similar to those that might be provided by a user. Finally, we use the many wrappers produced by WL2 as features in a representation of a page x, and again use a learner to combine the wrapper-features and produce a single classification for a page.

Algorithm 2:
1. Parameters. Let S be a web site, f'1 be a bag-of-words page classifier, and DS be the pages on the site.
2. Link labeling. For each anchor a on a page x ∈ S, label a as tentatively-positive if a points to a page x' such that x' ∈ S and f'1(x') = 1.
3. Wrapper proposal.
Let P be the set of all pairs (x, a) where a is a tentatively-positive link and x is the page on which a is found. Generate a number of small sets D1, . . . , Dk containing such pairs, and for each subset Di, use WL2 to produce a number of possible extraction predicates pi,1, . . . , pi,ki. (See appendix for details.)
4. Instance generation and labeling. We will say that the "wrapper predicate" pij links to x iff pij includes some pair (x', a) such that x' ∈ DS and a is a hyperlink to page x. For each page xi ∈ DS, represent xi as a vector of all wrappers pij that link to xi. Call this vector x^i_2. Let y^i = f'1(xi).
5. Learning. Use a learner A2 to learn f'2 from the labeled examples DS = {(x^i_2, y^i)}i.
6. Labeling. Use f'2(x) as the final label for each page x ∈ DS.

A general problem in building learning systems for new problems is exploiting existing knowledge about these problems. In this case, in building a page classifier, one would like to exploit knowledge about the related problem of link extraction. Unfortunately this knowledge is not in any particularly convenient form (e.g., a set of well-founded parametric assumptions about the data): instead, we only know that, experimentally, a certain learning algorithm works well on the problem. In general, it is often the case that this sort of experimental evidence is available, even when a learning problem is not formally well-understood. The advantage of Algorithm 2 is that one need make no parametric assumptions about the anchor-extraction problem. The bagging-like approach of "feeding" WL2 many small training sets, and the use of a second learning algorithm to aggregate the results of WL2, are a means of exploiting prior experimental results, in lieu of more precise statistical assumptions.

4 Experimental results

To evaluate the technique, we used the task of categorizing web pages from company sites as executive biography or other. We selected nine company web sites with non-trivial hub structures.
These were crawled using a heuristic spidering strategy intended to find executive biography pages with high recall.¹ The crawl found 879 pages, of which 128 were labeled positive. A simple bag-of-words classifier f'1 was trained using a disjoint set of sites (different from the nine above), obtaining an average accuracy of 91.6% (recall 82.0%, precision 61.8%) on the nine held-out sites. Using an implementation of Winnow [2, 11] as A2, Algorithm 2 obtained an average accuracy of 96.4% on the nine held-out sites. Algorithm 2 improves over the baseline classifier f'1 on six of the nine sites, and obtains the same accuracy on two more. This difference is significant at the 98% level with a 2-tailed paired sign test, and at the 95% level with a 2-tailed paired t test. Similar results were also obtained using a sparse-feature implementation of a C4.5-like decision tree learning algorithm [14] for learner A2. (Note that both Winnow and C4.5 are known to work well when data is noisy, irrelevant attributes are present, and the underlying concept is "simple".) These results are summarized in Table 1.

¹The authors wish to thank Vijay Boyaparti for assembling this data set.

Site   Classifier f'1    Algorithm 2 (C4.5)   Algorithm 2 (Winnow)
       Accuracy (SE)     Accuracy (SE)        Accuracy (SE)
1      1.000 (0.000)     0.960 (0.028)        0.960 (0.028)
2      0.932 (0.027)     0.955 (0.022)        0.955 (0.022)
3      0.813 (0.028)     0.934 (0.018)        0.939 (0.017)
4      0.904 (0.029)     0.962 (0.019)        0.962 (0.019)
5      0.939 (0.024)     0.960 (0.020)        0.960 (0.020)
6      1.000 (0.000)     1.000 (0.000)        1.000 (0.000)
7      0.918 (0.028)     0.990 (0.010)        0.990 (0.010)
8      0.788 (0.044)     0.882 (0.035)        0.929 (0.028)
9      0.948 (0.029)     0.948 (0.029)        0.983 (0.017)
avg    0.916             0.954                0.964

Table 1: Experimental results with Algorithm 2. Paired tests indicate that both versions of Algorithm 2 significantly improve on the baseline classifier.
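The summary row of Table 1, and the "reducing error rate by about half" claim, can be reproduced directly from the per-site accuracies:

```python
# Recompute the 'avg' row of Table 1 and the relative error-rate
# reduction of the Winnow variant over the baseline classifier f1'.
f1     = [1.000, 0.932, 0.813, 0.904, 0.939, 1.000, 0.918, 0.788, 0.948]
winnow = [0.960, 0.955, 0.939, 0.962, 0.960, 1.000, 0.990, 0.929, 0.983]

avg_f1     = sum(f1) / len(f1)
avg_winnow = sum(winnow) / len(winnow)
# Relative reduction in error rate: (err_base - err_alg2) / err_base.
reduction = ((1 - avg_f1) - (1 - avg_winnow)) / (1 - avg_f1)

print(round(avg_f1, 3), round(avg_winnow, 3))   # 0.916 0.964
print(round(reduction, 3))                      # 0.575
```

The average error rate drops from 8.4% to 3.6%, a reduction of about 57%, consistent with the "about half" figure quoted in the abstract.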
5 Related work

The introduction discusses the relationship between this work and a number of previous techniques for using hyperlink structure in web page classification [7, 9, 15]. The WL2-based method for finding document structure has antecedents in other techniques for learning [10, 12] and automatically detecting [4, 5] structure in web pages. In concurrent work, Blei et al. [1] introduce a probabilistic model called "scoped learning" which gives a generative model for the situation described here: collections of examples in which some subsets (documents from the same site) share common "local" features, and all documents share common "content" features. Blei et al. do not address the specific problem considered here, of using both page structure and hyperlink structure in web page classification. However, they do apply their technique to two closely related problems: they augment a page classification method with local features based on the page's URL, and also augment content-based classification of "text nodes" (specific substrings of a web page) with page-structure-based local features. We note that Algorithm 2 could be adapted to operate in Blei et al.'s setting: specifically, the x2 vectors produced in Steps 2-4 could be viewed as "local features". (In fact, Blei et al. generated page-structure-based features for their extraction task in exactly this way: the only difference is that WL2 was parameterized differently.) The co-training framework adopted here clearly makes different assumptions than those adopted by Blei et al. More experimentation is needed to determine which is preferable; current experimental evidence [13] is ambiguous as to when probabilistic approaches should be preferred to co-training.

6 Conclusions

We have described a technique that improves a simple web page classifier by exploiting link structure within a site, as well as page structure within hub pages.
The system uses a variant of co-training called "one-step co-training" to exploit unlabeled data from a new site. First, pages are labeled using the base classifier. Next, results of this labeling are propagated to links to labeled pages, and these labeled links are used by a wrapper-learner called WL2 to propose potential "main-category link wrappers". Finally, these wrappers are used as features by another learner A2 to find a categorization of the site that implies a simple hub structure, but which also largely agrees with the original bag-of-words classifier. Experiments suggest the choice of A2 is not critical. On a real-world benchmark problem, this technique substantially improved the accuracy of a simple bag-of-words classifier, reducing error rate by about half. This improvement is statistically significant.

Acknowledgments

The author wishes to thank his former colleagues at Whizbang Labs for many helpful discussions and useful advice.

Appendix A: Details on "Wrapper Proposal"

Extraction predicates are constructed by WL2 using a rule-learning algorithm and a configurable set of components called builders. Each builder B corresponds to a language LB of extraction predicates. Builders support a certain set of operations relative to LB, in particular, the least general generalization (LGG) operation. Given a set of pairs D = {(xi, ai)} such that each ai is a substring of xi, LGG_B(D) is the least general p ∈ LB such that (x, a) ∈ D ⇒ (x, a) ∈ p. Intuitively, LGG_B(D) encodes common properties of the (positive) examples in D. Depending on B, these properties might be membership in a particular syntactic HTML structure (e.g., a specific table column), common visual properties (e.g., being rendered in boldface), etc.
To generate subsets Di in Step 3 of Algorithm 2, we used every pair of links that pointed to the two most confidently labeled examples; every pair of adjacent tentatively-positive links; and every triple and every quadruple of tentatively-positive links that were separated by at most 10 intervening tokens. These heuristics were based on the observation that in most extraction tasks, the items to be extracted are close together. Careful implementation allows the subsets Di to be generated in time linear in the size of the site. (We also note that these heuristics were initially developed to support a different set of experiments [1], and were not substantially modified for the experiments in this paper.) Normally, WL2 is parameterized by a list B of builders, which are called by a "master" rule-learning algorithm. In our use of WL2, we simply applied each builder Bj to a dataset Di, to get the set of predicates {pij} = {LGG_Bj(Di)}, instead of running the full WL2 learning algorithm.

References

[1] David M. Blei, J. Andrew Bagnell, and Andrew K. McCallum. Learning with scope, with application to information extraction and classification. In Proceedings of UAI-2002, Edmonton, Alberta, 2002.
[2] Avrim Blum. Learning boolean functions in an infinite attribute space. Machine Learning, 9(4):373–386, 1992.
[3] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the 1998 Conference on Computational Learning Theory, Madison, WI, 1998.
[4] William W. Cohen. Automatically extracting features for concept learning from the web. In Machine Learning: Proceedings of the Seventeenth International Conference, Palo Alto, California, 2000. Morgan Kaufmann.
[5] William W. Cohen and Wei Fan. Learning page-independent heuristics for extracting data from web pages. In Proceedings of The Eighth International World Wide Web Conference (WWW-99), Toronto, 1999.
[6] William W. Cohen, Lee S. Jensen, and Matthew Hurst.
A flexible learning system for wrapping tables and lists in HTML documents. In Proceedings of The Eleventh International World Wide Web Conference (WWW-2002), Honolulu, Hawaii, 2002.
[7] David Cohn and Thomas Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[8] Lee S. Jensen and William W. Cohen. A structured wrapper induction system for extracting information from semi-structured documents. In Proceedings of the IJCAI-2001 Workshop on Adaptive Text Extraction and Mining, Seattle, WA, 2001.
[9] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In Proceedings of the International Conference on Machine Learning (ICML-2001), 2001.
[10] N. Kushmerick. Wrapper induction: efficiency and expressiveness. Artificial Intelligence, 118:15–68, 2000.
[11] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4), 1988.
[12] Ion Muslea, Steven Minton, and Craig Knoblock. Wrapper induction for semistructured information sources. Journal of Autonomous Agents and Multi-Agent Systems, 16(12), 1999.
[13] Kamal Nigam and Rayid Ghani. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management (CIKM-2000), 2000.
[14] J. Ross Quinlan. C4.5: programs for machine learning. Morgan Kaufmann, 1994.
[15] S. Slattery and T. Mitchell. Discovering test set regularities in relational domains. In Proceedings of the 17th International Conference on Machine Learning (ICML-2000), June 2000.
2002
VIBES: A Variational Inference Engine for Bayesian Networks

Christopher M. Bishop
Microsoft Research
Cambridge, CB3 0FB, U.K.
research.microsoft.com/∼cmbishop

David Spiegelhalter
MRC Biostatistics Unit
Cambridge, U.K.
david.spiegelhalter@mrc-bsu.cam.ac.uk

John Winn
Department of Physics
University of Cambridge, U.K.
www.inference.phy.cam.ac.uk/jmw39

Abstract

In recent years variational methods have become a popular tool for approximate inference and learning in a wide variety of probabilistic models. For each new application, however, it is currently necessary first to derive the variational update equations, and then to implement them in application-specific code. Each of these steps is both time consuming and error prone. In this paper we describe a general purpose inference engine called VIBES ('Variational Inference for Bayesian Networks') which allows a wide variety of probabilistic models to be implemented and solved variationally without recourse to coding. New models are specified either through a simple script or via a graphical interface analogous to a drawing package. VIBES then automatically generates and solves the variational equations. We illustrate the power and flexibility of VIBES using examples from Bayesian mixture modelling.

1 Introduction

Variational methods [1, 2] have been used successfully for a wide range of models, and new applications are constantly being explored. In many ways the variational framework can be seen as a complementary approach to that of Markov chain Monte Carlo (MCMC), with different strengths and weaknesses. For many years there has existed a powerful tool for tackling new problems using MCMC, called BUGS ('Bayesian inference Using Gibbs Sampling') [3].
In BUGS a new probabilistic model, expressed as a directed acyclic graph, can be encoded using a simple scripting notation, and then samples can be drawn from the posterior distribution (given some data set of observed values) using Gibbs sampling in a way that is largely automatic. Furthermore, an extension called WinBUGS provides a graphical front end to BUGS in which the user draws a pictorial representation of the directed graph, and this automatically generates the required script. We have been inspired by the success of BUGS to produce an analogous tool for the solution of problems using variational methods. The challenge is to build a system that can handle a wide range of graph structures, a broad variety of common conditional probability distributions at the nodes, and a range of variational approximating distributions. All of this must be achieved whilst also remaining computationally efficient.

2 A General Framework for Variational Inference

In this section we briefly review the variational framework, and then we characterise a large class of models for which the variational method can be implemented automatically. We denote the set of all variables in the model by W = (V, X) where V are the visible (observed) variables and X are the hidden (latent) variables. As with BUGS, we focus on models that are specified in terms of an acyclic directed graph (treatment of undirected graphical models is equally possible and is somewhat more straightforward). The joint distribution P(V, X) is then expressed in terms of conditional distributions P(W_i | pa_i) at each node i, where pa_i denotes the set of variables corresponding to the parents of node i, and W_i denotes the variable, or group of variables, associated with node i. The joint distribution of all variables is then given by the product of the conditionals, P(V, X) = \prod_i P(W_i | pa_i). Our goal is to find a variational distribution Q(X|V) that approximates the true posterior distribution P(X|V).
To do this we note the following decomposition of the log marginal probability of the observed data, which holds for any choice of distribution Q(X|V):

\ln P(V) = \mathcal{L}(Q) + \mathrm{KL}(Q \| P)   (1)

where

\mathcal{L}(Q) = \sum_X Q(X|V) \ln \frac{P(V, X)}{Q(X|V)}   (2)

\mathrm{KL}(Q \| P) = -\sum_X Q(X|V) \ln \frac{P(X|V)}{Q(X|V)}   (3)

and the sums are replaced by integrals in the case of continuous variables. Here KL(Q‖P) is the Kullback-Leibler divergence between the variational approximation Q(X|V) and the true posterior P(X|V). Since this satisfies KL(Q‖P) ≥ 0 it follows from (1) that the quantity L(Q) forms a lower bound on ln P(V). We now choose some family of distributions to represent Q(X|V) and then seek a member of that family that maximizes the lower bound L(Q). If we allow Q(X|V) to have complete flexibility then we see that the maximum of the lower bound occurs for Q(X|V) = P(X|V) so that the variational posterior distribution equals the true posterior. In this case the Kullback-Leibler divergence vanishes and L(Q) = ln P(V). However, working with the true posterior distribution is computationally intractable (otherwise we wouldn't be resorting to variational methods). We must therefore consider a more restricted family of Q distributions which has the property that the lower bound (2) can be evaluated and optimized efficiently and yet which is still sufficiently flexible as to give a good approximation to the true posterior distribution.

2.1 Factorized Distributions

For the purposes of building VIBES we have focussed attention initially on distributions that factorize with respect to disjoint groups X_i of variables:

Q(X|V) = \prod_i Q_i(X_i).   (4)

This approximation has been successfully used in many applications of variational methods [4, 5, 6]. Substituting (4) into (2) we can maximize L(Q) variationally with respect to Q_i(X_i) keeping all Q_j for j ≠ i fixed. This leads to the solution

\ln Q^\star_i(X_i) = \langle \ln P(V, X) \rangle_{\{j \neq i\}} + \text{const.}   (5)

where ⟨·⟩_k denotes an expectation with respect to the distribution Q_k(X_k).
Taking exponentials of both sides and normalizing we obtain

Q^\star_i(X_i) = \frac{\exp \langle \ln P(V, X) \rangle_{\{j \neq i\}}}{\sum_{X_i} \exp \langle \ln P(V, X) \rangle_{\{j \neq i\}}}.   (6)

Note that these are coupled equations since the solution for each Q_i(X_i) depends on expectations with respect to the other factors Q_{j≠i}. The variational optimization proceeds by initializing each of the Q_i(X_i) and then cycling through each factor in turn, replacing the current distribution with a revised estimate given by (6). The current version of VIBES is based on a factorization of the form (4) in which each factor Q_i(X_i) corresponds to one of the nodes of the graph (each of which can be a composite node, as discussed shortly). An important property of the variational update equations, from the point of view of VIBES, is that the right hand side of (6) does not depend on all of the conditional distributions P(W_i | pa_i) that define the joint distribution but only on those that have a functional dependence on X_i, namely the conditional P(X_i | pa_i), together with the conditional distributions for any children of node i since these have X_i in their parent set. Thus the expectations that must be performed on the right hand side of (6) involve only those variables lying in the Markov blanket of node i, in other words the parents, children and co-parents of i, as illustrated in Figure 1(a). This is a key concept in VIBES since it allows the variational update equations to be expressed in terms of local operations, which can therefore be expressed in terms of generic code which is independent of the global structure of the graph.

2.2 Conjugate Exponential Models

It has already been noted [4, 5] that important simplifications to the variational update equations occur when the distributions of the latent variables, conditioned on their parameters, are drawn from the exponential family and are conjugate with respect to the prior distributions of the parameters.
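Before continuing, the factorized scheme of (4)-(6) can be exercised numerically on a toy tabulated model (the probability values below are arbitrary and chosen to sum to P(V) = 1, so ln P(V) = 0). Each coordinate-ascent sweep of (6) cannot decrease the bound L(Q), which by (1) remains below ln P(V):

```python
import itertools
import math

# Two binary hidden variables X1, X2 with tabulated joint P(V, X1, X2)
# for a fixed observation V; the table sums to P(V) = 1.
P = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.05, (1, 1): 0.55}

def bound(q1, q2):
    """L(Q) of Eq. (2): sum_X Q(X) ln [P(V, X) / Q(X)]."""
    return sum(q1[a] * q2[b] * math.log(P[(a, b)] / (q1[a] * q2[b]))
               for a, b in itertools.product((0, 1), repeat=2))

def update(other_q, axis):
    """Eq. (6): exponentiate <ln P> over the other factor, normalize."""
    logits = []
    for s in (0, 1):
        key = (lambda t, s=s: (s, t)) if axis == 0 else (lambda t, s=s: (t, s))
        logits.append(sum(other_q[t] * math.log(P[key(t)]) for t in (0, 1)))
    z = sum(math.exp(l) for l in logits)
    return [math.exp(l) / z for l in logits]

q1, q2 = [0.5, 0.5], [0.5, 0.5]      # initial factors Q1(X1), Q2(X2)
prev = bound(q1, q2)
for _ in range(20):
    q1 = update(q2, axis=0)          # revise Q1 holding Q2 fixed
    q2 = update(q1, axis=1)          # revise Q2 holding Q1 fixed
    cur = bound(q1, q2)
    assert prev - 1e-12 <= cur <= 1e-12   # monotone, and <= ln P(V) = 0
    prev = cur
```

The asserts verify the two properties stated in the text: coordinate ascent on (6) never decreases L(Q), and L(Q) is bounded above by ln P(V), the gap being the Kullback-Leibler divergence of (3).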
Here we adopt a somewhat different viewpoint in that we make no distinction between latent variables and model parameters. In a Bayesian setting these both correspond to unobserved stochastic variables and can be treated on an equal footing. This allows us to consider conjugacy not just between variables and their parameters, but hierarchically between all parent-child pairs in the graph. Thus we consider models in which each conditional distribution takes the standard exponential family form

\ln P(X_i | Y) = \phi_i(Y)^{\mathrm{T}} u_i(X_i) + f_i(X_i) + g_i(Y)   (7)

where the vector \phi_i(Y) is called the natural parameter of the distribution. Now consider a node Z_j with parent X_i and co-parents cp_j^{(i)}, as indicated in Figure 1(a).

Figure 1: (a) A central observation is that the variational update equations for node X_i depend only on expectations over variables appearing in the Markov blanket of X_i, namely the set of parents, children and co-parents. (b) Hinton diagram of ⟨W⟩ from one of the components in the Bayesian PCA model, illustrating how all but three of the PCA eigenvectors have been suppressed.

As far as the pair of nodes X_i and Z_j are concerned, we can think of P(X_i | Y) as a prior over X_i and the conditional P(Z_j | X_i, cp_j^{(i)}) as a (contribution to) the likelihood function. Conjugacy requires that, as a function of X_i, the product of these two conditionals must take the same form as (7). Since the conditional P(Z_j | X_i, cp_j^{(i)}) is also in the exponential family it can be expressed as

\ln P(Z_j | X_i, cp_j^{(i)}) = \phi_j(X_i, cp_j^{(i)})^{\mathrm{T}} u_j(Z_j) + f_j(Z_j) + g_j(X_i, cp_j^{(i)}).   (8)

Conjugacy then requires that this be expressible in the form

\ln P(Z_j | X_i, cp_j^{(i)}) = \widetilde{\phi}_{j \to i}(Z_j, cp_j^{(i)})^{\mathrm{T}} u_i(X_i) + \lambda(Z_j, cp_j^{(i)})   (9)

for some choice of functions \widetilde{\phi} and \lambda. Since this must hold for each of the parents of Z_j it follows that \ln P(Z_j | X_i, cp_j^{(i)}) must be a multi-linear function of the u_k(X_k) for each of the parents X_k of node Z_j.
Also, we observe from (8) that the dependence of ln P(Z_j | X_i, cp_j^{(i)}) on Z_j is again linear in the function u_j(Z_j). We can apply a similar argument to the conjugate relationship between node X_i and each of its parents, showing that the contribution from the conditional P(X_i | Y) can again be expressed in terms of expectations of the natural parameters for the parent node distributions. Hence the right hand side of the variational update equation (5) for a particular node X_i will be a multi-linear function of the expectations ⟨u⟩ for each node in the Markov blanket of X_i. The variational update equation then takes the form

ln Q*_i(X_i) = [ ⟨φ_i(Y)⟩_Y + Σ_{j=1}^M ⟨φ̃_{j→i}(Z_j, cp_j^{(i)})⟩_{Z_j, cp_j^{(i)}} ]^T u_i(X_i) + const.    (10)

which involves the summation of bottom-up 'messages' ⟨φ̃_{j→i}⟩_{Z_j, cp_j^{(i)}} from the children together with a top-down message ⟨φ_i(Y)⟩_Y from the parents. Since all of these messages are expressed in terms of the same basis u_i(X_i), we can write compact, generic code for updating any type of node, instead of having to take account explicitly of the many possible combinations of node types in each Markov blanket.

As an example, consider the Gaussian N(X | µ, τ^{-1}) for a single variable X with mean µ and precision (inverse variance) τ. The natural coordinates are u_X = [X, X²]^T and the natural parameterization is φ = [µτ, −τ/2]^T. Then ⟨u_X⟩ = [µ, µ² + τ^{-1}]^T, and the function f_i(X_i) is simply zero in this case. Conjugacy allows us to choose a distribution for the parent µ that is Gaussian and a prior for τ that is a Gamma distribution. The corresponding natural parameterizations and update messages are given by

u_µ = [µ, µ²]^T,  ⟨φ̃_{X→µ}⟩ = [⟨τ⟩⟨X⟩, −⟨τ⟩/2]^T,  u_τ = [τ, ln τ]^T,  ⟨φ̃_{X→τ}⟩ = [−⟨(X − µ)²⟩/2, 1/2]^T.

We can similarly consider multi-dimensional Gaussian distributions, with a Gaussian prior for the mean and a Wishart prior for the inverse covariance matrix.
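The Gaussian example above can be checked numerically. The sketch below uses illustrative helper names (not the VIBES API) to convert between moment and natural parameterizations and to form the child-to-parent message ⟨φ̃_{X→µ}⟩ quoted in the text.

```python
import numpy as np

# The univariate Gaussian N(x | mu, tau^-1) in its natural
# parameterization phi = [mu*tau, -tau/2], paired with u_X = [X, X^2],
# plus the bottom-up message a child X sends to its mean-parent mu.
def natural_params(mu, tau):
    return np.array([mu * tau, -tau / 2.0])

def from_natural(phi):
    tau = -2.0 * phi[1]            # recover the precision, then the mean
    return phi[0] / tau, tau

def msg_X_to_mu(E_tau, E_X):
    # <phi_tilde_{X->mu}> = [<tau><X>, -<tau>/2], in the basis u_mu = [mu, mu^2]
    return np.array([E_tau * E_X, -E_tau / 2.0])

mu, tau = from_natural(natural_params(1.5, 4.0))   # round-trips to (1.5, 4.0)
```

Because the message lives in the same basis u_µ as the parent's own natural parameters, combining prior and child contributions is simple vector addition, which is what makes the generic node-update code possible.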
A generalization of the Gaussian is the rectified Gaussian, which is defined as P(X | µ, τ) ∝ N(X | µ, τ) for X ≥ 0 and P(X | µ, τ) = 0 for X < 0, for which moments can be expressed in terms of the 'erf' function. This rectification corresponds to the introduction of a step function, whose logarithm corresponds to f_i(X_i) in (7), and which is carried through the variational update equations unchanged. Similarly, we can consider doubly truncated Gaussians, which are non-zero only over some finite interval.

Another example is the discrete distribution for categorical variables. These are most conveniently represented using the 1-of-K scheme in which S = {S_k} with k = 1, ..., K, S_k ∈ {0, 1} and Σ_k S_k = 1. This has distribution P(S | π) = Π_{k=1}^K π_k^{S_k}, and we can place a conjugate Dirichlet distribution over the parameters {π_k}.

2.3 Allowable Distributions

We now characterize the class of models that can be solved by VIBES using the factorized variational distribution given by (4). First of all we note that, since a Gaussian variable can have a Gaussian parent for its mean, we can extend this hierarchically to any number of levels to give a sub-graph which is a DAG of Gaussian nodes of arbitrary topology. Each Gaussian can have a Gamma (or Wishart) prior over its precision. Next, we observe that discrete variables S = {S_k} can be used to construct 'pick' functions which choose a particular parent node Ŷ from amongst several conjugate parents {Y_k}, so that Ŷ = Y_k when S_k = 1, which can be written Ŷ = Π_{k=1}^K Y_k^{S_k}. Under any non-linear function h(·) we have h(Ŷ) = Π_{k=1}^K h(Y_k)^{S_k}. Furthermore, the expectation under S takes the form ⟨h(Ŷ)⟩_S = Σ_k ⟨S_k⟩ h(Y_k). Variational inference will therefore be tractable for this model provided it is tractable for each of the parents Y_k individually.
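The pick-function expectation ⟨h(Ŷ)⟩_S = Σ_k ⟨S_k⟩ h(Y_k) reduces to a responsibility-weighted sum over the candidate parents, as the toy computation below illustrates (all numbers hypothetical).

```python
import numpy as np

# A 1-of-K discrete S picks among candidate parents Y_k. Under the
# factorized Q, <h(Y_hat)>_S = sum_k <S_k> h(Y_k): a mixture of the
# per-parent values weighted by the responsibilities <S_k>.
Y = np.array([0.5, 2.0, -1.0])        # states of the candidate parents
resp = np.array([0.2, 0.7, 0.1])      # responsibilities <S_k>; sum to one
h = lambda y: y**2                    # any non-linear function
picked = np.sum(resp * h(Y))          # <h(Y_hat)>_S = 2.95
```

This is why a mixture node is tractable whenever each component parent is: the expectation never requires evaluating h on a mixture, only on each parent separately.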
Thus we can handle the following very general architecture: an arbitrary DAG of multinomial discrete variables (each having Dirichlet priors) together with an arbitrary DAG of linear Gaussian nodes (each having Wishart priors) and with arbitrary pick links from the discrete nodes to the Gaussian nodes. This graph represents a generalization of the Gaussian mixture model, and includes as special cases models such as hidden Markov models, Kalman filters, factor analysers and principal component analysers, as well as mixtures and hierarchical mixtures of all of these. There are other classes of models that are tractable under this scheme, for example Poisson variables having Gamma priors, although these may be of limited interest.

We can further extend the class of tractable models by considering nodes whose natural parameters are formed from deterministic functions of the states of several parents. This is a key property of the VIBES approach which, as with BUGS, greatly extends its applicability. Suppose we have some conditional distribution P(X | Y, ...) and we want to make Y some deterministic function of the states of some other nodes, ψ(Z_1, ..., Z_M). In effect we have a pseudo-parent that is a deterministic function of other nodes, and indeed it is represented explicitly through additional deterministic nodes in the graphical interface both to WinBUGS and to VIBES. This will be tractable under VIBES provided the expectation of u_ψ(ψ) can be expressed in terms of the expectations of the corresponding functions u_j(Z_j) of the parents. The pick functions discussed earlier are a special case of these deterministic functions. Thus for a Gaussian node the mean can be formed from products and sums of the states of other Gaussian nodes provided the function is linear with respect to each of the nodes. Similarly, the precision of the Gaussian can comprise the products (but not sums) of any number of Gamma distributed variables.
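For instance, when a precision is the product of Gamma-distributed parents, the factorized Q makes the required expectation a product of the individual expectations, ⟨τ₁τ₂⟩ = ⟨τ₁⟩⟨τ₂⟩. The Monte Carlo check below (toy shapes and rates, illustrative only) confirms this factorization numerically.

```python
import numpy as np

# A Gaussian precision formed as a product of independent
# Gamma-distributed parents: under the factorized Q the expectation
# factorizes, <tau1 * tau2> = <tau1><tau2>.
rng = np.random.default_rng(1)
tau1 = rng.gamma(3.0, 1.0 / 2.0, size=200_000)   # shape 3, rate 2: mean 1.5
tau2 = rng.gamma(4.0, 1.0 / 2.0, size=200_000)   # shape 4, rate 2: mean 2.0
mc = (tau1 * tau2).mean()                        # Monte Carlo estimate
exact = (3.0 / 2.0) * (4.0 / 2.0)                # product of the means = 3.0
```

Sums of Gamma variables, by contrast, do not yield a Gamma-conjugate form for the precision, which is why products are allowed but sums are not.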
Finally, we have seen that continuous nodes can have both discrete and continuous parents, but that discrete nodes can only have discrete parents. We can allow discrete nodes to have continuous parents by stepping outside the conjugate-exponential framework, exploiting a variational bound on the logistic sigmoid function [1]. We also wish to be able to evaluate the lower bound (2), both to confirm the correctness of the variational updates (since the value of the bound should never decrease) and to monitor convergence and set termination criteria. This can be done efficiently, largely using quantities that have already been calculated during the variational updates.

3 VIBES: A Software Implementation

Creation of a model in VIBES simply involves drawing the graph (using operations similar to those in a simple drawing package) and then assigning properties to each node such as the functional form for the distribution, a list of the other variables it is conditioned on, and the location of the corresponding data file if the node is observed. The menu of distributions available to the user is dynamically adjusted at each stage to ensure that only valid conjugate models can be constructed. As in WinBUGS we have adopted the convention of making logical (deterministic) nodes explicit in the graphical representation, as this greatly simplifies the specification and interpretation of the model. We also use the 'plate' notation of a box surrounding one or more nodes to denote that those nodes are replicated some number of times, as specified by the parameter appearing in the bottom right hand corner of the box.

3.1 Example: Bayesian Mixture Models

We illustrate VIBES using a Bayesian model for a mixture of M probabilistic PCA distributions, each having maximum intrinsic dimensionality of q, with a sparse prior [6], for which the VIBES implementation is shown in Figure 2.
Here there are N observations of the vector t whose dimensionality is d, as indicated by the plates. The dimensionality of the other variables is also determined by which plates they are contained in (e.g. W has dimension d × q × M whereas τ is a scalar). Variables t, x, W and µ are Gaussian, τ and α have Gamma distributions, S is discrete and π is Dirichlet.

Figure 2: Screen shot from VIBES showing the graph for a mixture of probabilistic PCA distributions. The node t is coloured black to denote that this variable is observed, and the node 'alpha' has been highlighted so that its properties (e.g. the form of the distribution) can be changed using the menus on the left hand side. The node labelled 'x.W+mu' is a deterministic node, and the double arrows denote deterministic relationships.

Once the model is completed (and the file or files containing the observed variables are specified) it is then 'compiled', which involves allocating memory for the variables and initializing the distributions Q_i (done using simple heuristics, which can also be over-ridden by the user). If desired, monitoring of the lower bound (2) can be switched on (at the expense of slightly increased computation), and this can also be used to set a termination criterion. Alternatively, the variational optimization can be run for a fixed number of iterations. Once the optimization is complete, various diagnostics can be used to probe the results, such as the Hinton diagram plot shown in Figure 1(b).

Now suppose we wish to modify the model, for instance by having a single set of hyper-parameters α whose values are shared by all of the M components in the mixture, instead of having a separate set for each component. This simply involves dragging the α node outside of the M plate using the mouse and then recompiling (since α is now a vector of length q instead of a matrix of size M × q).
This literally takes a few seconds, in contrast to the effort required to formulate the variational inference equations, and to develop bespoke code, for a new model! The result is then optimized as before. A screen shot of the corresponding VIBES model is shown in Figure 3.

Figure 3: As in Figure 2 but with the vector α of hyper-parameters moved outside the M 'plate'. This causes there to be only q terms in α, shared over the mixture components, rather than M × q. Note that, with no nodes highlighted, the side menus disappear.

4 Discussion

Our early experiences with VIBES have shown that it dramatically simplifies the construction and testing of new variational models, and readily allows a range of alternative models to be evaluated on a given problem. Currently we are extending VIBES to cater for a broader range of variational distributions by allowing the user to specify a Q distribution defined over a subgraph of the true graph [7]. Finally, there are many possible extensions to the basic VIBES framework we have described here. For example, in order to broaden the range of models that can be tackled, we can combine variational inference with other methods such as Gibbs sampling or optimization (empirical Bayes), to allow for non-conjugate hyper-priors for instance. Similarly, there is scope for exploiting exact methods where there exist tractable sub-graphs.

References
[1] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models, pages 105–162. Kluwer, 1998.
[2] R. M. Neal and G. E. Hinton. A new view of the EM algorithm that justifies incremental and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. Kluwer, 1998.
[3] D. J. Lunn, A. Thomas, N. G. Best, and D. J. Spiegelhalter. WinBUGS – a Bayesian modelling framework: concepts, structure and extensibility. Statistics and Computing, 10:321–333, 2000.
http://www.mrc-bsu.cam.ac.uk/bugs/.
[4] Z. Ghahramani and M. J. Beal. Propagation algorithms for variational Bayesian learning. In T. K. Leen, T. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
[5] H. Attias. A variational Bayesian framework for graphical models. In S. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, pages 209–215, Cambridge, MA, 2000. MIT Press.
[6] C. M. Bishop. Variational principal components. In Proceedings Ninth International Conference on Artificial Neural Networks, ICANN'99, volume 1, pages 509–514. IEE, 1999.
[7] C. M. Bishop and J. Winn. Structured variational distributions in VIBES. In Proceedings Artificial Intelligence and Statistics, Key West, Florida, 2003. Accepted for publication.
A Convergent Form of Approximate Policy Iteration

Theodore J. Perkins, Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, perkins@cs.umass.edu
Doina Precup, School of Computer Science, McGill University, Montreal, Quebec, Canada H3A 2A7, dprecup@cs.mcgill.ca

Abstract

We study a new, model-free form of approximate policy iteration which uses Sarsa updates with linear state-action value function approximation for policy evaluation, and a "policy improvement operator" to generate a new policy based on the learned state-action values. We prove that if the policy improvement operator produces ε-soft policies and is Lipschitz continuous in the action values, with a constant that is not too large, then the approximate policy iteration algorithm converges to a unique solution from any initial policy. To our knowledge, this is the first convergence result for any form of approximate policy iteration under similar computational-resource assumptions.

1 Introduction

In recent years, methods for reinforcement learning control based on approximating value functions have come under fire for their poor, or poorly-understood, convergence properties. With tabular storage of state or state-action values, algorithms such as Real-Time Dynamic Programming, Q-Learning, and Sarsa [2, 13] are known to converge to optimal values. Far fewer results exist for the case in which value functions are approximated using generalizing function approximators, such as state-aggregators, linear approximators, or neural networks. Arguably, the best successes of the field were generated in this way (e.g., [15]), and there are a few positive convergence results, particularly for the case of linear approximators [16, 7, 8].
However, simple examples demonstrate that many standard reinforcement learning algorithms, such as Q-Learning, Sarsa, and approximate policy iteration, can diverge or cycle without converging when combined with generalizing function approximators (e.g., [1, 6, 4]). One classical explanation for this lack of convergence is that, even if one assumes that the agent's environment is Markovian, the problem is non-Markovian from the agent's point of view: the state features and/or the agent's approximator architecture may conspire to make some environment states indistinguishable. We focus on a more recent observation, which faults the discontinuity of the action selection strategies usually employed by reinforcement learning agents [5, 10]. If an agent uses almost any kind of generalizing function approximator to estimate state-values or state-action values, the values that are learned depend on the visitation frequencies of different states or state-action pairs. If the agent's behavior is discontinuous in its value estimates, as is the case with greedy and ε-greedy behavior [14], then slight changes in value estimates may result in radical changes in the agent's behavior. This can dramatically change the relative frequencies of different states or state-action pairs, causing entirely different value estimates to be learned. One way to avoid this problem is to ensure that small changes in action values result in small changes in the agent's behavior—that is, to make the agent's policy a continuous function of its values. De Farias and Van Roy [5] showed that a form of approximate value iteration which relies on linear value function approximations and softmax policy improvement is guaranteed to possess fixed points.
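The discontinuity at issue is easy to see numerically. The sketch below (toy action values, not from the paper) contrasts greedy and softmax selection under a perturbation of one part in a thousand: the greedy policy flips completely, while the softmax policy barely moves.

```python
import numpy as np

# Greedy selection is discontinuous in the action values; softmax is not.
def greedy(q):
    p = np.zeros_like(q)
    p[np.argmax(q)] = 1.0
    return p

def softmax(q, beta=1.0):
    z = np.exp(beta * (q - q.max()))   # subtract max for numerical stability
    return z / z.sum()

q1 = np.array([1.000, 0.999])
q2 = np.array([0.999, 1.000])          # tiny perturbation of q1
jump = np.abs(greedy(q1) - greedy(q2)).max()     # greedy flips entirely
drift = np.abs(softmax(q1) - softmax(q2)).max()  # softmax moves only slightly
```

The inverse temperature beta controls how sharply softmax concentrates on the best action, and hence, as discussed below, its Lipschitz constant.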
For partially-observable Markov decision processes, Perkins and Pendrith [10] showed that observation-action values that are fixed points under Q-Learning or Sarsa update rules are guaranteed to exist if the agent uses any continuous action selection strategy. Both of these papers demonstrate that continuity of the agent's action selection strategy leads to the existence of fixed points to which the algorithms can converge. In neither case, however, was convergence established. We take this line of reasoning one step further. We study a form of approximate policy iteration in which, at each iteration: (1) Sarsa updating is used to learn weights for a linear approximation to the action value function of the current policy (policy evaluation), and then (2) a "policy improvement operator" determines a new policy based on the learned action values (policy improvement).1 We show that if the policy improvement operator, analogous to the action selection strategy of an on-line agent, is ε-soft and Lipschitz continuous in the action values, with a constant that is not too large, then the sequence of policies generated is guaranteed to converge. This technical requirement formalizes the intuition that the agent's behavior should not change too dramatically when value estimates change.

2 Markov Decision Processes and Value Functions

We consider infinite-horizon discounted Markov decision problems [3]. We assume that the Markov decision process has a finite state set, S, and a finite action set, A, with sizes |S| and |A|. When the process is in state s and the agent chooses action a, the agent receives an immediate reward with expectation r(s, a), and the process transitions to next state s′ with probability P(s′ | s, a). Let R be the length-|S||A| vector of expected immediate rewards for each state-action pair (s, a). A stochastic policy, π, assigns a probability distribution over A to each s ∈ S. The probability that the agent chooses action a when the process is in state s is denoted π(s, a). If π is deterministic in state s, i.e., if π(s, a) = 1 for some a and π(s, a′) = 0 for all a′ ≠ a, then we write π(s) = a. For t = 0, 1, 2, ..., let s_t, a_t, and r_t denote, respectively, the state of the process at time t, the action chosen by the agent at time t, and the reward received by the agent at time t. For policy π, the state-value function, V^π, and state-action value function (or just action-value function), Q^π, are defined as:

V^π(s) = E^π[ Σ_{t=0}^∞ γ^t r_t | s_0 = s ],    Q^π(s, a) = E^π[ Σ_{t=0}^∞ γ^t r_t | s_0 = s, a_0 = a ],

where the expectation is with respect to the stochasticity of the process and the fact that the agent chooses actions according to π, and γ ∈ [0, 1) is a discount factor. It is well-known [11] that there exists at least one deterministic, optimal policy π* for which Q^{π*}(s, a) ≥ Q^π(s, a) for all π, s, and a. Policy π is called ε-soft if π(s, a) ≥ ε for all s and a. For any ε > 0, let Π_ε denote the set of ε-soft policies. Note that a policy, π, can be viewed as an element of R^{|S||A|}, and Π_ε can be viewed as a compact subset of R^{|S||A|}. We make the following assumption:

Assumption 1 Under any policy π, the Markov decision process behaves as an irreducible, aperiodic Markov chain over the state set S.

1The algorithm can also be viewed as batch-mode Sarsa with linear action-value function approximation.

----------------------------------------------------------------
Inputs: initial policy π_0, and policy improvement operator Γ.
for i = 0, 1, 2, ... do
  Policy evaluation (Sarsa updates under policy π_i, with linear function approximation):
    Initialize w arbitrarily. With the environment in state s_0, choose a_0 according to π_i(s_0, ·). Observe r_0, s_1.
    Repeat for t = 1, 2, ... until w converges:
      Choose a_t according to π_i(s_t, ·).
      w ← w + α_t [ r_{t−1} + γ φ(s_t, a_t)^T w − φ(s_{t−1}, a_{t−1})^T w ] φ(s_{t−1}, a_{t−1})
      Observe r_t, s_{t+1}.
  Policy improvement: π_{i+1} = Γ(Φw).
end for
----------------------------------------------------------------
Figure 1: The version of approximate policy iteration that we study.

The approximate policy iteration algorithm we propose learns linear approximations to the action value functions of policies. For this purpose, we assume that each state-action pair (s, a) is represented by a length-K feature vector φ(s, a). (In this paper, all vectors are columns unless transposed.) For weights w ∈ R^K, the approximate action-value for (s, a) is Q̂(s, a) = φ(s, a)^T w, where φ(s, a)^T denotes the transpose of φ(s, a). Letting Φ be the |S||A|-by-K matrix whose rows correspond to the feature vectors of the state-action pairs, the entire approximate action-value function given by weights w is represented by the vector Q̂ = Φw. We make the following assumption:

Assumption 2 The columns of Φ are linearly independent.

3 Approximate Policy Iteration

The standard, exact policy iteration algorithm [3] starts with an arbitrary policy π_0 and alternates between two steps: policy evaluation, in which V^{π_i} is computed, and policy improvement, in which a new policy, π_{i+1}, is computed. V^{π_i} can be computed in various ways, including dynamic programming or solving a system of linear equations. π_{i+1} is taken to be a greedy, deterministic policy with respect to Q^{π_i}. That is, π_{i+1}(s) = argmax_a Q^{π_i}(s, a) for all s. Policy iteration terminates when V^{π_{i+1}} = V^{π_i}. It is well-known that the sequence of policies generated is monotonically improving, in the sense that V^{π_{i+1}}(s) ≥ V^{π_i}(s) for all s, and that the algorithm terminates after a finite number of iterations [3]. Bertsekas and Tsitsiklis [4] describe several versions of approximate policy iteration in which the policy evaluation step is not exact. Instead, V^{π_i} is approximated by a weighted linear combination of state features, with weights determined by Monte Carlo or TD(λ) learning rules.
However, they assume that the policy improvement step is the same as in the standard policy iteration algorithm—the next policy is greedy with respect to the (approximate) action values of the previous policy. Bertsekas and Tsitsiklis show that if the approximation error in the evaluation step is low, then such algorithms generate solutions that are near optimal [4]. However, they also demonstrate by example that the sequence of policies generated does not converge for some problems, and that poor performance can result when the approximation error is high.

We study the version of approximate policy iteration shown in Figure 1. Like the versions studied by Bertsekas and Tsitsiklis, we assume that policy evaluation is not performed exactly. In particular, we assume that Sarsa updating is used to learn the weights of a linear approximation to the action-value function. We use action-value functions instead of state-value functions so that the algorithm can be performed based on interactive experience with the environment, without knowledge of the state transition probabilities. The weights learned in the policy evaluation step converge under conditions specified by Tsitsiklis and Van Roy [17], one of which is Assumption 2. The key difference from previous work is that we assume a generic policy improvement operator, Γ, which maps every Q̂ ∈ R^{|S||A|} to a stochastic policy. This operator may produce, for example, greedy policies, ε-greedy policies, or policies with action selection probabilities based on the softmax function [14]. Γ is Lipschitz continuous with constant L if, for all Q̂, Q̂′ ∈ R^{|S||A|}, ‖Γ(Q̂) − Γ(Q̂′)‖ ≤ L ‖Q̂ − Q̂′‖, where ‖·‖ denotes the Euclidean norm. Γ is ε-soft if, for all Q̂ ∈ R^{|S||A|}, Γ(Q̂) is ε-soft. The fact that we allow for a policy improvement step that is not strictly greedy enables us to establish the following theorem.

Theorem 1 For any infinite-horizon Markov decision process satisfying Assumption 1, and for any ε > 0, there exists L > 0 such that if Γ is ε-soft and Lipschitz continuous with a constant no greater than L, then the sequence of policies generated by the approximate policy iteration algorithm in Figure 1 converges to a unique limiting policy in Π_ε, regardless of the choice of π_0.

In other words, if the behavior of the agent does not change too greatly in response to changes in its action value estimates, then convergence is guaranteed. The remainder of the paper is dedicated to proving this theorem. First, however, we briefly consider what the theorem means and what some of its limitations are. The strength of the theorem is that it states a simple condition under which a form of model-free reinforcement learning control based on approximating value functions converges for a general class of problems. The theorem does not specify a particular constant, L, which ensures convergence; it merely states that such a constant exists. The values of L (and hence, the range of policy improvement operators) which ensure convergence depend on properties of the decision process, such as its transition probabilities and rewards, which we assume to be unknown. The theorem also offers no guarantee on the quality of the policy to which the algorithm converges. Intuitively, if the policy improvement operator is Lipschitz continuous with a small constant L, then the agent is limited in the extent to which it can optimize its behavior. For example, even if an agent correctly learns that the value of one action is much higher than the value of another, a small L limits the frequency with which the agent can choose the better action over the worse one, and this may limit performance. The practical importance of these considerations remains to be seen, and is discussed further in the conclusions section.
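The algorithm of Figure 1 can be sketched on a toy problem. The code below is an illustrative reconstruction, not the authors' implementation: the 2-state, 2-action MDP, the tabular features, and the softmax inverse temperature are all hypothetical, and instead of simulating Sarsa, the evaluation step solves the fixed-point equation of Tsitsiklis and Van Roy (Equation 1 in Section 4.2) directly.

```python
import numpy as np

# Toy approximate policy iteration: exact solve of the Sarsa fixed point
# Phi^T D (R + gamma*P*Phi*w - Phi*w) = 0, then softmax improvement,
# which is eps-soft and (for small beta) Lipschitz with a small constant.
nS, nA, gamma, beta = 2, 2, 0.5, 0.25
P = np.array([[[0.9, 0.1], [0.1, 0.9]],    # P[a, s, s']: action 0 tends to stay
              [[0.2, 0.8], [0.8, 0.2]]])   # action 1 tends to switch
R = np.array([[1.0, 0.0], [0.0, 0.5]])     # expected reward R[s, a]
Phi = np.eye(nS * nA)                       # tabular features (full column rank)

def sa_transitions(pi):
    # Transition matrix over state-action pairs: entry = P(s'|s,a) * pi(s',a')
    M = np.zeros((nS * nA, nS * nA))
    for s in range(nS):
        for a in range(nA):
            for s2 in range(nS):
                for a2 in range(nA):
                    M[s * nA + a, s2 * nA + a2] = P[a, s, s2] * pi[s2, a2]
    return M

pi = np.full((nS, nA), 0.5)                 # initial (uniform) policy
for _ in range(100):
    M = sa_transitions(pi)
    d = np.full(nS * nA, 1.0 / (nS * nA))   # stationary distribution of M
    for _ in range(200):
        d = d @ M                           # power iteration
    D = np.diag(d)
    A = Phi.T @ D @ (np.eye(nS * nA) - gamma * M) @ Phi
    w = np.linalg.solve(A, Phi.T @ D @ R.reshape(-1))   # evaluation step
    Q = (Phi @ w).reshape(nS, nA)
    new_pi = np.exp(beta * (Q - Q.max(axis=1, keepdims=True)))
    new_pi /= new_pi.sum(axis=1, keepdims=True)         # softmax improvement
    delta = np.abs(new_pi - pi).max()
    pi = new_pi
```

With these (deliberately conservative) settings each iteration is a strong contraction, so the policy sequence settles to a unique fixed point while remaining soft, mirroring the trade-off the theorem describes: a small Lipschitz constant buys convergence at the price of limited exploitation.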
4 Proof of Theorem 1

4.1 Probabilities Related to State-Action Pairs

Because the approximate policy iteration algorithm in Figure 1 approximates action-values, our analysis relies extensively on certain probabilities that are associated with state-action pairs. First, we define P^π to be the |S||A|-by-|S||A| matrix whose entries correspond to the probabilities that one state-action pair follows another when the agent behaves according to π. That is, the element on the (s, a) row and (s′, a′) column of P^π is P(s′ | s, a) π(s′, a′). P^π can be viewed as the stochastic transition matrix of a Markov chain over state-action pairs.

Lemma 1 There exists C₁ such that for all π, π′, ‖P^π − P^{π′}‖ ≤ C₁ ‖π − π′‖.

Proof: Let π and π′ be fixed. Corresponding entries of P^π and P^{π′} differ by |P(s′ | s, a) π(s′, a′) − P(s′ | s, a) π′(s′, a′)| ≤ |π(s′, a′) − π′(s′, a′)| ≤ ‖π − π′‖. It is readily shown that for any two n-by-n matrices whose elements differ in absolute value by at most δ, the norms of the matrices differ by at most nδ. Hence, ‖P^π − P^{π′}‖ ≤ C₁ ‖π − π′‖ for a constant C₁ depending only on |S| and |A|.

Under Assumption 1, fixing a policy, π, induces an irreducible, aperiodic Markov chain over S. Let d^π(s) > 0 denote the stationary probability of state s. We define d^π to be the length-|S||A| vector whose (s, a) element is d^π(s) π(s, a). Note that the elements of d^π sum to one. If π(s, a) > 0 for all s and a, then all elements of d^π are positive, and it is easily verified that d^π is the unique stationary distribution of the irreducible, aperiodic Markov chain over state-action pairs with transition matrix P^π.

Lemma 2 For any ε > 0, there exists C₂ such that for all π, π′ ∈ Π_ε, ‖d^π − d^{π′}‖ ≤ C₂ ‖π − π′‖.

Proof: For any π ∈ Π_ε, let λ_π be the largest eigenvalue of P^π with modulus strictly less than 1. λ_π is well-defined since the transition matrix of any irreducible, aperiodic Markov chain has precisely one eigenvalue equal to one [11]. Since the eigenvalues of a matrix are continuous in the elements of the matrix [9], and since Π_ε is compact, the supremum of |λ_π| over Π_ε is attained by some policy in Π_ε, and is therefore strictly less than one. Seneta [12] showed that for any two irreducible, aperiodic Markov chains on a state set with n elements, with transition matrices T and T′ and stationary distributions d and d′, the difference ‖d − d′‖ is bounded by a constant, depending only on n and on the largest eigenvalue of T with modulus strictly less than one, times ‖T − T′‖. Combining this bound with Lemma 1 yields the claim.

Lastly, we define D^π to be the diagonal matrix whose diagonal is d^π. It is easy to show that for any π, π′, ‖D^π − D^{π′}‖ ≤ ‖d^π − d^{π′}‖.

4.2 The Weights Learned in the Policy Evaluation Step

Consider the approximate policy evaluation step of the algorithm in Figure 1. Suppose that the agent follows policy π and uses Sarsa updates to learn weights w, and suppose that π(s, a) > 0 for all s and a. Then P^π is the stochastic transition matrix of an irreducible, aperiodic Markov chain over state-action pairs, and D^π has the unique stationary distribution of that chain on its diagonal. Under standard conditions on the learning rate parameters for the updates, α_t, Tsitsiklis and Van Roy [17] show that the weights converge to the unique solution w of the equation:

Φ^T D^π (R + γ P^π Φw − Φw) = 0.    (1)

(Note that we have translated their result for TD(λ) updating of approximate state-values to Sarsa, or TD(0), updating of approximate state-action values.) In essence, this equation says that the "expected update" to the weights under the stationary distribution, d^π, is zero. Let A^π = Φ^T D^π (I − γ P^π) Φ and b^π = Φ^T D^π R. Tsitsiklis and Van Roy [17] show that A^π is invertible, hence we can write w^π = (A^π)^{−1} b^π for the unique weights which satisfy Equation 1.

Lemma 3 There exist C₃ and C₄ such that for all π, π′ ∈ Π_ε, ‖b^π − b^{π′}‖ ≤ C₃ ‖π − π′‖ and ‖A^π − A^{π′}‖ ≤ C₄ ‖π − π′‖.

Proof: For the first claim, ‖b^π − b^{π′}‖ = ‖Φ^T (D^π − D^{π′}) R‖ ≤ ‖Φ‖ ‖R‖ ‖D^π − D^{π′}‖ ≤ ‖Φ‖ ‖R‖ C₂ ‖π − π′‖. For the second claim, writing A^π − A^{π′} = Φ^T [ (D^π − D^{π′}) − γ (D^π P^π − D^{π′} P^{π′}) ] Φ and applying the triangle inequality,

‖A^π − A^{π′}‖ ≤ ‖Φ‖² ( ‖D^π − D^{π′}‖ + γ ‖D^π‖ ‖P^π − P^{π′}‖ + γ ‖D^π − D^{π′}‖ ‖P^{π′}‖ ),

and the claim follows from Lemmas 1 and 2, together with the fact that ‖D^π‖ and ‖P^π‖ are bounded uniformly over Π_ε.

Lemma 4 For any ε > 0, there exists C₅ such that ‖w^π‖ ≤ C₅ for all π ∈ Π_ε.

Proof: By Lemmas 1 and 2, and by the continuity of matrix inverses [11], (A^π)^{−1} is a continuous function of π. Thus, w^π is a continuous function of π. Because Π_ε is a compact subset of R^{|S||A|}, and because continuous functions map compact sets to compact sets, the existence of the bound, C₅, follows.

For any square matrix M, let ι(M) = min_{‖v‖=1} ‖Mv‖. That is, ι measures how small a vector of length one can become under left-multiplication by the matrix M.

Lemma 5 For any ε > 0, there exists ι₀ > 0 such that for all π ∈ Π_ε, ι(A^π) ≥ ι₀.

Proof: Lemma 7, in the Appendix, shows that ι is a continuous mapping and that ι is positive for any non-singular matrix. Since ι(A^π) is a continuous function of π and Π_ε is compact, the infimum of ι(A^π) over Π_ε is attained by some π̃ ∈ Π_ε. Thus, for all π ∈ Π_ε, ι(A^π) ≥ ι(A^{π̃}) > 0, where the last inequality follows because A^{π̃} is non-singular.

Lemma 6 For any ε > 0, there exists C₆ such that for all π, π′ ∈ Π_ε, ‖w^π − w^{π′}‖ ≤ C₆ ‖π − π′‖.

Proof: Let π, π′ ∈ Π_ε be arbitrary. From Equation 1, A^π w^π = b^π and A^{π′} w^{π′} = b^{π′}. Thus A^π (w^π − w^{π′}) = (b^π − b^{π′}) + (A^{π′} − A^π) w^{π′}, and so

ι₀ ‖w^π − w^{π′}‖ ≤ ‖A^π (w^π − w^{π′})‖ ≤ ‖b^π − b^{π′}‖ + ‖A^π − A^{π′}‖ ‖w^{π′}‖ ≤ (C₃ + C₄ C₅) ‖π − π′‖.    (2)

The left hand side of Equation 2 follows from Lemmas 5 and 7; the right hand side follows from Lemmas 3 and 4. The claim holds with C₆ = (C₃ + C₄ C₅)/ι₀.

4.3 Contraction Argument

Proof of Theorem 1: For a given infinite-horizon discounted Markov decision problem, let ε > 0 and Γ be fixed. Suppose that Γ is ε-soft and Lipschitz continuous with constant L, where L is yet to be determined. Let π, π′ ∈ Π_ε be arbitrary. The policies that result from π and π′ after one iteration of the approximate policy iteration algorithm of Figure 1 are Γ(Φw^π) and Γ(Φw^{π′}) respectively. Observe that:

‖Γ(Φw^π) − Γ(Φw^{π′})‖ ≤ L ‖Φw^π − Φw^{π′}‖ ≤ L ‖Φ‖ ‖w^π − w^{π′}‖ ≤ L ‖Φ‖ C₆ ‖π − π′‖,

where the last step follows from Lemma 6. If L < 1/(‖Φ‖ C₆), then for some β ∈ [0, 1) we have ‖Γ(Φw^π) − Γ(Φw^{π′})‖ ≤ β ‖π − π′‖. Each iteration of the approximate policy iteration is a contraction. By the Contraction Mapping Theorem [3], there is a unique fixed point of the mapping π ↦ Γ(Φw^π), and the sequence of policies generated according to that mapping from any initial policy converges to the fixed point. Note that since the sequence of policies, π_i, converges, and since Φw^π is a continuous function of π, the sequence of approximate action-value functions computed by the algorithm, Φw^{π_i}, also converges.

5 Conclusions and Future Work

We described a model-free, approximate version of policy iteration for infinite-horizon discounted Markov decision problems. In this algorithm, the policy evaluation step of classical policy iteration is replaced by learning a linear approximation to the action-value function using on-line Sarsa updating. The policy improvement step is given by an arbitrary policy improvement operator, which maps any possible action-value function to a new policy. The main contribution of the paper is to show that if the policy improvement operator is ε-soft and Lipschitz continuous in the action-values, with a constant that is not too large, then the approximate policy iteration algorithm is guaranteed to converge to a unique, limiting policy from any initial policy. We are hopeful that similar ideas can be used to establish the convergence of other reinforcement learning algorithms, such as on-line Sarsa or Sarsa(λ) control with linear function approximation.

The magnitude of the constant L that ensures convergence depends on the model of the environment and on properties of the feature representation. If the model is not known, then choosing a policy improvement operator that guarantees convergence is not immediate. To be safe, an operator whose Lipschitz constant is small should be chosen. However, one generally prefers the constant to be large, so that the agent can exploit its knowledge by choosing actions with higher estimated action-values as frequently as possible.
One approach to determining a proper value of the constant would be to make an initial guess and begin the approximate policy iteration procedure. If the contraction property fails on any iteration, one should choose a new policy improvement operator that is Lipschitz continuous with a smaller constant. A potential advantage of this approach is that one can begin with a large constant, which allows exploitation of action-value differences, and switch to smaller constants only as necessary. It is possible that convergence could be obtained with much larger constants than are suggested by the bound in the proof of Theorem 1. Discontinuous improvement operators/action selection strategies can lead to non-convergent behavior for many reinforcement learning algorithms, including Q-Learning, Sarsa, and forms of approximate policy iteration and approximate value iteration. For some of these algorithms, (non-unique) fixed points have been shown to exist when the action selection strategy/improvement operator is continuous [5, 10]. Whether or not convergence also follows remains to be seen. For the algorithm studied in this paper, we have constructed an example demonstrating non-convergence with improvement operators that are Lipschitz continuous but with too large a constant. In this case, it appears that the Lipschitz continuity assumption we use cannot be weakened. One direction for future work is determining minimal restrictions on action selection (if any) that ensure the convergence of other reinforcement learning algorithms. Ensuring convergence answers one standing objection to reinforcement learning control methods based on approximating value functions. However, an important open issue for our approach, and for other approaches advocating continuous action selection [5, 10], is to characterize the solutions that they produce.
We know of no theoretical guarantees on the quality of solutions found, and there is little experimental work comparing algorithms that use continuous action selection with those that do not. Acknowledgments Theodore Perkins was supported in part by National Science Foundation grants ECS-0070102 and ECS-9980062. Doina Precup was supported in part by grants from NSERC and FQNRT. References [1] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the Twelfth International Conference on Machine Learning, pages 30–37. Morgan Kaufmann, 1995. [2] A. G. Barto, S. J. Bradtke, and S. P. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1):81–138, 1995. [3] D. P. Bertsekas. Dynamic Programming and Optimal Control, Volumes 1 and 2. Athena Scientific, 2001. [4] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996. [5] D. P. De Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and temporal-difference learning. Journal of Opt. Theory and Applications, 105(3), 2000. [6] G. Gordon. Chattering in Sarsa(λ). CMU Learning Lab Internal Report. Available at www.cs.cmu.edu/ ggordon, 1996. [7] G. Gordon. Approximate Solutions to Markov Decision Processes. PhD thesis, Carnegie Mellon University, 1999. [8] G. J. Gordon. Reinforcement learning with function approximation converges to a region. Advances in Neural Information Processing Systems 13, pages 1040–1046. MIT Press, 2001. [9] C. D. Meyer. Matrix Analysis and Applied Linear Algebra. SIAM, 2000. [10] T. J. Perkins and M. D. Pendrith. On the existence of fixed points for Q-learning and Sarsa in partially observable domains. In Proceedings of the Nineteenth International Conference on Machine Learning, 2002. [11] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc, New York, 1994. [12] E. Seneta.
Sensitivity analysis, ergodicity coefficients, and rank-one updates for finite Markov chains. In W. J. Stewart, editor, Numerical Solutions of Markov Chains. Dekker, NY, 1991. [13] S. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvari. Convergence results for single-step on-policy reinforcement-learning algorithms. Machine Learning, 38(3):287–308, 2000. [14] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press/Bradford Books, Cambridge, Massachusetts, 1998. [15] G. J. Tesauro. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6(2):215–219, 1994. [16] J. N. Tsitsiklis and B. Van Roy. Optimal stopping of Markov processes: Hilbert space theory, approximation algorithms, and an application to pricing high-dimensional financial derivatives. IEEE Transactions on Automatic Control, 44(10):1840–1851, 1999. [17] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997. Appendix Lemma 7 For -by matrix , let 0 = . Then: 1. M ) for all , 2. O ) iff is non-singular, 3. for any T , M , 4. is continuous. Proof: The first three points readily follow from elementary arguments. We focus on the last point. We want to show that given a sequence of matrices , % .''1/0/1/ that converges to some I , then " I0 . Let I +.-/ 0
= I . Then )0
: "0 : 0 = "0
: I )0
: I I1 I )0 : I I I I I0 . Now, for all let +2-3/ 0 = . Then )0 : "0 3 : "0 : I I1 M )0 : I I1 M "0 3 : I I I I1 . Thus, )0 : I0 .
[year: 2002 | no. 129]
Graph-Driven Features Extraction from Microarray Data using Diffusion Kernels and Kernel CCA Jean-Philippe Vert Ecole des Mines de Paris Jean-Philippe.Vert@mines.org Minoru Kanehisa Bioinformatics Center, Kyoto University kanehisa@kuicr.kyoto-u.ac.jp Abstract We present an algorithm to extract features from high-dimensional gene expression profiles, based on the knowledge of a graph which links together genes known to participate in successive reactions in metabolic pathways. Motivated by the intuition that biologically relevant features are likely to exhibit smoothness with respect to the graph topology, the algorithm involves encoding the graph and the set of expression profiles into kernel functions, and performing a generalized form of canonical correlation analysis in the corresponding reproducing kernel Hilbert spaces. Function prediction experiments for the genes of the yeast S. cerevisiae validate this approach by showing a consistent increase in performance when a state-of-the-art classifier uses the vector of features instead of the original expression profile to predict the functional class of a gene. 1 Introduction Microarray technology (DNA chips) is quickly becoming a major data provider in the postgenomics era, enabling the monitoring of the quantity of messenger RNA present in a cell for several thousand genes simultaneously. By submitting cells to various experimental conditions and comparing the expression profiles of different genes, a better understanding of the regulation mechanisms and functions of each gene is expected. As a matter of fact, early experiments confirmed that many genes with similar function yield similar expression patterns [4], and systematic use of state-of-the-art machine learning classification algorithms highlighted the possibility of gene function prediction from microarray data, at least for some functional categories [2].
Independently of microarray technology, decades of research in molecular biology have characterized the roles played by many genes as catalyzing chemical reactions in the cell. This information has now been integrated into databases such as KEGG [8], where series of successive chemical reactions arranged into pathways are represented, together with the genes catalyzing them. In particular one can extract from such a database a graph of genes, where two genes are linked whenever they catalyze two successive reactions. The question motivating this report is whether the knowledge of this graph can help improve the performance of gene function prediction algorithms based on microarray data only. To this end we propose a graph-driven feature extraction process, based on the idea that expression patterns which correspond to actual biological events, such as the activation or inhibition of a particular pathway, are more likely to be shared by genes close to each other in the graph than non-relevant patterns. Our approach consists in translating this intuition into a regularized version of canonical correlation analysis between the genes mapped to two reproducing kernel Hilbert spaces, defined respectively by a diffusion kernel [9] on the graph and a linear kernel on the expression profiles. This formulation leads to a well-posed problem equivalent to a generalized eigenvector problem [1]. 2 Problem formulation The set of genes is represented by a discrete set of cardinality . The set of expression profiles is a mapping
, where is the number of measurements and is the expression profile of gene . In the sequel we assume that the set of profiles has been centered, i.e., "! . The graph of genes extracted from the pathway database is represented by a simple graph # $ &%(' , with the genes as vertices. Our goal is to use this graph to extract features from the expression profiles. To this end we formally define a feature to be a real-valued mapping on the set of genes )* + , , and we denote by -. the set of possible features. The set of centered features is denoted by -0/132)546-78"96): !; . In particular linear features extracted from expression profiles )8<>= ? are defined, for any @ 4 A
, by ) <>= ? 0 @8B , for any C4 (here and often in the sequel we use matrix notations, where @ is a column vector and @ B its transpose). We call DFE*/ the set of linear features. The normalized variance of a linear feature is defined by: G ) <>= ? 4HD %JI K) <>= ? &) <>= ? ML N @ O L P (1) It is a first indicator of the possible relevance of a linear vector. Indeed biological events such as the synthesis of new molecules usually require the coordinated actions of many proteins: they are therefore likely to have characteristic patterns in terms of gene expression which capture variation between the genes involved and the others, and should therefore have large variance. Linear features with a large normalized variance (1) are called relevant in the sequel, as opposed to irrelevant features. Relevant features can be extracted by PCA. While the normalized variance (1) is an intrinsic property of the set of profiles, the knowledge of the graph # suggests another criterion to judge “good” features. As genes linked together in the graph are supposed to participate in successive reactions in the cell, it is likely that the activation/inhibition of a biochemical pathway has a characteristic expression pattern shared by clusters of genes in the graph. More globally, the graph defines a structure on the set of genes, and therefore a notion of smoothness for any feature )Q45. A feature is called smooth if it varies slowly between adjacent nodes in the graph, and rugged otherwise. As just stated, features of interest are more likely to be smooth than other features. We therefore end up with two criteria for extracting “good” features: they should simultaneously be relevant and smooth, the latter being defined with respect to the gene graph. One way to extract such features is to look for pairs of features, K)R % ) L S4T-VUWD , such that )R be smooth, ) L be a relevant linear feature, and the correlation between )R and ) L be as large as possible. 
The decoupling of the two criteria enables us to state the problem mathematically as follows. Suppose we can define a smoothness functional XYRZ0 []\ for any feature, and a relevance functional X L D ^:\ for linear features, in such a way that lower values of the functional XR (resp. X L ) correspond to smoother (resp. more relevant) features. Then the following optimization problem: =
) B R ) L ) B R ) R X R K) R ) B L ) L X L ) L % (2) where ! is a regularization parameter, is a way to extract smooth and relevant features. Irrelevance and ruggedness penalize any candidate pair through the functionals XR and X L , and controls the trade-off between relevance and smoothness on the one hand, and correlation on the other hand. ! amounts to finding )R and ) L as correlated as possible (which is obtained by taking )8RS ) L ), while ! forces )R to be relevant and ) L to be smooth. In order to turn (2) into an algorithm we remark that if X R and X L can be expressed as norms in reproducing kernel Hilbert spaces (RKHS, see Section 3), then (2) takes the form of a generalization of canonical correlation analysis (CCA) known as kernel-CCA [1], which is equivalent to a generalized eigenvector problem. Let us therefore show how to build two RKHS on the set of genes whose norms are smoothness (Section 4) and relevance (Section 5) functionals, respectively. 3 Reproducing kernel Hilbert spaces and smoothness functionals Let us briefly review basic properties of RKHS relevant for the sequel. The reader is referred to [12, 14] for more details. Let L be a Mercer kernel in the sense that the matrix = = be symmetric positive semidefinite. Let ! E be the linear span of "#T % P % F4 %$ , and consider a decomposition of as: V & ' ( ) R+* (-,(., B ( % (3) where !/ * R0/ P P P / * & are the eigenvalues of and the set , R % P P P % , & 4 & is an associated orthonormal basis of eigenvectors in 1L . The decomposition of any )W42! on this basis can be expressed as )5 & ( )43 \ R65 ( , ( , where 7 is the multiplicity of ! as an eigenvalue. An inner product can be defined in ! as follows: 8 & ' ( )43 \ R 5 ( , ( % & ' ( )43 \ R9 ( , (;:=< & ' ( )>3 \ R 5 ( 9 ( * ( P (4) The resulting Hilbert space ! is called a reproducing kernel Hilbert space, due to the following reproducing property: G % B 4 L %@? C P % % C P % B BA < CC % B P (5) The inner product in !
can be easily expressed in a dual form as follows. Each ) 4D! can be decomposed as ): P 9%E C % P , where E is unique up to the addition of an element of the null space of and is called the dual coordinate of ) . In a matrix form, this reads )HFGE , and using (5) one can easily check that the inner product between two features ) %H 4I!6L with dual coordinates E %BJ 4 L respectively is given by: ? ) %H A < ' = E J LK BC % K CE B J P (6) In particular the ! -norm of a feature )54I! with dual coordinates EF4&is given by: O ) O L < ME B GE % (7) and the inner product between two features K) %BH 4%!&L with dual coordinates E % J 4&L in the original space 1 L9 can also be expressed in dual form: ) B H ' ): H E B L J P (8) When is a subspace of then it is known that the norm in the RKHS defined by several popular kernels, such as the Gaussian radial basis kernel, is a smoothing functional, in the sense that larger values of the norm correspond to functions with more energy at high frequency in their Fourier decomposition. This fact has been much exploited e.g. in regularization theory [14, 5], and we now adapt it to the discrete setting. 4 Smoothness functional on a graph A natural way to quantify the smoothness of a feature on a graph is by its energy at high frequency, as computed from its Fourier transform. Fourier transforms on graphs are a classical tool of spectral graph analysis [3, 11] which we briefly recall now. Let A be the adjacency matrix of the graph (with entry 1 if there is an edge between two vertices, 0 otherwise) and D the diagonal matrix of vertex degrees. Then the matrix L = D - A is called the Laplacian of the graph, and is known to share many properties with the continuous Laplacian [11]. It is symmetric, positive semidefinite, and singular. The constant eigenvector (1, ..., 1) belongs to the eigenvalue 0, whose multiplicity is equal to the number of connected components of the graph.
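These Laplacian properties are easy to verify numerically. The following sketch (a hypothetical 5-node path graph, not the gene graph of the paper) builds the Laplacian, checks that the constant vector sits at eigenvalue 0, and uses the quadratic form f'Lf, which equals the sum of (f_i - f_j)^2 over edges, as a ruggedness measure:

```python
import numpy as np

# Path graph on 5 nodes (hypothetical example, not the gene graph)
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                      # graph Laplacian: symmetric, PSD, singular

lam, U = np.linalg.eigh(L)     # eigenvalues in ascending order
# Columns of U form a discrete Fourier basis on the graph:
# U[:, 0] is constant (eigenvalue 0); later columns oscillate more.

def energy(f):
    # f' L f = sum over edges of (f_i - f_j)^2: small for smooth features
    return float(f @ L @ f)

smooth, rugged = U[:, 0], U[:, -1]
```

For a normalized eigenvector, energy() returns the associated eigenvalue, so a norm that divides Fourier coefficients by a decreasing function of the eigenvalue penalizes exactly the high-frequency components.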
Let us denote by 0 = lambda_1 <= ... <= lambda_n the eigenvalues of L and by {psi_1, ..., psi_n} an orthonormal set of associated eigenvectors. This basis is a discrete Fourier basis [3], and it is known that psi_i oscillates more and more on the graph as i increases. The Fourier decomposition of any feature f is the expansion in terms of this basis: f = sum_{i=1}^n fhat_i psi_i, (9) where fhat_i = psi_i' f, and fhat = (fhat_1, ..., fhat_n) is called the discrete Fourier transform of f. For any monotonically decreasing mapping phi taking only positive values, let us now consider the kernel defined by: K(x, y) = sum_{i=1}^n phi(lambda_i) psi_i(x) psi_i(y). (10) The mapping phi being assumed to take only positive values, the matrix K is positive definite and is therefore a Mercer kernel on the set of genes. The corresponding RKHS is the set of features, with norm given by: ||f||_K^2 = sum_{i=1}^n fhat_i^2 / phi(lambda_i). (11) As i increases, lambda_i increases, so phi(lambda_i) decreases. As a result the norm (11) takes higher values on features which have a lot of energy at high frequency, and is therefore a natural smoothing functional. An example of a valid function with rapid decay is the exponential phi(lambda) = exp(-beta*lambda), where beta is a parameter. In that case we recover the diffusion kernel introduced and discussed in [9]. Considering other mappings would be beyond the scope of this report, so we restrict ourselves to this diffusion kernel in the sequel. Observe that it can be expressed using the matrix exponential as K = exp(-beta*L). 5 Relevance functional If @ 4 A
has a projection @ / onto the linear span of " 8 % "4 $ then )<>= ? )9<>= ? . As a result the set of linear features D can be parametrized by directions of the form @ 9 J M , where J 4Wis called the dual coordinate of @ and is defined up to the addition of an element of the null space of the Gram matrix = B LK . The RKHS ! E associated with this semidefinite positive matrix consists of the set of features of the form ): P J C % P 0 ) ? = < , where @ 9 J (8 . In other words this is exactly the set of linear features, ! D . The variance of a feature )54 D can be expressed by (1), (6) and (8) as follows: I K)9< = ?9 9 ) <>= ? L N @ N L J B L J J B J N ) <>= ? O N ) <>= ? O < P As a result, a natural relevance functional to balance the term N ) N in (2) is the norm in the RKHS: X L K) < = ? N ) <>= ? O < , where ! is the RKHS associated with the linear kernel C % K " B LK . 6 Extracting smooth correlations Let R 1 denote the diffusion kernel and L denote the linear kernel L % K 7 B 8K , with associated RKHS !&R and ! L respectively. Taking XR9K) N ) N < as a smoothness function for any ) 4&- , and X L K) N ) N < as a relevance functional for any linear feature )54&D , we can express the maximization Problem (2) in a dual form as: =
E % J E B R L J E B . L R R E] J B . L L L J P (12) At first sight it seems that (12) is the dual formulation of an optimization over ) R % ) L S4 ! R U! L U D , and not / U6D as in (2). However it can be checked that any solution of (12) is in fact in / UZD . Indeed the numerator remains unchanged when a constant function is added to )R R E 4 - , while both N )RO and N )R9N < are minimized when ) has mean ! (for the latter case, this results from the fact that the constant vector is an eigenvector of the diffusion kernel, so the norm defined by (4) is minimized when the corresponding projection of ) , namely its average, is null). Formulated as (12) the problem appears to be a generalization of canonical correlation analysis (CCA) known as kernel-CCA, discussed in [1]. In particular Bach and Jordan show that E % J is a solution of (12) if and only if it satisfies the following generalized eigenvalue problem: ! R L L 6R ! E J 5L R R ! ! 5L L L E J (13) with the largest possible. Moreover, solving (13) provides a series of pairs of features "8 E ( %BJ ( % % P P P % $ , where
% , with decreasing values of E ( %BJ ( for which the gradient = is null, equivalent to the extraction of successive canonical directions with decreasing correlation in classical CCA. The resulting features ) R = ( R E ( and ) L = ( L J ( are therefore a set of features likely to have decreasing biological relevance as increases, and are the features we propose to extract in this report. As discussed in [1] we regularize the problem (13) by adding L on the diagonal of the matrix on the right-hand side, to be able to perform the Cholesky decomposition necessary to solve this problem. Hence we end up with the following problem: ! R L L R ! E J R B L ! ! . L B L E J % (14) where B . If E %BJ is a generalized eigenvector solution of (14) belonging to the generalized eigenvalue , then =E %BJ belongs to . As a result the spectrum of (14) is symmetric: R % R % P P P % & % & with R P P P & , ( ! for . 7 Experiments We extracted from the LIGAND database of chemical compounds and reactions in biological pathways [6] a graph made of 774 genes of the budding yeast S. cerevisiae, linked through 16,650 edges, where two genes are linked when they have the possibility to catalyze two successive reactions in the LIGAND database (i.e., two reactions such that the main product of the first one is the main substrate of the second one). Expression data were collected from the Stanford Microarray Database [13]. Concatenating several publicly available data sets, we ended up with 330 measurements for 6,075 genes of the yeast, i.e., almost all its known or predicted genes. Following [4, 2] we work with the normalized logarithm of the ratio of expression levels of the genes between two experimental conditions. The functional classes of the yeast genes we consider are those defined by the January 10, 2002 version of the Comprehensive Yeast Genome Database (CYGD) [10], which is a comprehensive classification of 3,936 genes into 259 categories.
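A compact numerical sketch of the regularized kernel-CCA step described above (cf. the generalized eigenproblems (13)-(14)) may be helpful. The random toy graph and profiles, the matrix sizes, and the values of the diffusion parameter and of the regularizer delta are arbitrary assumptions, not the paper's yeast data:

```python
import numpy as np
from scipy.linalg import eigh, expm

rng = np.random.default_rng(0)
n, p = 30, 8                              # toy sizes, not the 669-gene data

# Random sparse gene graph and its diffusion kernel K1 = exp(-beta * L)
A = np.triu((rng.random((n, n)) < 0.15).astype(float), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
K1 = expm(-1.0 * L)                       # beta = 1.0 (arbitrary)

# Centered expression profiles and the linear kernel K2 = E E'
E = rng.standard_normal((n, p))
E -= E.mean(axis=0)
K2 = E @ E.T

# Regularized kernel-CCA generalized eigenproblem: delta on the
# diagonal makes the right-hand side positive definite.
delta = 0.01
I, Z = np.eye(n), np.zeros((n, n))
lhs = np.block([[Z, K1 @ K2], [K2 @ K1, Z]])
rhs = np.block([[(K1 + delta * I) @ (K1 + delta * I), Z],
                [Z, (K2 + delta * I) @ (K2 + delta * I)]])
rho, V = eigh(lhs, rhs)                   # generalized eigenvalues, ascending

alpha, betav = V[:n, -1], V[n:, -1]       # dual coordinates of the top pair
f1 = K1 @ alpha                           # smooth feature on the graph
f2 = K2 @ betav                           # relevant linear feature
```

The spectrum comes out symmetric about zero, as stated in the text: if a pair of dual coordinates achieves correlation rho, flipping the sign of one of them achieves -rho.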
The 669 genes in the gene graph with known expression profiles were first used to perform the feature extraction process described in this report. The resulting linear features were then extracted from the expression profiles of the disjoint set of 2,688 genes which are in the CYGD functional catalogue but not in the pathway database. We then performed functional classification experiments on this set of 2,688 genes, using either the profiles themselves or the features extracted. All functional classes with more than 20 members in this set were tested (which amounts to 115 categories). Experiments were carried out with SVM Light [7], a public and free implementation of SVM. All vectors were scaled to unit length before being sent to the SVM, and all SVMs use a radial basis kernel with unit width. The trade-off parameter between training error and margin was set to its default value, and the costs of errors on positive and negative examples were adjusted to have the same total. Preliminary experiments to tune the two parameters of the algorithm, namely the width of the diffusion kernel and the regularization parameter, showed good performance for the values retained. For these values we first tested whether there exists an optimal number of features to be extracted for optimal gene function prediction. Figure 1 shows the performance of SVMs using different numbers of features, in terms of the ROC index averaged over all 115 classes. The ROC index is the area under the ROC curve, normalized to 100 for a perfect classifier and 50 for a random classifier. For each category the ROC index was averaged over random splittings of the data into training and test sets. It appears that the more features are included, the better the performance averaged over all categories.
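Since the ROC index recurs throughout the evaluation, a minimal self-contained sketch of its computation may help. This uses the Mann-Whitney formulation of the area under the ROC curve (the paper's exact averaging over random splits is not reproduced):

```python
import numpy as np

def roc_index(y_true, scores):
    """Area under the ROC curve on a 0-100 scale: the probability,
    times 100, that a random positive example is ranked above a
    random negative one (ties count one half)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return 100.0 * (wins + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([1] * 10 + [0] * 90)
print(roc_index(y, y.astype(float)))   # perfect ranking -> 100.0
print(roc_index(y, np.zeros(100)))     # uninformative scores -> 50.0
```

Because the index depends only on the ordering of the decision values, it can be applied directly to the raw outputs of an SVM without calibrating them into probabilities.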
A more precise analysis of the different classes shows, however, that some classes do not follow the average trend and are better predicted by a smaller number of features, as shown in Figure 2 for the categories best predicted by fewer features. Finally, Figure 3 compares, for each of the 115 categories, the ROC index of an SVM using the original expression profiles with that of an SVM using the vectors of 330 features. It demonstrates that the representation of genes as vectors of features helps improve the performance of the SVM: the ROC index averaged over all categories increases, and the difference is especially important for classes such as heavy metal ion transporters, ribosome biogenesis, protein synthesis, or morphogenesis.

Figure 1: ROC index averaged over 115 categories, for various numbers of features.

Figure 2: ROC index for 5 functional categories ("fermentation", "ionic_homeostasis", "protein_complexes", "vacuolar_transport", "nucleus_organization"), for various numbers of features.

8 Discussion and Conclusion Results reported in the previous section are encouraging for at least two reasons. First of all, the performance reached for some classes such as heavy metal ion transporters shows that a ROC index above 80 can be expected for several classes. Second, while many classes are apparently not learned by the SVM based on expression profiles (ROC index around 50), the ROC index of the same classes based on extracted features is around 60. This shows that there is hope to be able to predict more functional classes than previously thought [2] from microarray data, which is good news since the amount of microarray data is expected to explode in the coming years.
The method presented in this paper can be seen as an attempt to explore the possibilities of data mining and analysis provided by kernel methods. Few studies have used kernel methods other than SVM, or kernels other than Gaussian or polynomial ones. In this report we tried to show how "exotic" kernels such as the diffusion kernel, and "exotic" methods such as kernel-CCA, can be adapted to particular problems, graph-driven feature extraction in our case. Exploring other possibilities of kernel methods in the data-rich field of computational biology is among our future plans.

Figure 3: ROC index of an SVM classifier based on expression profiles (y axis) or extracted features (x axis). Each point represents one functional category.

References [1] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002. [2] Michael P. S. Brown, William Noble Grundy, David Lin, Nello Cristianini, Charles Walsh Sugnet, Terence S. Furey, Manuel Ares Jr., and David Haussler. Knowledge-based analysis of microarray gene expression data by using support vector machines. Proc. Natl. Acad. Sci. USA, 97:262–267, 2000. [3] Fan R. K. Chung. Spectral graph theory, volume 92 of CBMS Regional Conference Series. American Mathematical Society, Providence, 1997. [4] Michael B. Eisen, Paul T. Spellman, Patrick O. Brown, and David Botstein. Cluster analysis and display of genome-wide expression patterns. Proc. Natl. Acad. Sci. USA, 95:14863–14868, Dec 1998. [5] Federico Girosi, Michael Jones, and Tomaso Poggio. Regularization theory and neural networks architectures. Neural Computation, 7(2):219–269, 1995. [6] S. Goto, Y. Okuno, M. Hattori, T. Nishioka, and M. Kanehisa. LIGAND: database of chemical compounds and reactions in biological pathways. Nucleic Acids Research, 30:402–404, 2002. [7] Thorsten Joachims.
Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169–184. MIT Press, 1999. [8] M. Kanehisa, S. Goto, S. Kawashima, and A. Nakaya. The KEGG databases at GenomeNet. Nucleic Acids Research, 30:42–46, 2002. [9] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. In ICML 2002, 2002. [10] H. W. Mewes, D. Frishman, U. Güldener, G. Mannhaupt, K. Mayer, M. Mokrejs, B. Morgenstern, M. Münsterkoetter, S. Rudd, and B. Weil. MIPS: a database for genomes and protein sequences. Nucleic Acids Research, 30(1):31–34, 2002. [11] B. Mohar. Some applications of Laplace eigenvalues of graphs. In G. Hahn and G. Sabidussi, editors, Graph Symmetry: Algebraic Methods and Applications, volume 497 of NATO ASI Series C, pages 227–275. Kluwer, Dordrecht, 1997. [12] S. Saitoh. Theory of reproducing kernels and its applications. Longman Scientific & Technical, Harlow, UK, 1988. [13] G. Sherlock, T. Hernandez-Boussard, A. Kasarskis, G. Binkley, J. C. Matese, S. S. Dwight, M. Kaloper, S. Weng, H. Jin, C. A. Ball, M. B. Eisen, and P. T. Spellman. The Stanford Microarray Database. Nucleic Acids Research, 29(1):152–155, Jan 2001. [14] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[year: 2002 | no. 13]
Mean-Field Approach to a Probabilistic Model in Information Retrieval Bin Wu, K. Y. Michael Wong Department of Physics Hong Kong University of Science and Technology Clear Water Bay, Hong Kong phwbd@ust.hk phkywong@ust.hk David Bodoff Department of ISMT Hong Kong University of Science and Technology Clear Water Bay, Hong Kong dbodoff@ust.hk Abstract We study an explicit parametric model of documents, queries, and relevancy assessment for Information Retrieval (IR). Mean-field methods are applied to analyze the model and derive efficient practical algorithms to estimate the parameters in the problem. The hyperparameters are estimated by a fast approximate leave-one-out cross-validation procedure based on the cavity method. The algorithm is further evaluated on several benchmark databases by comparing with standard algorithms in IR. 1 Introduction The area of information retrieval (IR) studies the representation, organization and access of information in an information repository. With the advent and boom of the Internet, especially the World Wide Web (WWW), more and more information is available to be shared online, and search on the Internet has become increasingly popular. In this respect, probabilistic models have become very useful in empowering information searches [1, 2]. In fact, information searches themselves contain rich information, which can be recorded and fruitfully used to improve the performance of subsequent retrievals. This is an extension of the process of relevance feedback [3], which incorporates the relevance assessments supplied by the user to construct new representations for queries during the user's interactive document retrieval. In the process, the feedback information helps to refine the queries continuously, but the effects pertain only to the particular retrieval session.
On the other hand, our objective is to refine the representations of documents and queries with the help of relevancy data, so that subsequent retrieval sessions can benefit. Based on Fuhr and Buckley's meta-structure [4] relating documents, queries and relevancy assessments, one of us recently proposed a probabilistic model [5] in which these objects are described by explicit parametric distribution functions, facilitating the construction of a likelihood function whose maximum can be used to characterize the documents and queries. Rather than relying on heuristics as in much previous work, the proposed model provides a unified formal framework for the following two tasks: (a) ad hoc information retrieval, in which a query is given and the goal is to return a list of documents ranked according to their similarities with the query; (b) document routing, in which a document is given and the goal is to categorize it using a list of queries ranked according to their similarities with the document. (Here we assume a model in which categories are represented by queries.) In this paper, we report our recent progress in putting this new theoretical approach to empirical tests. Since documents and queries are represented by high-dimensional vectors in a vector space model, a mean-field approach will be adopted. Mean-field methods were commonly used to study magnetic systems in statistical physics, but thanks to their ability to deal with high-dimensional systems, they have recently been applied to many areas of information processing [6]. In the present context, a mean-field treatment implies that when a particular component of a document or query vector is analyzed, all other components of the same and other vectors can be considered as background fields satisfying appropriate average properties, and correlations of statistical fluctuations with the background vectors can be neglected.
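In the vector space setting used here, the standard baseline for ranking documents against a query is tf-idf weighting with cosine similarity. A minimal sketch follows; the toy corpus, whitespace tokenization, and the particular tf-idf variant are illustrative assumptions, not the benchmark setup of this paper:

```python
import numpy as np

docs = ["mean field methods in physics",
        "information retrieval with probabilistic models",
        "mean field approximation for retrieval models"]
query = "probabilistic information retrieval"

vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

def tf(text):
    # raw term-frequency vector over the corpus vocabulary
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

D = np.array([tf(d) for d in docs])
df = (D > 0).sum(axis=0)           # document frequency of each term
idf = np.log(len(docs) / df)       # idf(w) = log(N / df(w))
Dw = D * idf                       # tf-idf weighted documents
q = tf(query) * idf                # tf-idf weighted query

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

ranking = sorted(range(len(docs)), key=lambda i: -cosine(Dw[i], q))
```

Only the angle between vectors matters here, which is why the model described below can restrict documents and queries to a hypersphere without losing the ranking.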
After introducing the parametric model in Section 2, the mean-field approach will be used in two steps. First, in Section 3, the true representations of documents and queries are estimated by maximizing the total probability of observation. This results in a set of mean-field equations, which can be solved by a fast iterative algorithm. The estimated true documents and queries are then used for ad hoc information retrieval and document routing, respectively. Second, the model depends on a few hyperparameters, which are conventionally determined by cross-validation. Here, as described in Section 4, the mean-field approach can be used again to accelerate the otherwise tedious leave-one-out cross-validation procedure. For a given set of hyperparameter values, it enables us to carry out the systemwide iteration only once (rather than repeating once for each left-out document or query), and the leave-one-out estimations of the document and query representations can be obtained by a version of mean-field theory called the cavity method [7]. In Section 5, we compare the model with the standard tf-idf [8] and latent semantic indexing (LSI) [9] on benchmark test collections. As we shall see, the validity of our model is well supported by its superior performance. The paper is concluded in Section 6.

2 A Unified Probabilistic Model

Our work is motivated by Fuhr and Buckley's conceptual model. Assume that a set of documents and queries is available to us. In the vector space model, each document and query is represented by a vector in an $M$-dimensional term space. The observed vectors are denoted by $D_i$ (documents) and $Q_j$ (queries); the underlying vectors $D_i^0$ and $Q_j^0$ are referred to as the true meaning of the document (query). Our model consists of the following 3 components:

(a) The document $D_i$ we really observe is distributed around the true document vector $D_i^0$ according to the probability distribution $f_D(D_i \mid D_i^0)$, the difference resulting from the document containing terms that do not ideally represent its meaning. In other words, the document $D_i$ is generated from its true meaning $D_i^0$.

(b) Similarly, the query $Q_j$ that the user actually submits is distributed around the true query vector $Q_j^0$ according to the probability distribution $f_Q(Q_j \mid Q_j^0)$.

(c) There is some relation between documents and queries, called relevancy assessment. We denote this relation with a binary variable $B_{ij}$ for each pair of document and query. If $B_{ij} = 1$, we say the document is relevant to the query, that is, the document is what the user wants. Otherwise $B_{ij} = 0$, and the document is irrelevant to the query. Suppose we have some relevancy relations between documents and queries (through historical records, from experts, etc.). Then we hypothesize that the true documents and queries are distributed according to a distribution $P(\{D^0, Q^0\} \mid B)$; that is, the true representations of the documents and queries should satisfy their relevancy relations.

We summarize the idea through the probabilistic meta-structure shown in Figure 1.

[Figure 1: Probabilistic meta-structure. The observed documents $D$ and queries $Q$ (data) are generated from the unknown true vectors $D^0$ and $Q^0$ via $f_D(D \mid D^0)$ and $f_Q(Q \mid Q^0)$; the relevancy data $B$ constrain $D^0$ and $Q^0$ through $P(\{D^0, Q^0\} \mid B)$.]

In order to complete the model, we need to hypothesize the form of the distribution functions. In this paper, we restrict the documents and queries to a hypersphere, since usually only the cosines of the angles between documents and queries are used to determine their similarity. Hence, we assume the following distribution functions:

(a) The distribution of each observed document $D_i$ given its true location $D_i^0$:
! " (1) (b) The distribution of each observed query given its true location : #
$ % & ! " (2) (c) The prior distribution of the documents and queries, given the relevance relation between them: (' #)% +*-, ./ 0 1 )2 * /3 )4* ) * ) * &5 ! (3) where 76 is the Dirac -function, and ! , ! and ! are normalization constants of , and respectively, and are hence independent of and . If we further assume that the observation of documents and queries are independent of each other, we can obtain the total probability of observing all documents and queries, given the relevancy relation between them: 8 (' ) * , 9: 0 !<; ! ! %=?> ! =?@ ) * A (4) where !; )2 * #) +* #) % +* / (5) // )2 * ) * ) +*
) ) #)
* * *(6) and : denotes all hyperparameters '
%/<, . There is now an appealing correspondence between the present model and spin models in statistical physics. It is observed that ! ; is just the familiar partition function and is the energy function. By maximizing the probability in Eq. (4), we can obtain an estimation of the true documents , which can be used in ad hoc retrieval: we define the similarity function between two vectors as the cosine of the angle between them, and rank the similarities between (instead of ) with a new query to determine whether the documents should be retrieved or not. As a byproduct, we can also obtain the estimation of the true queries , which in turn can be used in document routing: new documents should be compared with to determine whether it belongs to this category or not. So our model gives a unifying procedure for both ad hoc retrieval and routing. 3 Parameter Estimation In this section, we derive a fast iterative algorithm for parameter estimation. First, we replace the -function by its Fourier transform. Then ! ; can be written as ! ; i
i ) ) i * * i ) * %A (7) where ) * ) * ) ) ) < * * * < / . In writing this formula, we have changed the integration to the imaginary axis. Mean-field theory works in the limit of large , and , when the integration can be well approximated by taking the saddle point of . This is obtained by equating the partial derivatives of with respect to , , and to zero, yielding #) / * ) * *
) ) (8) * / ) +)4* #)
* * (9) ) A/ * ) * *
) (10) * A/ ) ) * #)
* " (11) This set of equations is referred to as the mean-field equations, since fluctuations around the mean values of the parameters have been neglected. Due to its simple form, it can be solved by an iterative scheme. Though we have not studied the theoretical convergence of the iterative scheme, its effectiveness can be seen from the following arguments. If we replace ) in Eq. (8) and * in Eq. (9) by the respective values of "! ) and ! * at the saddle point, then the iteration process becomes a linear one. Now, Eqs. (8) and (9) differ from this linear iteration problem by scale factors of "! )$# ) and ! *# * respectively. Hence after using Eqs. (10) and (11), the problem is equivalent to rescaling the lengths of the iterated vectors back to the hypersphere defined by .) and +* . This alternate operation of linear iteration and rescaling back to the hypersphere makes it a very stable algorithm. The complexity of the algorithm is linear in the number of documents and queries. Empirically, it converges in just a few tens of steps. Alternatively, one may use the Augmented Lagrangian method to find the saddle point of , whose convergence is guaranteed, but is computationally more complex [10]. 4 Hyperparameter Estimation In our model, the parameters / ,
and
determine the shape of the distributions , and , and influence the parameter estimation described in Section 3. We refer to them as hyperparameters. They have to be chosen so that the model performs optimally when new queries are raised to retrieve documents, or when new documents are routed. A standard method for hyperparameter estimation in machine learning is leave-one-out cross-validation [11]. Suppose we have examples for training the model. Then each time we pick one data as the validation set and train the model with the rest of the examples. The hyperparameters are chosen as the ones that give the optimal performance averaged over the test examples. The exact leave-one-out cross-validation is very tedious, especially for multiple hyperparameters, because of the need to train the model times for each combination of hyperparameters. For this model, we propose an approximate leave-one-out procedure based on the cavity method [7]. Suppose we have trained the model with all data, and obtain the estimation ' #) * , , which satisfies the steady state equation #) / * ) * *
) ) * / ) )4* )
* * " (12) If the query were left out from the training set of queries, the cavity estimation should satisfy the equation ) / * )4* *
) ) * / ) )4* )
* * " (13) By subtracting (7) by (8), and assuming that ' ) * , is approximately the same as ' )% * , , we can get the difference, #)
/ * ) * * /3 ) ) +*
/ ) )4* ) * " (14) For ad hoc retrieval, we eliminate * to obtain a set of linear equations for ) . The solution can be further simplified by using the mean-field argument that the changes induced by removing the query on documents can be decoupled. Hence we can neglect the off-diagonal terms, yielding ) /3 ) ) / * " (15) Note that ' )% * , have been known in the systemwide training. Then ) can be estimated by ) ) ) . The similarities between and ) are then used to predict the leave-one-out ad hoc retrieval performance of the model. Equations for document routing can be derived analogously. Note that we need to train the model only once, and the leave-one-out estimation of documents and queries can be obtained in one step. So the algorithm is extremely fast. Amazingly, it also gives reasonable estimations of hyperparameters, as shown in the following experiments. We remark that the mean-field technique can be applied to distributions of documents, queries and relevance feedbacks other than those described by Eqs. (1-3). In the present case spectified by Eqs. (1-3), our model is similar to the Gaussian model, if the spherical constraint on ’s and ’s are replaced by a spherical Gaussian prior. Though leave-oneout cross-validation can be done exactly in the Gaussian model, it involves the inversion of a large matrix. On the other hand, the mean-field estimation greatly simplifies the process by neglecting the off-diagonal elements. 5 Experimental Results We have applied the proposed method to ad hoc retrieval and routing for the test collections of Cranfield and CISI. Because we treat both tasks identically, we use the same evaluation criterion: the recall precision curve and the average retrieval precision. 
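Before turning to the experiments, the parameter-estimation scheme of Section 3, which alternates a linear update with a rescaling back to the hypersphere, can be sketched as follows. This is an illustrative implementation under simplified assumptions; the hyperparameter names `beta_d`, `beta_q`, `beta_r` are ours, and the update coefficients follow the mean-field equations only schematically:

```python
import numpy as np

def mean_field_estimate(D, Q, R, beta_d=1.0, beta_q=1.0, beta_r=1.0,
                        n_iter=50):
    """Estimate true document/query vectors by alternating a linear
    update with renormalization onto the unit hypersphere.

    D: observed document vectors (rows); Q: observed query vectors (rows);
    R: binary relevancy matrix, R[i, j] = 1 iff document i is relevant
    to query j.
    """
    D0 = D / np.linalg.norm(D, axis=1, keepdims=True)
    Q0 = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    for _ in range(n_iter):
        # Linear step: each true vector is pulled toward its observation
        # and toward the true vectors of its relevant partners.
        D0_new = beta_d * D + beta_r * R @ Q0
        Q0_new = beta_q * Q + beta_r * R.T @ D0
        # Rescale back to the hypersphere (the delta-function constraints).
        D0 = D0_new / np.linalg.norm(D0_new, axis=1, keepdims=True)
        Q0 = Q0_new / np.linalg.norm(Q0_new, axis=1, keepdims=True)
    return D0, Q0

# Tiny illustration: two documents, two queries, diagonal relevancy.
D = np.array([[1.0, 0.0], [0.0, 1.0]])
Q = np.array([[0.9, 0.1], [0.1, 0.9]])
R = np.eye(2)
D0, Q0 = mean_field_estimate(D, Q, R)
```

As expected, the estimated true vectors stay on the hypersphere while each relevant document-query pair is pulled closer together than the raw observations.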
We have run two versions of our algorithm: (a) in the original dimension, where the observed documents and queries are represented by the original tf-idf weights; (b) in a reduced dimension of 100, in which the original vectors are reduced by singular value decomposition (SVD) as in LSI. In Figs. 2(a-b), we show the recall-precision curves at the optimal hyperparameters. The mean-field estimates are compared with the baseline results of LSI. It is clear that our method gives significant gains in retrieval precision. Comparisons using the original dimension or the Cranfield collection, not shown here due to space limitations, yield equally satisfactory results.

[Figure 2: The recall-precision curves of the mean-field estimation (MF) and the baseline (LSI) for (a) ad hoc retrieval and (b) document routing for CISI in reduced dimension.]

For hyperparameter estimation, we can compare the mean-field results with those of exact leave-one-out cross-validation in reduced dimension, since the computation of the exact ones is still feasible. In Fig. 3, we have plotted the average precision versus the two hyperparameters, as computed by the two methods. They have very similar contours, although there is a uniform displacement between their values. This demonstrates the usefulness of the mean-field approximation in hyperparameter estimation. In Table 1, we obtain the values of the optimal hyperparameters from the mean-field leave-one-out method, and the average precisions of the exact leave-one-out are then computed using these optimal hyperparameters. These are compared with the results of the exact leave-one-out and listed in Table 1. For the hyperparameter estimation in the original dimension, the exact leave-one-out is not available since it is too tedious. Instead, we compare the hyperparameters with the ones from $k$-fold cross-validation.
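The recall-precision curves and average precision used above can be computed with the standard 11-point interpolated-precision convention. A minimal sketch (IR evaluation conventions vary; this is not claimed to match the authors' exact evaluation code):

```python
import numpy as np

def precision_at_recall(ranked_rel, levels=np.linspace(0.0, 1.0, 11)):
    """Interpolated precision at standard recall levels.

    ranked_rel: 0/1 relevance judgments of the ranked list, best first
    (must contain at least one relevant item).  Precision at recall r is
    the maximum precision at any rank achieving recall >= r.
    """
    ranked_rel = np.asarray(ranked_rel, dtype=float)
    hits = np.cumsum(ranked_rel)
    recall = hits / hits[-1] if hits[-1] > 0 else hits
    precision = hits / np.arange(1, len(ranked_rel) + 1)
    return np.array([precision[recall >= r].max() for r in levels])

# Relevant documents retrieved at ranks 1, 3 and 6 out of 6.
curve = precision_at_recall([1, 0, 1, 0, 0, 1])
print(curve.mean())  # the 11-point average precision
```

Averaging such curves over all test queries yields the recall-precision curves and average retrieval precision reported in Figure 2 and Table 1.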
Whether we compare the mean-field results with the exact leave-one-out or with $k$-fold cross-validation, the optimal hyperparameters are comparable in most cases, and when there are discrepancies, one can observe that the average precisions are essentially the same.

[Figure 3: Average retrieval precision versus the hyperparameters for ad hoc retrieval in reduced dimension for CISI: (a) mean-field leave-one-out; (b) exact leave-one-out. The two surfaces peak at nearby hyperparameter values.]

Table 1: The average retrieval precision for leave-one-out cross-validation in reduced dimension: mean-field versus exact.

                                CISI                         Cranfield
                      β1     β2    Avg. precision     β1     β2    Avg. precision
  Ad hoc retrieval
    LSI                -      -       0.079            -      -       0.178
    Mean-Field        0.3   12.0      0.142           0.4    1.1      0.248
    Exact             0.3   10.1      0.142           0.6    1.5      0.250
  Document routing
    LSI                -      -       0.104            -      -       0.240
    Mean-Field       28.9    1.6      0.192           2.5    1.1      0.351
    Exact            23.0    2.5      0.193           0.9    0.7      0.356

6 Conclusion

We have considered a probabilistic model of documents, queries and relevancy assessments. Fast algorithms are derived for parameter and hyperparameter estimation. Significant improvement is achieved for both ad hoc retrieval and routing compared with tf-idf and LSI. In another paper [12], we have compared the model with other heuristic methods such as the Rocchio heuristics [3] and Bartell's Multidimensional Scaling [13], and the mean-field method still outperforms them. These successes illustrate the potential of the mean-field approach, which is especially suitable for systems with high dimensions and numerous mutually interacting components, such as those in IR. Hence we anticipate that mean-field methods will have increasing applications in many other probabilistic models in IR.

Acknowledgments

We thank R. Jin for interesting discussions. This work was supported by the grant HKUST6157/99P of the Research Grant Council of Hong Kong.

References

[1] Cohn, D. and T. Hofmann (2001). The Missing Link – A Probabilistic Model of Document Content and Hypertext Connectivity. Advances in Neural Information Processing Systems 13, T. K. Leen, T. G. Dietterich and V. Tresp, eds., MIT Press, Cambridge, MA, 430-436.
[2] Jaakkola, T. and H. Siegelmann (2002). Active Information Retrieval. Advances in Neural Information Processing Systems 14, T. G. Dietterich, S. Becker and Z. Ghahramani, eds., MIT Press, Cambridge, MA, 777-784.
[3] Rocchio, J. J. (1971). Relevance Feedback in Information Retrieval. SMART Retrieval System – Experiments in Automatic Document Processing, G. Salton, ed., Prentice-Hall, Englewood Cliffs, NJ, Chapter 14.
[4] Fuhr, N. and C. Buckley (1991). A Probabilistic Learning Approach for Document Indexing.
ACM Transactions on Information Systems 9(3): 223-248.
[5] Bodoff, D., D. Enabe, A. Kanbil, G. Simon and A. Yukhimets (2001). A Unified Maximum Likelihood Approach to Document Retrieval. Journal of the American Society for Information Science and Technology 52(10): 785-796.
[6] Opper, M. and D. Saad, eds. (2001). Advanced Mean Field Methods, MIT Press, Cambridge, MA.
[7] Wong, K. Y. M. and F. Li (2002). Fast Parameter Estimation Using Green's Functions. Advances in Neural Information Processing Systems 14: 535-542, T. G. Dietterich, S. Becker and Z. Ghahramani, eds., MIT Press, Cambridge, MA.
[8] Salton, G. and M. J. McGill (1983). Introduction to Modern Information Retrieval, McGraw-Hill, New York, 63-66.
[9] Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer and R. Harshman (1990). Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(16): 391-407.
[10] Nocedal, J. and S. J. Wright (1999). Numerical Optimization, Springer, Berlin, Ch. 17.
[11] Bishop, C. M. (1995). Neural Networks for Pattern Recognition, Clarendon Press, Oxford, 372-375.
[12] Bodoff, D., B. Wu and K. Y. M. Wong (2002). Relevance Feedback meets Maximum Likelihood, preprint.
[13] Bartell, B. T., G. W. Cottrell and R. K. Belew (1992). Latent Semantic Indexing Is an Optimal Special Case of Multidimensional Scaling. Proceedings of the 15th International ACM SIGIR Conference on Research and Development in Information Retrieval, 161-167.
Exponential Family PCA for Belief Compression in POMDPs

Nicholas Roy
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
nickr@ri.cmu.edu

Geoffrey Gordon
Department of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
ggordon@cs.cmu.edu

Abstract

Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are intractable for large models. The intractability of these algorithms is due to a great extent to their generating an optimal policy over the entire belief space. However, in real POMDP problems most belief states are unlikely, and there is a structured, low-dimensional manifold of plausible beliefs embedded in the high-dimensional belief space. We introduce a new method for solving large-scale POMDPs by taking advantage of belief space sparsity. We reduce the dimensionality of the belief space by exponential family Principal Components Analysis [1], which allows us to turn the sparse, high-dimensional belief space into a compact, low-dimensional representation in terms of learned features of the belief state. We then plan directly on the low-dimensional belief features. By planning in a low-dimensional space, we can find policies for POMDPs that are orders of magnitude larger than those that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and also on a mobile robot navigation task.

1 Introduction

Large Partially Observable Markov Decision Processes (POMDPs) are generally very difficult to solve, especially with standard value iteration techniques [2, 3]. Maintaining a full value function over the high-dimensional belief space entails finding the expected reward of every possible belief under the optimal policy. However, in reality most POMDP policies generate only a small percentage of possible beliefs.
For example, a mobile robot navigating in an office building is extremely unlikely to ever encounter a belief about its pose that resembles a checkerboard. If the execution of a POMDP is viewed as a trajectory inside the belief space, trajectories for most large, real-world POMDPs lie on low-dimensional manifolds embedded in the belief space. So, POMDP algorithms that compute a value function over the full belief space do a lot of unnecessary work.

Additionally, real POMDPs frequently have the property that the belief probability distributions themselves are sparse. That is, the probability of being at most states in the world is zero. Intuitively, mobile robots and other real-world systems have local uncertainty (which can often be multi-modal), but rarely encounter global uncertainty. Figure 1 depicts a mobile robot travelling down a corridor, and illustrates the sparsity of the belief space.

[Figure 1: An example probability distribution of a mobile robot navigating in a hallway (map dimensions are 47m x 17m, with a grid cell resolution of 10cm). The white areas are free space, states where the mobile robot could be. The black lines are walls, and the dark gray particles are the output of the particle filter tracking the robot's position. The particles are located in states where the robot's belief over its position is non-zero. Although the distribution is multi-modal, it is still relatively compact: the majority of the states contain no particles and therefore have zero probability.]

We will take advantage of these characteristics of POMDP beliefs by using a variant of a common dimensionality reduction technique, Principal Components Analysis (PCA). PCA is well-suited to dimensionality reduction where the data lies near a linear manifold in the higher-dimensional space. Unfortunately, POMDP belief manifolds are rarely linear; in particular, sparse beliefs are usually very non-linear.
However, we can employ a link function to transform the data into a space where it does lie near a linear manifold; the algorithm which does so (while also correctly handling the transformed residual errors) is called Exponential Family PCA (E-PCA). E-PCA will allow us to find manifolds with only a handful of dimensions, even for belief spaces with thousands of dimensions.

Our algorithm begins with a set of beliefs from a POMDP. It uses these beliefs to find a decomposition of belief space into a small number of belief features. Finally, it plans over a low-dimensional space by discretizing the features and using standard value iteration to find a policy over the discrete beliefs.

2 POMDPs

A Partially Observable Markov Decision Process (POMDP) is a model given by a set of states $S$, actions $A$ and observations $Z$. Associated with these are a set of transition probabilities $p(s' \mid s, a)$ and observation probabilities $p(z \mid s, a)$. The objective of the planning problem is to find a policy that maximises the expected sum of future (possibly discounted) rewards of the agent executing the policy.

There are a large number of value function approaches [2, 4] that explicitly compute the expected reward of every belief. Such approaches produce complete policies, and can guarantee optimality under a wide range of conditions. However, finding a value function this way is usually computationally intractable. Policy search algorithms [3, 5, 6, 7] have met with success recently. We suggest that a large part of the success of policy search is due to the fact that it focuses computation on relevant belief states. A disadvantage of policy search, however, is that it can be data-inefficient: many policy search techniques have trouble reusing sample trajectories generated from old policies. Our approach focuses computation on relevant belief states, but also allows us to use all relevant training data to estimate the effect of any policy.

Related research has developed heuristics which reduce the belief space representation. In particular, entropy-based representations for heuristic control [8] and full value-function planning [9] have been tried with some success. However, these approaches make strong assumptions about the kind of uncertainties that a POMDP generates. By performing principled dimensionality reduction of the belief space, our technique should be applicable to a wider range of problems.

3 Dimensionality Reduction

Principal Component Analysis is one of the most popular and successful forms of dimensionality reduction [10]. PCA operates by finding a set of feature vectors, collected as the columns of a matrix $U$, that minimise the loss function

$L(U, V) = \| X - U V \|^2$  (1)

where $X$ is the original data and $V$ is the matrix of low-dimensional coordinates of $X$.
This particular loss function assumes that the data lie near a linear manifold, and that displacements from this manifold are symmetric and have the same variance everywhere. (For example, i.i.d. Gaussian errors satisfy these requirements.) Unfortunately, as mentioned previously, probability distributions for POMDPs rarely form a linear subspace. In addition, squared error loss is inappropriate for modelling probability distributions: it does not enforce positive probability predictions.

We use exponential family PCA to address this problem. Other nonlinear dimensionality-reduction techniques [11, 12, 13] could also work for this purpose, but would have different domains of applicability. Although the optimisation procedure for E-PCA may be more complicated than that for other models such as locally-linear models, it requires many fewer samples of the belief space. For real-world systems such as mobile robots, large sample sets may be difficult to acquire.

3.1 Exponential family PCA

Exponential family Principal Component Analysis [1] (E-PCA) varies from conventional PCA by adding a link function, in analogy to generalised linear models, and modifying the loss function appropriately. As long as we choose the link and loss functions to match each other, there will exist efficient algorithms for finding $U$ and $V$ given $X$. By picking particular link functions (with their matching losses), we can reduce the model to an SVD. We can use any convex function $G$ to generate a matching pair of link and loss functions.
(2) where is defined so that the minimum over of (3) is always 0. ( is called the convex dual of , and expression (3) is called a generalised Bregman divergence from to .) The loss functions themselves are only necessary for the analysis; our algorithm needs only the link functions and their derivatives. So, we can pick the loss functions and differentiate to get the matching link functions; or, we can pick the link functions directly and not worry about the corresponding loss functions. Each choice of link and loss functions results in a different model and therefore a potentially different decomposition of . This choice is where we should inject our domain knowledge about what sort of noise there is in and what parameter matrices and are a priori most likely. In our case the entries of are the number of particles from a large sample which fell into a small bin, so a Poisson loss function is most appropriate. The corresponding link function is ! )!! (4) (taken component-wise) and its associated loss function is )!! (5) where the “matrix dot product” is the sum of products of corresponding elements. It is worth noting that using the Poisson loss for dimensionality reduction is related to Lee and Seung’s non-negative matrix factorization [14]. In order to find and , we compute the derivatives of the loss function with respect to and and set them to 0. The result is a set of fixed-point equations that the optimal parameter settings must satisfy: ! (6) ! (7) There are many algorithms which we could use to solve our optimality equations (6) and (7). For example, we could use gradient descent. In other words, we could add a multiple of to , add a multiple of to , and repeat until convergence. Instead we will use a more efficient algorithm due to Gordon [15]; this algorithm is based on Newton’s method and is related to iteratively-reweighted least squares. We refer the reader to this paper for further details. 
4 Augmented MDP

Given the belief features acquired through E-PCA, it remains to learn a policy. We do so by using the low-dimensional belief features to convert the POMDP into a tractable MDP. Our conversion algorithm is a variant of the Augmented MDP, or Coastal Navigation algorithm [9], using belief features instead of entropy. Table 1 outlines the steps of this algorithm.

Table 1: Algorithm for planning in low-dimensional belief space.
1. Collect sample beliefs.
2. Use E-PCA to generate low-dimensional belief features.
3. Convert the low-dimensional space into a discrete state space.
4. Learn the belief transition probabilities and the reward function over the discrete states.
5. Perform value iteration on the new model, using the discrete states, the learned transition probabilities and the learned rewards.

We can collect the beliefs in step 1 using some prior policy such as a random walk or a most-likely-state heuristic. We have already described E-PCA (step 2), and value iteration (step 5) is well-known. That leaves steps 3 and 4.

The state space can be discretized in a number of ways, such as laying a grid over the belief features or using distance to the closest training beliefs to divide feature space into Voronoi regions. Thrun [16] has proposed nearest-neighbor discretization in high-dimensional belief space; we propose instead to use the low-dimensional feature space, where neighbors should be more closely related.

We can compute the model reward function easily from the reconstructed beliefs, as the expected reward under the reconstructed belief $\tilde b$:

$\tilde R(\tilde b) = \sum_s \tilde b(s)\, R(s)$  (8)

To learn the transition function, we can sample states from the reconstructed beliefs, sample observations from those states, and incorporate those observations to produce new belief states.

One additional question is how to choose the number of bases. One possibility is to examine the singular values of the matrix after performing E-PCA, and use only the features that have singular values above some cutoff. A second possibility is to use a model selection technique such as keeping a validation set of belief samples and picking the basis size with the best reconstruction quality. Finally, we could search over basis sizes according to performance of the resulting policy.

5 Experimental Results

We tested our approach on two models: a synthetic 40-state world with idealised actions and observations, and a large mobile robot navigation task. For each problem, we compared E-PCA to conventional PCA for belief representation quality, and compared E-PCA to some heuristics for policy performance.
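Steps 3 and 5 of Table 1, Voronoi discretization of the belief features and value iteration over the resulting MDP, can be sketched as follows (array shapes and names are illustrative assumptions, not the authors' code):

```python
import numpy as np

def nearest_feature_state(phi, centers):
    """Step 3: map a belief-feature vector to the index of the nearest
    feature-space center (a Voronoi discretization)."""
    return int(np.argmin(np.linalg.norm(centers - phi, axis=1)))

def value_iteration(T, R, gamma=0.95, n_iter=200):
    """Step 5: standard value iteration over the discretized MDP.
    T[a, s, s2] is a learned transition probability, R[s] a reward."""
    V = np.zeros(R.shape[0])
    for _ in range(n_iter):
        Q = R[None, :] + gamma * T @ V   # Q[a, s]
        V = Q.max(axis=0)
    return V, Q.argmax(axis=0)           # values and a greedy policy

# Toy 2-state example: state 1 is rewarding; action 1 moves there.
T = np.stack([np.eye(2), np.array([[0.0, 1.0], [0.0, 1.0]])])
R = np.array([0.0, 1.0])
V, policy = value_iteration(T, R)
```

In the full algorithm the centers would be the training beliefs' feature coordinates, and T and R would come from steps 4 and Eq. (8) respectively.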
We are unable to compare our approach to conventional value function approaches, because both problems are too large to be solved by existing techniques.

5.1 Synthetic model

The abstract model has a two-dimensional state space: one dimension of position along a circular corridor, and one binary orientation. States 1-20 inclusive correspond to one orientation, and states 21-40 correspond to the other. The reward is at a known position along the corridor; therefore, the agent needs to discover its orientation, move to the appropriate position, and declare it has arrived at the goal. When the goal is declared the system resets (regardless of whether the agent is actually at the goal). The agent has 4 actions: left, right, sense_orientation, and declare_goal. The observation and transition probabilities are given by von Mises distributions, an exponential family distribution defined over the circle $[0, 2\pi)$. The von Mises distribution is the "wrapped" analog of a Gaussian; it accounts for the fact that the two ends of the corridor are connected, and because the sum of two von Mises variates is another von Mises variate, we can guarantee that the true belief distribution is always a von Mises distribution over the corridor for each orientation.

[Figure 2: Some sample beliefs from the two-dimensional problem, generated from roll-outs of the model. Notice that some beliefs are bimodal, whereas others are unimodal in one half or the other of the state space.]

Figure 2 shows some sample beliefs from this model. Notice that some of the beliefs are bimodal, but some beliefs have probability mass over half of the state space only: these unimodal beliefs follow the sense_orientation action.
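For illustration, a discretized von Mises belief over the 40-state model can be constructed as follows (a sketch under our own assumptions; the paper's actual discretization and parameters may differ):

```python
import numpy as np

def von_mises_belief(mu, kappa, n_positions=20):
    """Discretize a von Mises density over a circular corridor of
    n_positions cells; mu is the mean angle, kappa the concentration.
    numpy's i0 supplies the modified-Bessel normalization factor."""
    theta = 2 * np.pi * np.arange(n_positions) / n_positions
    p = np.exp(kappa * np.cos(theta - mu)) / (2 * np.pi * np.i0(kappa))
    return p / p.sum()

# A bimodal belief over the 40-state model: the same positional
# uncertainty replicated in both orientations, as after a roll-out
# that has not yet sensed the orientation.
pos = von_mises_belief(mu=np.pi, kappa=4.0)
belief = 0.5 * np.concatenate([pos, pos])
print(belief.shape, round(float(belief.sum()), 6))
```

A sense_orientation action would collapse such a belief onto one half of the state vector, producing the unimodal beliefs visible in Figure 2.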
Figure 3(a) shows the reconstruction performance of both the E-PCA approach and conventional PCA, plotting average KL-divergence between the sample belief and its reconstruction against the number of bases used for the reconstruction. PCA minimises squared error, while E-PCA with the Poisson loss minimises unnormalised KL-divergence, so it is no surprise that E-PCA performs better. We believe that KL-divergence is a more appropriate measure since we are fitting probabilities. Both PCA and E-PCA reach near-zero error at 3 bases (E-PCA hits zero error, since an $n$-basis E-PCA can fit an $n$-parameter exponential family exactly). This fact suggests that both decompositions should generate good policies using only 3 dimensions.

[Figure 3: (a) A comparison of the average KL divergence between the sample beliefs and their reconstructions, against the number of bases used, for 500 sample beliefs. (b) A comparison of policy performance using different numbers of bases, for 10000 trials. Policy performance was given by total reward accumulated over trials.]

Figure 3(b) shows a comparison of the policies from different algorithms. The PCA techniques perform approximately twice as well as the naive Maximum Likelihood heuristic. This is because the ML heuristic must guess its orientation, and is correct about half the time. In comparison, the Entropy heuristic does very poorly because it is unable to distinguish between a unimodal belief that has uncertainty about its orientation but not its position, and a bimodal belief that knows its position but not its orientation.
5.2 Mobile Robot Navigation

Next we tried our algorithm on a mobile robot navigating in a corridor, as shown in figure 1. As in the previous example, the robot can detect its position, but cannot determine its orientation until it reaches the lab door approximately halfway down the corridor. The robot must navigate to within 10cm of the goal and declare the goal to receive the reward. The map is shown in figures 1 and 4, and is 47m × 17m, with a grid cell resolution of 0.1m. The total number of unoccupied cells is 8250, generating a POMDP with a belief space of 8250 dimensions. Without loss of generality, we restrict the robot’s actions to forward and backward motion, and similarly simplified the observation model. The reward structure of the problem strongly penalised declaring the goal when the robot was far removed from the goal state. The initial set of beliefs was collected by a mobile robot navigating in the world, and then post-processed using a noisy sensor model. In this particular environment, the laser data used for localisation normally gives very good localisation results; however, this will not be true for many real world environments [17]. Figure 4 shows a sample robot trajectory using the policy learned using 5 basis functions. Notice that the robot drives past the goal to the lab door in order to verify its orientation before returning to the goal. If the robot had started at the other end of the corridor, its orientation would have become apparent on its way to the goal.

Figure 4: An example robot trajectory, using the policy learned using 5 basis functions. On the left are the start conditions (the start distribution, start state and goal state); on the right is the robot trajectory. Notice that the robot drives past the goal to the lab door to localise itself, before returning to the goal.

Figure 5(a) shows the reconstruction performance of both the E-PCA approach and conventional PCA, plotting average KL-divergence between the sample belief and its reconstruction against the number of bases used for the reconstruction.

Figure 5: (a) A comparison of the average KL divergence between the sample beliefs and their reconstructions against the number of bases used, for 400 sampled beliefs for a navigating mobile robot. (b) A comparison of policy performance using E-PCA, conventional PCA and the Maximum Likelihood heuristic, for 1,000 trials (average rewards of 33233.0, -1000.0 and -268500.0 respectively).

Figure 5(b) shows the average policy performance for the different techniques, using 5 bases. (The number of bases was chosen based on reconstruction quality of E-PCA: see [15] for further details.) Again, the E-PCA outperformed the other techniques because it was able to model its belief accurately. The Maximum-Likelihood heuristic could not distinguish orientations, and therefore regularly declared the goal in the wrong place. The conventional PCA algorithm failed because it could not represent its belief accurately with only a few bases.

6 Conclusions

We have demonstrated an algorithm for planning for Partially Observable Markov Decision Processes by taking advantage of particular kinds of belief space structure that are prevalent in real world domains. In particular, we have shown this approach to work well on an abstract small problem, and also on an 8250-state mobile robot navigation task which is well beyond the capability of existing value function techniques.
The heuristic that we chose for dimensionality reduction was simply one of reconstruction error, as in equation 5: a reduction that minimises reconstruction error should allow near-optimal policies to be learned. However, it may be possible to learn good policies with even fewer dimensions by taking advantage of transition probability structure, or cost function structure. For example, for certain classes of problems, a loss function that also penalises the error in reconstructing the predicted next belief would lead to a dimensionality reduction that maximises predictability. Similarly, a loss weighted by some heuristic cost function (such as one from a previous iteration of dimensionality reduction) would lead to a reduction that maximises the ability to differentiate states with different values.

Acknowledgments

Thanks to Sebastian Thrun for many suggestions and insight. Thanks also to Drew Bagnell, Aaron Courville and Joelle Pineau for helpful discussion. Thanks to Mike Montemerlo for localisation code.

References

[1] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal components analysis to the exponential family. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[2] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1998.
[3] Andrew Ng and Michael Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of Uncertainty in Artificial Intelligence (UAI), 2000.
[4] Milos Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33–94, 2000.
[5] Andrew Ng, Ron Parr, and Daphne Koller. Policy search via density estimation. In Advances in Neural Information Processing Systems 12, 1999.
[6] Jonathan Baxter and Peter Bartlett. Reinforcement learning in POMDP's via direct gradient ascent.
In Proc. of the 17th International Conference on Machine Learning, 2000.
[7] J. Andrew Bagnell and Jeff Schneider. Autonomous helicopter control using reinforcement learning policy search methods. In Proceedings of the International Conference on Robotics and Automation, 2001.
[8] Anthony R. Cassandra, Leslie Pack Kaelbling, and James A. Kurien. Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robotic Systems (IROS), 1996.
[9] Nicholas Roy and Sebastian Thrun. Coastal navigation with mobile robots. In Advances in Neural Information Processing Systems 12, pages 1043–1049, 1999.
[10] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, 1986.
[11] Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
[12] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000.
[13] S. T. Roweis, L. K. Saul, and G. E. Hinton. Global coordination of local linear models. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[14] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
[15] Geoffrey Gordon. Generalized linear models. In Suzanna Becker, Sebastian Thrun, and Klaus Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[16] Sebastian Thrun. Monte Carlo POMDPs. In Advances in Neural Information Processing Systems 12, 1999.
[17] S. Thrun, M. Beetz, M. Bennewitz, W. Burgard, A. B. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz. Probabilistic algorithms and the interactive museum tour-guide robot Minerva.
International Journal of Robotics Research, 19(11):972–999, 2000.
Dynamic Structure Super-Resolution

Amos J Storkey
Institute of Adaptive and Neural Computation
Division of Informatics and Institute of Astronomy
University of Edinburgh
5 Forrest Hill, Edinburgh UK
a.storkey@ed.ac.uk

Abstract

The problem of super-resolution involves generating feasible higher resolution images, which are pleasing to the eye and realistic, from a given low resolution image. This might be attempted by using simple filters for smoothing out the high resolution blocks or through applications where substantial prior information is used to imply the textures and shapes which will occur in the images. In this paper we describe an approach which lies between the two extremes. It is a generic unsupervised method which is usable in all domains, but goes beyond simple smoothing methods in what it achieves. We use a dynamic tree-like architecture to model the high resolution data. Approximate conditioning on the low resolution image is achieved through a mean field approach.

1 Introduction

Good techniques for super-resolution are especially useful where physical limitations exist preventing higher resolution images from being obtained. For example, in astronomy, where public presentation of images is of significant importance, super-resolution techniques have been suggested. Whenever dynamic image enlargement is needed, such as on some web pages, super-resolution techniques can be utilised. This paper focuses on the issue of how to increase the resolution of a single image using only prior information about images in general, and not relying on a specific training set or the use of multiple images. The methods for achieving super-resolution are as varied as the applications. They range from simple use of Gaussian or preferably median filtering, to supervised learning methods based on learning image patches corresponding to low resolution regions from training data, and effectively sewing these patches together in a consistent manner.
What method is appropriate depends on how easy it is to get suitable training data, how fast the method needs to be, and so on. There is a demand for methods which are reasonably fast, which are generic in that they do not rely on having suitable training data, but which do better than standard linear filters or interpolation methods. This paper describes an approach to resolution doubling which achieves this. The method is structurally related to one layer of the dynamic tree model [9, 8, 1], except that it uses real valued variables.

2 Related work

Simple approaches to resolution enhancement have been around for some time. Gaussian and Wiener filters (and a host of other linear filters) have been used for smoothing the blockiness created by the low resolution image. Median filters tend to fare better, producing less blurry images. Interpolation methods such as cubic-spline interpolation tend to be the most common image enhancement approach. In the super-resolution literature there are many papers which do not deal with the simple case of reconstruction based on a single image. Many authors are interested in reconstruction based on multiple slightly perturbed subsamples from an image [3, 2]. This is useful for photographic scanners, for example. In a similar manner other authors utilise the information from a number of frames in a temporal sequence [4]. In other situations very substantial prior information is given, such as the ground truth for a part of the image. Sometimes restrictions on the type of processing might be made in order to keep calculations in real time or deal with sequential transmission. One important paper which deals specifically with the problem tackled here is by Freeman, Jones and Pasztor [5]. They follow a supervised approach, learning a low to high resolution patch model (or rather storing examples of such maps), and utilising a Markov random field for combining them and loopy propagation for inference.
Later work [6] simplifies and improves on this approach. Earlier work tackling the same problem includes that of Schultz and Stevenson [7], which performed an MAP estimation using a Gibbs prior. There are two primary difficulties with smoothing (e.g. Gaussian, Wiener, median filters) or interpolation (bicubic, cubic spline) methods. First, smoothing is indiscriminate: it occurs both within the gradual change in colour of the sky, say, as well as across the horizon, producing blurring problems. Second, these approaches are inconsistent: subsampling the super-resolution image will not return the original low-resolution one. Hence we need a model which maintains consistency but also tries to ensure that smoothing does not occur across region boundaries (except as much as is needed for anti-aliasing).

3 The model

Here the high-resolution image is described by a series of very small patches with varying shapes. Pixel values within these patches can vary, but will have a common mean value. Pixel values across patches are independent. A priori exactly where these patches should be is uncertain, and so the pixel to patch mapping is allowed to be a dynamic one. The model is best represented by a belief network. It consists of three layers. The lowest layer consists of the visible low-resolution pixels. The intermediate layer is a high-resolution image (4 × 4 the size of the low-resolution image). The top layer is a latent layer which is a little more than 2 × 2 the size of the low resolution image. The latent variables are ‘positioned’ at the corners, centres and edge centres of the pixels of the low resolution image. The values of the pixel colour of the high resolution nodes are each a single sample from a Gaussian mixture (in colour space), where each mixture centre is given by the pixel colour of a particular parent latent variable node.

Figure 1: The three layers of the model (latent, low resolution, high resolution). The small boxes in the left figure (64 of them) give the position of the high resolution pixels relative to the low resolution pixels (the 4 boxes with a thick outline). The positions of the latent variable nodes are given by the black circles. The colour of each high resolution pixel is generated from a mixture of Gaussians (right figure), each Gaussian centred at its latent parent pixel value. The closer the parent is, the higher the prior probability of being generated by that mixture.

The prior mixing coefficients decay with distance in image space between the high-resolution node and the corresponding latent node. Another way of viewing this is that a further indicator variable can be introduced which selects which mixture is responsible for a given high-resolution node. We say a high resolution node ‘chooses’ to connect to the parent that is responsible for it, with a connection probability given by the corresponding mixing coefficient. These connection probabilities can be specified in terms of positions (see figure 2). The motivation for this model comes from the possibility of explaining away. In linear filtering methods each high-resolution node is determined by a fixed relationship to its neighbouring low-resolution nodes. Here, if one of the latent variables provides an explanation for a high-resolution node which fits well with its neighbours to form the low-resolution data, then the posterior responsibility of the other latent nodes for that high-resolution pixel is reduced, and they are free to be used to model other nearby pixels. The high-resolution pixels corresponding to a visible node can be separated into two (or more) independent regions, corresponding to pixels on different sides of an edge (or edges). A different latent variable is responsible for each region. In other words each mixture component effectively corresponds to a small image patch which can vary in size depending on what pixels it is responsible for.
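The consistency requirement from section 2, which this model enforces in the noise-free limit, can be checked directly. The following sketch is illustrative only (a 4 × 4 block average stands in for the subsampling operator; names are not from the paper); it shows how block means of a high-resolution estimate can be pinned exactly to the low-resolution pixels:

```python
import numpy as np

def downsample(hi, f=4):
    """Block-average a high-resolution image by a factor f per axis."""
    H, W = hi.shape
    return hi.reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def enforce_consistency(hi, lo, f=4):
    """Shift each f x f block of `hi` uniformly so its mean matches `lo`
    exactly -- the hard constraint obtained in the zero-noise limit."""
    err = lo - downsample(hi, f)               # per-block mean error
    return hi + np.kron(err, np.ones((f, f)))  # spread correction uniformly

rng = np.random.default_rng(0)
lo = rng.random((4, 4))                        # a toy low-resolution image
hi = np.kron(lo, np.ones((4, 4))) + 0.1 * rng.standard_normal((16, 16))
hi_c = enforce_consistency(hi, lo)             # block means now match lo
```

The uniform per-block correction plays the same role as the Lagrange multiplier D_k in the mean field equations later in the paper.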
Let v_j ∈ L denote a latent variable at site j in the latent space L. Let x_i ∈ S denote the value of pixel i in high resolution image space S, and let y_k denote the value of the visible pixel k. Each of these is a 3-vector representing colour. Let V denote the ordered set of all v_j; likewise X denotes the ordered set of all x_i and Y the set of all y_k. In all the work described here a transformed colorspace of (gray, red-green, blue-yellow) is used. In other words the data is a linear transformation on the RGB colour values using the matrix

( 0.66    1    0.5 )
( 0.66   −1    0.5 )
( 0.66    0   −1   ).

The remaining component is the connectivity (i.e. the indicator for the responsibility) between the high-resolution nodes and the nodes in the latent layer. Let z_ij denote this connectivity, with z_ij an indicator variable taking value 1 when v_j is a parent of x_i in the belief network. Every high resolution pixel has one and only one parent in the latent layer. Let Z denote the ordered set of all z_ij.

3.1 Distributions

A uniform distribution over the range of pixel values is presumed for the latent variables. The high resolution pixels are given by Gaussian distributions centred on the pixel values of the parental latent variable. This Gaussian is presumed independent in each pixel component. Finally the low resolution pixels are given by the average of the sixteen high resolution pixels covering the site of the low resolution pixel. This pixel value can also be subject to some additional Gaussian noise if necessary (zero noise is assumed in this paper). It is presumed that each high resolution pixel is allowed to ‘choose’ its parent from the set of latent variables in an independent manner. A pixel has a higher probability of choosing a nearby parent than a far away one.
For this we use a Gaussian integral form, so that

P(Z) = ∏_ij p_ij^{z_ij}   where   p_ij ∝ ∫_{B_i} dr exp( −(r_j − r)² / 2Σ ),   (1)

where r is a position in the high resolution picture space, and r_j is the position of the jth latent variable in the high resolution image space (where these are located at the corners of every second pixel in each direction as described above). The integral is over B_i, defined as the region in image space corresponding to pixel x_i. Σ gives the width (squared) over which the probability decays. The larger Σ, the more possible parents with non-negligible probability. The connection probabilities can be illustrated by the picture in figure 2.

Figure 2: An illustration of the connection probabilities from a high resolution pixel in the position of the smaller checkered square to the latent variables centred at each of the larger squares. The probability is proportional to the intensity of the shading: darker is higher probability.

The equations for the other distributions are given here. First we have

P(X|Z, V) = ∏_ijm (2πΩ^m)^{−1/2} exp( −z_ij (x_i^m − v_j^m)² / 2Ω^m ),   (2)

where Ω^m is a variance which determines how much each pixel must be like its latent parent. Here the indicator z_ij ensures the only contribution for each i comes from the parent j of i. Second,

P(Y|X) = ∏_km (2πΛ)^{−1/2} exp( −(y_k^m − (1/d) Σ_{i∈Pa(k)} x_i^m)² / 2Λ ),   (3)

with Pa(k) denoting the set of all the d = 16 high resolution pixels which go to make up the low resolution pixel y_k. In this work we let the variance Λ → 0. Λ determines the additive Gaussian noise which is in the low resolution image. Last, P(V) is simply uniform over the whole of the possible values of V. Hence P(V) = 1/C for C the volume of V space being considered.

3.2 Inference

The belief network defined above is not tree structured (rather it is a mixture of tree structures) and so we have to resort to approximation methods for inference. In this paper a variational approach is followed.
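Since the prior p_ij in equation (1) is an isotropic Gaussian integrated over the rectangular pixel region B_i, it factorises into a product of one-dimensional Gaussian box integrals, each computable with the error function. A small sketch (function names are illustrative, not from the paper):

```python
import math

def gauss_box_integral(cx, cy, x0, x1, y0, y1, sigma2):
    """Mass of an isotropic Gaussian centred at (cx, cy), variance sigma2,
    over the box [x0, x1] x [y0, y1]; separable, so a product of erfs."""
    s = math.sqrt(2.0 * sigma2)
    gx = 0.5 * (math.erf((x1 - cx) / s) - math.erf((x0 - cx) / s))
    gy = 0.5 * (math.erf((y1 - cy) / s) - math.erf((y0 - cy) / s))
    return gx * gy

def connection_probs(pixel_box, latent_positions, sigma2):
    """p_ij of equation (1): the probability that the pixel with region
    pixel_box = (x0, x1, y0, y1) chooses each latent parent, proportional
    to that parent's Gaussian mass over the pixel region."""
    x0, x1, y0, y1 = pixel_box
    w = [gauss_box_integral(cx, cy, x0, x1, y0, y1, sigma2)
         for (cx, cy) in latent_positions]
    total = sum(w)
    return [wi / total for wi in w]
```

As in figure 2, a latent node near the pixel receives most of the probability, and larger Σ spreads the mass over more parents.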
The posterior distribution is approximated using a factorised distribution over the latent space and over the connectivity. Only in the high resolution space X do we consider joint distributions: we use a joint Gaussian for all the nodes corresponding to one low resolution pixel. The full distribution can be written as Q(Z, V, X) = Q(Z)Q(V)Q(X) where

Q(Z) = ∏_ij q_ij^{z_ij},   Q(V) = ∏_jm (2πΦ_j^m)^{−1/2} exp( −(v_j^m − ν_j^m)² / 2Φ_j^m ),   and   (4)

Q(X) = ∏_km (2π)^{−d/2} |Ψ_k^m|^{−1/2} exp( −½ [(x*)_k^m − (µ*)_k^m]ᵀ (Ψ_k^m)^{−1} [(x*)_k^m − (µ*)_k^m] ),   (5)

where (x*)_k^m is the vector (x_i^m | i ∈ Pa(k)), the joint of all d high resolution pixel values corresponding to a given low resolution pixel k (for a given colour component m). Here q_ij, µ_i^m, ν_j^m, Φ_j^m and Ψ_k^m are variational parameters to be optimised. As usual, a local minimum of the KL divergence between the approximate distribution and the true posterior distribution is computed. This is equivalent to maximising the negative variational free energy (or variational log likelihood), i.e. minimising

L(Q||P) = ⟨ log [ Q(Z, V, X) / P(Z, V, X, Y) ] ⟩_{Q(Z,V,X)},   (6)

where Y is given by the low resolution image. In this case we obtain

L(Q||P) = ⟨log Q(Z) − log P(Z)⟩_{Q(Z)} + ⟨log Q(V) − log P(V)⟩_{Q(V)} + ⟨log Q(X)⟩_{Q(X)} − ⟨log P(X|Z, V)⟩_{Q(X,Z,V)} − ⟨log P(Y|X)⟩_{Q(X)}.   (7)

Taking expectations and derivatives with respect to each of the parameters in the approximation gives a set of self-consistent mean field equations which we can solve by repeated iteration. Here for simplicity we only solve for q_ij and for the means µ_i^m and ν_j^m, which turn out to be independent of the variational variance parameters. We obtain

ν_j^m = Σ_i q_ij x_i^m / Σ_i q_ij   and   µ_i^m = ρ_i^m + D_{c(i)},   where   ρ_i^m = Σ_j q_ij ν_j^m,   (8)

where c(i) is the child of i, i.e. the low level pixel which i is part of.
D_k is a Lagrange multiplier, and is obtained by constraining the high level pixel values to average to the low level pixels:

(1/d) Σ_{i∈Pa(k)} µ_i^m = y_k^m   ⇒   D_k ≡ D_k* = y_k^m − (1/d) Σ_{i∈Pa(k)} ρ_i^m.   (9)

In the case where Λ is non-zero, this constraint is softened and D_k is given by D_k = Ω D_k* / (Ω + Λ). The update for the q_ij is given by

q_ij ∝ p_ij exp( −Σ_m (x_i^m − v_j^m)² / 2Ω^m ),   (10)

where the constant of proportionality is given by normalisation: Σ_j q_ij = 1. Optimising the KL divergence involves iterating these equations. For each Q(Z) optimisation (10), equations (8a) and (8b) are iterated a number of times. Each optimisation loop is either done a preset number of times, or until a suitable convergence criterion is met. The former approach is generally used, as the basic criterion is a limit on the time available for the optimisation to be done.

4 Setting parameters

The prior variance parameters need to be set. The variance Λ corresponds to the additive noise. If this is not known to be zero, then it will vary from image to image, and needs to be found for each image. This can be done using variational maximum likelihood, where Λ is set to maximise the variational log likelihood. Σ is presumed to be independent of the images presented, and is set by hand by visualising changes on a test set. The Ω^m might depend on the intensity levels in the image: very dark images will need a smaller value of Ω¹, for example. However for simplicity Ω^m = Ω is treated as global and set by hand. Because the primary criterion for optimal parameters is subjective, this is the most sensible approach, and is reasonable when there are only two parameters to determine. To optimise automatically based on the variational log likelihood is possible, but does not produce as good results due to the complicated nature of a true prior or error-measure for images.
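One sweep of the mean field updates (8)-(10) for a single colour channel can be sketched as follows. This is a simplified illustration with Λ = 0 and hypothetical variable names, not the paper's implementation; as in the paper, only the means are updated, since they are independent of the variational variances:

```python
import numpy as np

def mean_field_step(x, y, q, p, pa, omega):
    """One sweep of the mean-field updates (8)-(10) for one colour channel.

    x:  current high-resolution pixel means, shape (n,)
    y:  low-resolution pixel values, shape (K,)
    q:  responsibilities q_ij, shape (n, J)
    p:  prior connection probabilities p_ij, shape (n, J)
    pa: pa[i] = index of the low-resolution pixel containing pixel i
    omega: the variance Omega tying pixels to their latent parents
    """
    # (8a): latent means are responsibility-weighted averages of their pixels
    nu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)
    # (8b) with (9): pixel means rho_i plus the Lagrange correction D_{c(i)},
    # which pins each block average exactly to its low-resolution pixel
    rho = q @ nu
    D = y - np.array([rho[pa == k].mean() for k in range(len(y))])
    x = rho + D[pa]
    # (10): responsibilities, renormalised over parents
    q = p * np.exp(-(x[:, None] - nu[None, :]) ** 2 / (2.0 * omega))
    q = q / q.sum(axis=1, keepdims=True)
    return x, q, nu
```

After each sweep the averaging constraint holds exactly, mirroring the Λ → 0 case used in the paper's runs.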
For example, a highly elaborate texture offset by one pixel will give a large mean square error, but look almost identical, whereas a blurred version of the texture would give a smaller mean square error, but look much worse.

5 Implementation

The basic implementation involves setting the parameters, running the mean field optimisation and then looking at the result. The final result is a downsampled version of the 4 × 4 image to 2 × 2 size: the larger image is used to get reasonable anti-aliasing. To initialise the mean field optimisation, X is set equal to the bi-cubic interpolated image with added Gaussian noise. The Q(Z) is initialised to P(Z). Although in the examples here we used 25 optimisations of Q(Z), each of which involves 10 cycles through the mean field equations for Q(X) and Q(V), it is possible to get reasonable results with only three Q(Z) optimisation cycles each doing 2 iterations through the mean field equations. In the runs shown here, Λ is set to zero, the variance Ω is set to 0.008, and Σ is set to 3.3.

6 Demonstrations and assessment

The method described in this paper is compared with a number of simple filtering and interpolation methods, and also with the methods of Freeman et al. The image from Freeman’s website is used for comparison with that work (figure 3). Full colour comparisons for these and other images can be found at http://www.anc.ed.ac.uk/~amos/superresolution.html. First two linear filtering approaches are considered, the Wiener filter and a Gaussian filter. The third method is a median filter. Bi-cubic interpolation is also given. Quantitative assessment of the quality of super-resolution results is always something of a difficulty because the basic criterion is human subjectivity.

Figure 3: Comparison with the approach of Freeman et al. (a) gives the 70 × 70 low resolution image, (b) the true image, (c) a bi-cubic interpolation, (d) the Freeman et al result (taken from website and downsampled), (e) dynamic structure super-resolution, (f) median filter.

Even so, we compare the results of this approach with standard filtering methods using a root mean squared pixel error on a set of eight 128 × 96 colour images, giving 0.0486, 0.0467, 0.0510 and 0.0452 for the original low resolution image, bicubic interpolation, the median filter and dynamic structure super-resolution respectively. Unfortunately the unavailability of code prevents representative calculations for the Freeman et al approach. Dynamic structure resolution requires approximately 30–60 flops per 2 × 2 high resolution pixel per optimisation cycle, compared with, say, 16 flops for a linear filter, so it is more costly. Trials have been done working directly with 2 × 2 grids rather than with 4 × 4 and then averaging up. This is much faster and the results, though not quite as good, were still an improvement on the simpler methods. Qualitatively, the results for dynamic structure super-resolution are significantly better than most standard filtering approaches. The texture is better represented because it maintains consistency, and the edges are sharper, although there is still some significant difference from the true image. The method of Freeman et al is perhaps comparable at this resolution, although it should be noted that their result has been downsampled here to half the size of their enhanced image. Their method can produce 4 × 4 the resolution of the original, and so this does not accurately represent the full power of their technique. Furthermore this image is representative of early results from their work. However their approach does require learning large numbers of patches from a training set. Fundamentally the dynamic structure super-resolution approach does a good job at resolution doubling without the need for representative training data.
The edges are not blurred and much of the blockiness is removed. Dynamic structure super-resolution provides a technique for resolution enhancement, and provides an interesting starting model which is different from the Markov random field approaches. Future directions could incorporate hierarchical frequency information at each node rather than just a single value.

References

[1] N. J. Adams. Dynamic Trees: A Hierarchical Probabilistic Approach to Image Modelling. PhD thesis, Division of Informatics, University of Edinburgh, 5 Forrest Hill, Edinburgh, EH1 2QL, UK, 2001.
[2] S. Baker and T. Kanade. Limits on super-resolution and how to break them. In Proceedings of CVPR 00, pages 372–379, 2000.
[3] P. Cheeseman, B. Kanefsky, R. Kraft, and J. Stutz. Super-resolved surface reconstruction from multiple images. Technical Report FIA-94-12, NASA Ames, 1994.
[4] M. Elad and A. Feuer. Super-resolution reconstruction of image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(9):817–834, 1999.
[5] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Markov networks for super-resolution. Technical Report TR-2000-08, MERL, 2000.
[6] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. IEEE Computer Graphics and Applications, 2002.
[7] R. R. Schultz and R. L. Stevenson. A Bayesian approach to image expansion for improved definition. IEEE Transactions on Image Processing, 3:233–242, 1994.
[8] A. J. Storkey. Dynamic trees: A structured variational method giving efficient propagation rules. In C. Boutilier and M. Goldszmidt, editors, Uncertainty in Artificial Intelligence, pages 566–573. Morgan Kaufmann, 2000.
[9] C. K. I. Williams and N. J. Adams. DTs: Dynamic trees. In M. J. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11. MIT Press, 1999.
Independent Components Analysis through Product Density Estimation

Trevor Hastie and Rob Tibshirani
Department of Statistics
Stanford University
Stanford, CA, 94305
{hastie,tibs}@stat.stanford.edu

Abstract

We present a simple direct approach for solving the ICA problem, using density estimation and maximum likelihood. Given a candidate orthogonal frame, we model each of the coordinates using a semi-parametric density estimate based on cubic splines. Since our estimates have two continuous derivatives, we can easily run a second order search for the frame parameters. Our method performs very favorably when compared to state-of-the-art techniques.

1 Introduction

Independent component analysis (ICA) is a popular enhancement over principal component analysis (PCA) and factor analysis. In its simplest form, we observe a random vector X ∈ ℝ^p which is assumed to arise from a linear mixing of a latent random source vector S ∈ ℝ^p,

X = AS;   (1)

the components S_j, j = 1, …, p of S are assumed to be independently distributed. The classical example of such a system is known as the "cocktail party" problem. Several people are speaking, music is playing, etc., and microphones around the room record a mix of the sounds. The ICA model is used to extract the original sources from these different mixtures. Without loss of generality, we assume E(S) = 0 and Cov(S) = I, and hence Cov(X) = AAᵀ. Suppose S* = RS represents a transformed version of S, where R is p × p and orthogonal. Then with A* = ARᵀ we have X* = A*S* = ARᵀRS = X. Hence the second order moments Cov(X) = AAᵀ = A*A*ᵀ do not contain enough information to distinguish these two situations. Model (1) is similar to the factor analysis model (Mardia, Kent & Bibby 1979), where S and hence X are assumed to have a Gaussian density, and inference is typically based on the likelihood of the observed data.
The factor analysis model typically has fewer than p components, and includes an error component for each variable. While similar modifications are possible here as well, we focus on the full-component model in this paper. Two facts are clear:

• Since a multivariate Gaussian distribution is completely determined by its first and second moments, this model would not be able to distinguish A and A*. Indeed, in factor analysis one chooses from a family of factor rotations to select a suitably interpretable version.

• Multivariate Gaussian distributions are completely specified by their second-order moments. If we hope to recover the original A, at least p − 1 of the components of S will have to be non-Gaussian.

Because of the lack of information in the second moments, the first step in an ICA model is typically to transform X to have a scalar covariance, or to pre-whiten the data. From now on we assume Cov(X) = I, which implies that A is orthogonal. Suppose the density of S_j is f_j, j = 1, …, p, where at most one of the f_j is Gaussian. Then the joint density of S is

f_S(s) = ∏_{j=1}^p f_j(s_j),   (2)

and since A is orthogonal, the joint density of X is

f_X(x) = ∏_{j=1}^p f_j(a_jᵀ x),   (3)

where a_j is the jth column of A. Equation (3) follows from S = Aᵀ X due to the orthogonality of A, and the fact that the determinant in this multivariate transformation is 1. In this paper we fit the model (3) directly using semi-parametric maximum likelihood. We represent each of the densities f_j by an exponentially tilted Gaussian density (Efron & Tibshirani 1996),

f_j(s_j) = φ(s_j) e^{g_j(s_j)},   (4)

where φ is the standard univariate Gaussian density, and g_j is a smooth function, restricted so that f_j integrates to 1. We represent each of the functions g_j by a cubic smoothing spline, a rich class of smooth functions whose roughness is controlled by a penalty functional.
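The normalisation constraint on the tilt g_j can be checked numerically. A minimal sketch, using a linear tilt g(s) = as − a²/2 (a choice made here for illustration: since ∫φ(s)e^{as}ds = e^{a²/2}, this g satisfies the constraint exactly, and the resulting density is just N(a, 1)):

```python
import numpy as np

def phi(s):
    """Standard univariate Gaussian density."""
    return np.exp(-0.5 * s * s) / np.sqrt(2.0 * np.pi)

def tilted_density(s, a):
    """Exponentially tilted Gaussian f(s) = phi(s) exp(g(s)) with the linear
    tilt g(s) = a*s - a^2/2; this g satisfies the normalisation constraint
    exactly, and f is simply the N(a, 1) density."""
    g = a * s - 0.5 * a * a
    return phi(s) * np.exp(g)

# Numerical check of the constraint: integral of phi(s) exp(g(s)) ds = 1.
s = np.linspace(-12.0, 12.0, 24001)
mass = tilted_density(s, a=1.5).sum() * (s[1] - s[0])
```

A general cubic-spline g, as used in the paper, has no such closed form, which is why the constraint is enforced during the penalized fit instead.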
These choices lead to an attractive and effective semi-parametric implementation of ICA:

• Given $A$, each of the components $f_j$ in (3) can be estimated separately by maximum likelihood. Simple algorithms and standard software are available.

• The components $g_j$ represent departures from Gaussianity, and the expected log-likelihood ratio between model (3) and the Gaussian density is given by $E_X \sum_j g_j(a_j^T X)$, a flexible contrast function.

• Since the first and second derivatives of each of the estimated $g_j$ are immediately available, second order methods are available for estimating the orthogonal matrix $A$. We use the fixed point algorithms described in (Hyvarinen & Oja 1999).

• Our representation of the $g_j$ as smoothing splines casts the estimation problem as density estimation in a reproducing kernel Hilbert space, an infinite family of smooth functions. This makes it directly comparable with the "Kernel ICA" approach of Bach & Jordan (2001), with the advantage that we have $O(N)$ algorithms available for the computation of our contrast function and its first two derivatives.

In the remainder of this article, we describe the model in more detail, and evaluate its performance on some simulated data.

2 Fitting the Product Density ICA model

Given a sample $x_1, \ldots, x_N$ we fit the model (3), (4) by maximum penalized likelihood. The data are first transformed to have zero mean vector and identity covariance matrix, using the singular value decomposition. We then maximize the criterion

(5) $\sum_{j=1}^p \left[ \frac{1}{N} \sum_{i=1}^N \left\{ \log \phi(a_j^T x_i) + g_j(a_j^T x_i) \right\} - \lambda_j \int g_j''(s)^2\, ds \right]$

subject to

(6) $a_j^T a_k = \delta_{jk} \quad \forall j, k,$

(7) $\int \phi(s)\, e^{g_j(s)}\, ds = 1 \quad \forall j.$

For fixed $a_j$, and hence $s_{ij} = a_j^T x_i$, the solutions for $g_j$ are known to be cubic splines with knots at each of the unique values of $s_{ij}$ (Silverman 1986). The $p$ terms decouple for fixed $a_j$, leaving us $p$ separate penalized density estimation problems. We fit the functions $g_j$ and directions $a_j$ by optimizing (5) in an alternating fashion, as described in Algorithm 1.
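The pre-whitening step at the start of this section can be sketched as follows; this is a minimal SVD-based implementation, and the scaling convention (matching `np.cov`'s $1/(N-1)$ divisor) is one reasonable choice:

```python
import numpy as np

def whiten(X):
    """Pre-whiten: zero mean, identity sample covariance, via the SVD."""
    Xc = X - X.mean(axis=0)
    U, d, Vt = np.linalg.svd(Xc, full_matrices=False)
    # U has orthonormal columns, so sqrt(N-1) * U @ Vt has identity
    # sample covariance while keeping the original coordinate axes.
    N = X.shape[0]
    return np.sqrt(N - 1) * U @ Vt

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3)) + 5.0
Z = whiten(X)
print(np.allclose(np.cov(Z, rowvar=False), np.eye(3), atol=1e-8))
```

After this transformation, the mixing matrix to be estimated is orthogonal, as assumed in (6).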
Algorithm 1 Product Density ICA algorithm

1. Initialize $A$ (random Gaussian matrix followed by orthogonalization).
2. Alternate until convergence of $A$, using the Amari metric (16):
   (a) Given $A$, optimize (5) w.r.t. $g_j$ (separately for each $j$), using the penalized density estimation Algorithm 2.
   (b) Given $g_j$, $j = 1, \ldots, p$, perform one step of the fixed point Algorithm 3 towards finding the optimal $A$.

In step (a), we find the optimal $g_j$ for fixed $A$; in step (b), we take a single fixed-point step towards the optimal $A$. In this sense Algorithm 1 can be seen to be maximizing the profile penalized log-likelihood w.r.t. $A$.

2.1 Penalized density estimation

We focus on a single coordinate, with $N$ observations $s_i$, $i = 1, \ldots, N$ (where $s_i = a_k^T x_i$ for some $k$). We wish to maximize

(8) $\frac{1}{N} \sum_{i=1}^N \left\{ \log \phi(s_i) + g(s_i) \right\} - \lambda \int g''(s)^2\, ds$

subject to $\int \phi(s)\, e^{g(s)}\, ds = 1$. Silverman (1982) shows that one can incorporate the integration constraint by using the modified criterion (without a Lagrange multiplier)

(9) $\frac{1}{N} \sum_{i=1}^N \left\{ \log \phi(s_i) + g(s_i) \right\} - \int \phi(s)\, e^{g(s)}\, ds - \lambda \int g''(s)^2\, ds.$

Since (9) involves an integral, we need an approximation. We construct a fine grid of $L$ values $s_\ell^*$ in increments $\Delta$ covering the observed values $s_i$, and let

(10) $y_\ell^* = \frac{\#\left\{ s_i \in (s_\ell^* - \Delta/2,\; s_\ell^* + \Delta/2) \right\}}{N}.$

Typically we pick $L$ to be 1000, which is more than adequate. We can then approximate (9) by

(11) $\sum_{\ell=1}^L \left\{ y_\ell^* \left[ \log \phi(s_\ell^*) + g(s_\ell^*) \right] - \Delta\, \phi(s_\ell^*)\, e^{g(s_\ell^*)} \right\} - \lambda \int g''(s)^2\, ds.$

This last expression can be seen to be proportional to a penalized Poisson log-likelihood with response $y_\ell^*/\Delta$, penalty parameter $\lambda/\Delta$, and mean $\mu(s) = \phi(s)\, e^{g(s)}$. This is a generalized additive model (Hastie & Tibshirani 1990), with an offset term $\log \phi(s)$, and can be fit using a Newton algorithm in $O(L)$ operations. As with other GAMs, the Newton algorithm is conveniently re-expressed as an iteratively reweighted penalized least squares regression problem, which we give in Algorithm 2.

Algorithm 2 Iteratively reweighted penalized least squares algorithm for fitting the tilted Gaussian spline density model.

1.
Initialize $g \equiv 0$.
2. Repeat until convergence:
   (a) Let $\mu(s_\ell^*) = \phi(s_\ell^*)\, e^{g(s_\ell^*)}$, $\ell = 1, \ldots, L$, and $w_\ell = \mu(s_\ell^*)$.
   (b) Define the working response

(12) $z_\ell = g(s_\ell^*) + \frac{y_\ell^* - \mu(s_\ell^*)}{\mu(s_\ell^*)}.$

   (c) Update $g$ by solving the weighted penalized least squares problem

(13) $\min_g \sum_{\ell=1}^L w_\ell \left( z_\ell - g(s_\ell^*) \right)^2 + \frac{2\lambda}{\Delta} \int g''(s)^2\, ds.$

This amounts to fitting a weighted smoothing spline to the pairs $(s_\ell^*, z_\ell)$ with weights $w_\ell$ and tuning parameter $2\lambda/\Delta$. Although other semi-parametric regression procedures could be used in (13), the cubic smoothing spline has several advantages:

• It has knots at all $L$ of the pseudo observation sites $s_\ell^*$. The values $s_\ell^*$ can be fixed for all terms in the model (5), and so a certain amount of pre-computation can be performed. Despite the large number of knots and hence basis functions, the local support of the B-spline basis functions allows the solution to (13) to be obtained in $O(L)$ computations.

• The first and second derivatives of $g$ are immediately available, and are used in the second-order search for the direction $a_j$ in Algorithm 1.

• As an alternative to choosing a value for $\lambda$, we can control the amount of smoothing through the effective number of parameters, given by the trace of the linear operator matrix implicit in (13) (Hastie & Tibshirani 1990).

• It can also be shown that because of the form of (9), the resulting density inherits the mean and variance of the data (0 and 1); details will be given in a longer version of this paper.

2.2 A fixed point method for finding the orthogonal frame

For fixed functions $g_j$, the penalty term in (5) does not play a role in the search for $A$. Since all of the columns $a_j$ of any $A$ under consideration are mutually orthogonal and unit norm, the Gaussian component

$\sum_{i=1}^N \sum_{j=1}^p \log \phi(a_j^T x_i)$

does not depend on $A$.
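A single update of Algorithm 2 can be sketched with the smoothing spline replaced by a discrete second-difference ridge penalty on the grid values of $g$, an assumption made only to keep the example self-contained; the working response and weights follow (12) and step (a):

```python
import numpy as np

def irls_step(g, grid, y_star, delta, lam):
    """One IRLS update on the grid; E[y*_l] is modeled as delta*phi*exp(g)."""
    L = len(grid)
    phi = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
    mu = delta * phi * np.exp(g)            # fitted bin means mu(s*_l)
    w = mu                                  # Poisson working weights
    z = g + (y_star - mu) / mu              # working response, cf. (12)
    # Weighted penalized least squares, a discrete analogue of (13):
    D = np.diff(np.eye(L), n=2, axis=0)     # second-difference operator
    W = np.diag(w)
    return np.linalg.solve(W + lam * D.T @ D, W @ z)

grid = np.linspace(-4, 4, 201)
delta = grid[1] - grid[0]
phi = np.exp(-0.5 * grid**2) / np.sqrt(2 * np.pi)
# If the binned data exactly match the untilted Gaussian, g = 0 is a
# fixed point of the update:
g = irls_step(np.zeros(201), grid, delta * phi, delta, lam=1.0)
print(np.allclose(g, 0.0))
```

In the paper's setting the solve in the last line would be the $O(L)$ banded smoothing-spline fit rather than a dense linear solve.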
Hence what remains to be optimized can be seen as the log-likelihood ratio between the fitted model and the Gaussian model, which is simply

(14) $C(A) = \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^p g_j(a_j^T x_i).$

Since the choice of each $g_j$ improves the log-likelihood relative to the Gaussian, it is easy to show that $C(A)$ is non-negative, and zero only if, for the particular value of $A$, the log-likelihood cannot distinguish the tilted model from a Gaussian model. $C(A)$ has the form of a sum of contrast functions for detecting departures from Gaussianity. Hyvarinen, Karhunen & Oja (2001) refer to the expected log-likelihood ratio as the negentropy, and use simple contrast functions to approximate it in their FastICA algorithm. Our regularized approach can be seen as a way to construct a flexible contrast function adaptively using a large set of basis functions.

Algorithm 3 Fixed point update for $A$.

1. For $j = 1, \ldots, p$:

(15) $a_j \leftarrow E\left\{ X\, g_j'(a_j^T X) \right\} - E\left\{ g_j''(a_j^T X) \right\} a_j,$

where $E$ represents expectation w.r.t. the sample $x_i$, and $a_j$ is the $j$th column of $A$.

2. Orthogonalize $A$: compute its SVD, $A = UDV^T$, and replace $A \leftarrow UV^T$.

Since we have first and second derivatives available for each $g_j$, we can mimic exactly the fast fixed point algorithm developed in (Hyvarinen et al. 2001, page 189); see Algorithm 3. Figure 1 shows the optimization criterion $C$ of (14), as well as the two criteria used to approximate negentropy in FastICA by Hyvarinen et al. (2001, page 184). While the latter two agree with $C$ quite well for the uniform example (left panel), they both fail on the mixture-of-Gaussians example, while $C$ is also successful there.

Figure 1: The optimization criteria and solutions found for two different examples in $\mathbb{R}^2$ using FastICA and our ProDenICA. G1 and G2 refer to the two functions used to define negentropy in FastICA.
In the left example the independent components are uniformly distributed, in the right a mixture of Gaussians. In the left plot, all the procedures found the correct frame; in the right plot, only the spline based approach was successful. The vertical lines indicate the solutions found, and the two tick marks at the top of each plot indicate the true angles.

3 Comparisons with FastICA

In this section we evaluate the performance of the product density approach (ProDenICA) by mimicking some of the simulations performed by Bach & Jordan (2001) to demonstrate their Kernel ICA approach. Here we compare ProDenICA only with FastICA; a future expanded version of this paper will include comparisons with other ICA procedures as well. The left panel in Figure 2 shows the 18 distributions used as a basis of comparison. These exactly or very closely approximate those used by Bach & Jordan (2001). For each distribution, we generated a pair of independent components ($N = 1024$), and a random mixing matrix in $\mathbb{R}^2$ with condition number between 1 and 2. We used our Splus implementation of the FastICA algorithm, using the negentropy criterion based on the nonlinearity $G_1(s) = \log \cosh(s)$, and the symmetric orthogonalization scheme as in Algorithm 3 (Hyvarinen et al. 2001, Section 8.4.3). Our ProDenICA method is also implemented in Splus. For both methods we used five random starts (without iterations). Each of the algorithms delivers an orthogonal mixing matrix $A$ (the data were pre-whitened), which is available for comparison with the generating orthogonalized mixing matrix $A_0$. We used the Amari metric (Bach & Jordan 2001) as a measure of the closeness of the two frames:

(16) $d(A_0, A) = \frac{1}{2p} \sum_{i=1}^p \left( \frac{\sum_{j=1}^p |r_{ij}|}{\max_j |r_{ij}|} - 1 \right) + \frac{1}{2p} \sum_{j=1}^p \left( \frac{\sum_{i=1}^p |r_{ij}|}{\max_i |r_{ij}|} - 1 \right),$

where $r_{ij} = (A_0 A^{-1})_{ij}$. The right panel in Figure 2 shows boxplots of the pairwise differences $d(A_0, A_F) - d(A_0, A_P)$ ($\times 100$), where the subscripts denote FastICA and ProDenICA, respectively.
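Both the symmetric orthogonalization of Algorithm 3 and the Amari metric (16) used to compare frames are short to implement; a sketch:

```python
import numpy as np

def sym_orthogonalize(A):
    """Symmetric orthogonalization (Algorithm 3, step 2): A = U D V^T -> U V^T."""
    U, _, Vt = np.linalg.svd(A)
    return U @ Vt

def amari_distance(A0, A):
    """Amari metric (16): zero iff the frames agree up to a scaled,
    signed permutation of the components."""
    p = A0.shape[0]
    R = np.abs(A0 @ np.linalg.inv(A))
    term1 = (R.sum(axis=1) / R.max(axis=1) - 1).sum()
    term2 = (R.sum(axis=0) / R.max(axis=0) - 1).sum()
    return (term1 + term2) / (2 * p)

rng = np.random.default_rng(4)
A0 = sym_orthogonalize(rng.normal(size=(3, 3)))
P = np.eye(3)[[2, 0, 1]] * np.array([1, -1, 1])   # signed permutation
print(np.isclose(amari_distance(A0, P @ A0), 0.0))  # equivalent frames
```

The metric's invariance to signed permutations matters here because ICA can only recover the components up to order and sign.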
ProDenICA is competitive with FastICA in all situations, and dominates in most of the mixture simulations. The average Amari error ($\times 100$) for FastICA was 13.4 (2.7), compared with 3.0 (0.4) for ProDenICA (Bach & Jordan (2001) report averages of 6.2 for FastICA, and 3.8 and 2.9 for their two KernelICA methods). We also ran 300 simulations in $\mathbb{R}^4$, using $N = 1000$, and selecting four of the 18 distributions at random.

Figure 2: The left panel shows eighteen distributions used for comparisons. These include the "t", uniform, exponential, mixtures of exponentials, and symmetric and asymmetric Gaussian mixtures. The right panel shows boxplots of the improvement of ProDenICA over FastICA in each case, using the Amari metric, based on 30 simulations in $\mathbb{R}^2$ for each distribution.

The average Amari error ($\times 100$) for FastICA was 26.1 (1.5), compared with 9.3 (0.6) for ProDenICA (Bach & Jordan (2001) report averages of 19 for FastICA, and 13 and 9 for their two KernelICA methods).

4 Discussion

The ICA model stipulates that after a suitable orthogonal transformation, the data are independently distributed. We implement this specification directly using semi-parametric product-density estimation. Our model delivers estimates of both the mixing matrix $A$, and estimates of the densities of the independent components. Many approaches to ICA, including FastICA, are based on minimizing approximations to entropy. The argument, given in detail in Hyvarinen et al. (2001) and reproduced in Hastie, Tibshirani & Friedman (2001), starts with minimizing the mutual information, i.e. the KL divergence between the full density and its independence version.
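The single-contrast-function approximation referred to here can be sketched numerically: FastICA approximates negentropy by $J(s) \approx (E[G(s)] - E[G(Z)])^2$ with, e.g., $G(s) = \log\cosh(s)$ and $Z$ standard Gaussian; the Monte Carlo sample sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
G = lambda s: np.log(np.cosh(s))

# Reference value E[G(Z)] for standard Gaussian Z, by Monte Carlo.
Ez = G(rng.normal(size=200_000)).mean()

# Unit-variance test sources: another Gaussian sample, and a uniform one.
J_gauss = (G(rng.normal(size=200_000)).mean() - Ez) ** 2
u = rng.uniform(-np.sqrt(3), np.sqrt(3), size=200_000)
J_unif = (G(u).mean() - Ez) ** 2

print(J_unif > J_gauss)   # the non-Gaussian source scores higher
```

A single fixed $G$ like this works well for many sources but, as the simulations above show, can fail for Gaussian mixtures, which is where the adaptive spline contrast pays off.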
FastICA uses very simple approximations based on single (or a small number of) non-linear contrast functions, which work well for a variety of situations, but not at all well for the more complex Gaussian mixtures. The log-likelihood for the spline-based product-density model can be seen as a direct estimate of the mutual information; it uses the empirical distribution of the observed data to represent their joint density, and the product-density model to represent the independence density. This approach works well in both the simple and complex situations automatically, at a very modest increase in computational effort. As a side benefit, the form of our tilted Gaussian density estimate allows our log-likelihood criterion to be interpreted as an estimate of negentropy, a measure of departure from the Gaussian. Bach & Jordan (2001) combine a nonparametric density approach (via reproducing kernel Hilbert function spaces) with a complex measure of independence based on the maximal correlation. Their procedure requires $O(N^3)$ computations, compared to our $O(N)$. They motivate their independence measures as approximations to the mutual information. Since smoothing splines are exactly function estimates in a RKHS, our method shares this flexibility with their Kernel approach (and is in fact a "Kernel" method). Our objective function, however, is a much simpler estimate of the mutual information. In the simulations we have performed so far, it seems we achieve comparable accuracy.

References

Bach, F. & Jordan, M. (2001), Kernel independent component analysis, Technical Report UCB/CSD-01-1166, Computer Science Division, University of California, Berkeley.

Efron, B. & Tibshirani, R. (1996), 'Using specially designed exponential families for density estimation', Annals of Statistics 24(6), 2431-2461.

Hastie, T. & Tibshirani, R. (1990), Generalized Additive Models, Chapman and Hall.

Hastie, T., Tibshirani, R. & Friedman, J.
(2001), The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer Verlag, New York.

Hyvarinen, A., Karhunen, J. & Oja, E. (2001), Independent Component Analysis, Wiley, New York.

Hyvarinen, A. & Oja, E. (1999), 'Independent component analysis: Algorithms and applications', Neural Networks.

Mardia, K., Kent, J. & Bibby, J. (1979), Multivariate Analysis, Academic Press.

Silverman, B. (1982), 'On the estimation of a probability density function by the maximum penalized likelihood method', Annals of Statistics 10(3), 795-810.

Silverman, B. (1986), Density Estimation for Statistics and Data Analysis, Chapman and Hall.
2002
Discriminative Densities from Maximum Contrast Estimation Peter Meinicke Neuroinformatics Group University of Bielefeld Bielefeld, Germany pmeinick@techfak.uni-bielefeld.de Thorsten Twellmann Neuroinformatics Group University of Bielefeld Bielefeld, Germany ttwellma@techfak.uni-bielefeld.de Helge Ritter Neuroinformatics Group University of Bielefeld Bielefeld, Germany helge@techfak.uni-bielefeld.de Abstract We propose a framework for classifier design based on discriminative densities for representation of the differences of the class-conditional distributions in a way that is optimal for classification. The densities are selected from a parametrized set by constrained maximization of some objective function which measures the average (bounded) difference, i.e. the contrast between discriminative densities. We show that maximization of the contrast is equivalent to minimization of an approximation of the Bayes risk. Therefore using suitable classes of probability density functions, the resulting maximum contrast classifiers (MCCs) can approximate the Bayes rule for the general multiclass case. In particular for a certain parametrization of the density functions we obtain MCCs which have the same functional form as the well-known Support Vector Machines (SVMs). We show that MCC-training in general requires some nonlinear optimization but under certain conditions the problem is concave and can be tackled by a single linear program. We indicate the close relation between SVM- and MCC-training and in particular we show that Linear Programming Machines can be viewed as an approximate realization of MCCs. In the experiments on benchmark data sets, the MCC shows a competitive classification performance. 1 Introduction In the Bayesian framework of classification the ultimate goal of a classifier
is to minimize the expected risk of misclassification, measured by a loss function $L(f(x), k)$ which denotes the loss for assigning a given feature vector $x$ to class $f(x)$, while it actually belongs to class $k$, with $K$ being the number of classes. With $p(x \mid k)$ being the class-conditional probability density functions (PDFs) and $P_k$ denoting the corresponding a priori probabilities of class membership, we have the risk

(1) $R(f) = \sum_{k=1}^K P_k \int L(f(x), k)\, p(x \mid k)\, dx.$

With the standard "zero-one" loss function $L(f(x), k) = 1 - \delta_{f(x), k}$, where $\delta$ denotes the Kronecker delta, it is easy to show (see e.g. [3]) that the expected risk is minimized if one chooses the classifier

(2) $f^*(x) = \arg\max_k P_k\, p(x \mid k).$

The resulting lower bound on $R$ is known as the Bayes risk, which limits the average performance of the classifier. Because the class-conditional densities are usually unknown, one way to realize the above classifier is to use estimates of these densities instead. This leads to the so-called plug-in classifiers, which are Bayes-consistent if the density estimators are consistent (e.g. [9]). Due to the notoriously slow convergence of density estimates, the plug-in scheme usually isn't the best recipe for classifier design, and as an alternative many discriminant functions, including Neural Networks (see [1, 9] for an overview) and Support Vector Machines (SVMs) [2, 12], have been proposed which are trained directly to minimize the empirical classification error. We recently proposed a method for the design of density-based classifiers without resorting to the usual density estimation schemes of the plug-in approach [6]. Instead we utilized discriminative densities with parameters optimized to solve the classification problem. The approach requires maximization of the average bounded difference between class (discriminative) densities $p(x; \theta_k)$, which we refer to as the contrast of the underlying "true" distributions. The $c_{\max}$-bounded contrast is the expectation $C = E\left[ c_\theta(X, Y) \right]$ with

(3) $c_\theta(x, y) = \min\left\{ c_{\max},\; P_y\, p(x; \theta_y) - \max_{k \neq y} P_k\, p(x; \theta_k) \right\}.$

The idea is to find discriminative densities $p(x; \theta_k)$ which represent the underlying distributions with "true" densities $p(x \mid k)$ in a way that is optimal for classification. When maximizing the contrast with respect to the parameters $\theta_k$ of the discriminative densities, the upper bound $c_{\max}$ plays a central role because it prevents the learning algorithm from increasing the differences between discriminative densities where the differences between the true densities are already large. In this paper we show that with some slight modification the contrast can be viewed as an approximation of the negative Bayes risk (up to some constant shift and scaling) which is valid for the binary as well as for the general multiclass case. Therefore, for certain parametrizations of the discriminative densities, MCCs allow to find an optimal trade-off between the classical plug-in Bayes-consistency and the consistency which arises from direct minimization of the approximate Bayes risk. Furthermore, for a particular parametrization of the PDFs, we obtain certain kinds of Linear Programming Machines (LPMs) [4] as (in general) approximate solutions of maximum contrast estimation. In that way MCCs provide a Bayes-consistent approach to realize multiclass LPMs / SVMs, and they suggest an interpretation of the magnitude of the LPM / SVM classification function in terms of density differences which provide a probabilistic measure of confidence. For the case of LPMs we propose an extended optimization procedure for maximization of the contrast via iteration of linear optimizations. Inspired by the MCC framework, for the resulting Sequential Linear Programming Machines (SLPM) we propose a new regularizer which allows to find an optimal trade-off between the above mentioned two approaches to Bayes consistency. In the experiments we analyse the performance of the SLPM on simulated and real world data.
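As a numeric sketch of the bounded-contrast idea: the clipped-difference form used below is a reconstruction from the surrounding text (the equation itself is garbled in this copy), and the Gaussian discriminative densities, priors, and $c_{\max}$ value are illustrative assumptions:

```python
import numpy as np

def bounded_contrast(x, y, densities, priors, c_max):
    """Average c_max-clipped difference between the discriminative density
    of the true class and the strongest competing class."""
    # p[k, i] = P_k * p(x_i ; theta_k)
    p = np.array([pr * d(x) for d, pr in zip(densities, priors)])
    n = p.shape[1]
    own = p[y, np.arange(n)]
    p_masked = p.copy()
    p_masked[y, np.arange(n)] = -np.inf    # exclude the true class
    rival = p_masked.max(axis=0)
    return np.mean(np.minimum(c_max, own - rival))

gauss = lambda m: (lambda x: np.exp(-0.5 * (x - m)**2) / np.sqrt(2 * np.pi))
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
y = np.array([0] * 200 + [1] * 200)
c = bounded_contrast(x, y, [gauss(-1), gauss(1)], [0.5, 0.5], c_max=0.05)
print(c > 0)   # well-matched discriminative densities give positive contrast
```

The clip at `c_max` is what keeps the objective from rewarding ever-larger density differences in regions that are already classified correctly.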
2 Maximum Contrast Estimation

For the design of MCCs the first step, which is the same as for the plug-in concept, requires replacing the unknown class-conditional densities of the Bayes classifier (2) by suitably parametrized PDFs. Then, instead of choosing the parameters for an approximation of the original (true) densities (e.g. by maximum likelihood estimation) as with the plug-in scheme, the density parameters are chosen to maximize the so-called contrast, which is the expected value of the $c_{\max}$-bounded density differences as defined in (3). For the case of an unbounded contrast, i.e. $c_{\max} = \infty$, the general maximum contrast solution can be found analytically, and for notational simplicity we derive it for the binary case with equal a priori probabilities, where the contrast can be written as

$C = \frac{1}{2} \int p(x \mid 1) \left[ p(x; \theta_1) - p(x; \theta_2) \right] dx + \frac{1}{2} \int p(x \mid 2) \left[ p(x; \theta_2) - p(x; \theta_1) \right] dx.$

Thus the unbounded contrast is maximized by a pair of Dirac delta functions, with the peaks located at $\arg\max_x \left[ p(x \mid 1) - p(x \mid 2) \right]$ and $\arg\max_x \left[ p(x \mid 2) - p(x \mid 1) \right]$, respectively. Obviously, these are not the best discriminative densities we may think of, and therefore we require an appropriate bound $c_{\max}$. For finite $c_{\max}$, maximization of the contrast enforces a redistribution of the estimated probability mass and gives rise to a constrained linear optimization problem in the space of discriminative densities, which may be solved by variational methods in some cases. The relation between contrast and Bayes risk becomes more convenient when we slightly modify the above definition (3) by a unit upper bound and by adding a unit lower bound on the $\beta$-scaled density differences:

(4) $c_\beta(x, y) = \max\left\{ -1,\; \min\left\{ 1,\; \beta \left( P_y\, p(x; \theta_y) - \max_{k \neq y} P_k\, p(x; \theta_k) \right) \right\} \right\}$

with scale factor
$\beta \geq 1/c_{\max}$. Therefore, for an infinite scale factor $\beta \to \infty$, the (expected) contrast $C_\beta = E\left[ c_\beta(X, Y) \right]$ approaches the negative Bayes risk up to constant shift and scaling:

(5) $\lim_{\beta \to \infty} C_\beta = 1 - 2 R(f_\theta),$

where $f_\theta$ denotes the classifier (2) based on the discriminative densities. Thus the scale factor defines a subset of the input space which includes the decision boundary and which becomes increasingly focused in its vicinity as $\beta \to \infty$. The extent of the region is defined by the bounds $\pm 1$ on the difference between discriminative densities. In terms of the contrast function it can be defined as

(6) $\mathcal{X}_\beta = \left\{ x : |c_\beta(x, y)| < 1 \text{ for some } y \right\}.$

Since for MCC-training we maximize the empirical contrast, i.e. the corresponding sample average of $c_\beta(x_i, y_i)$, the scale factor then defines a subset of the training data which has impact on learning of the decision boundary. Thus for increasing scale factor the relative size of that subset is shrinking. However, for increasing size of the training set the scale factor can be gradually increased, and then, for suitable classes of PDFs, MCCs can approach the Bayes rule. In other words, $\beta$
$ $ (7) with index set containing indices of examples from class and with normalized kernel functions according to 2 isn’t a quite good choice, since the only free parameter is the kernel bandwidth which doesn’t allow for any local adaptation. On the other hand if we allow for local variation of the bandwidth we get a complicated contrast which is difficult to maximize due to nonlinear dependencies on the parameters. The same is true if we treat the kernel centers as free parameters. However, if we modify the kernel density estimator to have flexible mixing weights according to ;/ <
$ $ $
with (8) we get an objective function, which is linear in the mixing parameters $ under certain conditions. Thus we have class-specific densities with mixing weights $ which control the contribution of a single training example to the PDF. With that choice we achieve plug-in Bayes-consistency for the case of equal mixing weights, since then we have the usual kernel density estimator (KDE), which, besides some mild assumptions about the distributions, requires a vanishing kernel bandwidth for
.

3.2 Objective Function

For notational simplicity, in the following we shall incorporate the scale factor $\beta$ and the mixing weights into a common parameter vector $\tilde{\alpha} = \beta \alpha$ with $\tilde{\alpha}_i \geq 0$. Further we define the scaled density difference

(9) $\Delta_i(\tilde{\alpha}) = P_{y_i} \sum_{j \in I_{y_i}} \tilde{\alpha}_j K(x_i, x_j) - \max_{k \neq y_i} P_k \sum_{j \in I_k} \tilde{\alpha}_j K(x_i, x_j),$

so that we can write the empirical contrast $\hat{C}$, i.e. the sample average over the $N$ training examples, as

(10) $\hat{C}(\tilde{\alpha}, \gamma) = \frac{1}{N} \sum_{i=1}^N \left[ \gamma_i \min\left\{ 1,\, \Delta_i(\tilde{\alpha}) \right\} - (1 - \gamma_i) \right],$

where the assignment variables $\gamma_i \in \{0, 1\}$ realize the maximum function in (4). With fixed assignment variables $\gamma_i$, $\hat{C}$ is concave, and maximization with respect to $\tilde{\alpha}$ gives rise to a linear optimization problem. On the other hand, for fixed $\tilde{\alpha}$, maximization with respect to the $\gamma_i$ is achieved by setting $\gamma_i = 0$ for negative terms. This suggests a sequential linear optimization strategy for overall maximization of the contrast, which shall be introduced in detail in the following section. Since we have already incorporated
$\beta$ as a scaling factor into the parameter vector $\tilde{\alpha}$, the scale factor is now identified with the norm $\|\tilde{\alpha}\|$. Therefore the scale factor can be adjusted implicitly by a regularization term which penalizes some suitable norm of the $\tilde{\alpha}$. Thus a suitable objective function can be defined by

(11) $\hat{C}_\lambda(\tilde{\alpha}, \gamma) = \hat{C}(\tilde{\alpha}, \gamma) - \lambda\, \Omega(\tilde{\alpha}),$

with $\lambda$ determining the weight of the penalty $\Omega$, i.e. the degree of regularization. We now consider several instances of the case where the penalty corresponds to some norm of $\tilde{\alpha}$. With the 1-norm, for $\lambda \to \infty$ the probability mass of the discriminative densities is concentrated on those two kernel functions which yield the highest average density difference. Although that property forces the sparsest solution for large enough $\lambda$, clearly, that solution isn't Bayes-consistent in general because, as pointed out in Sec. 2, for $\lambda \to \infty$ all probability mass of the discriminative densities is concentrated at the two points with maximum average density difference. Conversely, taking the 2-norm, which resembles the standard SVM regularizer [10], yields the KDE with equal mixing weights for $\lambda \to \infty$. Indeed, it is easy to see that all $p$-norm penalties with $p \geq 2$ share this convenient property, which guarantees "plug-in" Bayes consistency in the case where the solution is totally determined by the regularizer. In that case kernel density estimators are achieved as the "default" solution. Therefore we chose a combination of the 1-norm with the maximum-norm,

(12) $\Omega(\tilde{\alpha}) = N \max_i \tilde{\alpha}_i - \sum_i \tilde{\alpha}_i,$

which is easily incorporated into a linear program, as to be shown in the following. For that kind of penalty, in the limiting case $\lambda \to \infty$ we achieve an equal distribution of the weights, which corresponds to the kernel density estimator (KDE) solution. In that way we have a nice trade-off between two kinds of Bayes consistency: for increasing $\lambda$ the class-specific densities converge to the KDE with equal mixing weights, whereas for decreasing $\lambda$ the probability mass of the discriminative densities is more and more concentrated near the Bayes-optimal decision boundary. By a suitable choice of the kernel width and the scale of the weights, e.g. via cross-validation, the solution with fastest convergence to the Bayes rule may be selected. With a 1-norm penalty on the weights and on the vector of soft margin slack variables we get the Linear Programming Machine, which requires to minimize

(13) $\|\tilde{\alpha}\|_1 + C \sum_{i=1}^N \xi_i \quad \text{subject to} \quad \Delta_i(\tilde{\alpha}) \geq 1 - \xi_i,\; \xi_i \geq 0,$

with $C > 0$ and with the above constraints on $\tilde{\alpha}$. Dividing the objective by $NC$, subtracting a constant, setting $\lambda = 1/(NC)$ and turning minimization into maximization of the negative objective shows that LPM training corresponds to a special case of MCC training with fixed $\gamma_i = 1$ and a 1-norm regularizer.

3.3 Sequential Linear Programming

Estimation of the mixing weights is now achieved by maximizing the sample contrast with respect to the $\tilde{\alpha}_i$ and the assignment variables $\gamma_i$. This can be achieved by the following iterative optimization scheme:

1. Initialization: $\gamma_i = 1$.
2. Maximization w.r.t. $\tilde{\alpha}$ for fixed $\gamma$: maximize
$\frac{1}{N} \sum_{i=1}^N \gamma_i \left( \Delta_i(\tilde{\alpha}) - \xi_i \right) - \lambda \left( N \alpha_{\max} - \sum_i \tilde{\alpha}_i \right)$

subject to $\xi_i \geq \Delta_i(\tilde{\alpha}) - 1$, $\xi_i \geq 0$, $0 \leq \tilde{\alpha}_i \leq \alpha_{\max}$.

3. Maximization w.r.t. $\gamma$ for fixed $\tilde{\alpha}$:

$\gamma_i = \begin{cases} 1 & \text{if } \Delta_i(\tilde{\alpha}) \geq -1 \\ 0 & \text{otherwise.} \end{cases}$

4. If convergence in contrast, then stop; else proceed with step 2.

Here the $\xi_i$ are slack variables, measuring the part of the density difference $\Delta_i$ which cannot be charged to the objective function. The constraint in the linear program was chosen in order to prevent the trivial solution, which may otherwise appear for larger values of $\lambda$. Since we used unnormalized Gaussian kernel functions, i.e. we excluded all multiplicative density constants, that constraint doesn't exclude any useful solutions for the weights.

4 Experiments

In the following section we consider the task of solving binary classification problems within the MCC framework, using the above SLPM with a Gaussian kernel function. The first experiment illustrates the behaviour of the MCC for different values of the regularization $\lambda$ by means of a simple two-dimensional toy dataset. The second experiment compares the classification performance of the MCC with those of the SVM and the Kernel-Density-Classifier (KDC), which is a special case of the MCC with equal weighting of each kernel function. To this end, we selected four frequently used benchmark datasets from the UCI Machine Learning Repository. The two-dimensional toy dataset consists of 300 data points, sampled from two overlapping isotropic normal distributions. Figure 1 shows the solution of the MCC for two different values of $\lambda$ (only data points with non-zero weights are marked by symbols). In both figures, data points with large mixing weights are located near the decision border. In particular, for small $\lambda$ there are regions of high contrast alongside the decision function (illustrated by isolines). For increasing $\lambda$ the number of data points with non-zero $\tilde{\alpha}_i$ increases. At the same time, one can note a decrease of the difference between the weights. Regions where the contrast attains its bounds are highlighted gray.
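Step 2 of the scheme in Section 3.3 can be cast as a linear program; a sketch with scipy.optimize.linprog follows. Since the original constraint set is only described verbally in this copy, the slack variables, the max-norm auxiliary variable `t`, and the data and $\lambda$ value are all illustrative reconstructions:

```python
import numpy as np
from scipy.optimize import linprog

# Toy binary data: two well-separated clusters.
rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
n = len(y)

K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2)  # Gaussian kernel
Q = K * np.outer(y, y)    # Q @ alpha gives the density difference Delta_i

gamma = np.ones(n)        # assignment variables, all active (step 1)
lam = 0.01                # regularization weight (illustrative value)

# Variables v = [alpha (n), xi (n), t]; linprog minimizes, so negate the
# objective (1/N) sum gamma_i (Delta_i - xi_i) - lam (N t - sum alpha).
c = np.concatenate([-(gamma @ Q) / n - lam * np.ones(n),   # alpha terms
                    gamma / n,                             # slack terms
                    [lam * n]])                            # max-norm t
# Constraints: Delta_i - xi_i <= 1  and  alpha_j - t <= 0
A_ub = np.block([[Q, -np.eye(n), np.zeros((n, 1))],
                 [np.eye(n), np.zeros((n, n)), -np.ones((n, 1))]])
b_ub = np.concatenate([np.ones(n), np.zeros(n)])
bounds = [(0, 10)] * n + [(0, None)] * n + [(0, 10)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
alpha = res.x[:n]
acc = np.mean(np.sign(Q @ alpha) == 1)   # Delta_i > 0 means correct class
print(res.success, acc > 0.9)
```

At the LP optimum the slacks satisfy $\Delta_i - \xi_i = \min(1, \Delta_i)$, which is how the clipped contrast of (10) is realized with linear constraints.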
For small values of $\lambda$, these regions are nearer to the decision border than for large values. This illustrates that for increasing $\lambda$ the quality of the approximation of the loss function decreases. In both figures, several data points are misclassified with a contrast at the lower bound. The MCC identified those data points as outliers and deactivated them during the training (encircled symbols). The second experiment demonstrates the performance of the MCC in comparison with those of a Support Vector Machine, as one of the state-of-the-art binary classifiers, and with the KDC. For this experiment we selected the Pima Indian Diabetes, Breast-Cancer, Heart and Thyroid datasets from the UCI Machine Learning Repository. The Support Vector Machine was trained using the Sequential Minimal Optimization algorithm by J. Platt [7], adjusted according to the modification proposed by S. S. Keerthi [5].

Figure 1: Two MCC solutions for the two-dimensional toy dataset for different values of $\lambda$ (left: $\lambda = 0.2$, right: $\lambda = 4.2$). The symbols depict the positions of data points with non-zero $\tilde{\alpha}_i$. The size of each symbol is scaled according to the value of the corresponding $\tilde{\alpha}_i$. Encircled symbols have been deactivated during the training (symbols for deactivated data points are not scaled according to $\tilde{\alpha}_i$, since in most cases $\tilde{\alpha}_i$ is zero). The absolute value of the contrast is illustrated by the isolines, while the sign of the contrast depicts the binary classification of the classifier. The region corresponding to (6) is colored white and its complement gray. The percentage of data points that define the solution is (left figure) and (right figure) of the dataset.
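The KDC comparator used in the second experiment, i.e. the MCC special case with equal mixing weights, reduces to plain per-class kernel density estimates; a minimal sketch (the data-generating choices below are illustrative):

```python
import numpy as np

def kdc_predict(X_train, y_train, X_test, h=1.0):
    """Kernel-Density-Classifier: argmax over classes of prior * KDE."""
    d2 = ((X_test[:, None] - X_train[None]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * h**2))
    classes = np.unique(y_train)
    scores = []
    for cls in classes:
        mask = y_train == cls
        # equal weights 1/N_k: the plain KDE of eq. (7), times the prior
        scores.append(K[:, mask].mean(axis=1) * mask.mean())
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(7)
Xtr = np.vstack([rng.normal(-1.5, 1, (150, 2)), rng.normal(1.5, 1, (150, 2))])
ytr = np.array([0] * 150 + [1] * 150)
Xte = np.vstack([rng.normal(-1.5, 1, (200, 2)), rng.normal(1.5, 1, (200, 2))])
yte = np.array([0] * 200 + [1] * 200)
print(np.mean(kdc_predict(Xtr, ytr, Xte) == yte) > 0.85)
```

Because every training point carries the same weight, the KDC has no notion of sparseness; the MCC's advantage in the tables below is that it keeps comparable accuracy with far fewer active kernels.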
The experimental setup was comparable with that in [8]: after normalization to zero mean and unit standard deviation, each dataset was divided 100 times into different pairs of disjoint training and test sets with a fixed ratio (provided by G. Rätsch at http://ida.first.gmd.de/~raetsch/data/benchmarks.htm). Since we used the Gaussian kernel function for all classifiers, all three algorithms are parametrized by the kernel bandwidth. Additionally, for the SVM and the MCC the regularization value had to be chosen. The optimal parametrization was chosen by estimating the generalization performance for different values of bandwidth and regularization by means of the average test error on the first five dataset partitions. More precisely, a first coarse scan was performed, followed by a fine scan in the interval near the optimal values of the first one. Each scan considered 1600 different combinations of bandwidth and regularization value. For parameter pairs with identical test error, the pair producing the sparsest solution was kept. Finally, the reported values in Tab. 1 and Tab. 2 are averaged over all 100 dataset partitions. Table 1 shows the optimal parametrization of the MCC in combination with the classification rate and the sparseness of the solution (measured as the percentage of non-zero weights). Additionally, the corresponding values after the first MCC iteration are given in brackets. The last two columns show the absolute number of iterations and the final number of deactivated examples. For all four datasets the MCC is able to find a sparse solution. In particular, for the Heart, Breast-Cancer and Diabetes datasets the solution of the MCC is significantly sparser than that of the SVM (see Tab. 2). Nevertheless, Tab. 2 indicates that the classification rates of the MCC are competitive with those of the SVM.

5 Conclusion

The MCC approach provides an understanding of SVMs / LPMs in terms of generative modelling using discriminative densities.
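The coarse-then-fine parameter scan used in the experiments above can be sketched as follows. This is a minimal sketch: the `score` callback is a hypothetical stand-in for the average test error on the first five partitions, and the grid bounds are illustrative; a 40 × 40 grid gives the 1600 combinations per scan mentioned above.

```python
import itertools

def grid_search(score, bw_grid, reg_grid):
    # return the (bandwidth, regularization) pair with the smallest score
    return min(itertools.product(bw_grid, reg_grid), key=lambda p: score(*p))

def logspace(a, b, k):
    # k points spaced geometrically between a and b (endpoints included)
    return [a * (b / a) ** (i / (k - 1)) for i in range(k)]

def coarse_to_fine(score, lo=0.1, hi=10.0, n=40):
    # coarse scan over an n x n log-spaced grid ...
    bw, reg = grid_search(score, logspace(lo, hi, n), logspace(lo, hi, n))
    # ... followed by a fine scan in a window around the coarse optimum
    return grid_search(score, logspace(bw / 2, bw * 2, n),
                       logspace(reg / 2, reg * 2, n))
```

Ties on test error would additionally be broken in favor of the sparser solution, as described above; that tie-breaking is omitted here for brevity.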
While usual unsupervised density estimation schemes try to minimize some distance criterion (e.g. the Kullback-Leibler divergence) between the models and the true densities, MC-estimation aims at learning densities which represent the differences of the underlying distributions in a way that is optimal for classification. Future work will address the investigation of the general multiclass performance and the capability to cope with mislabeled data.

Table 1: Optimal parametrization, classification rate, percentage of non-zero weights, number of iterations of the MCC, and number of deactivated examples. The results are averaged over all 100 dataset partitions. For the classification rate and the percentage of non-zero coefficients, the corresponding value after the first MCC iteration is given in brackets.

Dataset       | Parametrization | Classif. rate | Non-zero (%) | Iter. | Deactivated
Breast-Cancer | 1.38, 12.17     | 74.3 (74.4)   | 13.6 (13.8)  | 2.23  | 2.6
Heart         | 2.69, 2.066     | 84.3 (84.1)   | 20.4 (21.2)  | 3.10  | 6.4
Thyroid       | 0.49            | 95.5 (95.5)   | 46.1 (46.1)  | 1.00  | 0.0
Diabetes      | 4.52, 2.624     | 76.6 (76.5)   | 5.3 (5.5)    | 5.86  | 40.7

Table 2: Summary of the performance of the KDC, SVM and MCC for the four benchmark datasets. Given are the classification rates with the percentage of non-zero weights (in brackets). Note that our results for the SVM are slightly better than those reported in [8]. One reason could be the coarse parameter selection for the SVM, as already mentioned by the authors.

Dataset       | KDC        | SVM         | MCC
Breast-Cancer | 73.1 (100) | 74.5 (58.5) | 74.3 (13.6)
Heart         | 84.1 (100) | 84.4 (60.9) | 84.3 (20.4)
Thyroid       | 95.6 (100) | 95.7 (15.8) | 95.5 (46.1)
Diabetes      | 74.2 (100) | 76.7 (53.6) | 76.6 (5.3)

References

[1] C. M. Bishop. Neural Networks for Pattern Recognition. Clarendon Press, Oxford, 1995. [2] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995. [3] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, New York, 1973. [4] T. Graepel, R. Herbrich, B. Schölkopf, A. Smola, P. Bartlett, K.-R. Müller, K. Obermayer, and B. Williamson. Classification on proximity data with LP-machines, 1999. [5] S. S. Keerthi, S. K. Shevade, C. Bhattacharyya, and K. R. K. Murthy. Improvements to Platt's SMO algorithm for SVM classifier design. Technical report, Dept. of CSA, IISc, Bangalore, India, 1999. [6] P. Meinicke, T. Twellmann, and H. Ritter. Maximum contrast classifiers. In Proc. of the Int. Conf. on Artificial Neural Networks, Berlin, 2002. Springer. In press. [7] J. Platt.
Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods — Support Vector Learning, pages 185–208, Cambridge, MA, 1999. MIT Press. [8] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Department of Computer Science, Royal Holloway, University of London, Egham, UK, August 1998. Submitted to Machine Learning. [9] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, 1996. [10] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002. [11] D. W. Scott. Multivariate Density Estimation. Wiley, 1992. [12] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior Patrik O. Hoyer and Aapo Hyvärinen Neural Networks Research Centre Helsinki University of Technology P.O. Box 9800, FIN-02015 HUT, Finland http://www.cis.hut.fi/phoyer/ patrik.hoyer@hut.fi Abstract The responses of cortical sensory neurons are notoriously variable, with the number of spikes evoked by identical stimuli varying significantly from trial to trial. This variability is most often interpreted as 'noise', purely detrimental to the sensory system. In this paper, we propose an alternative view in which the variability is related to the uncertainty, about world parameters, which is inherent in the sensory stimulus. Specifically, the responses of a population of neurons are interpreted as stochastic samples from the posterior distribution in a latent variable model. In addition to giving theoretical arguments supporting such a representational scheme, we provide simulations suggesting how some aspects of response variability might be understood in this framework. 1 Introduction During the past half century, a wealth of data has been collected on the response properties of cortical sensory neurons. The majority of this research has focused on how the mean firing rates of individual neurons depend on the sensory stimulus. Similarly, mathematical models have mainly focused on describing how the mean firing rate could be computed from the input. One aspect which this research does not address is the high variability of cortical neural responses. The trial-to-trial variation in responses to identical stimuli is significant [1, 2], and several trials are typically required to get an adequate estimate of the mean firing rate. The standard interpretation is that this variability reflects 'noise' which limits the accuracy of the sensory system [2, 3]. In the standard model, the firing rate is given by

rate = f(stimulus) + noise   (1)

where f is the 'tuning function' of the cell in question.
Here, the magnitude of the noise may depend on the stimulus. Experimental results [1, 2] seem to suggest that the amount of variability depends only on the mean firing rate, i.e. on f(stimulus), and not on the particular stimulus that evoked it. (Current address: 4 Washington Place, Rm 809, New York, NY 10003, USA.) Specifically, spike count variances tend to grow in proportion to spike count means [1, 2]. This has been taken as evidence for something like a Poisson process for neural firing. This standard view is not completely satisfactory. First, the exquisite sensitivity and the reliability of many peripheral neurons (see, e.g. [3]) show that neurons in themselves need not be very unreliable. In vitro experiments [4] also suggest that the large variability does not have its origin in the neurons themselves, but is a property of intact cortical circuits. One is thus tempted to point at synaptic 'background' activity as the culprit, attributing the variability of individual neurons to variable inputs. This seems reasonable, but it is not quite clear why such modulation of firing should be considered meaningless noise rather than reflecting complex neural computations. Second, the above model does a poor job of explaining neural responses in the phenomenon known as 'visual competition': When viewing ambiguous (bistable) figures, perception, and the responses of many neurons with it, oscillates between two distinct states (for a review, see [5]). In other words, a single stimulus can yield two very different firing rates in a single neuron depending on how the stimulus is interpreted. In the above model, this means that either (a) the noise term needs to have a bimodal distribution, or (b) we are forced to accept the fact that neurons can be tuned to stimulus interpretations, rather than stimuli themselves. The former solution is clearly unattractive.
The latter seems sensible, but we have then simply transformed the problem of oscillating firing rates into a problem of oscillating interpretations: Why should there be variability (over time, and over trials) in the interpretation of a stimulus? What would be highly desirable is a theoretical framework in which the variability of responses could be shown to have a specific purpose. One suggestion [6] is that variability could improve the signal to noise ratio through a phenomenon known as ‘stochastic resonance’. Another recent suggestion is that variability contributes to the contrast invariance of visual neurons [7]. In this paper, we will propose an alternative explanation for the variability of neural responses. This hypothesis attempts to account for both aspects of variability described above: the Poisson-like ‘noise’ and the oscillatory responses to ambiguous stimuli. Our suggestion is based on the idea that cortical circuits implement Bayesian inference in latent variable models [8, 9, 10]. Specifically, we propose that neural firing rates might be viewed as representing Monte Carlo samples from the posterior distribution over the latent variables, given the observed input. In this view, the response variability is related to the uncertainty, about world parameters, which is inherent in any stimulus. This representation would allow not only the coding of parameter values but also of their uncertainties. The latter could be accomplished by pooling responses over time, or over a population of redundant cells. Our proposal has a direct connection to Monte Carlo methods widely used in engineering. These methods use built-in randomness to solve difficult problems that cannot be solved analytically. In particular, such methods are one of the main options for performing approximate inference in Bayesian networks [11]. 
With that in mind, it is perhaps even a bit surprising that Monte Carlo sampling has not, to our knowledge, previously been suggested as an explanation for the randomness of neural responses. Although the approach proposed is not specific to sensory modality, we will here, for concreteness, exclusively concentrate on vision. We shall start by, in the next section, reviewing the basic probabilistic approach to vision. Then we will move on to further explain the proposal of this contribution. 2 The latent variable approach to vision 2.1 Bayesian models of high-level vision Recently, a growing number of researchers have argued for a probabilistic approach to vision, in which the functioning of the visual system is interpreted as performing Bayesian inference in latent variable models, see e.g. [8, 9, 10]. The basic idea is that the visual input is seen as the observed data in a probabilistic generative model. The goal of vision is to estimate the latent (i.e. unobserved or hidden) variables that caused the given sensory stimulation. In this framework, there are a number of world parameters that contribute to the observed data. These could be, for example, object identities, dimensions and locations, surface properties, lighting direction, and so forth. These parameters are not directly available to the sensory system, but must be estimated from the effects that they have on the images projected onto the retinas. Collecting all the unknown world variables into the vector w and all sensory data into the vector d, the probability that a given set of world parameters caused a given sensory stimulus is

P(w | d) = P(d | w) P(w) / P(d)   (2)

where P(w) is the prior probability of the set of world parameters w, and P(d | w) describes how sensory data is generated from the world parameters. The distribution P(w | d) is known as the posterior distribution. A specific perceptual task then consists of estimating some subset of the world variables, given the observed data [10].
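As a concrete miniature of the Bayes computation in equation (2), a discrete posterior over candidate world states can be computed directly from a prior and a likelihood table. The world states and probability values below are purely hypothetical illustrations:

```python
def posterior(prior, likelihood, data):
    # prior: {w: P(w)}; likelihood: {(w, d): P(d | w)}
    unnorm = {w: likelihood[(w, data)] * p for w, p in prior.items()}
    evidence = sum(unnorm.values())  # P(d), the normalizing constant
    return {w: v / evidence for w, v in unnorm.items()}

# two hypothetical world states explaining a binary observation
prior = {"object_A": 0.5, "object_B": 0.5}
likelihood = {("object_A", "edge"): 0.9, ("object_B", "edge"): 0.3,
              ("object_A", "blank"): 0.1, ("object_B", "blank"): 0.7}
post = posterior(prior, likelihood, "edge")
```

Marginal posteriors, as used below for estimating identity while ignoring viewpoint and lighting, would simply sum this joint posterior over the nuisance variables.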
In face recognition, for example, one wants to know the identity of a person but one does not care about the specific viewpoint or the direction of lighting. Note, however, that sometimes one might specifically want to estimate viewpoint or lighting, disregarding identity, so one cannot just automatically throw out that information [10]. In a latent variable model, all relevant information is contained in the complete posterior distribution P(identity, viewpoint, lighting | sensory data). To estimate the identity one must use the marginal posterior P(identity | sensory data), obtained by integrating out the viewpoint and lighting variables. Bayesian models of high-level vision model the visual system as performing these types of computations, but typically do not specify how they might be neurally implemented. 2.2 Neural network models of low-level vision This probabilistic approach has not only been suggested as an abstract framework for vision, but in fact also as a model for interpreting actual neural firing patterns in the early visual cortex [12, 13]. In this line of research, the hypothesis is that the activity of individual neurons can be associated with hidden state variables, and that the neural circuitry implements probabilistic inference. The model of Olshausen and Field [12], known as sparse coding or independent component analysis (ICA) [14], depending on the viewpoint taken, is perhaps the most influential latent variable model of early visual processing to date. The hidden variables sᵢ are independent and sparse, such as is given, for instance, by the double-sided exponential distribution p(sᵢ) ∝ exp(−|sᵢ|). The observed data vector x is then given by a linear combination of the sᵢ, plus additive isotropic Gaussian noise. That is, x = As + n, where A is a matrix of model parameters (weights), and n is Gaussian with zero mean and covariance matrix σ²I. (Here, it must be stressed that in these low-level neural network models, the hidden variables that the neurons represent are not what we would typically consider to be the 'causal' variables of a visual scene. Rather, they are low-level visual features similar to the optimal stimuli of neurons in the early visual cortex. The belief is that more complex hierarchical models will eventually change this.) How does this abstract probabilistic model relate to neural processing? Olshausen and Field showed that when the model parameters are estimated (learned) from natural image data, the basis vectors (columns of A) come to resemble V1 simple cell receptive fields. Moreover, the latent variables sᵢ relate to the activities of the corresponding cells. Specifically, Olshausen and Field suggested [12] that the firing rates of the neurons correspond to the maximum a posteriori (MAP) estimate of the latent variables, given the image input:
ŝ(x) = arg maxₛ p(s | x). An important problem with this kind of MAP representation is that it attempts to represent a complex posterior distribution using only a single point (at the maximum). Such a representation cannot adequately represent multimodal posterior distributions, nor does it provide any way of coding the uncertainty of the value (the width of the peak). Many other proposed neural representations of probabilities face similar problems [11] (however, see [15] for a recent interesting approach to representing distributions). Indeed, it has been said [10, 16] that how probabilities actually are represented in the brain is one of the most important unanswered questions in the probabilistic approach to perception. In the next section we suggest an answer based on the idea that probability distributions might be represented using response variability. 3 Neural responses as samples from the posterior distribution? As discussed in the previous section, the distribution of primary interest to a sensory system is the posterior distribution over world parameters. In all but absolutely trivial models, computing and representing such a distribution requires approximate methods, of which one major option is Monte Carlo methods. These generate stochastic samples from a given distribution, without explicitly calculating it, and such samples can then be used to approximately represent or perform computations on that distribution [11]. Could the brain use a Monte Carlo approach to perform Bayesian inference? If neural firing rates are used (even indirectly) to represent continuous-valued latent variables, one possibility would be for firing rate variability to represent a probability distribution over these variables. Here, there are two main possibilities: (a) Variability over time. A single neuron could represent a continuous distribution if its firing rate fluctuated over time in accordance with the distribution to be represented.
At each instant in time, the instantaneous firing rate would be a random sample from the distribution to be represented. (b) Variability over neurons. A distribution could be instantaneously represented if the firing rate of each neuron in a pool of identical cells was independently and randomly drawn from the distribution to be represented. Note that these are not exclusive; both types of variability could potentially coexist. Also note that both cases lead to trial-to-trial variability, as all samples are assumed independent. Both possibilities have their advantages. The first option is much more efficient in terms of the number of cells required, which is particularly important for representing high-dimensional distributions. In this case, dependencies between variables can naturally be represented as temporal correlations between neurons representing different parameters. This is not nearly as straightforward for case (b). On the other hand, in terms of processing speed, this latter option is clearly preferred to the former. Any decisions should optimally be based on the whole posterior distribution, and in case (a) this would require collecting samples over an extended period of time.

Figure 1: Variance of response versus mean response, on log-log axes, for 4 representative model neurons. Each dot gives the mean (horizontal axis) and variance (vertical axis) of the response of the model neuron in question to one particular stimulus. Note that the scale of responses is completely arbitrary.

We will now explain how both aspects of response variability described in the introduction can be understood in this framework. First, we will show how a simple mean-variance relationship can arise through sampling in the independent component analysis model. Then, we will consider how the variability associated with the phenomenon of visual competition can be interpreted using sampling.
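The analysis behind Figure 1 — per-stimulus response means and variances computed from posterior samples, followed by a power-law fit var = a · meanᵇ in log-log coordinates — can be sketched as follows. This is illustrative code, not the authors' implementation, and the synthetic data at the end is a hypothetical stand-in for real posterior samples:

```python
import math
import random
import statistics

def mean_var_powerlaw(samples_per_stimulus):
    # samples_per_stimulus: one list of sampled responses per stimulus;
    # fit log(var) = log(a) + b * log(mean) by ordinary least squares
    pts = [(statistics.mean(s), statistics.pvariance(s)) for s in samples_per_stimulus]
    xs = [math.log(m) for m, _ in pts]
    ys = [math.log(v) for _, v in pts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# synthetic check: responses whose variance grows as 2 * mean^1.2
rng = random.Random(1)
data = [[rng.gauss(m, math.sqrt(2.0 * m ** 1.2)) for _ in range(20000)]
        for m in range(2, 21, 2)]
a, b = mean_var_powerlaw(data)
```

The recovered (a, b) pair approximates the generating values, which is the same diagnostic that the fits in Section 3.1 apply to the sampled model neurons.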
3.1 Example 1: Posterior sampling in ICA Here, we sample the posterior distribution in the ICA model of natural images, and show how this might relate to the conspicuous variance-mean relation of neural response variability. First, we used standard ICA methods [17] to estimate a complete basis for the 40-dimensional principal subspace of natural image patches. Motivated by the non-negativity of neural firing rates, we modified the model to assume single-sided exponential priors
p(sᵢ) ∝ exp(−sᵢ) for sᵢ ≥ 0 [18], and augmented the basis so that a pair of neurons coded separately for the positive and negative parts of each original independent component. We then took 50 random natural image patches and sampled the posterior distributions p(s | x) for all 50 patches, taking a total of 1000 samples in each case.2 From the 1000 collected samples, we calculated the mean and variance of the response of each neuron to each stimulus separately. We then plotted the variance against the mean independently for each neuron in log-log coordinates. Figure 1 shows the plots from 4 randomly selected neurons. The crucial thing to note is that, as for real neurons [1], the variance of the response is systematically related to the mean response, and does not seem to depend on the particular stimulus used to elicit a given mean response. This feature of neural variability is perhaps the single most important reason to believe that the variability is meaningless noise inherent in neural firing; yet we have shown that something like this might arise through sampling in a simple probabilistic model. Following [1, 2], we fitted lines to the plots, modeling the variance as var = a · meanᵇ. Over the whole population (80 model neurons), the mean values of a and b were
and , with population standard deviations and (respectively). Although these values do not actually match those obtained from physiology (most reports give values of a between 1 and 2, and b close to 1; see [1, 2]), this is to be expected. First, the values of these parameters probably depend on the specifics of the ICA model, such as its dimensionality and the noise level; we did not optimize these to attempt to fit physiology. Second, and more importantly, we do not believe that ICA is an exact model of V1 function. Rather, the visual cortex would be expected to employ a much more complicated, hierarchical image model. (2: This was accomplished using a Markov Chain Monte Carlo method, as described in the Appendix. However, the technical details of this method are not very relevant to this argument.) Thus, our main goal was not to show that the particular parameters of the variance-mean relation could be explained in this framework, but rather the surprising fact that such a simple relation might arise as a result of posterior sampling in a latent variable model. 3.2 Example 2: Visual competition as sampling As described in the introduction, in addition to the mean-variance relationship observed throughout the visual cortex, a second sort of variability is that observed in visual competition. This phenomenon arises when viewing a bistable figure, such as the famous Necker cube or Rubin's vase/face figure. These figures each have two interpretations (explanations) that cannot both reasonably explain the image simultaneously. In a latent variable image model, this corresponds to the case of a bimodal posterior distribution. When such figures are viewed, the perception oscillates between the two interpretations (for a review of this phenomenon, see [5]). This corresponds to jumping from mode to mode in the posterior distribution. This can directly be interpreted as sampling of the posterior.
When the stimulus is modified so that one interpretation is slightly more natural than the other, the former is dominant for a relatively longer period compared with the latter (again, see [5]), just as proper sampling takes relatively more samples from the mode which has larger probability mass. Although the above might be considered purely 'perceptual' sampling, animal studies indicate that especially in higher-level visual areas many neurons modulate their responses in sync with the animal's perceptions [5, 19]. This link proves that some form of sampling is clearly taking place on the level of neural firing rates as well. Note that this phenomenon might be considered evidence for sampling scheme (a) and against (b). If we could instantaneously represent whole distributions, we should be able to keep both interpretations in mind simultaneously. This is in fact (weak) evidence against any scheme of representing whole distributions instantaneously, by the same logic. 4 Conclusions One of the key unanswered questions in theoretical neuroscience seems to be: How are probabilities represented by the brain? In this paper, we have proposed that probability distributions might be represented using response variability. If true, this would also present a functional explanation for the significant amount of cortical neural 'noise' observed. Although it is clear that the variability degrades performance on many laboratory perceptual tasks, it might well be that it plays an important function in everyday sensory tasks. Our proposal would be one possible way in which it might do so. Do actual neurons employ such a computational scheme? Although our arguments and simulations suggest that it might be possible (and should be kept in mind), future research will be needed to answer that question.
As we see it, key experiments would compare measured firing rate variability statistics (single unit variances, or perhaps two-unit covariances) to those predicted by latent variable models. Of particular interest are cases where contextual information reduces the uncertainty inherent in a given stimulus; our hypothesis predicts that in such cases neural variability is also reduced. A final question concerns how neurons might actually implement Monte Carlo sampling in practice. Because neurons cannot have global access to the activity of all other neurons in the population, the only possibility seems to be something akin to Gibbs sampling [20]. Such a scheme might require only relatively local information and could thus conceivably be implemented in actual neural networks. Acknowledgements — Thanks to Paul Hoyer, Jarmo Hurri, Bruno Olshausen, Liam Paninski, Phil Sallee, Eero Simoncelli, and Harri Valpola for discussions and comments. References [1] A. F. Dean. The variability of discharge of simple cells in the cat striate cortex. Experimental Brain Research, 44:437–440, 1981. [2] D. J. Tolhurst, J. A. Movshon, and A. F. Dean. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23:775–785, 1983. [3] A. J. Parker and W. T. Newsome. Sense and the single neuron: Probing the physiology of perception. Annual Review of Neuroscience, 21:227–277, 1998. [4] G. R. Holt, W. R. Softky, C. Koch, and R. J. Douglas. Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. Journal of Neurophysiology, 75:1806–1814, 1996. [5] R. Blake and N. K. Logothetis. Visual competition. Nature Reviews Neuroscience, 3:13–21, 2002. [6] M. Rudolph and A. Destexhe. Do neocortical pyramidal neurons display stochastic resonance? Journal of Computational Neuroscience, 11:19–42, 2001. [7] J. S. Anderson, I. Lampl, D. C. Gillespie, and D. Ferster. 
The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science, 290:1968–1972, 2000. [8] D. C. Knill and W. Richards, editors. Perception as Bayesian Inference. Cambridge University Press, 1996. [9] R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, editors. Probabilistic Models of the Brain. MIT Press, 2002. [10] D. Kersten and P. Schrater. Pattern inference theory: A probabilistic approach to vision. In R. Mausfeld and D. Heyer, editors, Perception and the Physical World. Wiley & Sons, 2002. [11] P. Dayan. Recognition in hierarchical models. In F. Cucker and M. Shub, editors, Foundations of Computational Mathematics. Springer, Berlin, Germany, 1997. [12] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311–3325, 1997. [13] R. P. N. Rao and D. H. Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive field effects. Nature Neuroscience, 2(1):79–87, 1999. [14] A. J. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37:3327–3338, 1997. [15] R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10(2):403–430, 1998. [16] H. B. Barlow. Redundancy reduction revisited. Network: Computation in Neural Systems, 12:241–253, 2001. [17] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. on Neural Networks, 10(3):626–634, 1999. [18] P. O. Hoyer. Modeling receptive fields with non-negative sparse coding. In E. De Schutter, editor, Computational Neuroscience: Trends in Research 2003. Elsevier, Amsterdam, 2003. In press. [19] N. K. Logothetis and J. D. Schall. Neuronal correlates of subjective visual perception. Science, 245:761–763, 1989. [20] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.

Appendix: MCMC sampling of the non-negative ICA posterior

The posterior probability of s, upon observing x, is given by

p(s | x) ∝ p(x | s) p(s)   (3)

Taking the (natural) logarithm yields

log p(s | x) = −(1/(2σ²)) ‖x − As‖² − 1ᵀs + const   (4)

where 1 is a vector of all ones. The crucial thing to note is that this function is quadratic in s. Thus, the posterior distribution has the form of a gaussian, except that of course it is only defined for non-negative s. Rejection sampling might look tempting, but unfortunately does not work well in high dimensions. Thus, we will instead opt for a Markov Chain Monte Carlo approach. Implementing Gibbs sampling [20] is quite straightforward. The posterior distribution of
sᵢ, given x and all the other hidden variables sⱼ (j ≠ i), is a one-dimensional density that we will call cut-gaussian:

p(sᵢ) ∝ exp(−(sᵢ − μ)² / (2σ̃²)) if sᵢ ≥ 0, and p(sᵢ) = 0 if sᵢ < 0.   (5)

In this case, we have the following parameter values:

μ = (aᵢᵀ(x − As̃) − σ²) / ‖aᵢ‖²  and  σ̃² = σ² / ‖aᵢ‖²   (6)

Here, aᵢ denotes the i:th column of A, and
s̃ denotes the current state vector but with sᵢ set to zero. Sampling from such a one-dimensional distribution is relatively simple. Just as one can sample the corresponding (uncut) gaussian by taking uniformly distributed samples on the interval (0, 1) and passing them through the inverse of the gaussian cumulative distribution function, the same can be done for a cut-gaussian distribution by constraining the uniform sampling interval suitably. Hence Gibbs sampling is feasible, but, as is well known, Gibbs sampling exhibits problems when there are significant correlations between the sampled variables. Thus we choose to use a sampling scheme based on a rotated co-ordinate system. The basic idea is to update the state vector not in the directions of the component axes, as in standard Gibbs sampling, but rather in the directions of the eigenvectors of AᵀA. Thus we start by calculating these eigenvectors, and cycle through them one at a time. Denoting the current unit-length eigenvector to be updated v, we have, as a function of the step length α,

log p(s + αv | x) = const + (α/σ²) vᵀAᵀ(x − As) − (α²/(2σ²)) vᵀAᵀAv − α 1ᵀv   (7)

Again, note how this is a quadratic function of α. Again, the non-negativity constraints on s require us to sample a cut-gaussian distribution. But this time there is an additional complication: When the basis is overcomplete, some of the eigenvectors will be associated with zero eigenvalues, and the logarithmic probability will be linear instead of quadratic. Thus, in such a case we must sample a cut-exponential distribution,

p(α) ∝ exp(βα) if α_min ≤ α ≤ α_max, and p(α) = 0 otherwise,   (8)

where the interval [α_min, α_max] is determined by the non-negativity constraints on s + αv. Like in the cut-gaussian case, this can be done by uniformly sampling the corresponding interval and then applying the inverse of the exponential cumulative distribution function. In summary: We start by calculating the eigensystem of the matrix AᵀA, and set the state vector to random non-negative values. Then we cycle through the eigenvectors indefinitely, sampling from cut-gaussian or cut-exponential distributions depending on the eigenvalue corresponding to the current eigenvector v, and updating the state vector to s + αv. MATLAB code performing and verifying this sampling is available at: http://www.cis.hut.fi/phoyer/code/samplingpack.tar.gz
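The inverse-CDF construction for the cut-gaussian can be sketched in a few lines; this is a standalone illustration using Python's `statistics.NormalDist`, not the authors' MATLAB package:

```python
import random
from statistics import NormalDist

def sample_cut_gaussian(mu, sigma, n, rng=None):
    """Draw n samples from N(mu, sigma^2) restricted to [0, inf) by passing
    uniform draws through the inverse Gaussian CDF, with the uniform
    interval constrained to the probability mass above the cut at zero."""
    rng = rng or random.Random(0)
    nd = NormalDist(mu, sigma)
    lo = nd.cdf(0.0)  # CDF value at the cut point s = 0
    return [nd.inv_cdf(lo + rng.random() * (1.0 - lo)) for _ in range(n)]
```

The same interval-constraining trick handles the cut-exponential case, with the exponential CDF and its inverse in place of the Gaussian ones.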
|
2002
|
135
|
2,142
|
Combining Features for BCI Guido Dornhege1∗, Benjamin Blankertz1, Gabriel Curio2, Klaus-Robert Müller1,3 1Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany 2Neurophysics Group, Dept. of Neurology, Klinikum Benjamin Franklin, Freie Universität Berlin, Hindenburgdamm 30, 12203 Berlin, Germany 3University of Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany
Abstract Recently, interest has been growing in developing an effective communication interface connecting the human brain to a computer, the ’Brain-Computer Interface’ (BCI). One motivation of BCI research is to provide a new communication channel substituting normal motor output in patients with severe neuromuscular disabilities. In the last decade, various neurophysiological cortical processes, such as slow potential shifts, movement related potentials (MRPs) or event-related desynchronization (ERD) of spontaneous EEG rhythms, were shown to be suitable for BCI, and, consequently, different independent approaches to extracting BCI-relevant EEG-features for single-trial analysis are under investigation. Here, we present and systematically compare several concepts for combining such EEG-features to improve the single-trial classification. Feature combinations are evaluated on movement imagination experiments with 3 subjects where EEG-features are based on either MRPs or ERD, or both. Those combination methods that incorporate the assumption that the single EEG-features are physiologically mutually independent outperform the plain method of ’adding’ evidence, where the single-feature vectors are simply concatenated. These results strengthen the hypothesis that MRP and ERD reflect at least partially independent aspects of cortical processes and open a new perspective to boost BCI effectiveness. 1 Introduction A brain-computer interface (BCI) is a system which translates a subject’s intentions into a control signal for a device, e.g., a computer application, a wheelchair or a neuroprosthesis, cf. [1]. When measuring non-invasively, brain activity is acquired by scalp-recorded electroencephalogram (EEG) from a subject who tries to convey his or her intentions by behaving according to well-defined paradigms, e.g., motor imagery, specific mental tasks, or feedback control.
∗To whom correspondence should be addressed. ’Features’ (or feature vectors) are extracted from the digitized EEG-signals by signal processing methods. These features are translated into a control signal, either (1) by simple equations or threshold criteria (with only a few free parameters that are estimated on training data), or (2) by machine learning algorithms that learn a more complex decision function on the training data, e.g., linear discriminant analysis (LDA), support vector machines (SVMs), or artificial neural networks (ANNs). Concerning the pivotal step of feature extraction, neurophysiological a priori knowledge can help decide which EEG-feature can be expected to hold the most discriminative information for the chosen paradigm. For some behavioral paradigms even several EEG-features might be usable, stimulating a discussion of how to combine different features. Investigations in this direction were announced, e.g., in [2, 3], but no publications on that topic followed. Here, we present several methods for combining features to enhance single-trial EEG classification for BCI. A special focus was placed on the question of how to incorporate a priori knowledge about feature independence. Recently this approach proved to be most effective in an open internet-based classification competition: it turned out to be the winning entry of the NIPS BCI competition 2001, dataset 2.
Neurophysiological background for single-feature EEG-paradigms. Three approaches are characteristic for the majority of single-feature BCI paradigms. (1) Based on slow cortical potentials, the Tübinger Thought Translation Device (TTD) [4] translates low-pass filtered brain activity from a central scalp position into a vertical cursor movement on a computer screen. This enables subjects to learn self-regulation of electrocortical positivity or negativity. After some training, patients can generate binary decisions at a pace of 4 seconds with accuracies of up to 85 % and thereby handle a word processor or an internet browser. (2) The Albany BCI system [2] allows the user to steer a cursor by oscillatory brain activity into one of two or four possible target areas on a computer screen. In the first training sessions most subjects use some kind of motor imagery, which is replaced by adapted strategies during further feedback sessions. Well-trained users achieve hit rates of over 90 % in the two-target setup. Each selection typically takes 4 to 5 seconds. And (3), the Graz BCI system [5] is based on event-related modulations of the pericentral µ- and/or β-rhythms of sensorimotor cortices, with a focus on motor preparation and imagination. Feature vectors calculated from spontaneous EEG signals by adaptive autoregressive modelling are used to train a classifier. In a ternary classification task, accuracies of over 96 % were obtained in an offline study with a trial duration of 8 seconds. Neurophysiological background for combining single EEG-features. Most gain from a combination of different features is expected when the single features provide complementary information for the classification task.
In the case of movement related potentials (MRPs) or event-related desynchronization (ERD) of EEG rhythms, recent evidence [6] supports the hypothesis that MRPs and ERD of the pericentral alpha rhythm reflect different aspects of sensorimotor cortical processes and could provide complementary information on brain activity accompanying finger movements, as they show different spatiotemporal activation patterns, e.g., in primary (sensori-)motor cortex (M-1), supplementary motor area (SMA) and posterior parietal cortex (PP). This hypothesis is backed by invasive recordings [7] supporting the idea that ERD and MRPs represent different aspects of motor cortex activation with varying generation mechanisms: EEG was recorded during brisk, self-paced finger and foot movements, subdurally in 3 patients and scalp-recorded in normal subjects. MRPs started over wide areas of the sensorimotor cortices (Bereitschaftspotential) and focalized at the contralateral M-1 hand cortex with a steep negative slope prior to finger movement onset, reaching a negative peak approximately 100 ms after EMG onset (motor potential). In contrast, a bilateral M-1 ERD just prior to movement onset appeared to reflect a more widespread cortical ’alerting’ function. Most importantly, the ERD response magnitude did not have a significant correlation with the amplitude of the negative MRP slope. Note that these studies analyze movement preparation and execution only. We presume a similar independence of MRP and ERD phenomena for imagined movements. This hypothesis is confirmed by our results, see section 3. Apart from exploiting complementary information on cortical processes, combining MRP and ERD based features might give the benefit of being more robust against artifacts from non-central-nervous-system (non-CNS) activity such as eye movements (EOG) or muscular artifacts (EMG). While EOG activity mainly affects slow potentials, i.e., MRPs, EMG activity is of more concern for oscillatory features, cf. [1].
Accordingly, a classification method that is based on both features has a better chance of handling trials that are contaminated by one kind of those artifacts. On the other hand, it might increase the risk of using non-CNS activity for classification, which would not conform to the BCI idea [1]. For our setting the latter issue is investigated in section 2.3. 2 Data acquisition and analysis methods Experiments. In this paper we analyze EEG data from experiments with three subjects called aa, af and ak. The subject sat in a normal chair, with arms lying relaxed on the table. During the experiment the symbol ’L’ or ’R’ was shown every 4.5 ± 0.25 s for a duration of 3 s on the computer screen. The subject was instructed to imagine performing left resp. right hand finger movements as long as the symbol was visible. 200–300 trials were recorded for each class and each subject. Brain activity was recorded with 28 (subject aa) resp. 52 (subjects af and ak) Ag/AgCl electrodes at 1000 Hz and downsampled to 100 Hz for the present offline study. In addition, an electromyogram (EMG) of the musculus flexor digitorum bilaterally and horizontal and vertical electrooculograms (EOG) were recorded to monitor non-CNS activity. No artifact rejection or correction was employed. Objective of single-trial analysis. In these experiments the aim of classification is to discriminate ’left’ from ’right’ trials based on EEG data during the whole period of imagination. Here, no effort was made to come to a decision as early as possible, which would also be a reasonable objective. 2.1 Feature Extraction The present behavioural paradigms allowed us to study the two prominent brain signals accompanying motor imagery: (1) the lateralized MRP, showing up as a slow negative EEG shift focussed over the corresponding motor and sensorimotor cortex contralateral to the involved hand, and (2) the ERD, appearing as a lateralized attenuation of the µ- and/or central β-rhythm. Fig.
1 shows these effects calculated from subject aa. In the following we describe methods to derive feature vectors capturing MRP or ERD effects. Note that all filtering techniques used are causal, so that all methods are applicable in online systems. Some free parameters were chosen from appropriately fixed parameter sets by cross-validation, for all experiments and each classification setting separately, as described in section 2.2. This selection was done to obtain the most appropriate setting for each single-feature analysis. These values were used both for classifying trials based on single features and for the combined classification. Movement related potential (MRP). To quantify the lateralized MRP we proceeded similarly to our approach in [8] (Berlin Brain-Computer Interface, BBCI). Small modifications were made to take account of the different experimental setup. Signals were baseline corrected on the interval 0–300 ms and downsampled by calculating five jumping means in several consecutive intervals beginning at 300 ms and ending between 1500–3500 ms. Optionally, an elliptic IIR low-pass filter at 2.5 Hz was applied to the signals beforehand. Figure 1 (panels C3 lap, C4 lap): ERP and ERD (7–30 Hz) curves for subject aa in the time interval -500 ms to 3000 ms relative to stimulus. Thin and thick lines are averages over right resp. left hand trials. The contralateral negativation resp. desynchronization is clearly observable. To derive feature vectors for the ERD effects we use two different methods, which may reflect different aspects of brain rhythm modulations. The first (AR) reflects the spectral distribution of the most prominent brain rhythms, whereas the second (CSP) reflects spatial patterns of the most prominent power modulations in specified frequency bands. Autoregressive models (AR). In an autoregressive model of order p each time point of a time series is represented as a fixed linear combination (AR coefficients) of the last p data points.
The model order p was taken as a free parameter to be selected between 5 and 12. The feature vector of one trial is the concatenation of the AR coefficients plus the variance of each channel. The AR coefficients reflect oscillatory properties of the EEG signal, but not the overall amplitude. Accounting for this by adding the variance to the feature vector improves classification. To prevent the AR models from being distorted by EEG baseline drifts, the signals were high-pass filtered at 4 Hz. And to sharpen the spectral information towards focal brain sources, spatial Laplacian filters were applied. The interval for estimating the AR parameters started at 500 ms and the end points were chosen between 2000 ms and 3500 ms. Common spatial patterns (CSP). This method was suggested for binary classification of EEG trials in [9]. In feature space, projections onto the orientations with the most differing power ratios are used. These can be calculated by determining generalized eigenvalues or by simultaneous diagonalisation of the covariance matrices of both classes. Only a few orientations with the highest ratio between their eigenvalues (in both directions) are selected. The number of CSPs used per class was a free parameter to be chosen between 2 and 4. Before applying CSP, the signals were filtered between 8 and 13 Hz to focus on effects in the α-band. Using a broader band of 7–30 Hz did not give better results. The interval of interest was chosen as described above for the AR model. Feature vectors consist of the variances of the CSP-projected trial, cf. [9]. Note that for cross-validation CSP must be calculated for each training set separately. 2.2 Classification and model selection Our approach to classification was guided by two general ideas. First, following the concept ’simple methods first’ we employed only linear classifiers. In our BCI studies linear classification methods were never found to perform worse than non-linear classifiers, cf. also [10, 11].
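As a rough sketch of the AR part of this pipeline (pure Python, via the Yule-Walker equations and the Levinson-Durbin recursion; the high-pass and Laplacian filtering steps are omitted, and the order and test signal are arbitrary):

```python
def autocorr(x, max_lag):
    """Biased sample autocorrelations r[0..max_lag] of a 1-D signal."""
    n = len(x)
    mean = sum(x) / n
    x = [v - mean for v in x]
    return [sum(x[t] * x[t + k] for t in range(n - k)) / n
            for k in range(max_lag + 1)]

def levinson_durbin(r, p):
    """Solve the Yule-Walker equations for the AR(p) coefficients
    given autocorrelations r[0..p]."""
    a, err = [], r[0]
    for i in range(1, p + 1):
        acc = r[i] - sum(a[j] * r[i - 1 - j] for j in range(i - 1))
        k = acc / err
        a = [a[j] - k * a[i - 2 - j] for j in range(i - 1)] + [k]
        err *= 1.0 - k * k
    return a

def ar_features(channel, p):
    """Per-channel feature vector: AR coefficients plus the variance,
    as described in the text."""
    r = autocorr(channel, p)
    return levinson_durbin(r, p) + [r[0]]
```

Feeding exact AR(1) autocorrelations r[k] = 0.8^k to `levinson_durbin` with p = 2 recovers the coefficients (0.8, 0.0), which is a handy correctness check.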
And second, regularization, which is a well-established principle in machine learning, is highly relevant in experimental conditions typical for a BCI scenario, i.e., a small number of training samples for ’weak features’. In weak features the discriminative information is spread across many dimensions. Classifying such features based on a small training set may lead to the well-known overfitting problem. To avoid this, typically one of the following strategies is employed: (1) performing strong preprocessing to extract low-dimensional feature vectors which are tractable for most classifiers, or (2) performing no or weak preprocessing and carefully regularizing the classifier such that high-dimensional features can be handled even with only a small training set. Solution (1) has the disadvantage that strong assumptions about the data distributions have to be made. So especially in EEG analysis, where many sources of variability make strong assumptions dubious, solution (2) is to be preferred. A good introduction to regularized classification is [12], including regularized LDA, which we used here. To assess classification performance, the generalization error was estimated by 10×10-fold cross-validation. The reported standard deviation is calculated from the mean errors of the 10-fold cross-validations. The regularization coefficients were chosen by cross-validation together with the free parameters of the feature extraction methods, see section 2.1, in the following way. Strictly speaking, this cross-validation has to be performed on the training set. So in this offline analysis, where in each cross-validation procedure 100 different training sets are drawn randomly from the set of all trials, one would have to do a cross-validation (for model selection, MS) within a cross-validation (for estimating the generalization error, GE). Obviously this would be very time-consuming.
On the other hand, doing the model selection by cross-validation on all trials could lead to overfitting and an underestimation of the generalization error. As a compromise, MS-cross-validation was performed on three randomly drawn subsets of all trials, where the size of the subsets was the same as the size of the training sets in the GE-cross-validation, i.e., here 90 % of the whole set. This procedure was tested in several settings without any significant bias on the estimation of the GE, cf. [13]. 2.3 Analysis of single features The table in Fig. 2 shows the generalization error for single features. Data of each subject can be classified well. Some differences in the quality of the features for classification are observable, but there is no one type of feature that is generally the best. The 10×10-fold cross-validation was also used to determine how often each trial is classified correctly when belonging to the test set. Trials which were classified 9 to 10 times (i.e., 90 to 100%) correctly are labeled ’good’, while those classified 9 to 10 times wrongly are labeled ’bad’. Only a small number of trials fell into neither of those two categories (’ambivalent’), as could be expected given the small standard deviation. It is now interesting to see whether there are trials which for one feature type are in the well-classified range and for another feature in the badly classified part. As an example, Fig. 2 shows for subject af how the badly classified trials of the MRP feature resp. the CSP feature are distributed over these categories for the respective other feature. These results strengthen the hypothesis that it is promising to combine features. We made the following check for the impact of non-CNS activity on classification results. MRP-based classification was applied to the EOG signals and ERD-based classification was applied to the EMG signals. All those tests resulted in accuracies at chance level (∼50%).
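The per-trial bookkeeping behind the ’good’/’bad’/’ambivalent’ labels can be sketched as follows (a toy illustration; `train_predict` is a placeholder for any classifier's train-and-predict step, and the function names are ours):

```python
import random

def kfold_indices(n, k, rng):
    """Shuffle 0..n-1 and split into k disjoint folds."""
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def label_trials(y, train_predict, repeats=10, k=10, seed=0):
    """Run repeats x k-fold cross-validation and count, per trial, how
    often it is classified correctly when it falls into the test fold."""
    rng = random.Random(seed)
    correct = [0] * len(y)
    for _ in range(repeats):
        for fold in kfold_indices(len(y), k, rng):
            in_fold = set(fold)
            train = [i for i in range(len(y)) if i not in in_fold]
            for i, pred in zip(fold, train_predict(train, fold)):
                correct[i] += (pred == y[i])
    return ['good' if c >= 0.9 * repeats else
            'bad' if c <= 0.1 * repeats else
            'ambivalent' for c in correct]

# Dummy classifier that always predicts class 0:
labels = label_trials([0] * 8 + [1] * 2, lambda train, test: [0] * len(test))
```

With the dummy classifier, every class-0 trial is labeled ’good’ and every class-1 trial ’bad’, since each trial lands in exactly one test fold per repeat.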
Since the main concern in this paper is comparing classification with single vs. combined features, this issue was not pursued in further detail. 2.4 Combination methods Feature combination or sensor fusion strategies are rather common in speech recognition (e.g. [14]), vision (e.g. [15]) or robotics (e.g. [16]), where either signals on different timescales or from distinct modalities need to be combined. Typical approaches suggested are a winner-takes-all strategy, which cannot increase performance above the best single-feature analysis, and concatenation of the single feature vectors, discussed as CONCAT below. Furthermore, combinations that use a joint probabilistic model [15] appear promising. We propose two further methods that incorporate independence assumptions (PROB and, to a smaller extent, META) and allow individual decision boundary fitting to single features (META).

        aa           af           ak
MRP     12.4 ± 0.6   18.4 ± 1.0   17.2 ± 0.8
AR      13.1 ± 0.8   21.2 ± 1.0   25.1 ± 0.6
CSP      9.5 ± 0.5   14.4 ± 0.8   17.5 ± 0.9

[Pie chart values — MRP-bad: 8%, 10%, 82%; CSP-bad: 11%, 8%, 81%.] Figure 2: Left: Misclassification rates for single features classified with regularized LDA. Free parameters of each feature extraction method were selected by cross-validation on subsets of all trials, see section 2.2. Right: Pie charts show how ’MRP-bad’ and ’CSP-bad’ trials for subject af are classified based on the respective other feature: white is the portion of the trials which is ’good’ for the other feature, black marks ’bad’, and gray ’ambivalent’ trials for the other feature. See text for the definition of ’good’, ’bad’ and ’ambivalent’ in this context.

(CONCAT) In this simple approach of gathered evidence, feature vectors are just concatenated. To account for the increased dimensionality, careful regularization is necessary. Additionally, we tried classification with a linear programming machine (LPM), which is appealing for its sparse feature selection property, but it did not improve results compared to regularized LDA.
(PROB) It is well known that LDA is the Bayes-optimal classifier, i.e., the one minimizing the expected risk of misclassification, for two classes of known gaussian distribution with equal covariance matrices. Here we derive the optimal classifier for combined feature vectors X = (X1, ..., XN) under the additional assumption that the individual features X1, ..., XN are mutually independent. Denoting by Ŷ(x) the decision function on feature space X,

Ŷ(x) = ’R’ ⇔ P(Y = ’R’ | X = x) > P(Y = ’L’ | X = x) ⇔ f_{Y=’R’}(x) P(Y = ’R’) > f_{Y=’L’}(x) P(Y = ’L’),

where Y is a random variable on the labels {’L’, ’R’} and f denotes densities. Using the independence assumption one can factorize the densities. Neglecting the class priors and exploiting the gaussian assumption (Xn | Y = y) ∼ N(µ_{n,y}, Σn) we get the decision function

Ŷ(x) = ’R’ ⇔ Σ_{n=1}^{N} [wn⊤xn − ½(µ_{n,’R’} + µ_{n,’L’})⊤wn] > 0, with wn := Σn⁻¹(µ_{n,’R’} − µ_{n,’L’}).

In terms of LDA this corresponds to forcing those elements of the estimated covariance matrix that belong to different features to zero. Thereby fewer parameters have to be estimated, and distortions by accidental correlations of independent variables are avoided. If the classes do not have equal covariance matrices, a non-linear version of PROB can be formulated in analogy to quadratic discriminant analysis (QDA). To avoid overfitting we use regularization for PROB. Two variants are possible: regularization of the covariance matrices with one global parameter (PROBsame) or with three separately selected parameters corresponding to the single-type features (PROBdiff). (META) In this approach a meta classifier is applied to the continuous output of individual classifiers that are trained on single features beforehand. This allows a tailor-made choice of classifiers for each feature, e.g., if the decision boundary is linear for one feature and nonlinear for another.
Here we just use LDA for all features, but regularization coefficients are selected for each single feature individually. Since the meta classifier acts on low-dimensional (2 or 3) features, further regularization is not needed, so we used unregularized LDA. META extracts discriminative information from single features independently, but the meta classification may exploit interrelations based on the output of the individual decision functions. That means independence is assumed on the low level while possible high-level relations are taken into account.

        Best Single   CONCAT        PROBsame      PROBdiff      META
aa       9.5 ± 0.5     9.5 ± 0.4     6.3 ± 0.5     6.5 ± 0.5     6.7 ± 0.4
af      14.4 ± 0.8    14.4 ± 1.2     7.4 ± 0.8     7.4 ± 0.7    10.2 ± 0.5
ak      17.2 ± 0.8    14.8 ± 0.9    13.9 ± 1.0    13.2 ± 0.7    14.0 ± 0.8
mean    13.7 ± 3.2    12.9 ± 2.4     9.2 ± 3.4     9.0 ± 3.0    10.3 ± 3.0

Table 1: Generalization errors ± s.d. of the means in 10×10-fold cross-validation for combined features compared to the most successful single-type feature. The best result for each subject is in boldface.

3 Results Table 1 shows the results for the combined classification methods and, for comparison, the best result on single-type features (’Best Single’) from the table of Fig. 2. All three features were combined together. Combining only two of them (especially MRP with AR or CSP) leads to good values, too, which are slightly worse, however. The CONCAT method performs better than the single-feature methods only for subject ak. The following two problems may be responsible for that. First, there are only few training samples in a higher-dimensional space than for the single features, so the curse of dimensionality strikes harder. And second, regularization for the single features results in different regularization parameters, whereas in CONCAT a single regularization parameter has to be found. In our case the regularization parameter for subject aa is about 0.001 for MRP, whereas for CSP it is about 0.8.
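For intuition, the PROB rule can be sketched in the degenerate case where each feature type contributes a single scalar, so each Σn is just a variance and no matrix inversion is needed (the data and function names below are made up):

```python
def fit_prob(feats_R, feats_L):
    """feats_R / feats_L: lists of feature vectors (one scalar per
    feature type) for classes 'R' and 'L'.  Returns per-feature (w, b)
    so that sum_n (w_n * x_n + b_n) > 0 decides 'R'."""
    params = []
    for n in range(len(feats_R[0])):
        cR = [row[n] for row in feats_R]
        cL = [row[n] for row in feats_L]
        muR, muL = sum(cR) / len(cR), sum(cL) / len(cL)
        pooled = [v - muR for v in cR] + [v - muL for v in cL]
        var = sum(v * v for v in pooled) / (len(pooled) - 1)
        w = (muR - muL) / var   # scalar analogue of Sigma_n^{-1}(mu_R - mu_L)
        params.append((w, -0.5 * (muR + muL) * w))
    return params

def predict(params, x):
    score = sum(w * xn + b for (w, b), xn in zip(params, x))
    return 'R' if score > 0 else 'L'
```

Fitting each (w, b) from its own feature column alone is exactly the block-diagonal (independence) restriction on the covariance matrix described above.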
Of the other approaches, the PROB methods are the most successful, but META is very good, too, and better than the single-feature results. Differences between the two PROB methods were not observed. Concerning the results it is noteworthy that all subjects were BCI-untrained. Only subject aa had experience as a subject in EEG experiments. The results obtained with single features are in the range of the best results for untrained BCI performance with an imagined-movement paradigm, cf. [17]. In contrast, the result of less than 8 % error with our proposed combination approach for subjects aa and af is better than that of the 3 subjects in [17], even after up to 10 feedback sessions. Subject ak, with an error rate of less than 14 %, is in the range of good results. Additionally, it should be noted that subject aa reported that he sometimes failed to react to the stimulus due to fatigue. He estimated the portion of missed stimuli to be 5 %. Hence the classification error of 6.3 % is very close to the best that can be achieved. 4 Concluding discussion Combining the feature vectors corresponding to event-related desynchronization and movement-related potentials under an independence assumption derived from a priori physiological knowledge (PROB, and to a smaller extent META) leads to an improved classification accuracy when compared to single-feature classification. In contrast, the combination of features without any assumption of independence (CONCAT) did not improve accuracy in every case and always performed worse than PROB and META. These results further support the hypothesis that MRP and ERD reflect independent aspects of brain activity. In all three experiments a reduction of the error rate by about 25 % to 50 % could be achieved by the combination methods. Additionally, the combined approach has the practical advantage that no prior decision has to be made about what feature to use.
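The quoted 25–50 % figure is the relative error-rate reduction implied by Table 1; a quick check in Python:

```python
# (best single-feature error, best combined error) per subject, from Table 1
errors = {'aa': (9.5, 6.3), 'af': (14.4, 7.4), 'ak': (17.2, 13.2)}

reductions = {s: (single - combined) / single
              for s, (single, combined) in errors.items()}
# Relative reductions: roughly 34% (aa), 49% (af), 23% (ak)
```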
Combining features of different brain processes in feedback scenarios, where the subject is trying to adapt to the feedback algorithm, could in principle hold the risk of making the learning task too complex for the subject. This, however, needs to be investigated in future online studies. Finally, we would like to remark that the proposed feature combination principles can be used in other application areas where independent features can be obtained. Acknowledgments. We thank Sebastian Mika, Roman Krepki, Thorsten Zander, Gunnar Raetsch, Motoaki Kawanabe and Stefan Harmeling for helpful discussions. The studies were supported by a grant of the Bundesministerium für Bildung und Forschung (BMBF), FKZ 01IBB02A and FKZ 01IBB02B. References [1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain-computer interfaces for communication and control”, Clin. Neurophysiol., 113: 767–791, 2002. [2] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan, “Brain-Computer Interface Research at the Wadsworth Center”, IEEE Trans. Rehab. Eng., 8(2): 222–226, 2000. [3] J. A. Pineda, B. Z. Allison, and A. Vankov, “The Effects of Self-Movement, Observation, and Imagination on µ–Rhythms and Readiness Potential (RP’s): Toward a Brain-computer Interface (BCI)”, IEEE Trans. Rehab. Eng., 8(2): 219–222, 2000. [4] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, “A spelling device for the paralysed”, Nature, 398: 297–298, 1999. [5] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, “Automatic Differentiation of Multichannel EEG Signals”, IEEE Trans. Biomed. Eng., 48(1): 111–116, 2001. [6] C. Babiloni, F. Carducci, F. Cincotti, P. M. Rossini, C. Neuper, G. Pfurtscheller, and F. Babiloni, “Human Movement-Related Potentials vs Desynchronization of EEG Alpha Rhythm: A High-Resolution EEG Study”, NeuroImage, 10: 658–665, 1999. [7] C. Toro, G. Deuschl, R. Thatcher, S. Sato, C. Kufta, and M.
Hallett, “Event-related desynchronization and movement-related cortical potentials on the ECoG and EEG”, Electroencephalogr. Clin. Neurophysiol., 93: 380–389, 1994. [8] B. Blankertz, G. Curio, and K.-R. Müller, “Classifying Single Trial EEG: Towards Brain Computer Interfacing”, in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 2002, to appear. [9] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, “Optimal spatial filtering of single trial EEG during imagined hand movement”, IEEE Trans. Rehab. Eng., 8(4): 441–446, 2000. [10] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, “Linear spatial integration for single trial detection in encephalography”, NeuroImage, 2002, to appear. [11] K.-R. Müller, C. W. Anderson, and G. E. Birch, “Linear and Non-Linear Methods for Brain-Computer Interfaces”, IEEE Trans. Neural Sys. Rehab. Eng., 2003, submitted. [12] J. H. Friedman, “Regularized Discriminant Analysis”, J. Amer. Statist. Assoc., 84(405): 165–175, 1989. [13] G. Rätsch, T. Onoda, and K.-R. Müller, “Soft Margins for AdaBoost”, Machine Learning, 42(3): 287–320, 2001. [14] N. Morgan and H. Bourlard, “Continuous Speech Recognition: An Introduction to the Hybrid HMM/Connectionist Approach”, Signal Processing Magazine, 25–42, 1995. [15] M. Brand, N. Oliver, and A. Pentland, “Coupled hidden markov models for complex action recognition”, 1996. [16] S. Thrun, A. Bücken, W. Burgard, D. Fox, T. Fröhlinghaus, D. Henning, T. Hofmann, M. Krell, and T. Schmidt, “Map Learning and High-Speed Navigation in RHINO”, in: D. Kortenkamp, R. Bonasso, and R. Murphy, eds., AI-based Mobile Robots, MIT Press, 1998. [17] G. Pfurtscheller, C. Neuper, D. Flotzinger, and M. Pregenzer, “EEG-based discrimination between imagination of right and left hand movement”, Electroencephalogr. Clin. Neurophysiol., 103: 642–651, 1997.
|
2002
|
136
|
2,143
|
A Probabilistic Model for Learning Concatenative Morphology Matthew G. Snover Department of Computer Science Washington University St Louis, MO, USA, 63130-4809 ms9@cs.wustl.edu Michael R. Brent Department of Computer Science Washington University St Louis, MO, USA, 63130-4809 brent@cs.wustl.edu Abstract This paper describes a system for the unsupervised learning of morphological suffixes and stems from word lists. The system is composed of a generative probability model and hill-climbing and directed search algorithms. By extracting and examining morphologically rich subsets of an input lexicon, the directed search identifies highly productive paradigms. The hill-climbing algorithm then further maximizes the probability of the hypothesis. Quantitative results are shown by measuring the accuracy of the morphological relations identified. Experiments in English and Polish, as well as comparisons with another recent unsupervised morphology learning algorithm, demonstrate the effectiveness of this technique. 1 Introduction One of the fundamental problems in computational linguistics is adaptation of language processing systems to new languages with minimal reliance on human expertise. A ubiquitous component of language processing systems is the morphological analyzer, which determines the properties of morphologically complex words like watches and gladly by inferring their derivation as watch+s and glad+ly. The derivation reveals much about the word, such as the fact that glad+ly shares syntactic properties with quick+ly and semantic properties with its stem glad. While morphological processes can take many forms, the most common are suffixation and prefixation (collectively, concatenative morphology). In this paper, we present a system for unsupervised inference of morphological derivations of written words, with no prior knowledge of the language in question. Specifically, neither the stems nor the suffixes of the language are given in advance.
This system is designed for concatenative morphology, and the experiments presented focus on suffixation. It is applicable to any language for which written word lists are available. In languages that have been a focus of research in computational linguistics the practical applications are limited, but in languages like Polish, automated analysis of unannotated text corpora has potential applications for information retrieval and other language processing systems. In addition, automated analysis might find application as a hypothesis-generating tool for linguists or as a cognitive model of language acquisition. In this paper, however, we focus on the problem of unsupervised morphological inference for its inherent interest. During the last decade several minimally supervised and unsupervised algorithms have been developed. Gaussier [1] describes an explicitly probabilistic system that is based primarily on spellings. It is an unsupervised algorithm, but requires the tweaking of parameters to tune it to the target language. Brent [2] and Brent et al. [3] describe Minimum Description Length (MDL) systems. Goldsmith [4] describes a similar MDL approach. Our motivation in developing a new system was to improve performance and to have a model cast in an explicitly probabilistic framework. We are particularly interested in developing automated morphological analysis as a first stage of a larger grammatical inference system, and hence we favor a conservative analysis that identifies primarily productive morphological processes (those that can be applied to new words). In this paper, we present a probabilistic model and search algorithm for automated analysis of suffixation, along with experiments comparing our system to that of Goldsmith [4]. This system, which extends the system of Snover and Brent [5], is designed to detect the final stem and suffix break of each word given a list of words.
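The hypothesis space this defines — one stem/suffix break per word — is easy to enumerate (a sketch; the function name is ours, and empty suffixes are allowed, matching the length-zero suffixes the model permits):

```python
def candidate_splits(word):
    """All ways to split a word into a non-empty stem and a
    (possibly empty) suffix."""
    return [(word[:i], word[i:]) for i in range(1, len(word) + 1)]
```

For example, candidate_splits("watches") contains both ("watch", "es") and ("watches", ""), i.e. the analysis with suffix -es and the analysis with no suffix at all.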
It does not distinguish between derivational and inflectional suffixation or between the notion of a stem and a root. Further, it does not currently have a mechanism to deal with multiple interpretations of a word, or to deal with morphological ambiguity. Within its design limitations, however, it is both mathematically clean and effective. 2 Probability Model This section introduces a prior probability distribution over the space of all hypotheses, where a hypothesis is a set of words, each with a morphological split separating the stem and suffix. The distribution is based on a seven-step model for the generation of hypotheses, which is heavily based upon the probability model presented in [5]. The hypothesis is generated by choosing the number of stems and suffixes, the spellings of those stems and suffixes, and then the combination of the stems and suffixes. The seven steps are presented below, along with their probability distributions and a running example of how a hypothesis could be generated by this process. By taking the product over the distributions from all of the steps of the generative process, one can calculate the prior probability for any given hypothesis. What is described in this section is a mathematical model and not an algorithm intended to be run. 1. Choose the number of stems, M, according to the distribution:
P(M) = (6/π²) · 1/M²    (1)

The 6/π² term normalizes the inverse-squared distribution on the positive integers. The number of suffixes, X, is chosen according to the same probability distribution. The symbols M for steMs and X for suffiXes are used throughout this paper. Example: M = 5. X = 3. 2. For each stem i, choose its length in letters, len(stem_i), according to the inverse-squared distribution. Assuming that the lengths are chosen independently and multiplying together their probabilities we have:
P(len(stem_1), …, len(stem_M)) = ∏_{i=1..M} (6/π²) · 1/len(stem_i)²    (2)

The distribution for the lengths of the suffixes, len(suff_j), is similar to (2), differing only in that suffixes of length 0 are allowed, by offsetting the length by one. Example: stem lengths = 4, 4, 4, 3, 3; suffix lengths = 2, 0, 1. 3. Let Σ be the alphabet, and let P_Σ be a probability distribution on Σ. For each i from 1 to M, generate stem_i by choosing len(stem_i) letters at random, according to the probabilities P_Σ. Call the resulting stem set STEM. The suffix set SUFF is generated in the same manner. The probability of any character c being chosen is obtained from a maximum likelihood estimate: P_Σ(c) = count(c)/T, where count(c) is the count of c among all the hypothesized stems and suffixes and T is the total count of all their letters. The joint probability of the hypothesized stem and suffix sets is defined by the distribution:

P(STEM, SUFF) = M! · X! · ∏_{s ∈ STEM ∪ SUFF} ∏_{c ∈ s} P_Σ(c)    (3)

The factorial terms reflect the fact that the stems and suffixes could be generated in any order. Example: STEM = {walk, look, door, far, cat}. SUFF = {ed, ε, s}. 4. We now choose the number of paradigms, P. A paradigm is a set of suffixes and the stems that attach to those suffixes and no others. Each stem is in exactly one paradigm, and each paradigm has at least one stem; thus P can range from 1 to M. We pick P according to the following uniform distribution:

P(P | M) = 1/M    (4)

Example: P = 3. 5. We choose the number of suffixes in each paradigm, X_i, according to a uniform distribution. The distribution for picking the X_i suffixes of paradigm i is P(X_i | X) = 1/X. The joint probability over all paradigms is therefore:

P(X_1, …, X_P | P, X) = ∏_{i=1..P} 1/X    (5)

Example: X_1, X_2, X_3 = 3, 1, 2. 6. For each paradigm i, choose the set of X_i suffixes, PARA_i^suff, that the paradigm will represent. The number of subsets of a given size is finite so we can again use the uniform distribution. This implies that the probability of each individual subset of size X_i is the inverse of the total number of such subsets. Assuming that the choices for each paradigm are independent:

P(PARA_1^suff, …, PARA_P^suff | X, X_1, …, X_P) = ∏_{i=1..P} 1/C(X, X_i)    (6)

Example: PARA_1^suff = {ε, s, ed}. PARA_2^suff
= {ε}. PARA_3^suff = {ε, s}. 7. For each stem, choose the paradigm that the stem will belong in, according to a distribution that favors paradigms with more stems. The probability of choosing paradigm i for a stem is calculated using a maximum likelihood estimate: P(i) = |PARA_i^stem| / M, where PARA_i^stem is the set of stems in paradigm i. Assuming that all these choices are made independently yields the following:

P(PARA_1^stem, …, PARA_P^stem | P, M) = ∏_{i=1..P} (|PARA_i^stem| / M)^{|PARA_i^stem|}    (7)

Example: PARA_1^stem = {walk, look}. PARA_2^stem = {far}. PARA_3^stem = {door, cat}. Combining the results of stages 6 and 7, one can see that the running example would yield the hypothesis consisting of the set of words with suffix breaks, {walk+ε, walk+s, walk+ed, look+ε, look+s, look+ed, far+ε, door+ε, door+s, cat+ε, cat+s}. Removing the breaks in the words results in the set of input words. To find the probability for this hypothesis, just take the product of the probabilities from equations (1) to (7). Using this generative model, we can assign a probability to any hypothesis. Typically one wishes to know the probability of the hypothesis given the data; however, in our case such a distribution is not required. Equation (8) shows how the probability of the hypothesis given the data could be derived from Bayes law.

P(Hyp | Data) = P(Data | Hyp) · P(Hyp) / P(Data)    (8)

Our search only considers hypotheses consistent with the data. The probability of the data given the hypothesis, P(Data | Hyp), is always 1, since if you remove the breaks from any hypothesis, the input data is produced. This would not be the case if our search considered inconsistent hypotheses. The prior probability of the data is constant over all hypotheses, thus the probability of the hypothesis given the data reduces to P(Hyp). The prior probability of the hypothesis is given by the above generative process and, among all consistent hypotheses, the one with the greatest prior probability also has the greatest posterior probability. 3 Search This section details a novel search algorithm which is used to find a high-probability segmentation of all the words in the input lexicon. The input lexicon is a list of words extracted from a corpus. The output of the search is a segmentation of each of the input words into a stem and suffix. The search algorithm has two phases, which we call the directed search and the hill-climbing search. 
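In log space the prior from Section 2 is a simple sum; the following sketch illustrates the computation for the running example. The hypothesis representation, the function names, and the multinomial form used for step 7 are our own illustrative choices, not part of the paper.

```python
import math
from collections import Counter

def log_inv_squared(n):
    # Normalized inverse-squared prior on the positive integers:
    # P(n) = (6 / pi^2) * n**-2, since the sum of n**-2 is pi^2 / 6.
    return math.log(6.0 / math.pi**2) - 2.0 * math.log(n)

def log_prior(paradigms):
    """Log prior of a hypothesis under steps 1-7.
    `paradigms` maps a tuple of suffixes to the list of stems in that
    paradigm; the empty suffix is the empty string."""
    stems = [s for group in paradigms.values() for s in group]
    suffixes = sorted({x for xs in paradigms for x in xs})
    M, X, P = len(stems), len(suffixes), len(paradigms)
    lp = log_inv_squared(M) + log_inv_squared(X)                 # step 1
    lp += sum(log_inv_squared(len(s)) for s in stems)            # step 2: stem lengths
    lp += sum(log_inv_squared(len(x) + 1) for x in suffixes)     # step 2: suffixes (length 0 allowed)
    counts = Counter("".join(stems) + "".join(suffixes))         # step 3: MLE letter model
    total = sum(counts.values())
    lp += sum(c * math.log(c / total) for c in counts.values())
    lp += math.lgamma(M + 1) + math.lgamma(X + 1)                # step 3: M! X! ordering terms
    lp -= math.log(M)                                            # step 4: P uniform on 1..M
    lp -= P * math.log(X)                                        # step 5: sizes uniform on 1..X
    lp -= sum(math.log(math.comb(X, len(xs))) for xs in paradigms)  # step 6: suffix subsets
    for group in paradigms.values():                             # step 7: stems to paradigms (MLE)
        lp += len(group) * math.log(len(group) / M)
    return lp
```

For the running example (five stems, three suffixes, three paradigms) this returns a single log probability; the search only ever compares such values across consistent hypotheses.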
The directed search builds up a consistent hypothesis about the segmentation of all words in the input out of consistent hypotheses about subsets of the words. The hill-climbing search further tunes the result of the directed search by trying out nearby hypotheses over all the input words. 3.1 Directed Search The directed search is accomplished in two steps. First, sub-hypotheses, each of which is a hypothesis about a subset of the lexicon, are examined and ranked. The best sub-hypotheses are then incrementally combined until a single sub-hypothesis remains. The remainder of the input lexicon is added to this sub-hypothesis, at which point it becomes the final hypothesis. We define the set of possible suffixes to be the set of terminal substrings, including the empty string ε, of the words in the lexicon. For each subset Y of the possible suffixes, there is a maximal set of possible stems (initial substrings) T, such that for each t in T and each y in Y, t+y is a word in the lexicon. We define H_Y to be the sub-hypothesis in which each input word that can be analyzed as consisting of a stem in T and a suffix in Y is analyzed that way. This sub-hypothesis consists of all pairings of the stems in T and the suffixes in Y with the corresponding morphological breaks. One can think of each sub-hypothesis as initially corresponding to a maximally filled paradigm. We only consider sub-hypotheses which have at least two stems and two suffixes. For each sub-hypothesis H, there is a corresponding null hypothesis, H_null, which has the same set of words as H, but in which all the words are hypothesized to consist of the word as the stem and ε as the suffix. We give each sub-hypothesis a score as follows: score(H) = P(H)/P(H_null). This reflects how much more probable H is for those words than the null hypothesis. One can view all sub-hypotheses as nodes in a directed graph. Each node is connected to another node if and only if the latter represents a set of suffixes that is a superset of the former's, exactly one suffix greater in size. 
By beginning at the node representing no suffixes, one can apply standard graph search techniques, such as a beam search or a best-first search, to find the best-scoring nodes without visiting all nodes. While one cannot guarantee that such approaches perform exactly the same as examining all sub-hypotheses, initial experiments using a beam search with a beam size of 100 show that the best sub-hypotheses are found with a significant decrease in the number of nodes visited. The experiments presented in this paper do not use these pruning methods. The highest-scoring sub-hypotheses are incrementally combined in order to create a hypothesis over the complete set of input words. Changing the number of sub-hypotheses kept does not dramatically alter the results of the algorithm, though higher values give slightly better results. We let it be 100 in the experiments reported here. Let G be the set of the 100 highest-scoring sub-hypotheses. We iteratively remove the highest-scoring sub-hypothesis from G. The words in it are added to each of the remaining sub-hypotheses in G, and to their null hypotheses, with their morphological breaks from the removed sub-hypothesis. If a word was already in a remaining sub-hypothesis, the morphological break from the removed sub-hypothesis overrides the existing one. All of the sub-hypotheses are now rescored, as the words in them have changed. If, after rescoring, none of the sub-hypotheses have likelihood ratios greater than one, then we use the most recently removed sub-hypothesis as our final sub-hypothesis. Otherwise, we iterate until either there is only one sub-hypothesis left or all sub-hypotheses have scores no greater than one. The final sub-hypothesis is now converted into a full hypothesis over all the words. All words in the lexicon that are not in it are added to it with suffix ε. 3.2 Hill Climbing Search The hill climbing search further optimizes the probability of the hypothesis by moving stems to new nodes. For each possible suffix and each node, the search attempts to add the suffix to the node. 
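Extracting the maximal sub-hypothesis that seeds the directed search for a given suffix set is mechanical; a small illustrative sketch (our own simplification, omitting the probability scoring against the null hypothesis):

```python
def sub_hypothesis(lexicon, suffixes):
    """Maximal stem set for a suffix set: every stem t such that t + y
    is an observed word for each suffix y (the empty suffix '' means
    the bare stem must itself be a word)."""
    words = set(lexicon)
    # Candidate stems are all non-empty initial substrings of the words.
    candidates = {w[:i] for w in words for i in range(1, len(w) + 1)}
    return sorted(t for t in candidates
                  if all(t + y in words for y in suffixes))

lexicon = ["walk", "walks", "walked", "look", "looks", "looked",
           "door", "doors", "cat", "cats", "far"]
stems = sub_hypothesis(lexicon, ("", "s", "ed"))  # ['look', 'walk']
```

Only sub-hypotheses with at least two stems and two suffixes would then be kept and scored against their null hypotheses.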
This means that all stems in the node that can take the suffix are moved to a new node, which represents all the suffixes of the old node plus the added suffix. This is analogous to pushing stems to adjacent nodes in a directed graph. A stem can only be moved into a node with an added suffix if the new word, formed by attaching that suffix to the stem, is an observed word in the input lexicon. The move is only done if it increases the probability of the hypothesis. There is an analogous suffix removal step which attempts to remove suffixes from nodes. The hill climbing search continues to add and remove suffixes to nodes until the probability of the hypothesis cannot be increased. A more detailed description of this portion of the search and its algorithmic invariants is given in [5]. 4 Experiment and Evaluation 4.1 Experiment We tested our unsupervised morphology learning system, which we refer to as Paramorph, and Goldsmith's MDL system, otherwise known as Linguistica1, on various sized word lists (Footnote 1: A demo version available on the web, http://humanities.uchicago.edu/faculty/goldsmith/, was used for these experiments. Word-list corpus mode and the method A suffix detection were used. All other parameters were left at their default values.) 
The experiments in English were also conducted on the 16,000 most common words from the Hansard corpus. 4.1.1 Stem Relation Ideally, we would like to be able to specify the correct morphological break for each of the words in the input; however, morphology is laced with ambiguity, and we believe this to be an inappropriate method for this task. For example, it is unclear where the break in the word "location" should be placed. It seems that the stem "locate" is combined with the suffix "tion", but in terms of simple concatenation it is unclear if the break should be placed before or after the "t". In an attempt to solve this problem, we have developed a new measure of performance, which does not specify the exact morphological split of a word. We measure the accuracy of the stems predicted by examining whether two words which are morphologically related are predicted as having the same stem. The actual break point for the stems is not evaluated, only whether the words are predicted as having the same stem. We are working on a similar measure for suffix identification. Two words are related if they share the same immediate stem. For example, the words "building", "build", and "builds" are related since they all have "build" as a stem, just as "building" and "buildings" are related as they both have "building" as a stem. The two words "buildings" and "build" are not directly related, since the former has "building" as a stem, while "build" is its own stem. Irregular forms of words are also considered to be related even though such relations would be very difficult to detect with a simple concatenation model. The stem relation precision measures how many of the relations predicted by the system were correct, while the recall measures how many of the relations present in the data were found. Stem relation fscore is an unbiased combination of precision and recall that favors equal scores. 4.2 Results The results from the experiments are shown in Figures 1 and 2. 
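As a simplified sketch of this metric, assume each word is assigned a single predicted stem (the paper's notion is slightly richer, since a word such as "building" can also stand as the stem of "buildings"); relations are then unordered word pairs sharing a stem:

```python
def relation_pairs(stem_of):
    """Unordered pairs of words predicted to share the same stem."""
    by_stem = {}
    for word, stem in stem_of.items():
        by_stem.setdefault(stem, set()).add(word)
    return {frozenset((a, b))
            for group in by_stem.values()
            for a in group for b in group if a < b}

def stem_relation_scores(predicted, gold):
    """Precision, recall and fscore over stem-relation pairs."""
    p, g = relation_pairs(predicted), relation_pairs(gold)
    tp = len(p & g)
    precision = tp / len(p) if p else 0.0
    recall = tp / len(g) if g else 0.0
    fscore = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
    return precision, recall, fscore
```

Note that only shared-stem pairs are compared; the exact break points never enter the score, matching the motivation above.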
All graphs are shown using a log scale for the corpus size. Due to software difficulties we were unable to get Linguistica to run on 500, 1000, and 2000 words in English. The software ran without difficulties on the larger English datasets and on the Polish data. As an additional note, Linguistica was dramatically faster than Paramorph, which is a development-oriented software package and not as optimized for efficient runtime as Linguistica appears to be.

[Figure 1: Number of Suffixes Predicted. Two panels (English and Polish) plotting the number of suffixes against lexicon size for ParaMorph and Linguistica.]

[Figure 2: Stem Relation Fscores. Two panels (English and Polish) plotting stem relation fscore against lexicon size for ParaMorph and Linguistica.]

Figure 1 shows the number of different suffixes predicted by each of the algorithms in both English and Polish. Our Paramorph system found a relatively constant number of suffixes across lexicon sizes and Linguistica found an increasingly large number of suffixes, predicting over 700 different suffixes in the 16,000 word English lexicon. Figure 2 shows the fscores using the stem relation metric for various sizes of English and Polish input lexicons. Paramorph maintains a very high precision across lexicon sizes in both languages, whereas the precision of Linguistica decreases considerably at larger lexicon sizes. However, Linguistica shows an increasing recall as the lexicon size increases, with Paramorph having a decreasing recall as lexicon size increases, though the recall of Linguistica in Polish is consistently lower than Paramorph's recall. The fscores for Paramorph and Linguistica in English are very close, and Paramorph appears to clearly outperform Linguistica in Polish. 
Suffixes                    Stems
-a -e -ego -ej -ie -o -y    dziwn
ε -a -ami -y -ę             chmur, siekier
ε -cie -li -m -ć            gada, odda, sprzeda

Table 1: Sample Paradigms in Polish

Table 1 shows several of the larger paradigms found by Paramorph when run on 8000 words of Polish. The first paradigm shown is for the single adjective stem meaning "strange" with numerous inflections for gender, number and case, as well as one derivational suffix, "ie", which changes it into an adverb, "strangely". The second paradigm is for the nouns "cloud" and "ax", with various case inflections, and the third paradigm contains the verbs "talk", "return", and "sell". All suffixes in the third paradigm are inflectional, indicating tense and agreement. The differences between the performance of Linguistica and Paramorph can most easily be seen in the number of suffixes predicted by each algorithm. The number of suffixes predicted by Linguistica grows linearly with the number of words, in general causing his algorithm to get much higher recall at the expense of precision. Paramorph maintains a fairly constant number of suffixes, causing it to generally have higher precision at the expense of recall. This is consistent with our goal of creating a conservative system for morphological analysis, where the number of false positives is minimized. The Polish language presents special difficulties for both Linguistica and Paramorph, due to the highly complex nature of its morphology. There are far fewer spelling change rules and a much higher frequency of suffixes in Polish than in English. In addition, phonology plays a much stronger role in Polish morphology, causing alterations in stems, which are difficult to detect using a concatenative framework. 5 Discussion Many of the stem relations predicted by Paramorph result from postulating stem and suffix breaks in words that are actually morphologically simple. This occurs when the endings of these words resemble other, correct, suffixes. 
In an attempt to deal with this problem, we have investigated incorporating semantic information into the probability model, since morphologically related words also tend to be semantically related. A successful implementation of such information should eliminate errors such as capable breaking down as cap+able, since capable is not semantically related to cape or cap. The goal of the Paramorph system was to produce a preliminary description, with very low false positives, of the final suffixation, both inflectional and derivational, in a language-independent manner. For the most part, Paramorph performed better than Linguistica with respect to fscore, but more importantly, the precision of Linguistica does not approach the precision of our algorithm, particularly on the larger corpus sizes. In summary, we feel our Paramorph system has attained the goal of producing an initial estimate of suffixation that could serve as a front end to aid other models in discovering higher level structure. References [1] Éric Gaussier. 1999. Unsupervised learning of derivational morphology from inflectional lexicons. In ACL '99 Workshop Proceedings: Unsupervised Learning in Natural Language Processing. ACL. [2] Michael R. Brent. 1993. Minimal generative models: A middle ground between neurons and triggers. In Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics, Ft. Lauderdale, FL. [3] Michael R. Brent, Sreerama K. Murthy, and Andrew Lundberg. 1995. Discovering morphemic suffixes: A case study in minimum description length induction. In Proceedings of the 15th Annual Conference of the Cognitive Science Society, pages 28-36, Hillsdale, NJ. Erlbaum. [4] John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27:153-198. [5] Matthew G. Snover and Michael R. Brent. 2001. A Bayesian Model for Morpheme and Paradigm Identification. 
In Proceedings of the 39th Annual Meeting of the ACL, pages 482-490. ACL.
The Decision List Machine Marina Sokolova SITE, University of Ottawa Ottawa, Ont. Canada, K1N-6N5 sokolova@site.uottawa.ca Mario Marchand SITE, University of Ottawa Ottawa, Ont. Canada, K1N-6N5 marchand@site.uottawa.ca Nathalie Japkowicz SITE, University of Ottawa Ottawa, Ont. Canada, K1N-6N5 nat@site.uottawa.ca John Shawe-Taylor Royal Holloway, University of London Egham, UK, TW20-0EX jst@cs.rhul.ac.uk Abstract We introduce a new learning algorithm for decision lists to allow features that are constructed from the data and to allow a tradeoff between accuracy and complexity. We bound its generalization error in terms of the number of errors and the size of the classifier it finds on the training data. We also compare its performance on some natural data sets with the set covering machine and the support vector machine. 1 Introduction The set covering machine (SCM) has recently been proposed by Marchand and Shawe-Taylor (2001, 2002) as an alternative to the support vector machine (SVM) when the objective is to obtain a sparse classifier with good generalization. Given a feature space, the SCM tries to find the smallest conjunction (or disjunction) of features that gives a small training error. In contrast, the SVM tries to find the maximum soft-margin separating hyperplane on all the features. Hence, the two learning machines are fundamentally different in what they are trying to achieve on the training data. To investigate if it is worthwhile to consider larger classes of functions than just the conjunctions and disjunctions that are used in the SCM, we focus here on the class of decision lists introduced by Rivest (1987), because this class strictly includes both conjunctions and disjunctions and is strictly included in the class of linear threshold functions (Marchand and Golea, 1993). 
Hence, we denote by decision list machine (DLM) any classifier which computes a decision list of Boolean-valued features, including features that are possibly constructed from the data. In this paper, we use the set of features introduced by Marchand and Shawe-Taylor (2001, 2002) known as data-dependent balls. By extending the sample compression technique of Littlestone and Warmuth (1986), we bound the generalization error of the DLM with data-dependent balls in terms of the number of errors and the number of balls it achieves on the training data. We also show that the DLM with balls can provide better generalization than the SCM with this same set of features on some natural data sets. 2 The Decision List Machine Let x denote an arbitrary n-dimensional vector of the input space X, which could be an arbitrary subset of ℜn. We consider binary classification problems for which the training set S = P ∪ N consists of a set P of positive training examples and a set N of negative training examples. We define a feature as an arbitrary Boolean-valued function that maps X onto {0, 1}. Given any set H = {hi(x) : i = 1, …, |H|} of features hi(x) and any training set S, the learning algorithm returns a small subset R ⊂ H of features. Given that subset R, and an arbitrary input vector x, the output f(x) of the Decision List Machine (DLM) is defined to be:

If (h1(x)) then b1
Else If (h2(x)) then b2
...
Else If (hr(x)) then br
Else br+1

where each bi ∈ {0, 1} defines the output of f(x) if and only if hi is the first feature to be satisfied on x (i.e. the smallest i for which hi(x) = 1). The constant br+1 (where r = |R|) is known as the default value. Note that f computes a disjunction of the his whenever bi = 1 for i = 1 … r and br+1 = 0. To compute a conjunction of his, we simply place in f the negation of each hi with bi = 0 for i = 1 … r and br+1 = 1. Note, however, that a DLM f that contains one or many alternations (i.e. 
a pair (bi, bi+1) for which bi ≠ bi+1 for i < r) cannot be represented as a (pure) conjunction or disjunction of his (and their negations). Hence, the class of decision lists strictly includes conjunctions and disjunctions. From this definition, it seems natural to use the following greedy algorithm for building a DLM from a training set. For a given set S′ = P′ ∪ N′ of examples (where P′ ⊆ P and N′ ⊆ N) and a given set H of features, consider only the features hi ∈ H which make no errors on either P′ or N′. If hi makes no error with P′, let Qi be the subset of examples of N′ on which hi makes no errors. Otherwise, if hi makes no error with N′, let Qi be the subset of examples of P′ on which hi makes no errors. In both cases we say that hi is covering Qi. The greedy algorithm starts with S′ = S and an empty DLM. Then it finds the hi with the largest |Qi| and appends this hi to the DLM. It then removes Qi from S′ and repeats to find the hk with the largest |Qk| until either P′ or N′ is empty. It finally assigns br+1 to the class label of the remaining non-empty set. Following Rivest (1987), this greedy algorithm is assured to build a DLM that makes no training errors whenever there exists a DLM on a set E ⊆ H of features that makes zero training errors. However, this constraint is not really required in practice since we do want to permit the user of a learning algorithm to control the tradeoff between the accuracy achieved on the training data and the complexity (here the size) of the classifier. Indeed, a small DLM which makes a few errors on the training set might give better generalization than a larger DLM (with more features) which makes zero training errors. One way to include this flexibility is to early-stop the greedy algorithm when there remain a few more training examples to be covered. 
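The evaluation rule of the DLM defined above amounts to a first-match scan over the ordered features; a minimal sketch (illustrative only):

```python
def dlm_predict(rules, default, x):
    """Evaluate a decision list: rules is an ordered list of
    (feature, output) pairs; the first satisfied feature decides,
    otherwise the default value br+1 is returned."""
    for h, b in rules:
        if h(x):
            return b
    return default

# A disjunction h1 OR h2 as a decision list: every b_i = 1, default 0.
disjunction = [(lambda x: x[0] == 1, 1), (lambda x: x[1] == 1, 1)]
```

Setting every output to 1 with default 0 recovers a disjunction, as noted above; negating each feature and flipping the outputs recovers a conjunction.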
But a further reduction in the size of the DLM can be accomplished by considering features hi that do make a few errors on P′ (or N′) if many more examples Qi ∈ N′ (or Qi ∈ P′) can be covered. Hence, to include this flexibility in choosing the proper tradeoff between complexity and accuracy, we propose the following modification of the greedy algorithm. For every feature hi, let us denote by Pi the subset of P′ on which hi makes no errors and by Ni the subset of N′ on which hi makes no errors. The above greedy algorithm considers only features for which we have either Pi = P′ or Ni = N′, but to allow small deviations from these choices, we define the usefulness Ui of feature hi by

Ui def= max{|Pi| − pn·|N′ − Ni|, |Ni| − pp·|P′ − Pi|}

where pn denotes the penalty of making an error on a negative example whereas pp denotes the penalty of making an error on a positive example.

Algorithm BuildDLM(P, N, pp, pn, s, H)
Input: A set P of positive examples, a set N of negative examples, the penalty values pp and pn, a stopping point s, and a set H = {hi(x) : i = 1, …, |H|} of Boolean-valued features.
Output: A decision list f consisting of a set R = {(hi, bi) : i = 1, …, r} of features hi with their corresponding output values bi, and a default value br+1.
Initialization: R = ∅, P′ = P, N′ = N.
1. For each hi ∈ H, let Pi and Ni be respectively the subsets of P′ and N′ correctly classified by hi. For each hi compute Ui, where Ui def= max{|Pi| − pn·|N′ − Ni|, |Ni| − pp·|P′ − Pi|}.
2. Let hk be a feature with the largest value of Uk.
3. If (|Pk| − pn·|N′ − Nk| ≥ |Nk| − pp·|P′ − Pk|) then R = R ∪ {(hk, 1)}, P′ = P′ − Pk, N′ = Nk.
4. If (|Pk| − pn·|N′ − Nk| < |Nk| − pp·|P′ − Pk|) then R = R ∪ {(¬hk, 0)}, N′ = N′ − Nk, P′ = Pk.
5. Let r = |R|. If (r < s and P′ ≠ ∅ and N′ ≠ ∅) then go to step 1.
6. Set br+1 = ¬br. Return f.
Figure 1: The learning algorithm for the Decision List Machine

Hence, each greedy step will be modified as follows. 
For a given set S′ = P′ ∪ N′, we will select the feature hi with the largest value of Ui and append this hi to the DLM. If |Pi| − pn·|N′ − Ni| ≥ |Ni| − pp·|P′ − Pi|, we will then remove from S′ every example in Pi (since they are correctly classified by the current DLM) and we will also remove from S′ every example in N′ − Ni (since a DLM with this feature is already misclassifying N′ − Ni, and, consequently, the training error of the DLM will not increase if later features err on examples in N′ − Ni). Otherwise, if |Pi| − pn·|N′ − Ni| < |Ni| − pp·|P′ − Pi|, we will then remove from S′ the examples in Ni ∪ (P′ − Pi). Hence, we recover the simple greedy algorithm when pp = pn = ∞. The formal description of our learning algorithm is presented in Figure 1. The penalty parameters pp and pn and the early stopping point s are the model-selection parameters that give the user the ability to control the proper tradeoff between the training accuracy and the size of the DLM. Their values could be determined either by using k-fold cross-validation, or by computing our bound (see section 4) on the generalization error. It therefore generalizes the learning algorithm of Rivest (1987) by providing this complexity-accuracy tradeoff and by permitting the use of any kind of Boolean-valued features, including those that are constructed from the data. Finally, let us mention that Dhagat and Hellerstein (1994) did propose an algorithm for learning decision lists of few relevant attributes, but this algorithm is not practical in the sense that it provides no tolerance to noise and does not easily accommodate parameters to provide a complexity-accuracy tradeoff. 3 Data-Dependent Balls For each training example xi with label yi ∈ {0, 1} and (real-valued) radius ρ, we define feature hi,ρ to be the following data-dependent ball centered on xi:

hi,ρ(x) def= hρ(x, xi) = { yi if d(x, xi) ≤ ρ; ȳi otherwise }

where ȳi denotes the Boolean complement of yi and d(x, x′) denotes the distance between x and x′. 
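A ball feature and the usefulness score it is ranked by can be sketched directly; the conventions below (labels in {0, 1}, the L1 metric as one possible choice of d) are our own illustrative ones:

```python
def ball_feature(center, label, radius, dist):
    """A data-dependent ball h(x): the center's label inside the
    ball, its Boolean complement outside."""
    return lambda x: label if dist(x, center) <= radius else 1 - label

def usefulness(h, P, N, pp, pn):
    """Ui = max(|Pi| - pn*|N' - Ni|, |Ni| - pp*|P' - Pi|), where P and
    N are the remaining positive and negative examples."""
    Pi = sum(1 for x in P if h(x) == 1)  # positives h classifies correctly
    Ni = sum(1 for x in N if h(x) == 0)  # negatives h classifies correctly
    return max(Pi - pn * (len(N) - Ni), Ni - pp * (len(P) - Pi))

l1 = lambda a, b: sum(abs(u - v) for u, v in zip(a, b))
```

Each greedy step of BuildDLM would evaluate `usefulness` for every candidate ball and append the maximizer.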
Note that any metric can be used for d. So far, we have used only the L1, L2 and L∞ metrics, but it is certainly worthwhile to try to use metrics that actually incorporate some knowledge about the learning task. Moreover, we could use metrics that are obtained from the definition of an inner product k(x, x′). Given a set S of m training examples, our initial set of features consists, in principle, of H = ∪_{i∈S} ∪_{ρ∈[0,∞)} {hi,ρ}. But obviously, for each training example xi, we need only to consider the set of m − 1 distances {d(xi, xj)}, j ≠ i. This reduces our initial set H to O(m²) features. In fact, from the description of the DLM in the previous section, it follows that the ball with the largest usefulness belongs to one of the following types of balls: Pi, Po, Ni, and No. Balls of type Pi (positive inside) are balls having a positive example x for their center and a radius given by ρ = d(x, x′) − ϵ for some negative example x′ (that we call a border point) and a very small positive number ϵ. Balls of type Po (positive outside) have a negative example center x and a radius ρ = d(x, x′) + ϵ given by a negative border x′. Balls of type Ni (negative inside) have a negative center x and a radius ρ = d(x, x′) − ϵ given by a positive border x′. Balls of type No (negative outside) have a positive center x and a radius ρ = d(x, x′) + ϵ given by a positive border x′. This proposed set of features, constructed from the training data, provides to the user full control for choosing the proper tradeoff between training accuracy and function size. 4 Bound on the Generalization Error Note that we cannot use the "standard" VC theory to bound the expected loss of DLMs with data-dependent features because the VC dimension is a property of a function class defined on some input domain without reference to the data. Hence, we propose another approach. 
Since our learning algorithm tries to build a DLM with the smallest number of data-dependent balls, we seek a bound that depends on this number and, consequently, on the number of examples that are used in the final classifier (the hypothesis). We can thus think of our learning algorithm as compressing the training set into a small subset of examples that we call the compression set. It was shown by Littlestone and Warmuth (1986) and Floyd and Warmuth (1995) that we can bound the generalization error of the hypothesis f if we can always reconstruct f from the compression set. Hence, the only requirement is the existence of such a reconstruction function and its only purpose is to permit the exact identification of the hypothesis from the compression set and, possibly, additional bits of information. Not surprisingly, the bound on the generalization error increases rapidly in terms of these additional bits of information. So we must make minimal usage of them. We now describe our reconstruction function and the additional information that it needs to assure, in all cases, the proper reconstruction of the hypothesis from a compression set. Our proposed scheme works in all cases provided that the learning algorithm returns a hypothesis that always correctly classifies the compression set (but not necessarily all of the training set). Hence, we need to add this constraint in BuildDLM for our bound to be valid, but, in practice, we have not seen any significant performance variation introduced by this constraint. We first describe the simpler case where only balls of types Pi and Ni are permitted and, later, describe the additional requirements that are introduced when we also permit balls of types Po and No. Given a compression set Λ (returned by the learning algorithm), we first partition it into four disjoint subsets Cp, Cn, Bp, and Bn consisting of positive ball centers, negative ball centers, positive borders, and negative borders respectively. 
Each example in Λ is specified only once. When only balls of type Pi and Ni are permitted, the center of a ball cannot be the center of another ball, since a center is removed from the remaining examples to be covered when its ball is added to the DLM. But a center can be the border of a previous ball in the DLM, and a border can be the border of more than one ball. Hence, points in $B_p \cup B_n$ are examples that are borders without being the center of another ball. Because of the crucial importance of the ordering of the features in a decision list, these sets do not provide enough information by themselves to reconstruct the hypothesis. To specify the ordering of the ball centers it is sufficient to provide $\log_2(r!)$ bits of additional information, where the number r of balls is given by $r = c_p + c_n$ for $c_p = |C_p|$ and $c_n = |C_n|$. To find the radius $\rho_i$ for each center $x_i$ we start with $C'_p = C_p$, $C'_n = C_n$, $B'_p = B_p$, $B'_n = B_n$, and do the following, sequentially from the first center to the last. If center $x_i \in C'_p$, then the radius is given by $\rho_i = \min_{x_j \in C'_n \cup B'_n} d(x_i, x_j) - \epsilon$, and we remove center $x_i$ from $C'_p$ and any other point from $B'_p$ covered by this ball (to find the radii of the other balls). If center $x_i \in C'_n$, then the radius is given by $\rho_i = \min_{x_j \in C'_p \cup B'_p} d(x_i, x_j) - \epsilon$, and we remove center $x_i$ from $C'_n$ and any other point from $B'_n$ covered by this ball. The output $b_i$ for each ball $h_i$ is 1 if the center $x_i \in C_p$ and 0 otherwise. This reconstructed decision list of balls will be the same as the hypothesis if and only if the compression set is always correctly classified by the learning algorithm. Once we can identify the hypothesis from the compression set, we can bound its generalization error.

Theorem 1 Let $S = P \cup N$ be a training set of positive and negative examples of size $m = m_p + m_n$.
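The sequential radius-recovery rule above can be sketched in a few lines. The interface is hypothetical (the paper specifies only the rule itself): `order` stands for the permutation conveyed by the additional bits of information, and a positive center takes the distance to the nearest remaining opposite-label point minus ϵ, after which the center and the covered same-label borders are removed.

```python
import numpy as np

def reconstruct_radii(centers, labels, borders, border_labels, order, eps=1e-6):
    """Recover the ball radii of a Pi/Ni decision list from its compression set.

    centers, borders: arrays of points; labels, border_labels: 1 = positive.
    order: permutation of center indices (the log2(r!) bits of side info).
    """
    d = lambda a, b: np.sqrt(((a - b) ** 2).sum())
    live_centers = {i: centers[i] for i in range(len(centers))}
    live_borders = {j: (borders[j], border_labels[j]) for j in range(len(borders))}
    radii = []
    for i in order:
        x = live_centers.pop(i)
        want = 0 if labels[i] == 1 else 1   # opposite-label points set the radius
        pool = [c for k, c in live_centers.items() if labels[k] == want]
        pool += [b for b, lb in live_borders.values() if lb == want]
        rho = min(d(x, z) for z in pool) - eps
        radii.append(rho)
        # drop same-label border points covered by this ball
        live_borders = {j: (b, lb) for j, (b, lb) in live_borders.items()
                        if not (lb == labels[i] and d(x, b) <= rho)}
    return radii
```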
Let A be the learning algorithm BuildDLM that uses data-dependent balls of type Pi and Ni for its set of features, with the constraint that the returned function A(S) always correctly classifies every example in the compression set. Suppose that A(S) contains r balls, makes $k_p$ training errors on P and $k_n$ training errors on N (with $k = k_p + k_n$), and has a compression set $\Lambda = C_p \cup C_n \cup B_p \cup B_n$ (as defined above) of size $\lambda = c_p + c_n + b_p + b_n$. With probability $1 - \delta$ over all random training sets S of size m, the generalization error er(A(S)) of A(S) is bounded by

$$\mathrm{er}(A(S)) \leq 1 - \exp\left\{ \frac{-1}{m - \lambda - k} \left( \ln B_\lambda + \ln(r!) + \ln\frac{1}{\delta_\lambda} \right) \right\}$$

where

$$\delta_\lambda \stackrel{\mathrm{def}}{=} \left(\frac{\pi^2}{6}\right)^{-6} \cdot \bigl((c_p+1)(c_n+1)(b_p+1)(b_n+1)(k_p+1)(k_n+1)\bigr)^{-2} \cdot \delta$$

and

$$B_\lambda \stackrel{\mathrm{def}}{=} \binom{m_p}{c_p}\binom{m_p - c_p}{b_p}\binom{m_n}{c_n}\binom{m_n - c_n}{b_n}\binom{m_p - c_p - b_p}{k_p}\binom{m_n - c_n - b_n}{k_n}$$

Proof Let X be the set of training sets of size m. Let us first bound the probability $P_{\mathbf{m}} \stackrel{\mathrm{def}}{=} P\{S \in X : \mathrm{er}(A(S)) \geq \epsilon \mid \mathbf{m}(S) = \mathbf{m}\}$ given that $\mathbf{m}(S)$ is fixed to some value $\mathbf{m}$, where $\mathbf{m} \stackrel{\mathrm{def}}{=} (m, m_p, m_n, c_p, c_n, b_p, b_n, k_p, k_n)$. For this, denote by $E_p$ the subset of P on which A(S) makes an error, and similarly for $E_n$. Let I be the message of $\log_2(r!)$ bits needed to specify the ordering of the balls (as described above). Now define

$$P'_{\mathbf{m}} \stackrel{\mathrm{def}}{=} P\{S \in X : \mathrm{er}(A(S)) \geq \epsilon \mid C_p = S_1,\, C_n = S_2,\, B_p = S_3,\, B_n = S_4,\, E_p = S_5,\, E_n = S_6,\, I = I_0,\, \mathbf{m}(S) = \mathbf{m}\}$$

for some fixed set of disjoint subsets $\{S_i\}_{i=1}^{6}$ of S and some fixed information message $I_0$. Since $B_\lambda$ is the number of different ways of choosing the different compression subsets and the set of error points in a training set of fixed $\mathbf{m}$, we have $P_{\mathbf{m}} \leq (r!) \cdot B_\lambda \cdot P'_{\mathbf{m}}$, where the first factor comes from the additional information needed to specify the ordering of the r balls. Note that the hypothesis $f \stackrel{\mathrm{def}}{=} A(S)$ is fixed in $P'_{\mathbf{m}}$ (because the compression set is fixed and the required information bits are given).
To bound $P'_{\mathbf{m}}$, we make the standard assumption that each example x is independently and identically generated according to some fixed but unknown distribution. Let p be the probability of obtaining a positive example, let α be the probability that the fixed hypothesis f makes an error on a positive example, and let β be the probability that f makes an error on a negative example. Let $t_p \stackrel{\mathrm{def}}{=} c_p + b_p + k_p$ and $t_n \stackrel{\mathrm{def}}{=} c_n + b_n + k_n$. We then have:

$$P'_{\mathbf{m}} = (1-\alpha)^{m_p - t_p}(1-\beta)^{m - t_n - m_p} \binom{m - t_n - t_p}{m_p - t_p}\, p^{m_p - t_p}(1-p)^{m - t_n - m_p}$$
$$\leq \sum_{m'=t_p}^{m - t_n} (1-\alpha)^{m' - t_p}(1-\beta)^{m - t_n - m'} \binom{m - t_n - t_p}{m' - t_p}\, p^{m' - t_p}(1-p)^{m - t_n - m'}$$
$$= \bigl[(1-\alpha)p + (1-\beta)(1-p)\bigr]^{m - t_n - t_p} = (1 - \mathrm{er}(f))^{m - t_n - t_p} \leq (1 - \epsilon)^{m - t_n - t_p}$$

Consequently, $P_{\mathbf{m}} \leq (r!) \cdot B_\lambda \cdot (1-\epsilon)^{m - t_n - t_p}$. The theorem is obtained by bounding this last expression by the proposed value for $\delta_\lambda(\mathbf{m})$ and solving for ϵ, since in that case we satisfy the requirement that

$$P\{S \in X : \mathrm{er}(A(S)) \geq \epsilon\} = \sum_{\mathbf{m}} P_{\mathbf{m}}\, P\{S \in X : \mathbf{m}(S) = \mathbf{m}\} \leq \sum_{\mathbf{m}} \delta_\lambda(\mathbf{m})\, P\{S \in X : \mathbf{m}(S) = \mathbf{m}\} \leq \sum_{\mathbf{m}} \delta_\lambda(\mathbf{m}) = \delta$$

where the sums are over all possible realizations of $\mathbf{m}$ for a fixed $m_p$ and $m_n$. With the proposed value for $\delta_\lambda(\mathbf{m})$, the last equality follows from the fact that $\sum_{i=1}^{\infty} 1/i^2 = \pi^2/6$. The use of balls of type Po and No introduces a few more difficulties that are taken into account by sending more bits to the reconstruction function. First, the center of a ball of type Po or No can be used for more than one ball, since the covered examples are outside the ball. Hence, the number r of balls can now exceed $c_p + c_n = c$. So, to specify r, we can send $\log_2(\lambda)$ bits. Then, for each ball, we can send $\log_2 c$ bits to specify which center the ball is using, and another bit to specify whether the examples covered are inside or outside the ball. Using the same notation as before, the radius $\rho_i$ of a center $x_i$ of a ball of type Po is given by $\rho_i = \max_{x_j \in C'_n \cup B'_n} d(x_i, x_j) + \epsilon$, and for a center $x_i$ of a ball of type No, the radius is given by $\rho_i = \max_{x_j \in C'_p \cup B'_p} d(x_i, x_j) + \epsilon$.
With these modifications, the same proof as for Theorem 1 can be used to obtain the next theorem.

Theorem 2 Let A be the learning algorithm BuildDLM that uses data-dependent balls of type Pi, Ni, Po, and No for its set of features. Consider all the definitions used for Theorem 1 with $c \stackrel{\mathrm{def}}{=} c_p + c_n$. With probability $1 - \delta$ over all random training sets S of size m, we have

$$\mathrm{er}(A(S)) \leq 1 - \exp\left\{ \frac{-1}{m - \lambda - k} \left( \ln B_\lambda + \ln \lambda + r\ln(2c) + \ln\frac{1}{\delta_\lambda} \right) \right\}$$

Basically, our bound states that good generalization is expected when we can find a small DLM that makes few training errors. In principle, we could use it as a guide for choosing the model selection parameters s, $p_p$, and $p_n$, since it depends only on what the hypothesis has achieved on the training data.

5 Empirical Results on Natural Data

We have compared the practical performance of the DLM with that of the support vector machine (SVM) equipped with a radial basis function kernel of variance 1/γ. The data sets used and the results obtained are reported in Table 1. All these data sets were obtained from the machine learning repository at UCI. For each data set, we removed all examples that contained attributes with unknown values (this substantially reduced the "votes" data set) and removed examples with contradictory labels (this occurred only for a few examples in the Haberman data set). The remaining number of examples for each data set is reported in Table 1. No other preprocessing of the data (such as scaling) was performed. For all these data sets, we used the 10-fold cross-validation error as an estimate of the generalization error. The values reported are expressed as the total number of errors (i.e., the sum of errors over all testing sets). We ensured that each training set and each testing set used in the 10-fold cross-validation process was the same for each learning machine (i.e., each machine was trained on the same training sets and tested on the same testing sets).
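The Theorem 2 bound above is easy to evaluate numerically when using it as a model-selection guide, as the text suggests. The sketch below is a direct transcription of the theorem (variable names follow the statement; `math.comb` supplies the binomials):

```python
import math

def dlm_bound_thm2(m, mp, mn, cp, cn, bp, bn, kp, kn, r, delta=0.05):
    """Evaluate the Theorem 2 risk bound for given compression-set sizes,
    training errors, and number of balls r."""
    C = math.comb
    B = (C(mp, cp) * C(mp - cp, bp) * C(mn, cn) * C(mn - cn, bn)
         * C(mp - cp - bp, kp) * C(mn - cn - bn, kn))
    delta_lam = ((math.pi ** 2 / 6) ** -6
                 * ((cp + 1) * (cn + 1) * (bp + 1) * (bn + 1)
                    * (kp + 1) * (kn + 1)) ** -2 * delta)
    lam, k, c = cp + cn + bp + bn, kp + kn, cp + cn
    log_term = (math.log(B) + math.log(lam) + r * math.log(2 * c)
                + math.log(1.0 / delta_lam))
    return 1.0 - math.exp(-log_term / (m - lam - k))
```

As expected from the form of the bound, for a fixed compression-set size and number of training errors the bound tightens as m grows.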
The results reported for the SVM are only those obtained for the best values of the kernel parameter γ and the soft margin parameter C found among an exhaustive list of many values. The values of these parameters are reported in Marchand and Shawe-Taylor (2002). The "size" column refers to the average number of support vectors contained in the SVM machines obtained from the 10 different training sets of 10-fold cross-validation. We report the results for the SCM (Marchand and Shawe-Taylor, 2002) and the DLM when both machines are equipped with data-dependent balls under the L2 metric.

Table 1: Data sets and results for SVMs, SCMs, and DLMs.

Data Set          SVM            SCM with balls          DLM with balls
Name      #exs    size  errors   T    p    s    errors   T   pp    pn   s    errors
BreastW   683     58    19       c    1.8  2    15       c   2.1   1    2    14
Votes     52      18    3        d    0.9  1    6        s   0.1   0.3  1    3
Pima      768     526   203      c    1.1  3    189      c   1.5   1.5  6    189
Haberman  294     146   71       c    1.4  1    71       s   2     3    7    65
Bupa      345     266   107      d    2.8  9    106      c   2     2    4    108
Glass     214     125   34       d    ∞    2    36       c   4.8   ∞    12   28
Credit    653     423   190      d    1.2  4    194      c   1     ∞    11   197

For the SCM, the T column refers to the type of the best machine found (c for conjunction, d for disjunction), the p column to the best value found for the penalty parameter, and the s column to the best stopping point in terms of the number of balls. The same definitions apply to the DLMs, except that two different penalty values ($p_p$ and $p_n$) are used. In the T column of the DLM results, s (simple) indicates that the DLM was trained using only balls of type Pi and Ni, and c (complex) that all four possible types of balls were used (see Section 3). Again, only the values that gave the smallest 10-fold cross-validation error are reported. The most striking feature in Table 1 is the level of sparsity achieved by the SCM and the DLM in comparison with the SVM; the difference is huge. The other important feature is that DLMs often provide slightly better generalization than SCMs and SVMs.
Hence, DLMs can provide a good alternative to SCMs and SVMs.

Acknowledgments

Work supported by NSERC grant OGP0122405 and, in part, by the EU under the NeuroCOLT2 Working Group, No EP 27150.

References

Aditi Dhagat and Lisa Hellerstein. PAC learning with irrelevant attributes. In Proc. of the 35th Annual Symposium on Foundations of Computer Science, pages 64–74. IEEE Computer Society Press, Los Alamitos, CA, 1994.
Sally Floyd and Manfred Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, 1986.
Mario Marchand and Mostefa Golea. On learning simple neural concepts: from halfspace intersections to neural decision lists. Network: Computation in Neural Systems, 4:67–85, 1993.
Mario Marchand and John Shawe-Taylor. Learning with the set covering machine. Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 345–352, 2001.
Mario Marchand and John Shawe-Taylor. The set covering machine. Journal of Machine Learning Research (to appear), 2002.
Ronald L. Rivest. Learning decision lists. Machine Learning, 2:229–246, 1987.
2002
How to Combine Color and Shape Information for 3D Object Recognition: Kernels do the Trick

B. Caputo, Smith-Kettlewell Eye Research Institute, 2318 Fillmore Street, 94115 San Francisco, California, USA, caputo@ski.org
Gy. Dorko, Department of Computer Science, Chair for Pattern Recognition, University of Erlangen-Nuremberg, dorko@informatik.uni-erlangen.de

Abstract This paper presents a kernel method that makes it possible to combine color and shape information for appearance-based object recognition. It does not require defining a new common representation, but instead uses the power of kernels to combine different representations in an effective manner. These results are achieved using results from the statistical mechanics of spin glasses combined with Markov random fields via kernel functions. Experiments show an increase in recognition rate of up to 5.92% with respect to conventional strategies.

1 Introduction

Consider the two cars in Figure 1. They look very similar, but this would not be the case if we looked at color pictures: as the left car is yellow and the right car is red, we would realize at first glance that they are different. This simple example shows that color and shape information are both important cues for object recognition. In spite of this, only a few systems employ both. This is because most representations proposed in the literature are not suitable for both types of information [5, 11, 13, 2]. Some authors have tackled this problem by building new representations containing both color and shape information; these approaches show very good performance [7, 12, 6]. However, this strategy has two important drawbacks:
• Both types of information must always be used. Although there are many cases where it is convenient to have both, a large literature shows that color-only or shape-only representations work very well for many applications [9, 13, 11, 2].
A new common representation does not always permit the use of just color or just shape information alone, depending on the task considered;
• the dimension of the feature vector. If the new representation carries as much information as the separate representations do, then we must expect it to have a higher dimensionality than each separate representation alone, with all the risks of a curse-of-dimensionality effect. If the dimension of the new representation vector is kept under control, we can expect the representation to contain less information than the single ones, with a possible decrease in effectiveness.

Figure 1: An example of objects similar with respect to shape but not with respect to color (the left car is yellow while the right car is red).

Our goal in this paper is to present a system that uses both types of information while keeping them distinct, allowing the flexibility to use the information sometimes combined, sometimes separated, depending on the application considered. We achieve this goal by focusing on how two given shape and color representations can be combined together as they are, rather than defining a new representation. We obtain this using Spin Glass-Markov Random Fields (SG-MRF), a new kernel method that integrates results from the statistical physics of spin glasses with Gibbs probability distributions via nonlinear kernel mapping. SG-MRFs have been used for robust appearance-based object recognition with very good results, using a kernelized Hopfield energy [3]. Here we extend SG-MRF to a new SG-like energy function, inspired by the ultrametric properties of the SG phase space. The structure of this energy provides a natural framework for combining shape and color representations together, without defining a new common representation (such as a concatenated one; see for instance [7]).
This approach presents two main advantages:
• it permits us to use existing and well-tested representations for both shape and color information;
• it permits us to use this knowledge in a flexible manner, depending on the task considered.
To the best of our knowledge, there are no previous similar approaches to this problem. Experimental results show the effectiveness of the newly proposed kernel method. The paper is organized as follows: Section 2 defines the probabilistic framework for object recognition, Section 3 reviews SG-MRF, and Section 4 presents the new energy function and how it can be used for combining color and shape information. Section 5 presents experiments that show the effectiveness of our approach compared to other conventional strategies ($\chi^2$, $\cap$, and SVM [10, 14]). The paper concludes with a summary discussion.

2 Probabilistic Appearance-based Object Recognition

Probabilistic appearance-based object recognition methods consider images as random feature vectors. Let $x = [x_{ij}]$, $i = 1,\ldots,N$, $j = 1,\ldots,M$, be an $M \times N$ image. We will consider each image as a random feature vector $x \in \mathbb{R}^{MN}$. Assume we have k different classes $\Omega_1, \Omega_2, \ldots, \Omega_k$ of objects, and that for each object a set of $n_j$ data samples is given, $d_j = \{x_1^j, x_2^j, \ldots, x_{n_j}^j\}$, $j = 1,\ldots,k$. We will assign each object to a pattern class $\Omega_1, \Omega_2, \ldots, \Omega_k$. How the object class $\Omega_j$ is represented, given a set of data samples $d_j$ (relative to that object class), varies among appearance-based approaches: it can consider shape information only, color information only, or both. This is equivalent to considering a set of features $\{h_1^j, h_2^j, \ldots, h_{n_j}^j\}$, $j = 1,\ldots,k$, where each feature vector $h_i^j$ is computed from the image $x_i^j$: $h_i^j = T(x_i^j)$, $h_i^j \in G \equiv \mathbb{R}^m$. Assuming that the data samples $d_j$ are a sufficient statistic for the pattern class $\Omega_j$, the goal will be to estimate the probability distribution $P_{\Omega_j}(h)$ that has generated them.
Then, given a test image x and its associated feature vector h, the decision will be made using a Maximum A Posteriori (MAP) classifier:

$$j^* = \arg\max_j P_{\Omega_j}(h) = \arg\max_j P(\Omega_j \mid h) = \arg\max_j P(h \mid \Omega_j)\, P(\Omega_j), \quad (1)$$

using Bayes' rule. The $P(h|\Omega_j)$ are the likelihood functions (LFs) and the $P(\Omega_j)$ are the prior probabilities of the classes. In the rest of the paper we will assume that the prior $P(\Omega_j)$ is the same for all object classes; thus the Bayes classifier (1) simplifies to

$$j^* = \arg\max_j P(h \mid \Omega_j). \quad (2)$$

A possible strategy for modeling $P(h|\Omega_j)$ is to use Gibbs distributions within a Markov Random Field (MRF) framework. The MRF joint probability distribution is given by

$$P(h \mid \Omega_j) = \frac{\exp(-E(h \mid \Omega_j))}{Z}, \qquad Z = \sum_{\{h\}} \exp(-E(h \mid \Omega_j)). \quad (3)$$

The normalizing constant Z is called the partition function, and $E(h|\Omega_j)$ is the energy function. Using MRF modeling for appearance-based object recognition, eq. (2) becomes

$$j^* = \arg\max_j P(h \mid \Omega_j) = \arg\min_j E(h \mid \Omega_j). \quad (4)$$

Only a few MRF approaches have been proposed for high-level vision problems such as object recognition [8], due to the modeling problem for MRFs on irregular sites (for a detailed discussion of this point, we refer the reader to [3]). Spin Glass-Markov Random Fields overcome this limitation and can be used effectively for robust appearance-based object recognition [3]. The next sections review SG-MRF and introduce a new energy function that allows shape-only and color-only representations to be combined in a common probabilistic framework.

3 Spin Glass-Markov Random Fields

Consider k object classes $\Omega_1, \Omega_2, \ldots, \Omega_k$, and for each object a set of $n_j$ data samples, $d_j = \{x_1^j, \ldots, x_{n_j}^j\}$, $j = 1,\ldots,k$. We will suppose to extract, from each set of data samples $d_j$, a set of features $\{h_1^j, \ldots, h_{n_j}^j\}$. For instance, $h_i^j$ can be a color histogram computed from $x_i^j$. The SG-MRF probability distribution is given by

$$P_{SG\text{-}MRF}(h \mid \Omega_j) = \frac{1}{Z} \exp\left(-E_{SG\text{-}MRF}(h \mid \Omega_j)\right), \quad (5)$$

where $E_{SG\text{-}MRF}(h|\Omega_j)$ is a kernelized spin glass energy function.

Figure 2: Hierarchical structure induced by the ultrametric energy function (an ancestor node with its descendants).
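The decision rule of eqs. (1)-(4) reduces, under equal priors, to picking the class of minimum energy. A minimal sketch (illustrative names; it assumes, as in eq. (4), a class-independent normalization Z so that maximizing the Gibbs likelihood is equivalent to minimizing the energy):

```python
import numpy as np

def map_classify(h, energy_fns):
    """MAP/Bayes decision with equal priors: the class whose Gibbs
    likelihood exp(-E(h|Omega_j))/Z is largest, i.e. of minimum energy."""
    return int(np.argmin([E(h) for E in energy_fns]))
```

For example, with two toy quadratic energies centered at 0 and 3, a feature vector near 3 is assigned to the second class.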
The most general SG energy is given by [1]

$$E = -\sum_{(i,j)} J_{ij}\, s_i s_j, \quad i,j = 1,\ldots,N, \quad (6)$$

where the $s_i$ are random variables taking values in $[-1, +1]$, $s = (s_1, \ldots, s_N)$ is a configuration, and $J = [J_{ij}]$, $(i,j) = 1,\ldots,N$, is the connection matrix. When $J_{ij}$ is given by Hopfield's prescription,

$$J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)}, \quad (7)$$

with $\{\xi^{(\mu)}\}_{\mu=1}^{p}$ given configurations of the system (prototypes) having the following properties: (a) $\xi^{(\mu)} \perp \xi^{(\nu)}$, $\forall \mu \neq \nu$; (b) $p = \alpha N$, $\alpha \leq 0.14$, $N \to \infty$, then it can be demonstrated that $E_{SG\text{-}MRF}$ becomes [3]

$$E_{SG\text{-}MRF}(h \mid \Omega_j) = -\sum_{\mu=1}^{p_j} \left[ K(h, h^{(\mu_j)}) \right]^2, \quad (8)$$

where the function $K(h, h^{(\mu_j)})$ is a generalized Gaussian kernel [14]:

$$K(x, y) = \exp\{-\rho\, d_{a,b}(x, y)\}, \quad (9)$$

and $\{h^{(\mu_j)}\}_{\mu=1}^{p_j}$, $j \in [1, k]$, are the prototypes selected (according to a chosen ansatz [3]) from the training data. The number of prototypes per class must be finite, and they must satisfy the condition $K(h^{(i)}, h^{(l)}) = 0$ for all $i, l = 1,\ldots,p_j$, $i \neq l$, and $j = 1,\ldots,k$. Note that SG-MRFs are defined on features rather than on raw pixel data. The sites are fully connected, which results in learning the neighborhood system from the training data instead of choosing it heuristically. A key characteristic of the model is that in SG-MRF the functional form of the energy is given by construction.

4 Ultrametric Spin Glass-Markov Random Fields

Consider the energy function (6) with the following connection matrix:

$$J_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \left( 1 + \sum_{\nu=1}^{q_\mu} \eta_i^{(\mu\nu)} \eta_j^{(\mu\nu)} \right) = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} + \frac{1}{N} \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} \xi_i^{(\mu\nu)} \xi_j^{(\mu\nu)} \quad (10)$$

with $\xi_i^{(\mu\nu)} = \xi_i^{(\mu)} \eta_i^{(\mu\nu)}$. This energy induces a hierarchical organization of the stored prototypes ([1]; see Figure 2). The set of prototypes $\{\xi^{(\mu)}\}_{\mu=1}^{p}$ is stored at the first level of the hierarchy; its members are usually called the ancestors. Each of them has $q_\mu$ descendants $\{\xi^{(\mu\nu)}\}_{\nu=1}^{q_\mu}$. The parameter $\eta_i^{(\mu\nu)}$ measures the similarity between ancestors and descendants.
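The SG-MRF energy of eqs. (8)-(9) can be sketched directly. The paper leaves the exact distance family $d_{a,b}$ to [14]; the sketch below assumes one common choice, $d_{a,b}(x,y) = \sum_i |x_i^a - y_i^a|^b$, applied to non-negative histogram features:

```python
import numpy as np

def gen_gauss_kernel(x, y, rho=1.0, a=0.5, b=1.0):
    """Generalized Gaussian kernel K(x,y) = exp(-rho * d_{a,b}(x,y)),
    assuming d_{a,b}(x,y) = sum_i |x_i^a - y_i^a|^b (one common choice)."""
    return np.exp(-rho * np.sum(np.abs(x ** a - y ** a) ** b))

def sg_mrf_energy(h, prototypes, **kw):
    """Kernelized Hopfield energy of eq. (8):
    E(h|Omega_j) = -sum_mu K(h, h^(mu))^2 over the class prototypes."""
    return -sum(gen_gauss_kernel(h, p, **kw) ** 2 for p in prototypes)
```

The energy is minimized (most negative) when the test feature coincides with a stored prototype, where each kernel term reaches its maximum of 1.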
The first term on the right of eq. (10) is the Hopfield energy (6)-(7); the second is a new term that allows us to store as prototypes patterns correlated with the $\{\xi^{(\mu)}\}_{\mu=1}^{p}$; this is the case if we want to store, as separate sets of prototypes, shape-only and color-only representations computed from the same view. This energy has $p + \sum_{\mu=1}^{p} q_\mu$ minima, of which p are absolute (ancestor level) and $\sum_{\mu=1}^{p} q_\mu$ are local (descendant level). For a complete discussion of the properties of this energy, we refer the reader to [1, 4]. Here we are interested in using this energy in the SG-MRF framework shown in Section 3. To this purpose, we show that the energy (6), with the connection matrix (10), can be written as a function of scalar products between configurations [4]:

$$E = -\sum_{(i,j)} \left[ \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{(\mu)} \xi_j^{(\mu)} \left( 1 + \sum_{\nu=1}^{q_\mu} \eta_i^{(\mu\nu)} \eta_j^{(\mu\nu)} \right) \right] s_i s_j = -\frac{1}{N} \left[ \sum_{\mu=1}^{p} \left( \xi^{(\mu)} \cdot s \right)^2 + \sum_{\mu=1}^{p} \sum_{\nu=1}^{q_\mu} \left( \xi^{(\mu\nu)} \cdot s \right)^2 \right]. \quad (11)$$

The ultrametric energy (11) can be kernelized as was done for the Hopfield energy, and can thus be used in an MRF framework. We call the resulting new MRF model Ultrametric Spin Glass-Markov Random Fields (USG-MRF). Now consider the probabilistic appearance-based framework described in Section 2. Given a set of data samples $d_j$ for each object class $\Omega_j$, $j = 1,\ldots,k$, we extract two kinds of feature vectors: $\{h_{S,i}^j\}_{i=1}^{n_j}$ containing shape information and $\{h_{C,i}^j\}_{i=1}^{n_j}$ containing color information. USG-MRF provides a straightforward way to use the Bayes classifier (2) with both of these representations kept separate. We consider the color features $\{h_{C,i}^j\}_{i=1}^{n_j}$ at the ancestor level and the shape features $\{h_{S,i}^j\}_{i=1}^{n_j}$ at the descendant level. The USG-MRF energy function is

$$E_{USG\text{-}MRF}(h \mid \Omega_j) = -\sum_{\mu=1}^{p_j} \left[ K_C(h_C, \bar{h}_C^{(\mu)}) \right]^2 - \sum_{\mu=1}^{p_j} \sum_{\nu=1}^{q_\mu} \left[ K_S(h_S, \bar{h}_S^{(\mu\nu)}) \right]^2, \quad (12)$$

where $\{\bar{h}_C^{(\mu)}\}_{\mu=1}^{p_j}$ is the set of prototypes at the ancestor level, and $\{\bar{h}_S^{(\mu\nu)}\}_{\nu=1}^{q_\mu}$, $\mu = 1,\ldots,p_j$, the set of prototypes at the descendant level.
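The two-level sum in eq. (12) can be sketched as follows (illustrative names; `ancestors[mu]` is a color prototype and `descendants[mu]` the list of shape prototypes under it, and Kc, Ks are the two level-specific kernels):

```python
import numpy as np

def usg_mrf_energy(h_color, h_shape, ancestors, descendants, Kc, Ks):
    """Ultrametric energy of eq. (12): color prototypes at the ancestor
    level, shape prototypes at the descendant level."""
    e = 0.0
    for mu, hc in enumerate(ancestors):
        e -= Kc(h_color, hc) ** 2              # ancestor (color) term
        for hs in descendants[mu]:
            e -= Ks(h_shape, hs) ** 2          # descendant (shape) terms
    return e
```

With one ancestor and one descendant both equal to the test features, each kernel term equals 1 and the energy is exactly −2, its minimum for that prototype set.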
These prototypes are selected from the training data as described in Section 3 for SG-MRF. $K_C$ is the generalized Gaussian kernel at the ancestor level, and $K_S$ is the generalized Gaussian kernel at the descendant level. We stress that the kernel must be the same within each level of the hierarchy, but can differ between levels (that is, between ancestor and descendant). The Bayes classifier based on USG-MRF is

$$j^* = \arg\min_j E_{USG\text{-}MRF}(h \mid \Omega_j). \quad (13)$$

Note that the parametric form of the kernels is known (eq. (9)); thus, when (U)SG-MRF is used in a Bayes classifier for classification purposes, it permits learning the kernel to be used from the training data, with a leave-one-out strategy.

5 Experiments

In order to show the effectiveness of USG-MRF for appearance-based object recognition, we performed several sets of experiments. All of them were run on the COIL database [9]; it consists of 7200 color images of 100 objects (72 views per object); each image is 128 × 128 pixels. The images were obtained by placing the objects on a turntable and taking a view every 5°. In all the experiments we performed, the training set consisted of 12 views per object (one every 30°). The remaining views constituted the test set. Among the many representations proposed in the literature, we chose one shape-only and one color-only representation, and we ran experiments using these representations separated, concatenated together in a common feature vector, and combined together in the USG-MRF. The purpose of these experiments is to prove the effectiveness of the USG-MRF model rather than to select the optimal combination of shape and color representations. Thus, we limited the experiments to one shape-only and one color-only representation; but USG-MRF can be applied to any other kind of shape and/or color representation (see for instance [4]). As the color-only representation, we chose the two-dimensional rg Color Histogram (CH), with the resolution of each bin axis equal to 8 [13]. The CH was normalized to 1.
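The rg color histogram just described is simple to compute: each pixel is mapped to its chromaticity coordinates r = R/(R+G+B), g = G/(R+G+B), binned into an 8 × 8 grid, and the histogram is normalized to sum to 1. A sketch (illustrative; the treatment of black pixels, for which chromaticity is undefined, is our own choice):

```python
import numpy as np

def rg_histogram(rgb, bins=8):
    """2-D normalized chromaticity (r, g) histogram with `bins` bins per
    axis. `rgb` is an (N, M, 3) array of non-negative values."""
    px = rgb.astype(float).reshape(-1, 3)
    px = px[px.sum(axis=1) > 0]          # drop black pixels (undefined r, g)
    s = px.sum(axis=1)
    r = px[:, 0] / s
    g = px[:, 1] / s
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()             # normalize to 1
```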
As the shape-only representation, we chose Multidimensional receptive Field Histograms (MFH) [11], with two local characteristics based on Gaussian derivatives along the x and y directions, with σ = 1.0 and the resolution of each bin axis equal to 8. The histograms were normalized to 1. These two representations were used to perform the following sets of experiments:
• Shape experiments: we ran the experiments using the shape features only. Classification was performed using SG-MRF with the kernelized Hopfield energy (6)-(7). The kernel parameters (a, b, ρ) were learned using a leave-one-out strategy. The results were benchmarked against those obtained with the $\chi^2$ and $\cap$ similarity measures, which have proved very effective for this representation, and with SVM with a Gaussian kernel, $\rho \in [0.001, 10]$ (here we report only the best results obtained).
• Color experiments: we ran the experiments using the color features only. Classification and benchmarking were performed as in the shape experiments.
• Color-Shape experiments: we ran the experiments using the color and shape features concatenated together to form a single feature vector. Again, classification and benchmarking were performed as in the shape experiments.
• Ultrametric experiment: we ran a single experiment using the shape and color representations kept disjoint in the USG-MRF framework. The kernel parameters relative to each level ($a_S$, $b_S$, $\rho_S$ and $a_C$, $b_C$, $\rho_C$) were learned with the leave-one-out technique. Results obtained with this approach cannot be directly benchmarked against other similarity measures; it is, however, possible to compare them with the results of the previous experiments.

Table 1 reports the error rates obtained for the four sets of experiments.

          Color (%)   Shape (%)   Color-Shape (%)   Ultrametric (%)
χ²        23.47       9.47        19.17             -
∩         25.68       24.94       21.72             -
SVM       19.78       25.3        18.38             -
SG-MRF    20.10       6.28        8.43              3.55

Table 1: Classification results; we report the error rates obtained for each set of experiments.
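The leave-one-out selection of kernel parameters used throughout these experiments can be sketched generically: for each candidate parameter setting, every training feature is classified by an energy model built from the remaining ones, and the setting with the fewest leave-one-out errors is kept. All names below are illustrative, and `energy_factory` stands for any parameterized energy such as eq. (8) or eq. (12):

```python
import numpy as np

def loo_select(params_grid, energy_factory, features, labels):
    """Leave-one-out parameter selection (sketch).

    energy_factory(params, protos) must return an energy function of h.
    Returns the best parameter setting and its leave-one-out error count.
    """
    best, best_err = None, None
    classes = sorted(set(labels))
    for params in params_grid:
        err = 0
        for i in range(len(features)):
            energies = []
            for c in classes:
                protos = [f for j, (f, l) in enumerate(zip(features, labels))
                          if l == c and j != i]      # hold out example i
                energies.append(energy_factory(params, protos)(features[i]))
            err += int(classes[int(np.argmin(energies))] != labels[i])
        if best_err is None or err < best_err:
            best, best_err = params, err
    return best, best_err
```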
The results presented in Table 1 show that, for every series of experiments and every representation, SG-MRF gave the best recognition result. Moreover, the overall best recognition result is obtained with USG-MRF: an improvement of 2.73% with respect to SG-MRF (best result) and of 5.92% with respect to $\chi^2$ (the best result obtained with a non-SG-MRF technique). Table 2 shows some examples of objects misclassified by SG-MRF and correctly classified by USG-MRF. We see that USG-MRF classifies correctly in cases where shape only or color only gives the right answer, but not both and not the concatenated representation (Table 2, left and middle columns), and also in cases where neither color only nor shape only classifies correctly (Table 2, right column). These examples show clearly that the better performance of USG-MRF is due to its hierarchical structure, which permits using different kernels on different features and thus weighting their relevance flexibly with respect to the considered application. We remark once again that all the kernel parameters (and thus ultimately the kernel itself) are learned from the training data; to the best of our knowledge, (U)SG-MRF is the first kernel method for vision applications that does not select the kernel heuristically.

              (left)       (middle)     (right)
USG-MRF       1st match    1st match    1st match
SG-MRF_S      2nd match    1st match    3rd match
SG-MRF_C      1st match    2nd match    7th match
SG-MRF_SC     3rd match    2nd match    5th match

Table 2: Classification results for sample objects; USG-MRF always classifies correctly, even when shape only (SG-MRF_S), color only (SG-MRF_C), and the common representation (SG-MRF_SC) all fail (right column).

6 Summary

In this paper we presented a kernel method that permits combining color and shape information for appearance-based object recognition. It does not require defining a new common representation, but instead uses the power of kernels to combine different representations in an effective manner.
This result is achieved using results from the statistical mechanics of spin glasses combined with Markov random fields via kernel functions. Experiments confirm the effectiveness of the proposed approach. Future work will explore the possibility of using different representations for color and shape, and of using this method to tackle other challenging problems in object recognition, such as recognition of objects against heterogeneous backgrounds and under different lighting conditions.

Acknowledgments

This work has been supported by the "Graduate Research Center of the University of Erlangen-Nuremberg for 3D Image Analysis and Synthesis" and by the Foundation BLANCEFLOR Boncompagni-Ludovisi.

References

[1] D. J. Amit, "Modeling Brain Function", Cambridge University Press, 1989.
[2] S. Belongie, J. Malik, J. Puzicha, "Matching Shapes", ICCV01, 454-461.
[3] B. Caputo, S. Bouattour, H. Niemann, "A new kernel method for robust appearance-based object recognition: Spin Glass-Markov random fields", submitted to PR, available at http://www.ski.org/ALYuillelab/.
[4] B. Caputo, Gy. Dorko, H. Niemann, "An ultrametric approach to object recognition", submitted to VMV02, available at http://www.ski.org/ALYuillelab/.
[5] A. Leonardis, H. Bischof, "Robust recognition using eigenimages", CVIU, 78:99-118, 2000.
[6] J. Matas, R. Marik, J. Kittler, "On representation and matching of multi-coloured objects", Proc ICCV95, 726-732, 1995.
[7] B. W. Mel, "SEEMORE: combining color, shape and texture histogramming in a neurally-inspired approach to visual object recognition", NC, 9:777-804, 1997.
[8] J. W. Modestino, J. Zhang, "A Markov random field model-based approach to image interpretation", PAMI, 14(6):606-615, 1992.
[9] S. A. Nene, S. K. Nayar, H. Murase, "Columbia Object Image Library (COIL-100)", TR CUCS-006-96, Dept. Comp. Sc., Columbia University, 1996.
[10] M. Pontil, A. Verri, "Support Vector Machines for 3D Object Recognition", PAMI, 20(6):637-646, 1998.
[11] B. Schiele, J.
L. Crowley, "Recognition without correspondence using multidimensional receptive field histograms", IJCV, 36(1):31-52, 2000.
[12] D. Slater, G. Healey, "Combining color and geometric information for the illumination invariant recognition of 3-D objects", Proc ICCV95, 563-568, 1995.
[13] M. Swain, D. Ballard, "Color indexing", IJCV, 7(1):11-32, 1991.
[14] B. Scholkopf, A. J. Smola, Learning with Kernels, the MIT Press, Cambridge, MA, 2002.
2002
Spike Timing-Dependent Plasticity in the Address Domain

R. Jacob Vogelstein¹, Francesco Tenore², Ralf Philipp², Miriam S. Adlerstein², David H. Goldberg² and Gert Cauwenberghs²
¹Department of Biomedical Engineering, ²Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218
{jvogelst,fra,rphilipp,mir,goldberg,gert}@jhu.edu

Abstract Address-event representation (AER), originally proposed as a means to communicate sparse neural events between neuromorphic chips, has proven efficient in implementing large-scale networks with arbitrary, configurable synaptic connectivity. In this work, we further extend the functionality of AER to implement arbitrary, configurable synaptic plasticity in the address domain. As proof of concept, we implement a biologically inspired form of spike timing-dependent plasticity (STDP) based on the relative timing of events in an AER framework. Experimental results from an analog VLSI integrate-and-fire network demonstrate address-domain learning in a task that requires neurons to group correlated inputs.

1 Introduction

It has been suggested that the brain's impressive functionality results from massively parallel processing using simple and efficient computational elements [1]. Developments in neuromorphic engineering and address-event representation (AER) have provided an infrastructure suitable for emulating large-scale neural systems in silicon, e.g., [2, 3]. Although an integral part of neuromorphic engineering since its inception [1], only recently have implemented systems begun to incorporate adaptation and learning with biological models of synaptic plasticity. A variety of learning rules have been realized in neuromorphic hardware [4, 5]. These systems usually employ circuitry incorporated into the individual cells, imposing constraints on the nature of the inputs and outputs of the implemented algorithm.
While well-suited to small assemblies of neurons, these architectures are not easily scalable to networks of hundreds or thousands of neurons. Algorithms based both on continuous-valued “intracellular” signals and discrete spiking events have been realized in this way, and while analog computations may be performed better at the cellular level, we argue that it is advantageous to implement spike-based learning rules in the address domain. AER-based systems are inherently scalable, and because the encoding and decoding of events is performed at the periphery, learning algorithms can be arbitrarily complex without increasing the size of repeating neural units. Furthermore, AER makes no assumptions about the signals represented as spikes, so learning can address any measure of cellular activity.

Figure 1: Address-event representation. Sender events are encoded into an address, sent over the bus, and decoded. Handshaking signals REQ and ACK are required to ensure that only one cell pair is communicating at a time. Note that the time axis goes from right to left.

This flexibility can be exploited to achieve learning mechanisms with high degrees of biological realism. Much previous work has focused on rate-based Hebbian learning (e.g., [6]), but recently, the possibility of modifying synapses based on the timing of action potentials has been explored in both the neuroscience [7, 8] and neuromorphic engineering disciplines [9]–[11]. This latter hypothesis gives rise to the possibility of learning based on causality, as opposed to mere correlation. We propose that AER-based neuromorphic systems are ideally suited to implement learning rules founded on this notion of spike-timing dependent plasticity (STDP).
In the following sections, we describe an implementation of one biologically plausible STDP learning rule and demonstrate that table-based synaptic connectivity can be extended to table-based synaptic plasticity in a scalable and reconfigurable neuromorphic AER architecture.

2 Address-domain architecture

Address-event representation is a communication protocol that uses time-multiplexing to emulate extensive connectivity [12] (Fig. 1). In an AER system, one array of neurons encodes its activity in the form of spikes that are transmitted to another array of neurons. The “brute force” approach to communicating these signals would be to use one wire for each pair of neurons, requiring N wires for N cell pairs. However, an AER system identifies the location of a spiking cell and encodes this as an address, which is then sent across a shared data bus. The receiving array decodes the address and routes it to the appropriate cell, reconstructing the sender’s activity. Handshaking signals REQ and ACK are required to ensure that only one cell pair is using the data bus at a time. This scheme reduces the required number of wires from N to ∼log2 N. Two pieces of information uniquely identify a spike: its location, which is explicitly encoded as an address, and the time that it occurs, which need not be explicitly encoded because the events are communicated in real time. The encoded spike is called an address-event. In its original formulation, AER implements a one-to-one connection topology, which is appropriate for emulating the optic and auditory nerves [12, 13]. To create more complex neural circuits, convergent and divergent connectivity is required. Several authors have discussed and implemented methods of enhancing the connectivity of AER systems to this end [14]–[16]. These methods call for a memory-based projective field mapping that enables routing an address-event to multiple receiver locations.
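A toy sketch of the address-event idea may help: spikes are serialized as bare addresses on a shared bus of width ∼log2 N, and timing is implicit in real-time delivery order. The function names here are ours, not part of the AER protocol:

```python
import math

def send_events(spikes, n_cells):
    """Time-multiplex spikes onto a shared bus as address-events.
    `spikes` is a list of (time, cell_index) pairs; only the address
    travels on the bus, since event timing is implicit in delivery."""
    bus_width = math.ceil(math.log2(n_cells))   # ~log2(N) wires instead of N
    bus = [addr for _, addr in sorted(spikes)]  # one address at a time
    return bus, bus_width

def receive_events(bus, n_cells):
    """Decode addresses back into per-cell event counts at the receiver."""
    counts = [0] * n_cells
    for addr in bus:
        counts[addr] += 1
    return counts
```

This captures only the multiplexing; the REQ/ACK handshake that arbitrates bus access is omitted.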
The enhanced AER system employed in this paper is based on that of [17], which enables continuous-valued synaptic weights by means of graded (probabilistic or deterministic) transmission of address-events. This architecture employs a look-up table (LUT), an integrate-and-fire address-event transceiver (IFAT), and some additional support circuitry.

Figure 2: Enhanced AER for implementing complex neural networks. (a) Example neural network. The connections are labeled with their weight values. (b) The network in (a) is mapped to the AER framework by means of a look-up table.

Fig. 2 shows how an example two-layer network can be mapped to the AER framework. Each row in the table corresponds to a single synaptic connection: it contains information about the sender location, the receiver location, the connection polarity (excitatory or inhibitory), and the connection magnitude. When a spike is sent to the system, the sender address is used as an index into the LUT and a signal activates the event generator (EG) circuit. The EG scrolls through all the table entries corresponding to synaptic connections from the sending neuron. For each synapse, the receiver address and the spike polarity are sent to the IFAT, and the EG initiates as many spikes as are specified in the weight magnitude field. Events received by the IFAT are temporally and spatially integrated by analog circuitry. Each integrate-and-fire cell receives excitatory and inhibitory inputs that increment or decrement the potential stored on an internal capacitance. When this potential exceeds a given threshold, the cell generates an output event and broadcasts its address to the AE arbiter.
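The LUT scroll-through and integrate-and-fire behavior can be sketched in software. The table layout, class, and function names below are illustrative assumptions, not the chip's actual interface:

```python
# Hypothetical LUT: sender address -> list of (receiver, polarity, magnitude).
LUT = {
    0: [(2, +1, 8)],
    1: [(2, +1, 4), (2, -1, 1)],
}

class IFATCell:
    """Toy integrate-and-fire cell: events increment or decrement a
    stored potential; crossing threshold emits an output event."""
    def __init__(self, threshold=16):
        self.v = 0
        self.threshold = threshold

    def receive(self, polarity):
        self.v += polarity
        if self.v >= self.threshold:
            self.v = 0
            return True   # output event broadcast to the arbiter
        return False

def route_event(sender, cells):
    """Event generator: scroll through the LUT entries for the sender and
    emit `magnitude` events of the given polarity to each receiver."""
    out = []
    for receiver, polarity, magnitude in LUT.get(sender, []):
        for _ in range(magnitude):   # weight realized as an event count
            if cells[receiver].receive(polarity):
                out.append(receiver)
    return out
```

Note that, as in the IFAT, the cell is sensitive only to the order and count of events, not to their absolute timing.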
The physical location of neurons in the array is inconsequential as connections are routed through the LUT, which is implemented in random-access memory (RAM) outside of the chip. An interesting feature of the IFAT is that it is insensitive to the timescale over which events occur. Because internal potentials are not subject to decay, the cells’ activities are only sensitive to the order of the events. Effects of leakage current in real neurons are emulated by regularly sending inhibitory events to all of the cells in the array. Modulating the timing of the “global decay events” allows us to dynamically warp the time axis. We have designed and implemented a prototype system that uses the IFAT infrastructure to implement massively connected, reconfigurable neural networks. An example setup is described in detail in [17], and is illustrated in Fig. 3. It consists of a custom VLSI IFAT chip with a 1024-neuron array, a RAM that stores the look-up table, and a microcontroller unit (MCU) that realizes the event generator.

Figure 3: Hardware implementation of enhanced AER. The elements are an integrate-and-fire array transceiver (IFAT) chip, a random-access memory (RAM) look-up table, and a microcontroller unit (MCU). (a) Feedforward mode. Input events are routed by the RAM look-up table, and integrated by the IFAT chip. (b) Recurrent mode. Events emitted by the IFAT are sent to the look-up table, where they are routed back to the IFAT. This makes virtual connections between IFAT cells.

As discussed in [18, p. 91], a synaptic weight w can be expressed as the combined effect
of three physical mechanisms:

w = n p q,   (1)

where n is the number of quantal neurotransmitter sites, p is the probability of synaptic release per site, and q is the measure of the postsynaptic effect of the synapse. Many early neural network models held n and p constant and attributed all of the variability in the weight to q. Our architecture is capable of varying all three components: n by sending multiple events to the same receiver location, p by probabilistically routing the events (as in [17]), and q by varying the size of the potential increments and decrements in the IFAT cells. In the experiments described in this paper, the transmission of address-events is deterministic, and the weight is controlled by varying the number of events per synapse, corresponding to a variation in n.

3 Address-domain learning

The AER architecture lends itself to implementations of synaptic plasticity, since information about presynaptic and postsynaptic activity is readily available and the contents of the synaptic weight fields in RAM are easily modifiable “on the fly.” As in biological systems, synapses can be dynamically created and pruned by inserting or deleting entries in the LUT. Like address-domain connectivity, the advantage of address-domain plasticity is that the constituents of the implemented learning rule are not constrained to be local in space or time. Various forms of learning algorithms can be mapped onto the same architecture by reconfiguring the MCU interfacing the IFAT and the LUT. Basic forms of Hebbian learning can be implemented with no overhead in the address domain. When a presynaptic event, routed by the LUT through the IFAT, elicits a postsynaptic event, the synaptic strength between the two neurons is simply updated by incrementing the data field of the LUT entry at the active address location. A similar strategy can be adopted for other learning rules of the incremental outer-product type, such as delta-rule or backpropagation supervised learning.
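The zero-overhead Hebbian update just described can be sketched in a few lines. The `Cell` class, `hebbian_step`, and the mutable LUT layout are illustrative assumptions, not the chip's actual interface:

```python
class Cell:
    """Toy integrate-and-fire cell for the sketch."""
    def __init__(self, threshold):
        self.v, self.threshold = 0, threshold

    def receive(self, polarity, magnitude):
        self.v += polarity * magnitude
        if self.v >= self.threshold:
            self.v = 0
            return True   # postsynaptic event elicited
        return False

def hebbian_step(lut, cells, sender):
    """Route one presynaptic event through the LUT; if it elicits a
    postsynaptic event, increment the data (magnitude) field of the
    active LUT entry in place."""
    for entry in lut.get(sender, []):
        receiver, polarity, magnitude = entry
        if cells[receiver].receive(polarity, magnitude):
            entry[2] = magnitude + 1   # strengthen the active synapse
```

Because the weight field lives in RAM, this update touches only the look-up table, not the neural array itself.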
Non-local learning rules require control of the LUT address space to implement spatial and/or temporal dependencies. Most interesting from a biological perspective are forms of spike timing-dependent plasticity (STDP).

Figure 4: Spike timing-dependent plasticity (STDP) in the address domain. (a) Synaptic updates ∆w as a function of the relative timing of presynaptic and postsynaptic events, with asymmetric windows of anti-causal and causal regimes τ− > τ+. (b) Address-domain implementation using presynaptic (top) and postsynaptic (bottom) event queues of window lengths τ+ and τ−.

4 Spike timing-dependent plasticity

Learning rules based on STDP specify changes in synaptic strength depending on the time interval between each pair of presynaptic and postsynaptic events. “Causal” postsynaptic events that succeed presynaptic action potentials (APs) by a short duration of time potentiate the synaptic strength, while “anti-causal” presynaptic events succeeding postsynaptic APs by a short duration depress the synaptic strength. The amount of strengthening or weakening depends on the exact time of the event within the causal or anti-causal regime, as illustrated in Fig. 4 (a). The weight update has the form

∆w = −η [τ− − (t_pre − t_post)]   for 0 ≤ t_pre − t_post ≤ τ−,
∆w = η [τ+ + (t_pre − t_post)]    for −τ+ ≤ t_pre − t_post ≤ 0,
∆w = 0                            otherwise,   (2)

where t_pre and t_post denote time stamps of presynaptic and postsynaptic events. For stable learning, the time windows of the causal and anti-causal regimes, τ+ and τ−, are subject to the constraint τ+ < τ−.
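A direct transcription of the update rule (2), with η, τ+, and τ− as parameters; the boundary case t_pre = t_post, which Eq. (2) leaves ambiguous, is assigned to the anti-causal branch here:

```python
def stdp_dw(t_pre, t_post, eta=1.0, tau_plus=3, tau_minus=6):
    """Weight update of Eq. (2): causal pairs (pre before post) potentiate,
    anti-causal pairs depress; tau_plus < tau_minus for stable learning."""
    d = t_pre - t_post
    if 0 <= d <= tau_minus:    # anti-causal: pre follows post -> depress
        return -eta * (tau_minus - d)
    if -tau_plus <= d < 0:     # causal: post follows pre -> potentiate
        return eta * (tau_plus + d)
    return 0.0
```

The defaults τ+ = 3 and τ− = 6 match the values used in the experiments of Section 5.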
For more general functional forms of STDP, ∆w(t_pre − t_post), the area under the synaptic modification curve in the anti-causal regime must be greater than that in the causal regime to ensure convergence of the synaptic strengths [7]. The STDP synaptic modification rule (2) is implemented in the address domain by augmenting the AER architecture with two event queues, one each for presynaptic and postsynaptic events, shown in Fig. 4 (b). Each time a presynaptic event is generated, the sender’s address is entered into a queue with an associated value of τ+. All values in the queue are decremented every time a global decay event is observed, marking one unit of time T. A postsynaptic event triggers a sequence of synaptic updates by iterating backwards through the queue to find the causal spikes, in turn locating the synaptic strength entries in the LUT corresponding to the sender addresses and synaptic index, and increasing the synaptic strengths in the LUT according to the values stored in the queue. Anti-causal events require an equivalent set of operations, matching each incoming presynaptic spike with a second queue of postsynaptic events. In this case, entries in the queue are initialized with a value of τ− and decremented after every interval of time T between decay events, corresponding to the decrease in strength to be applied at the presynaptic/postsynaptic pair. We have chosen a particularly simple form of the synaptic modification function (2) as proof of principle in the experiments.

Figure 5: Pictorial representation of our experimental neural network, with actual spike train data sent from the workstation to the first layer. All cells are identical, but x18 . . . x20 (shaded) receive correlated inputs. Activity becomes more sparse in the hidden and output layers as the IFAT integrates spatiotemporally. Note that connections are virtual, specified in the RAM look-up table.
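The queue mechanism just described can be sketched as follows. Module-level state and names are illustrative; η = 1, and updates are applied to a `weights` dictionary standing in for the LUT strength fields (initialized at +8, as in the experiments):

```python
from collections import deque

TAU_PLUS, TAU_MINUS = 3, 6

pre_queue = deque()    # (sender address, remaining value), initialized to tau_plus
post_queue = deque()   # (neuron address, remaining value), initialized to tau_minus
weights = {}           # (pre, post) -> synaptic strength, stand-in for the LUT

def decay_tick():
    """Global decay event, marking one unit of time T: age both queues and
    drop entries whose window has expired."""
    for q in (pre_queue, post_queue):
        for i, (addr, v) in enumerate(q):
            q[i] = (addr, v - 1)
        while q and q[0][1] <= 0:   # oldest entries sit at the front
            q.popleft()

def pre_event(pre):
    """Presynaptic spike: depress against recent postsynaptic events
    (anti-causal matches), then enqueue the sender address."""
    for post, v in post_queue:
        weights[(pre, post)] = weights.get((pre, post), 8) - v
    pre_queue.append((pre, TAU_PLUS))

def post_event(post):
    """Postsynaptic spike: potentiate against recent presynaptic events
    (causal matches), then enqueue the neuron address."""
    for pre, v in pre_queue:
        weights[(pre, post)] = weights.get((pre, post), 8) + v
    post_queue.append((post, TAU_MINUS))
```

A presynaptic entry that has aged by ∆ ticks holds τ+ − ∆, so adding its stored value reproduces the causal branch of Eq. (2); the postsynaptic queue reproduces the anti-causal branch symmetrically.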
More general functions can be implemented by a table that maps time bins in the history of the queue to specified values of ∆w(nT), with positive values of n indexing the postsynaptic queue, and negative values indexing the presynaptic queue.

5 Experimental results

We have implemented a Hebbian spike timing-based learning rule on a network of 21 neurons using the IFAT system (Fig. 5). Each of the 20 neurons in the input layer is driven by an externally supplied, randomly generated list of events. Sufficiently high levels of input cause these neurons to produce spikes that subsequently drive the output layer. All events are communicated over the address-event bus and are monitored by a workstation communicating with the MCU and RAM. As shown in [7], temporally asymmetric Hebbian learning using STDP is useful for detecting correlations between inputs. We have demonstrated that this can be accomplished in hardware in the address domain by presenting the network with stimulus patterns containing a set of correlated inputs and a set of uncorrelated inputs: neurons x1 . . . x17 are all stimulated independently with a probability of 0.05 per unit of time, while neurons x18 . . . x20 have the same likelihood of stimulation but are always activated together. Thus, over a sufficiently long period of time each neuron in the input layer will receive the same amount of activation, but the correlated group will fire synchronous spikes more frequently than any other combination of neurons. In the implemented learning rule (2), causal activity results in synaptic strengthening and anti-causal activity results in synaptic weakening.
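The input stimulus described above (independent neurons x1…x17, correlated group x18…x20) can be generated as, for example:

```python
import random

def input_events(n_steps, p=0.05, n_indep=17, n_corr=3, seed=1):
    """Randomly generated event list driving the 20 input neurons:
    the first n_indep neurons are stimulated independently with
    probability p per time step; the last n_corr neurons have the same
    per-step probability but are always activated together."""
    rng = random.Random(seed)
    events = []   # (time_step, neuron_index) pairs
    for t in range(n_steps):
        for i in range(n_indep):
            if rng.random() < p:
                events.append((t, i))
        if rng.random() < p:   # the correlated group fires as one
            events.extend((t, n_indep + j) for j in range(n_corr))
    return events
```

Every neuron has the same expected event rate; only the synchrony of the correlated group distinguishes it.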
As described in Section 4, for an anti-causal regime τ− larger than the causal regime τ+, random activity results in overall weakening of a synapse.

Figure 6: Experimental synaptic strengths (maximum strength = 31) of the 20 synapses onto the second layer, recorded from the IFAT system after the presentation of 200,000 input events. (a) Typical experimental run. (b) Average (+SE) over 20 experimental runs.

All synapses connecting the input and output layers are equally likely to be active during an anti-causal regime. However, the increase in average contribution to the postsynaptic membrane potential for the correlated group of neurons renders this population slightly more likely to be active during the causal regime than any single member of the uncorrelated group. Therefore, the synaptic strengths for this group of neurons will increase with respect to the uncorrelated group, further augmenting their likelihood of causing a postsynaptic spike. Over time, this positive feedback results in a random but stable distribution of synaptic strengths in which the correlated neurons’ synapses form the strongest connections and the remaining neurons are distributed around an equilibrium value for weak connections. In the experiments, we have chosen τ+ = 3 and τ− = 6. An example of a typical distribution of synaptic strengths recorded after 200,000 events have been processed by the input layer is shown in Fig. 6 (a). For the data shown, synapses driving the input layer were fixed at the maximum strength (+31), the rate of decay was −4 per unit of time, and the plastic synapses between the input and output layers were all initialized to +8. Because the events sent from the workstation to the input layer are randomly generated, fluctuations in the strengths of individual synapses occur consistently throughout the operation of the system.
Thus, the final distribution of synaptic weights is different each time, but a pattern can be clearly discerned from the average value of synaptic weights after 20 separate trials of 200,000 events each, as shown in Fig. 6 (b). The system is robust to changes in various parameters of the spike timing-based learning algorithm as well as to modifications in the number of correlated, uncorrelated, and total neurons (data not shown). It also converges to a similar distribution regardless of the initial values of the synaptic strengths (with the constraint that the net activity must be larger than the rate of decay of the voltage stored on the membrane capacitance of the output neuron).

6 Conclusion

We have demonstrated that the address domain provides an efficient representation to implement synaptic plasticity that depends on the relative timing of events. Unlike dedicated hardware implementations of learning functions embedded into the connectivity, the address-domain implementation allows for learning rules with interactions that are not constrained in space and time. Experimental results verified this for temporally asymmetric Hebbian learning, but the framework can be extended to general learning rules, including reward-based schemes [10]. The IFAT architecture can be augmented to include sensory input, physical nearest-neighbor connectivity between neurons, and more realistic biological models of neural computation. Additionally, integrating the RAM and IFAT into a single chip will allow for increased computational bandwidth. Unlike a purely digital implementation or software emulation, the AER framework preserves the continuous nature of the timing of events.

References

[1] C. Mead, Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley, 1989.
[2] S. R. Deiss, R. J. Douglas, and A. M. Whatley, “A pulse-coded communications infrastructure for neuromorphic systems,” in Pulsed Neural Networks (W. Maass and C. M. Bishop, eds.), pp. 157–178, Cambridge, MA: MIT Press, 1999.
[3] K. Boahen, “A retinomorphic chip with parallel pathways: Encoding INCREASING, ON, DECREASING, and OFF visual signals,” Analog Integrated Circuits and Signal Processing, vol. 30, pp. 121–135, February 2002.
[4] G. Cauwenberghs and M. A. Bayoumi, eds., Learning on Silicon: Adaptive VLSI Neural Systems. Norwell, MA: Kluwer Academic, 1999.
[5] M. A. Jabri, R. J. Coggins, and B. G. Flower, Adaptive Analog VLSI Neural Systems. London: Chapman & Hall, 1996.
[6] T. J. Sejnowski, “Storing covariance with nonlinearly interacting neurons,” Journal of Mathematical Biology, vol. 4, pp. 303–321, 1977.
[7] S. Song, K. D. Miller, and L. F. Abbott, “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity,” Nature Neuroscience, vol. 3, no. 9, pp. 919–926, 2000.
[8] M. C. W. van Rossum, G. Q. Bi, and G. G. Turrigiano, “Stable Hebbian learning from spike timing-dependent plasticity,” Journal of Neuroscience, vol. 20, no. 23, pp. 8812–8821, 2000.
[9] P. Hafliger and M. Mahowald, “Spike based normalizing Hebbian learning in an analog VLSI artificial neuron,” in Learning on Silicon (G. Cauwenberghs and M. A. Bayoumi, eds.), pp. 131–142, Norwell, MA: Kluwer Academic, 1999.
[10] T. Lehmann and R. Woodburn, “Biologically-inspired on-chip learning in pulsed neural networks,” Analog Integrated Circuits and Signal Processing, vol. 18, no. 2-3, pp. 117–131, 1999.
[11] A. Bofill, A. F. Murray, and D. P. Thompson, “Circuits for VLSI implementation of temporally-asymmetric Hebbian learning,” in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.
[12] M. Mahowald, An Analog VLSI System for Stereoscopic Vision. Boston: Kluwer Academic, 1994.
[13] J. Lazzaro, J. Wawrzynek, M. Mahowald, M. Sivilotti, and D. Gillespie, “Silicon auditory processors as computer peripherals,” IEEE Trans. Neural Networks, vol. 4, no. 3, pp. 523–528, 1993.
[14] K. A. Boahen, “Point-to-point connectivity between neuromorphic chips using address events,” IEEE Trans. Circuits and Systems II: Analog and Digital Signal Processing, vol. 47, no. 5, pp. 416–434, 2000.
[15] C. M. Higgins and C. Koch, “Multi-chip neuromorphic motion processing,” in Proceedings 20th Anniversary Conference on Advanced Research in VLSI (D. Wills and S. DeWeerth, eds.), pp. 309–323, Los Alamitos, CA: IEEE Computer Society, 1999.
[16] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbrück, and R. Douglas, “Orientation-selective aVLSI spiking neurons,” in Advances in Neural Information Processing Systems 14 (T. Dietterich, S. Becker, and Z. Ghahramani, eds.), Cambridge, MA: MIT Press, 2002.
[17] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons,” Neural Networks, vol. 14, no. 6/7, pp. 781–793, 2001.
[18] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York: Oxford University Press, 1999.
Reconstructing Stimulus-Driven Neural Networks from Spike Times Duane Q. Nykamp UCLA Mathematics Department Los Angeles, CA 90095 nykamp@math.ucla.edu

Abstract

We present a method to distinguish direct connections between two neurons from common input originating from other, unmeasured neurons. The distinction is computed from the spike times of the two neurons in response to a white noise stimulus. Although the method is based on a highly idealized linear-nonlinear approximation of neural response, we demonstrate via simulation that the approach can work with a more realistic, integrate-and-fire neuron model. We propose that the approach exemplified by this analysis may yield viable tools for reconstructing stimulus-driven neural networks from data gathered in neurophysiology experiments.

1 Introduction

The pattern of connectivity between neurons in the brain is fundamental to understanding the function of the brain’s neural networks. Related properties of closely connected neurons, for example, may lead to inferences on how the observed properties are built or enhanced by the neural connections. Unfortunately, the complexity of higher organisms makes obtaining combined functional and connectivity data extraordinarily difficult. The most common tool for recording in vivo the activity of neurons in higher organisms is the extracellular electrode. Typically, one uses this electrode to record only the times of output spikes, or action potentials, of neurons. In such an experiment, the states of the measured neurons remain hidden. The ability to infer connectivity patterns from spike times alone would greatly expand the attainable connectivity data and provide the opportunity to better address the link between function and connectivity. Attempts to infer connectivity from spike time data have focused on second-order statistics of the spike times of two simultaneously recorded neurons.
In particular, the joint peristimulus time histogram (JPSTH) and its integral, the shuffle-corrected correlogram [1, 2, 3], have become widely used tools to analyze such data. However, the JPSTH and correlogram cannot distinguish correlations induced by connections between the two measured neurons (direct connection correlations) from correlations induced by common connections from a third, unmeasured neuron (common input correlations). Inferences from the JPSTH or correlogram about the connections between the two measured neurons are therefore ambiguous. Analysis tools such as partial coherence [4] can distinguish between a direct connection and common input when one can also measure the neurons inducing the common input effects. The distinction of the present approach is that all other neurons are unmeasured. We demonstrate that, by characterizing how each neuron responds to the stimulus, one may be able to distinguish between direct connection and common input correlations. In that case, one could determine if a connection existed between two neurons simply by measuring their spike times in response to a stimulus. Since the properties of the neurons would be determined by the same measurements, such an analysis would be the basis for inferring links between connectivity and function.

2 The model

To make the subtle distinction between direct connection correlations and common input correlations, one needs to exploit an explicit model. The model must be sufficiently simple so that all necessary model parameters can be determined from experimental measurements. For this reason, the analysis is limited to phenomenological lumped models. We present analysis based on a linear-nonlinear model of neural response to white noise. Let the stimulus X be a vector of independent Gaussian random variables with zero mean and standard deviation σ = 1. X is a discrete approximation to temporal or spatio-temporal white noise.
Let R^i_p = 1 if neuron p spiked at the discrete time point i, and zero otherwise. Let the probability of a spike from a neuron be a linear-nonlinear function of the stimulus and the previous spike times of the other neurons,

Pr{ R^i_p = 1 | X = x, R_q = r_q, ∀q } = g_p( h^i_p · x + Σ_{q≠p} Σ_{j>0} W̄^j_{qp} r^{i−j}_q ),   (1)

where h^i_p is the linear kernel of neuron p shifted i units in time (normalized so that ‖h^i_p‖ = 1), g_p(·) is its output nonlinearity (representing, for example, its spike generating mechanism), and W̄^j_{qp} is a connectivity term representing how a spike of neuron q at a particular time modifies the response of neuron p after j time steps. The network of Eq. (1) is an extension of the standard linear-nonlinear model of a single neuron. The linear-nonlinear model of a single neuron can be completely reconstructed from measured spike times in response to white noise [5]. We will demonstrate that the network of linear-nonlinear neurons can be similarly analyzed to determine the connectivity between two measured neurons.

3 Analysis of model

Let neurons 1 and 2 be the only two measured neurons. The spike times of all other neurons will remain unmeasured. Given further simplifying assumptions detailed below, we can isolate the connectivity terms between neurons 1 and 2 (W̄^j_{12} and W̄^j_{21}). We will outline a method to determine these connectivity terms from a few statistics of the two measured spike trains and the white noise stimulus.

3.1 Assumptions

The first assumption is that the coupling is sufficiently weak to justify a first-order approximation in the W̄^j_{qp}. We will neglect all quadratic and higher-order terms in the W̄^j_{qp}, with one important exception. Common input correlations are second order in the W̄^j_{qp} because common input requires two connections. Since our analysis must include common input to the measured neurons, we retain terms containing W̄^j_{p1} W̄^k_{q2} with p, q > 2.
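The spike-probability model of Eq. (1), combined with the error-function nonlinearity of Eq. (2) below, can be sketched directly. The data structures (kernel, coupling, and parameter dictionaries) are illustrative assumptions:

```python
import math

def g(x, T, eps):
    """Error-function nonlinearity of Eq. (2):
    g(x) = (1/2)[1 + erf((x - T)/(eps * sqrt(2)))]."""
    return 0.5 * (1.0 + math.erf((x - T) / (eps * math.sqrt(2))))

def spike_prob(p, i, x, h, W, spikes, T, eps):
    """Eq. (1): probability that neuron p spikes at time i given the
    stimulus x and past spikes of the other neurons. `h[p]` is the
    (shifted, unit-norm) kernel as a plain vector; `W[(q, p)][j]` is the
    coupling term for delay j; `spikes[q]` is neuron q's 0/1 train."""
    drive = sum(a * b for a, b in zip(h[p], x))   # h_p^i . x
    for q, train in spikes.items():
        if q == p:
            continue
        for j, w in W.get((q, p), {}).items():    # sum over delays j > 0
            if i - j >= 0 and train[i - j]:
                drive += w
    return g(drive, T[p], eps[p])
```

A positive coupling term from a recently active neuron shifts the drive rightward through the sigmoid, raising the spike probability.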
The second assumption is that the unmeasured neurons do not respond to essentially identical stimulus features as the measured neurons (1 & 2) or each other. We quantify similarity to stimulus features by the inner product between linear kernels, cos θ̄^k_{pq} = h^{i−k}_p · h^i_q. We require each cos θ̄ to be small so that we can ignore terms of the form W̄ cos θ̄. We allow one exception and retain W̄ cos θ̄^k_{21} terms so that no assumption is made on the two measured neurons. Last, we assume the nonlinearity is an error function of the form

g_p(x) = (1/2) [ 1 + erf( (x − T̄_p) / (ϵ_p √2) ) ]   (2)

with parameters T̄_p and ϵ_p, where erf(y) = (2/√π) ∫_0^y e^{−t²} dt.

3.2 Outline of method

The first step in analyzing the network response is to ignore the fact that the neurons are embedded in a neural network and analyze the spike trains of neurons 1 and 2 as though each were an isolated linear-nonlinear system. Using the procedure outlined in Ref. [5], one can determine the effective linear-nonlinear parameters from the average firing rates (E{R^i_1} and E{R^i_2}, where E{·} denotes expected value) and the stimulus-spike correlations (E{X R^i_1} and E{X R^i_2}). These effective linear-nonlinear parameters clearly will not match the parameters for neurons 1 and 2 in the complete system (Eq. (1)). The network connections alter the mean rates and stimulus-spike correlations of neurons 1 and 2, changing the linear-nonlinear parameters reconstructed from these measurements. Nonetheless, these effective linear-nonlinear system parameters can be written approximately as combinations of parameters from the network in Eq. (1). The connectivity between neurons 1 and 2 can then be determined from the correlation between their spikes (E{R^i_1 R^{i−k}_2}, measured at different positive and negative delays k) and the correlation of their spike pairs with the stimulus (E{X R^i_1 R^{i−k}_2}) as follows. Given our assumptions, we obtain equations linear in W̄^j_{12}, W̄^j_{21}, and W̄^j_{p1} W̄^{j̃}_{q2}.
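The first-stage statistics, the mean rate E{R} and the stimulus-spike correlation E{XR}, are simple averages over the experiment; normalizing the latter gives an estimate of the effective linear kernel (cf. Ref. [5]). A pure-Python sketch, with the unit-norm step matching the model's ‖h‖ = 1 convention:

```python
def spike_triggered_stats(stimuli, spikes):
    """Estimate E{R} and E{X R} from data: `stimuli` is a list of
    stimulus vectors, `spikes` the corresponding 0/1 response train.
    Returns the mean rate and a unit-norm effective kernel estimate."""
    n = len(spikes)
    rate = sum(spikes) / n                         # E{R}
    dim = len(stimuli[0])
    xr = [0.0] * dim
    for x, r in zip(stimuli, spikes):
        if r:
            for k in range(dim):
                xr[k] += x[k]
    xr = [v / n for v in xr]                       # E{X R}
    norm = sum(v * v for v in xr) ** 0.5
    kernel = [v / norm for v in xr] if norm > 0 else xr
    return rate, kernel
```

As the text notes, these effective parameters absorb the network couplings, which is precisely what the subsequent analysis exploits.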
For each delay k, we obtain three equations: one from E{R^i_1 R^{i−k}_2}, one from the projection of E{X R^i_1 R^{i−k}_2} onto E{X R^i_1}, and one from the projection of E{X R^i_1 R^{i−k}_2} onto E{X R^{i−k}_2}. At first glance, it appears that the unknowns greatly outnumber the equations. However, the system of equations is well-posed because the W̄^j_{p1} W̄^{j̃}_{q2} appear in the same combination for each of the three equations at a given delay. In fact, we have only two sets of unknowns, which can be written as

W̄^k = W̄^{−k}_{12} for k < 0,  W̄^k_{21} for k > 0,   (3)

and

Ū^k = Σ_{p>2} Σ_{j,j̃} c^{kjj̃}_p W̄^j_{p1} W̄^{j̃}_{p2}.   (4)

All other parameters in the equations were already determined in the first stage. If N is the number of delays considered, then we have 3N linear equations and only 2N unknowns. The factor W̄^k is the direct connection between neurons 1 and 2 (the direction of the connection depends on the sign of the delay k). The factor Ū^k is the common input to neuron 2 and neuron 1 (k time steps delayed) from all other neurons in the network. The weighting (c^{kjj̃}_p) of its terms depends on the properties of the unmeasured neurons. Fortunately, since we can treat Ū^k as a unit, we don’t need to determine the weighting. To analyze spike train data, we approximate the statistics E{R^i_1}, E{R^i_2}, E{X R^i_1}, E{X R^i_2}, E{R^i_1 R^{i−k}_2}, and E{X R^i_1 R^{i−k}_2} by averages over an experiment. We then compute the least-squares fit to solve for approximations of W̄ and Ū. We denote these approximations (or correlation measures) as W and U, respectively.

4 Demonstration

We demonstrate the ability of the measures W and U to distinguish direct connection correlations from common input correlations with three example simulations. In the first two examples, we simulated a network of three coupled linear-nonlinear neurons (Eqs. (1) and (2)).
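The least-squares fit of Section 3.2 solves an overdetermined but well-posed linear system (3N equations in the 2N unknowns W̄^k and Ū^k). A minimal pure-Python solver via the normal equations, adequate for small, well-conditioned systems like these (a production analysis would use a robust numerical library):

```python
def lstsq(A, b):
    """Least-squares solution x of the overdetermined system A x ~ b,
    via the normal equations (A^T A) x = A^T b."""
    m, n = len(A), len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the n x n system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        atb[col], atb[piv] = atb[piv], atb[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            atb[r] -= f * atb[col]
    # Back-substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (atb[i] - sum(ata[i][j] * x[j] for j in range(i + 1, n))) / ata[i][i]
    return x
```

Stacking the three equations per delay as rows of A and solving once yields the correlation measures W and U for all delays simultaneously.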
In the third example, we simulated a pair of integrate-and-fire neurons driven by the stimulus in a manner similar to the linear-nonlinear neurons. In each example, we measured only the spike times of neuron 1 and neuron 2. Since the white noise stimulus does not repeat, one cannot calculate a JPSTH or shuffle-corrected correlogram. Instead, for comparison we calculated the covariance between the spike times, C^k = ⟨R^i_1 R^{i−k}_2⟩ − ⟨R^i_1⟩⟨R^{i−k}_2⟩, and a stimulus-independent correlation measure introduced in Ref. [6], S^k = ⟨R^i_1 R^{i−k}_2⟩ − ν^k_{21}, where ⟨·⟩ represents averaging over the entire stimulus. The quantity ν^k_{21} is the expected value of ⟨R^i_1 R^{i−k}_2⟩ if neurons 1 and 2 were independent linear-nonlinear systems responding to the same stimulus. We used spatio-temporal linear kernels of the form

h_p(j, t) = t e^{−t/τ_h} e^{−|j|²/40} sin( (j_1 cos φ_p + j_2 sin φ_p) f_p + k_p )   (5)

for t > 0 (h_p = 0 otherwise), where j = (j_1, j_2) denotes a discrete space point. For the linear-nonlinear simulations, we sampled this function on a 20 × 20 × 20 grid in space and time, normalizing the resulting vector to obtain the unit vector h^i_p. The kernels were chosen to be caricatures of receptive fields of simple cells in visual cortex. The only geometry of the kernels that appears in the equations is their inner products cos θ̄^k_{pq} = h^{i−k}_p · h^i_q. For the first example, we simulated a network of three linear-nonlinear neurons. Neuron 2 had an excitatory connection onto neuron 1 with a delay of 5–6 units of time (a positive delay for our sign convention): W̄^5_{21} = W̄^6_{21} = 0.6. Neuron 3 had one excitatory connection onto neuron 1 and a second excitatory connection onto neuron 2 that was delayed by 6–8 units of time (a negative delay): W̄^1_{31} = W̄^2_{31} = W̄^8_{32} = W̄^9_{32} = 1.5. In this way, the spike times from neurons 1 and 2 had positive correlations due to both a direct connection and common input. Fig.
1 shows the results after simulating for 600,000 units of time, obtaining 16,000–22,000 spikes per neuron. The covariance C has peaks at both positive and negative delays, corresponding to the direct connection and common input, respectively, as well as a small peak around zero due to the shared stimulus (see Ref. [6]). The measure S eliminates the stimulus-induced correlation, but still cannot distinguish the direct connection from the common input. The proposed measures W and U, however, do separate the two sources of correlation. W contains a peak only at the positive delay corresponding to the direct connection from neuron 2 to neuron 1; U contains a peak only at the negative delay corresponding to the common input from the (unmeasured) third neuron. This distinction was made at the cost of a dramatic increase in the noise. On the order of 20,000 spikes were needed to get clean results even in this idealized simulation, a long experiment given the typically low firing rates in response to white noise stimuli. Theoretically, the method should handle inhibitory connections just as well as excitatory −30 −20 −10 0 10 20 30 0 2 4 x 10 −3 a Delay C −30 −20 −10 0 10 20 30 0 1 2 3 x 10 −3 b Delay S −30 −20 −10 0 10 20 30 0 0.5 1 c Delay W −30 −20 −10 0 10 20 30 0 0.5 1 d Delay U Figure 1: Results from the spike times of two neurons in a simulation of three linearnonlinear neurons. Delay is in units of time and is the spike time of neuron 1 minus the spike time of neuron 2. The correlations at a positive delay are due to a direct connection, while those a negative delay are due to common input. (a) The covariance C between the spike times of neuron 1 and neuron 2 reflects both connections. The third peak around zero delay, due to similarity in the kernels hi 1 and hi 2, is induced by the common stimulus. (b) The correlation measure S removes the correlation induced by the common stimulus, but cannot distinguish between the two connectivity induced correlations. 
(c–d) The measures W and U do distinguish the connectivity-induced correlations. W reflects only the direct connection (c); U reflects only the common input (d). Parameters for $g(\cdot)$: $\bar T_1 = 2.5$, $\bar T_2 = 3.0$, $\bar T_3 = 2.2$, $\epsilon_1 = 0.5$, $\epsilon_2 = 1.0$, $\epsilon_3 = 0.7$. Parameters for h: $\tau_h = 1$, $\phi_1 = 0$, $\phi_2 = \pi/8$, $\phi_3 = \pi/4$, $f_1 = 0.5$, $f_2 = 0.8$, $f_3 = 1.0$, $k_1 = 0$, $k_2 = -1$, $k_3 = 1$.

connections. To test the inhibitory case, we modified the connections so that neuron 1 received an inhibitory connection from neuron 2 ($\bar W^5_{21} = \bar W^6_{21} = -0.3$), and neuron 1 received an inhibitory connection from neuron 3 ($\bar W^1_{31} = \bar W^2_{31} = -1.0$). Neuron 2 continued to receive an excitatory connection from neuron 3 ($\bar W^8_{32} = \bar W^9_{32} = 1.0$). The low firing rates of neurons, however, make inhibition more difficult to detect via correlations [3]. Similarly, the measures W and U performed less well with inhibition. To demonstrate that they could, at least theoretically, work for inhibition, we increased the firing rates, used $\bar W$s with smaller magnitudes, and increased the simulation length. Fig. 2 shows the results after simulating for 1,200,000 units of time, obtaining 130,000–140,000 spikes per neuron. With this extraordinarily large number of spikes, W and U successfully distinguish the direct connection correlations from the common input correlations. To test the robustness of the method to deviations from the linear-nonlinear model, we simulated a system of two integrate-and-fire neurons whose input was a threshold-linear function of the stimulus. The neurons received common input from a threshold-linear unit, and neuron 1 received a direct connection from neuron 2 (see Fig. 3).

Figure 2: Results from the simulation of the same linear-nonlinear network as in Fig. 1, except that the connections from both neuron 2 and neuron 3 onto neuron 1 were made inhibitory. Panels are as in Fig. 1. Again, S eliminates the stimulus-induced peak in C. W reflects only the direct connection correlations, and U reflects only the common input correlations. This inhibitory example, however, required a long simulation for accurate results (see text). Parameters for $g(\cdot)$: $\bar T_1 = 1.2$, $\bar T_2 = 2.0$, $\bar T_3 = 1.5$, $\epsilon_1 = 0.5$, $\epsilon_2 = 1.0$, $\epsilon_3 = 0.7$. Parameters for h are the same as in Fig. 1.

We let t be given in milliseconds, sampled Eq. (5) on a 20 × 20 × 30 grid in space and time, using a 2 ms grid in time, and normalized the resulting vector to obtain the unit vector $h^i_p$. A two-millisecond sample rate of discrete white noise is unrealistic in many experiments, so we departed further from the assumptions of the derivation and let the stimulus be white noise sampled at 10 ms. We let the stimulus standard deviation be $\sigma = 1/\sqrt{5}$ so that it had the same power as discrete white noise sampled at 2 ms with $\sigma = 1$. After one hour of simulated time (360,000 frames), we collected approximately 23,000–25,000 spikes per neuron. Fig. 4 shows that the method still effectively distinguishes direct connection correlations from common input correlations. The separation isn't perfect, as W becomes negative where the common input correlation is positive and U becomes negative where the direct input correlation is positive. To determine whether a combination of positive W and negative U, for example, indicates positive direct connection correlation or negative common input correlation, one simply needs to look to see if S is positive or negative. Fig. 4 dramatically illustrates the increased noise in W and U. For this reason, the measures are useful only when one can run a relatively long experiment to get an acceptable signal-to-noise ratio.
The noise is due to the conditioning of the (non-square) matrix in the least-squares calculation of W and U. The condition numbers in the three examples were approximately 70, 50, and 110, respectively. Measurement errors or noise could be magnified by as much as these factors. The high condition numbers reflect the subtlety of the distinction we are making. Obtaining values of W and U significantly beyond the noise level in real experiments may prove a formidable challenge. However, the utility of W and U with noisy data greatly improves when they are used in conjunction with other measures. One can use a less noisy measure such as S to find significant stimulus-independent correlations and determine their magnitudes.

Figure 3: Diagram of two integrate-and-fire neurons (circles) receiving threshold-linear input from the stimulus. The neurons received common input from threshold-linear unit 3, and neuron 1 received a direct connection from neuron 2. The evolution of the voltage of neuron p in response to input $g_p(t)$ was given by $\tau_m \frac{dV_p}{dt} + V_p + g_p(t)(V_p - E_s) = 0$. When $V_p(t)$ reached 1, a spike was recorded, and the voltage was reset to 0 and held there for an absolute refractory period of length $\tau_{ref}$. We let $g_p(t) = g^{ext}_p(t) + g^{int}_p(t)$, where the external input was $g^{ext}_p(t) = 0.05 \sum_j G(t - T^j_p) + 0.05 \sum_j G(t - T^j_3 - \delta_p)$ with $G(t) = \frac{e^2}{4}\left(\frac{t}{\tau_s}\right)^2 e^{-t/\tau_s}$ for t > 0 and G(t) = 0 otherwise. The $T^j_p$ were drawn from a modulated Poisson process with rate given by $\alpha_p\,[h^i_p \cdot X]_+$, where $[x]_+ = x$ if x > 0 and is zero otherwise. The internal input $g^{int}_2(t)$ to neuron 2 was set to zero, and the internal input to neuron 1 was set to reflect an excitatory connection from neuron 2, $g^{int}_1(t) = 0.05 \sum_j G(t - T^j_{sp,2} - \delta_{21})$, where the $T^j_{sp,2}$ are the spike times of neuron 2.
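The voltage equation in the Fig. 3 diagram can be sketched with a forward-Euler loop. This is only an illustrative sketch: $\tau_m$, $E_s$, and $\tau_{ref}$ are taken from the Fig. 4 caption, but the drive g(t) below is a crude noisy stand-in for the actual filtered stimulus and synaptic input, with made-up statistics.

```python
import random

# Euler integration of tau_m dV/dt + V + g(t) (V - E_s) = 0, with a spike at
# V = 1, reset to 0, and an absolute refractory period tau_ref.
tau_m, E_s = 5.0, 6.5          # ms and drive reversal, from the Fig. 4 caption
tau_ref, dt = 2.0, 0.1         # refractory period and Euler step (ms)
random.seed(0)

V, refr, spikes = 0.0, 0.0, []
for step in range(50000):      # 5 seconds of simulated time
    t = step * dt
    if refr > 0.0:             # voltage held at reset during refractoriness
        refr -= dt
        continue
    g = max(0.0, random.gauss(0.25, 0.1))   # assumed stand-in for g_p(t)
    V += (dt / tau_m) * (-V - g * (V - E_s))  # forward-Euler update
    if V >= 1.0:
        spikes.append(t)       # record the spike time, reset the voltage
        V, refr = 0.0, tau_ref
```

The loop produces an irregular spike train whose interspike intervals never fall below the refractory period; in the paper's simulations the same dynamics are driven by the threshold-linear stimulus filters instead of this noise stand-in.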
Then, assuming one can rule out causes like covariation in latency or excitability [7], one simply needs to determine if the correlations were caused by a direct connection or by common input. One does not need to use W and U to reject the null hypothesis of no connectivity-induced correlations; they are needed only to make the remaining binary distinction. The proposed method should be viewed simply as an example of a new framework for reconstructing stimulus-driven neural networks. Clearly, extensions beyond the presented model will be necessary, since the linear-nonlinear model can adequately describe the behavior of only a small subset of neurons in primary sensory areas. Furthermore, methods to validate the assumed model will be required before results of this approach can be trusted. Though limited in scope and model-dependent, we have demonstrated what appears to be the first example of a definitive dissociation between direct connection and common input correlations from spike time data. At least in the case of excitatory connections, this distinction can be made with a realistic, albeit large, amount of data. With further refinements, this approach may yield viable tools for reconstructing stimulus-driven neural networks.

Figure 4: Results from the simulation of two integrate-and-fire neurons, where neuron 2 had an excitatory connection onto neuron 1 with a delay $\delta_{21} = 50$ ms. Both neurons received common input, but the common input to neuron 2 was delayed ($\delta_1 = 0$ ms, $\delta_2 = 60$ ms). Panels are as in Fig. 1. S greatly reduces the central, stimulus-induced correlation from C. W and U successfully distinguish the direct connection correlations from the common input correlations, but also negatively reflect each other. Ambiguity in the interpretation of W and U can be eliminated by comparison with S. Integrate-and-fire parameters: $\tau_m = 5$ ms, $E_s = 6.5$, $\tau_s = 2$ ms, $\tau_{ref} = 2$ ms, $\alpha_1 = \alpha_2 = 0.25$ ms$^{-1}$, and $\alpha_3 = 0.1$ ms$^{-1}$. Parameters for h are the same as in Fig. 1 except that $\tau_h = 10$ ms.

References

[1] D. H. Perkel, G. L. Gerstein, and G. P. Moore. Neuronal spike trains and stochastic point processes. II. Simultaneous spike trains. Biophys. J., 7:419–40, 1967.
[2] A. M. H. J. Aertsen, G. L. Gerstein, M. K. Habib, and G. Palm. Dynamics of neuronal firing correlation: Modulation of "effective connectivity". J. Neurophysiol., 61:900–917, 1989.
[3] G. Palm, A. M. H. J. Aertsen, and G. L. Gerstein. On the significance of correlations among neuronal spike trains. Biol. Cybern., 59:1–11, 1988.
[4] J. R. Rosenberg, A. M. Amjad, P. Breeze, D. R. Brillinger, and D. M. Halliday. The Fourier approach to the identification of functional coupling between neuronal spike trains. Prog. Biophys. Mol. Biol., 53:1–31, 1989.
[5] D. Q. Nykamp and D. L. Ringach. Full identification of a linear-nonlinear system via cross-correlation analysis. J. Vision, 2:1–11, 2002.
[6] D. Q. Nykamp. A spike correlation measure that eliminates stimulus effects in response to white noise. J. Comp. Neurosci., 14:193–209, 2003.
[7] C. D. Brody. Correlations without synchrony. Neural Comput., 11:1537–51, 1999.
Selectivity and Metaplasticity in a Unified Calcium-Dependent Model

Luk Chong Yeung, Physics Department and Institute for Brain & Neural Systems, Brown University, Providence, RI 02912, yeung@physics.brown.edu
Brian S. Blais, Department of Science & Technology, Bryant College, Smithfield, RI 02917, and Institute for Brain & Neural Systems, Brown University, bblais@bryant.edu
Leon N Cooper, Institute for Brain & Neural Systems, Physics Department and Department of Neuroscience, Brown University, Providence, RI 02912, Leon_Cooper@brown.edu
Harel Z. Shouval, Institute for Brain & Neural Systems and Physics Department, Brown University, Providence, RI 02912, Harel_Shouval@brown.edu

Abstract

A unified, biophysically motivated calcium-dependent learning model has been shown to account for various rate-based and spike time-dependent paradigms for inducing synaptic plasticity. Here, we investigate the properties of this model for a multi-synapse neuron that receives inputs with different spike-train statistics. In addition, we present a physiological form of metaplasticity, an activity-driven regulation mechanism, that is essential for the robustness of the model. A neuron thus implemented develops stable and selective receptive fields, given various input statistics.

1 Introduction

Calcium influx through NMDA receptors is essential for the induction of diverse forms of bidirectional synaptic plasticity, such as rate-based [1, 2] and spike time-dependent plasticity (STDP) [3, 4]. Activation of NMDA receptors is also essential for functional plasticity in vivo [5]. An influential hypothesis holds that modest elevations of Ca above the basal level would induce LTD, while higher elevations would induce LTP [6, 7]. Based on these observations, a Unified Calcium Learning Model (UCM) has been proposed by Shouval et al. [8]. In this model, cellular activity is translated locally into the dendritic calcium concentrations $Ca_i$, through the voltage and time dependence of the NMDA channels.
The level of $Ca_i$ determines the sign and magnitude of synaptic plasticity through a function of local calcium, $\Omega(Ca_i)$ (see Methods). A further assumption is that the back-propagating action potential (BPAP) has a slow after-depolarizing tail. Implementation of this simple yet biophysical model has shown that it is sufficient to account for the outcome of different induction protocols of synaptic plasticity in a one-dimensional input space, as illustrated in Figure 1. In the pairing protocol, LTD occurs when low-frequency stimulation (LFS) is paired with a small depolarization of the postsynaptic voltage, while a larger depolarization yields LTP (Figure 1a), due to the voltage dependence of the NMDA currents. In the rate-based protocol, LFS gives rise to LTD while high-frequency stimulation (HFS) produces LTP (Figure 1b), due to the time-integration dynamics of the calcium transients. Finally, STDP gives LTD if a post-spike comes before a pre-spike within a time window, and LTP if a post-spike comes after a pre-spike (Figure 1c); this is due to the coincidence-detector property of the NMDA receptors and the shape of the BPAP. In addition to these results, the model also predicts a previously uncharacterized pre-before-post depressing regime and a rate dependence of the STDP curve. These findings have had preliminary experimental support [9, 3, 10], and as will be shown have consequences in the multi-dimensional environment that impact the results of this work.

Figure 1: Calcium-dependent learning rule and the various experimental plasticity-induction paradigms: implementation of (a) the pairing protocol, (b) rate-dependent plasticity, and (c) spike-time-dependent plasticity. The pairing protocol was simulated with a fixed input rate of 3 Hz; the STDP curve is shown for 1 Hz.
Notice the new pre-before-post depression regime.

In this study we investigate characteristics of the Calcium Control Hypothesis, such as cooperativity and competition, and examine how they give rise to input selectivity. A neuron is called selective to a specific input pattern if it responds strongly to it and not to other patterns, which is equivalent to having a potentiated pathway to this pattern. Input selectivity is a general feature of neurons and underlies the formation of receptive fields and topographic mappings. We demonstrate that using the UCM alone, selectivity can arise, but only within a narrow range of parameters. Metaplasticity, the activity-dependent modulation of synaptic plasticity, is essential for the robustness of the BCM model [11]. Furthermore, it has significant experimental support [12]. Here we propose a more biologically realistic implementation, compatible with the Calcium Control Hypothesis, which is based on experimental observations [13]. We find that it makes the UCM model more robust, significantly expanding the range of parameters that result in selectivity.

2 Selectivity to Spike Train Correlations

The development of neuronal selectivity, given any learning rule, depends on the statistical structure of the input environment. For spiking neurons, this structure may include temporal, in addition to spatial, statistics. One method of examining this feature is to generate input spike trains with different statistics across synapses. We use a simple scenario in which half of the synapses (group B) receive noisy Poisson spike trains with a mean rate $\langle r_{in}\rangle$, and the other half (group A) receive correlated spikes with the same rate $\langle r_{in}\rangle$. Input spikes in group A have an enhanced probability of arriving together (see Methods). One might expect that, by firing together, group A will gain control of the post-synaptic firing times and thus be potentiated, while group B will be depressed, in a manner similar to the STDP described by Song et al.
[14]. In addition to the 100 excitatory synapses, our neuron receives 20 inhibitory inputs. The results are shown in Figure 2. There exists a range of input frequencies (Figure 2a, left) at which segregation occurs between the correlated and uncorrelated groups. The cooperativity among the synapses in group A enhances its probability of generating a post-spike, which, through the BPAP, causes strong depolarization. Since the NMDA channels are still open due to a recent pre-spike, this is likely to potentiate these synapses in a Hebbian-associative fashion. Group B will fire with equal probability before and after a post-spike, which, given a sufficiently low NMDA receptor conductance, ensures that, on average, depression takes place. At the final state, the output spike train is irregular (Figure 2a, right) but its rate is stable (Figure 2a, center), indicating that the system had reached a fixed point with a balance between excitation and inhibition.

Figure 2: Segregation of the synapses for different input structures. (a) Segregation at 10 Hz. Left, time evolution of the average synaptic weight for groups A (solid) and B (dashed). Center, the output rate, calculated as the number of output spikes over non-overlapping time bins of 20 seconds. Right, the coefficient of variation, CV = std(isi)/mean(isi), where isi is the interspike interval of the output train. (b) Results for 8 Hz (left) and 12 Hz (right). All the synapses are potentiated and depressed, respectively.

These results, however, are sensitive to the simulation parameters. In fact, a slight change in the value of $\langle r_{in}\rangle$ disrupts the segregation described previously (Figure 2b).
For too high or too low values of $\langle r_{in}\rangle$, both channels are potentiated or depressed, respectively. This occurs because, unlike standard STDP models, the unified model exhibits frequency dependence in addition to spike-time dependence. This suggests that a stabilizing mechanism must be incorporated into the model.

3 Metaplasticity

In the BCM theory, the threshold between LTD and LTP moves as a function of the history of postsynaptic activity [11]. This type of activity-dependent regulation of the properties of synaptic plasticity, or metaplasticity, was developed to ensure selectivity and stability. Experimental results have linked some forms of metaplasticity to the magnitude of the NMDA conductance; it has been shown that as cellular activity increases, NMDA conductance is down-regulated, and vice versa [15, 16, 13, 17]. Under the Calcium Control Hypothesis, this sets the ground for a more physiological formulation of metaplasticity [18]. NMDA conductance is interpreted here as the total number ($g_m$) of NMDA channels inserted in the membrane of the postsynaptic terminal. Consider a simple kinetic model in which additional channels can be inserted from an intracellular pool ($g_i$) or removed and returned to the pool in an activity-dependent manner. We assume a fixed removal rate $k_-$ and a voltage-sensitive insertion rate $k_+ V^\alpha$:

$$g_m \;\underset{k_+ V^\alpha}{\overset{k_-}{\rightleftharpoons}}\; g_i$$

(Our results are not very sensitive to the details of the voltage dependence of the insertion and removal rates.) This scheme leads to a dynamic equation for $g_m$,

$$\dot g_m = -(k_- + k_+ V^\alpha)\, g_m + k_+ V^\alpha g_t,$$

where $g_t$ is a normalizing factor, $g_t = g_m + g_i$. The fixed point is:

$$g^*_m = \frac{g_t}{k_-/(k_+ V^\alpha) + 1} \tag{1}$$

If, in this model, cellular activity is translated into Ca, then $g_m$ can be loosely interpreted as the inverse of the BCM sliding threshold $\theta_m$ [18]. Notice that in the original form of BCM, $\theta_m$ is the time average of a non-linear function of the postsynaptic activity level.
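The fixed point in Eq. (1) can be checked numerically: integrating the $\dot g_m$ equation to steady state should land on the closed-form value. A Python sketch with made-up rate constants (the paper's own fitted values are given in Methods):

```python
# Internal-consistency check of the channel-insertion kinetics:
#   dg_m/dt = -(k_minus + k_plus * V**alpha) * g_m + k_plus * V**alpha * g_t,
# whose fixed point is g_m* = g_t / (k_minus / (k_plus * V**alpha) + 1).
# All parameter values below are invented for illustration.
k_minus, k_plus, alpha, g_t = 1.0, 0.002, 4, 1.0
V = 5.0                              # activity-signaling membrane variable

rate_in = k_plus * V ** alpha        # insertion rate k+ V^alpha
g_m, dt = 0.0, 0.001
for _ in range(20000):               # forward-Euler integration to steady state
    g_m += dt * (-(k_minus + rate_in) * g_m + rate_in * g_t)

g_star = g_t / (k_minus / rate_in + 1)   # Eq. (1) closed-form fixed point
```

After 20 time units the integrated $g_m$ coincides with `g_star` to numerical precision, confirming that Eq. (1) is the equilibrium of the kinetic scheme.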
In order to achieve competition, $g_m$ should not depend solely on local (synaptic) variables, but should rather detect changes in the global patterns of cellular activity. Here, the activity-signaling global variable is taken to be the postsynaptic membrane potential. Implementation of metaplasticity significantly widens the range of input frequencies for which segregation between the weights of correlated and uncorrelated synapses is observed; this is shown in Figure 3a. At low spiking activity, the subthreshold depolarization levels prevent significant inward Ca currents. Under these conditions, metaplasticity causes $g_m$ to grow. Persistent post-spike generation will lead $g_m$, and therefore Ca, to decrease, hence scaling the synaptic weights downwards. Competition arises as the system searches for the balance between the selective positive feedback of a standard Hebbian rule and the overall negative feedback of a sliding threshold mechanism. However, consistent with the rate-based protocol described before, at too low and too high $\langle r_{in}\rangle$ selectivity is disrupted, and the synapses will eventually all depress or potentiate, regardless of the statistical structure of the stimulus. Strengthening the correlation increases segregation (Figure 3b), demonstrating the effects of lateral cooperativity in potentiation. On the other hand, increasing the fraction of correlated inputs weakens the final weight of the correlated group (Figure 3c), suggesting that less potentiation is needed to control the output spike timing. Notice that in the presence of metaplasticity, no upper saturation limit is required; the equilibrium of the fixed point is homeostatic, rather than imposed.

Figure 3: The effects of metaplasticity.
(a) The weights segregate within the range of input frequencies [5, 35] Hz in a half-correlated (solid), half-uncorrelated (dashed) input environment; shown are the average final weights within each group, correlation parameter c = 0.8 (see Methods). (b) The average final weight as a function of the correlation parameter, $\langle r_{in}\rangle = 10$ Hz. (c) The average final weight as a function of the fraction of correlated inputs, $\langle r_{in}\rangle = 10$ Hz, c = 0.8.

4 Selectivity to patterns of rate distribution

An alternative input environment is one in which the rates vary across the synapses and over time. This is a plausible representation for sensory neurons that are differentially excited. A straightforward method is to use rate distributions that are piecewise constant. We use a simple example in which the rate distributions are non-overlapping square patterns, as illustrated in Figure 4a (see Methods). The patterns are randomly presented to the neuron, being switched at regular epochs. Since the mean switching time is constant and much smaller than the time constant of learning, each synapse receives the same average input over time. However, we observe that, after training, the neuron spontaneously breaks the symmetry, as a subset of synapses becomes potentiated while others are depressed (Figure 4b). It should be noticed that, because the choice of the training pattern at each epoch is random, the selected pattern is different at each run. Due to metaplasticity, these results are robust across different pattern amplitudes and pattern dimensions (not shown).

Figure 4: (a) Four non-overlapping patterns of input rate distribution and (b) the average weight evolution of each channel. In this particular simulation, the higher and lower rates correspond to 30 Hz and 10 Hz, respectively.
The final state of the neuron is one that is selective to the last pattern (a, leftmost).

5 Discussion

Neurons in many cortical areas develop receptive fields that are selective to a small subset of stimulating inputs. This property has been shown to be experience-dependent [19, 20] and also dependent on NMDA receptors [5, 21]. It is likely, therefore, that receptive field formation relies on the same type of NMDA-dependent synaptic plasticity observed in vitro [1, 2, 4]. Previous work has shown that this in vitro rate- and spike time-induced plasticity can be accounted for by the biologically inspired Unified Calcium Model. In this work, we have shown that the same model can lead to the experience-dependent development of neuronal selectivity. Metaplasticity adds robustness to the system and reinforces temporal competition between input patterns [11], by controlled scaling of NMDAR currents. We have shown here that even in simple input environments there is segregation among the synaptic strengths, depending on the temporal input statistics of the different channels. This is analogous to the explanation of ocular dominance that depends on temporal competition [22], and is likely to hold under more realistic assumptions. Because the UCM is responsive to input rates, in addition to spike timing, we are able to achieve selectivity for rate-distribution patterns in spiking neurons that is comparable to the selectivity obtained in simplified, continuous-valued systems [23]. This result suggests that the coexistence and complementarity of rate- and spike time-dependent plasticity, previously demonstrated for a one-dimensional neuron [8], can also be extended to multi-dimensional input environments. We are currently investigating the formation of receptive fields in more realistic environments, such as natural stimuli, and examining how their statistical properties can be translated into a physiological mechanism for the emergence of input selectivity.
6 Methods

We simulate a single neuron with 20 non-plastic inhibitory synapses and 100 excitatory synapses undergoing the calcium-dependent learning rule:

$$\dot w_i = \eta(Ca_i)\,\big(\Omega(Ca_i) - \lambda w\big), \tag{2}$$

where $w_i$ is the synaptic weight of synapse i, i = 1, ..., 100, $\eta$ is a linear calcium-dependent learning rate, $\eta = 10^{-3}\,Ca$, and $\Omega$ is a difference of sigmoids: $\Omega = \sigma(Ca, \alpha_1, \beta_1) - 0.5\,\sigma(Ca, \alpha_2, \beta_2)$, with $\sigma(x, a, b) := e^{b(x-a)}\,[1 + e^{b(x-a)}]^{-1}$ and $(\alpha_1, \beta_1, \alpha_2, \beta_2) = (0.25, 60, 0.4, 20)$. Here, we use $\lambda = 0$. The initial condition for all weights is 0.5; additionally, $w_i$ is constrained within hard boundaries, $w_i \in [0, 1]$, for the cases where no metaplasticity is used. The NMDA-mediated calcium concentration varies as:

$$\frac{dCa_i}{dt} = I - \frac{Ca_i}{\tau_{Ca}}, \tag{3}$$

where I is the NMDA current and $\tau_{Ca} = 20$ ms is the passive decay time constant [24]. I depends on the association between pre-spike times and the postsynaptic depolarization level, described by $I = g_m f(t, t_{pre}) H(V)$ [7]. In the non-metaplastic cases, we use $g_m = 2.53 \times 10^{-4}\,\mu M/(mV\,ms)$. Upon a pre-spike, f reaches its peak value of 1; 70% of this value decays with time constant $\tau^N_f = 50$ ms, and the remainder decays with time constant $\tau^N_s = 200$ ms. H is the magnesium-block function:

$$H(V) = \frac{V - V_{rev}}{1 + e^{-0.062 V}/3.57}, \tag{4}$$

with the reversal potential for calcium $V_{rev} = 130$ mV. The dynamics of the membrane potential are simulated with the standard integrate-and-fire model:

$$\frac{dV_m(t)}{dt} = \frac{1}{\tau_m}\Big[V_{rest} - V_m(t) + G_{ex}(t)\,\big(V_{ex} - V_m(t)\big) + G_{in}(t)\,\big(V_{in} - V_m(t)\big)\Big], \tag{5}$$

where $\tau_m = 20$ ms and $(V_{rest}, V_{ex}, V_{in}) = (-65, 0, -65)$ mV. If a pre-spike arrives at the excitatory [inhibitory] synapse i, $G_{ex[in]}(t) = G_{ex[in]}(t-1) + g^{max}_{ex[in]}\, g_i$; otherwise, $G_{ex}$ and $G_{in}$ decay exponentially with time constant $\tau = 5$ ms. For excitatory and inhibitory synapses, $(g_i, g^{max}) = (w_i, 0.09)$ and $(1, 0.3)$, respectively. If $V_m(t)$ reaches the firing threshold of −55 mV, a post-spike is generated and the BPAP is updated to its peak value of 60 mV.
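The learning rule of Eq. (2) is compact enough to sketch directly in Python, using the parameter values quoted in this section (note that $\sigma(x,a,b) = e^{b(x-a)}/(1+e^{b(x-a)})$ is just a logistic function):

```python
import math

# Sketch of the calcium-dependent learning rule, Eq. (2), with the Methods
# parameters: (alpha1, beta1, alpha2, beta2) = (0.25, 60, 0.4, 20), lambda = 0,
# and learning rate eta = 1e-3 * Ca.
def sigma(x, a, b):
    # Logistic sigmoid, equivalent to exp(b(x-a)) / (1 + exp(b(x-a))).
    return 1.0 / (1.0 + math.exp(-b * (x - a)))

def omega(ca):
    # Difference of sigmoids: slight depression at modest Ca levels,
    # strong potentiation at high Ca levels.
    return sigma(ca, 0.25, 60.0) - 0.5 * sigma(ca, 0.4, 20.0)

def dw_dt(ca, w, lam=0.0):
    eta = 1e-3 * ca                 # linear calcium-dependent learning rate
    return eta * (omega(ca) - lam * w)
```

Evaluating `omega` at a modest calcium level (e.g. 0.15) gives a small negative value (LTD), while a higher level (e.g. 0.35) gives a clearly positive value (LTP), matching the sign convention the Calcium Control Hypothesis requires.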
75% of this value decays rapidly ($\tau^B_f = 3$ ms) and the remainder decays slowly ($\tau^B_s = 35$ ms) [25]. The voltage at the synaptic site is thus given by the sum $V = V_m + BPAP$. To implement input correlations, we adopt the method used by [26]. Let the number of correlated inputs be N. For a pre-assigned correlation parameter c, $N_0$ Poisson events are generated, $N_0 = N + \sqrt{c}\,(1 - N)$, and, at each time step, randomly distributed among the N synapses. It is clear that each resulting spike train still has the same Poisson distribution, but with a probability of spiking together with other synapses. For simulations involving different rates, the 100 synapses were first divided into 4 channels of 25 synapses each. Time epochs were generated according to an exponential distribution with mean $\tau_c = 500$ ms. At each epoch, one of the channels was randomly chosen and assigned a mean rate $r^*$, while the others received spike trains with mean rate $r < r^*$. For metaplasticity in Equation 1, we use the parameters $k_-/k_+ = 9.1739 \times 10^7$, $g_t = -0.0184$ and $\alpha = 4$. All of the simulations use time steps of dt = 1 ms.

Acknowledgments

This work is partly funded by the Brown Brain Science Program Burroughs-Wellcome Fund fellowship program. The authors thank the members of the Institute for Brain and Neural Systems and the participants of the 2001 EU Summer School on Computational Neuroscience for helpful conversations.

References

[1] T.V.P. Bliss and G.L. Collingridge. A synaptic model of memory: long-term potentiation in the hippocampus. Nature, 361:31–9, 1993.
[2] S.M. Dudek and M.F. Bear. Homosynaptic long-term depression in area CA1 of hippocampus and the effects of NMDA receptor blockade. Proc. Natl. Acad. Sci., 89:4363–7, 1992.
[3] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–5, 1997.
[4] G. Bi and M. Poo.
Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci., 18(24):10464–72, 1998.
[5] A. Kleinschmidt, M.F. Bear, and W. Singer. Blockade of NMDA receptors disrupts experience-dependent plasticity of kitten striate cortex. Science, 238:355–358, 1987.
[6] M.F. Bear, L.N Cooper, and F.F. Ebner. A physiological basis for a theory of synapse modification. Science, 237:42–8, 1987.
[7] J.A. Lisman. A mechanism for the Hebb and the anti-Hebb processes underlying learning and memory. Proc. Natl. Acad. Sci., 86:9574–8, 1989.
[8] H.Z. Shouval, M.F. Bear, and L.N Cooper. A unified theory of NMDA receptor-dependent bidirectional synaptic plasticity. Proc. Natl. Acad. Sci., 99:10831–6, 2002.
[9] M. Nishiyama, K. Hong, K. Mikoshiba, M.M. Poo, and K. Kato. Calcium stores regulate the polarity and input specificity of synaptic modification. Nature, 408:584–8, 2000.
[10] P.J. Sjöström, G.G. Turrigiano, and S.B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32:1149–64, 2001.
[11] E.L. Bienenstock, L.N Cooper, and P.W. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci., 2:32–48, 1982.
[12] A. Kirkwood, M.G. Rioult, and M.F. Bear. Experience-dependent modification of synaptic plasticity in visual cortex. Nature, 381:526–8, 1996.
[13] B.D. Philpot, A.K. Sekhar, H.Z. Shouval, and M.F. Bear. Visual experience and deprivation bidirectionally modify the composition and function of NMDA receptors in visual cortex. Neuron, 29:157–69, 2001.
[14] S. Song, K.D. Miller, and L.F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neurosci., 3:919–26, 2000.
[15] G. Carmignoto and S. Vicini. Activity-dependent increase in NMDA receptor responses during development of visual cortex. Science, 258:1007–11, 1992.
[16] E.M. Quinlan, B.D.
Philpot, R.L. Huganir, and M.F. Bear. Rapid, experience-dependent expression of synaptic NMDA receptors in visual cortex in vivo. Nature Neurosci., 2(4):352–7, 1999. [17] A.J. Watt, M.C.W. van Rossum, K.M. MacLeod, S.B. Nelson, and G.G. Turrigiano. Activity co-regulates quantal AMPA and NMDA currents at neocortical synapses. Neuron, 26:659–70, 2000. [18] H.Z. Shouval, G.C. Castellani, L.C. Yeung, B.S. Blais, and L.N. Cooper. Converging evidence for a simplified biophysical model of synaptic plasticity. Bio. Cyb., 87:383–91, 2002. [19] Y. Frégnac and M. Imbert. Early development of visual cortical cells in normal and dark reared kittens: relationship between orientation selectivity and ocular dominance. J. Physiol. Lond., 278:27–44, 1978. [20] B. Chapman, M.P. Stryker, and T. Bonhoeffer. Development of orientation preference maps in ferret primary visual cortex. J. Neurosci., 16:6443–53, 1996. [21] A.S. Ramoa, A.F. Mower, D. Liao, and S.I. Jafri. Suppression of cortical NMDA receptor function prevents development of orientation selectivity in the primary visual cortex. J. Neurosci., 21:4299–309, 2001. [22] B.S. Blais, H.Z. Shouval, and L.N. Cooper. The role of presynaptic activity in monocular deprivation: Comparison of homosynaptic and heterosynaptic mechanisms. Proc. Natl. Acad. Sci., 96:1083–7, 1999. [23] E.E. Clothiaux, L.N. Cooper, and M.F. Bear. Synaptic plasticity in visual cortex: Comparison of theory with experiment. J. Neurophys., 66:1785–804, 1991. [24] B.L. Sabatini, T.G. Oerthner, and K. Svoboda. The life cycle of Ca2+ ions in dendritic spines. Neuron, 33:439–52, 2002. [25] J.C. Magee and D. Johnston. A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. Science, 275:209–13, 1997. [26] M. Rudolph and A. Destexhe. Correlation detection and resonance in neural systems with distributed noise sources. Phys. Rev. Lett., 86(16):3662–5, 2001.
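The correlated-input scheme described in the Methods above (N0 = N + √c (1 − N) source events redistributed among the N synapses at each time step) can be sketched roughly as follows. The function name and the rate rescaling that keeps each train at its nominal mean rate are illustrative assumptions, not the authors' code.

```python
import numpy as np

def correlated_poisson_inputs(n_syn=100, rate_hz=20.0, c=0.5,
                              t_steps=10_000, dt_ms=1.0, seed=0):
    """Generate n_syn spike trains with pairwise correlation ~c.

    Following the redistribution scheme described in the text:
    N0 = N + sqrt(c) * (1 - N) effective source processes are drawn
    each time step, and every source event is assigned to a randomly
    chosen synapse; trains stay Poisson but now share events.
    """
    rng = np.random.default_rng(seed)
    # c = 0 -> N independent sources; c = 1 -> a single shared source
    n0 = int(round(n_syn + np.sqrt(c) * (1 - n_syn)))
    # per-source firing probability, rescaled (an assumption) so each
    # synapse still sees `rate_hz` events per second on average
    p_fire = (rate_hz / 1000.0) * dt_ms * n_syn / n0
    spikes = np.zeros((n_syn, t_steps), dtype=bool)
    for t in range(t_steps):
        events = rng.random(n0) < p_fire            # sources firing this step
        targets = rng.integers(0, n_syn, size=int(events.sum()))
        spikes[targets, t] = True
    return spikes
```

With c = 0 this reduces to N independent Poisson trains; with c = 1 all synapses see copies of a single train.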
2002
Learning to Detect Natural Image Boundaries Using Brightness and Texture David R. Martin Charless C. Fowlkes Jitendra Malik Computer Science Division, EECS, U.C. Berkeley, Berkeley, CA 94720 {dmartin,fowlkes,malik}@cs.berkeley.edu Abstract The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, a classifier is trained using human labeled images as ground truth. We present precision-recall curves showing that the resulting detector outperforms existing approaches. 1 Introduction Consider the image patches in Figure 1. Though they lack global context, it is clear which contain boundaries and which do not. The goal of this paper is to use features extracted from the image patch to estimate the posterior probability of a boundary passing through the center point. Such a local boundary model is integral to higher-level segmentation algorithms, whether based on grouping pixels into regions [21, 8] or grouping edge fragments into contours [22, 16]. The traditional approach to this problem is to look for discontinuities in image brightness. For example, the widely employed Canny detector [2] models boundaries as brightness step edges. The image patches show that this is an inadequate model for boundaries in natural images, due to the ubiquitous phenomenon of texture. The Canny detector will fire wildly inside textured regions where high-contrast contours are present but no boundary exists. In addition, it is unable to detect the boundary between textured regions when there is only a subtle change in average image brightness. These significant problems have led researchers to develop boundary detectors that explicitly model texture. 
While these work well on synthetic Brodatz mosaics, they have problems in the vicinity of brightness edges. Texture descriptors over local windows that straddle a boundary have different statistics from windows contained in either of the neighboring regions. This results in thin halo-like regions being detected around contours. Clearly, boundaries in natural images are marked by changes in both texture and brightness. Evidence from psychophysics [18] suggests that humans make combined use of these two cues to improve detection and localization of boundaries. There has been limited work in computational vision on addressing the difficult problem of cue combination. For example, the authors of [8] associate a measure of texturedness with each point in an image in order to suppress contour processing in textured regions and vice versa. However, their solution is full of ad-hoc design decisions and hand-chosen parameters. The main contribution of this paper is to provide a more principled approach to cue combination by framing the task as a supervised learning problem. A large dataset of natural images that have been manually segmented by multiple human subjects [10] provides the ground truth label for each pixel as being on- or off-boundary. The task is then to model the probability of a pixel being on-boundary conditioned on some set of locally measured image features. This sort of quantitative approach to learning and evaluating boundary detectors is similar to the work of Konishi et al. [7] using the Sowerby dataset of English countryside scenes. Our work is distinguished by an explicit treatment of texture and brightness, enabling superior performance on a more diverse collection of natural images. The outline of the paper is as follows. In Section 2 we describe the oriented energy and texture gradient features used as input to our algorithm. Section 3 discusses the classifiers we use to combine the local features. 
Section 4 presents our evaluation methodology along with a quantitative comparison of our method to existing boundary detection methods. We conclude in Section 5. 2 Image Features 2.1 Oriented Energy In natural images, brightness edges are more than simple steps. Phenomena such as specularities, mutual illumination, and shading result in composite intensity profiles consisting of steps, peaks, and roofs. The oriented energy (OE) approach [12] can be used to detect and localize these composite edges [14]. OE is defined as:

OE_{θ,σ} = (I ∗ f_e^{θ,σ})² + (I ∗ f_o^{θ,σ})²

where f_e^{θ,σ} and f_o^{θ,σ} are a quadrature pair of even- and odd-symmetric filters at orientation θ and scale σ. Our even-symmetric filter is a Gaussian second derivative, and the corresponding odd-symmetric filter is its Hilbert transform. OE_{θ,σ} has maximum response for contours at orientation θ. We compute OE at 3 half-octave scales starting at a small fraction of the image diagonal. The filters are elongated by a ratio of 3:1 along the putative boundary direction. 2.2 Texture Gradient We would like a directional operator that measures the degree to which texture varies at a location (x, y) in direction θ. A natural way to operationalize this is to consider a disk of radius r centered on (x, y), divided in two along a diameter at orientation θ. We can then compare the texture in the two half discs with some texture dissimilarity measure. Oriented texture processing along these lines has been pursued by [19]. What texture dissimilarity measure should one use? There is an emerging consensus that for texture analysis, an image should first be convolved with a bank of filters tuned to various orientations and spatial frequencies [4, 9]. After filtering, a texture descriptor is then constructed using the empirical distribution of filter responses in the neighborhood of a pixel. This approach has been shown to be very powerful both for texture synthesis [5] as well as texture discrimination [15]. Puzicha et al. [15] evaluate a wide range of texture descriptors in this framework. We choose the approach developed in [8]. Convolution with a filter bank containing both even and odd filters at multiple orientations as well as a radially symmetric center-surround filter associates a vector of filter responses to every pixel. These vectors are clustered using k-means and each pixel is assigned to one of the cluster centers, or textons. Texture dissimilarities can then be computed by comparing the histograms of textons in the two disc halves. 
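A minimal sketch of the oriented energy computation of Section 2.1: squared responses of an even/odd quadrature pair summed at each pixel. The paper's filters are a Gaussian second derivative and its Hilbert transform; for brevity this sketch substitutes a cos/sin Gabor-like pair (an assumption), keeping the 3:1 elongation along the boundary direction.

```python
import numpy as np

def quadrature_pair(size=21, sigma=3.0, theta=0.0, elong=3.0):
    """Even/odd oriented filter pair (a cos/sin stand-in for the paper's
    Gaussian second derivative and its Hilbert transform), elongated 3:1
    along the putative boundary direction."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    # rotate coordinates: u runs across the boundary, v along it
    u = x * np.cos(theta) + y * np.sin(theta)
    v = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-0.5 * ((u / sigma) ** 2 + (v / (elong * sigma)) ** 2))
    even = env * np.cos(2 * np.pi * u / (4 * sigma))
    odd = env * np.sin(2 * np.pi * u / (4 * sigma))
    return even - even.mean(), odd - odd.mean()   # zero-mean filters

def oriented_energy(img, theta=0.0, sigma=3.0):
    """OE = (I * f_e)^2 + (I * f_o)^2, via circular FFT convolution."""
    fe, fo = quadrature_pair(theta=theta, sigma=sigma)
    shape = img.shape
    F = np.fft.rfft2(img)
    def conv(k):
        kp = np.zeros(shape)
        kp[:k.shape[0], :k.shape[1]] = k
        # center the kernel at the origin for aligned output
        kp = np.roll(kp, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
        return np.fft.irfft2(np.fft.rfft2(kp) * F, s=shape)
    return conv(fe) ** 2 + conv(fo) ** 2
```

For a vertical step edge, theta = 0 (u varying across the edge) gives a strong response localized on the edge and near-zero response in flat regions.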
Let g_i and h_i count how many pixels of texton type i occur in each half disk. We define the texture gradient (TG) to be the χ² distance between these two histograms:

TG(x, y, r, θ) = χ²(g, h) = (1/2) Σ_i (g_i − h_i)² / (g_i + h_i)

Figure 1: Local image features (columns of non-boundary and boundary patches). In each row, the first panel shows the image patch. The following panels show feature profiles along the line marked in each patch. The features are raw image intensity, raw oriented energy, localized oriented energy, raw texture gradient, and localized texture gradient. The vertical line in each profile marks the patch center. The challenge is to combine these features in order to detect and localize boundaries.

The texture gradient is computed at each pixel (x, y) over 12 orientations and 3 half-octave scales starting at a small fraction of the image diagonal. 2.3 Localization The underlying function we are trying to learn is tightly peaked around the location of image boundaries marked by humans. In contrast, Figure 1 shows that the features we have discussed so far don't have this structure. By nature of the fact that they pool information over some support, they produce smooth, spatially extended outputs. The texture gradient is particularly prone to this effect, since the texture in a window straddling the boundary is distinctly different from the textures on either side of the boundary. This often results in a wide plateau or even double peaks in the texture gradient. Since each pixel is classified independently, these spatially extended features are particularly problematic, as both on-boundary pixels and nearby off-boundary pixels will have large OE and TG. In order to make this spatial structure available to the classifier, we transform the raw OE and TG signals to emphasize local maxima. Given a feature f(x) defined over the spatial coordinate x orthogonal to the edge orientation, consider the derived feature f~(x) = f(x)/d(x), where d(x) = |f′(x)/f″(x)| is the first-order approximation of the distance to the nearest maximum of f(x). We use the stabilized version

f~(x) = f(x) f″(x) / (|f′(x)| + α)    (1)

with α chosen to optimize the performance of the feature. By incorporating the 1/d(x) localization term, f~(x) will have narrower peaks than the raw f(x). To robustly estimate the directional derivatives and localize the peaks, we fit a cylindrical parabola over a circular window of radius r centered at each pixel. The coefficients of the quadratic fit ax² + bx + c provide the signal derivatives directly, so the transform above becomes f~ = c · 2a/(|b| + α), where c and −a require half-wave rectification.1 This transformation is applied to the oriented energy and texture gradient signals at each orientation and scale separately. In order to set α and r, we optimized the performance of each feature independently with respect to the training data.2 Columns 4 and 6 in Figure 1 show the results of applying this transformation, which clearly has the effect of reducing noise and tightly localizing the boundaries. Our final feature set consists of these localized OE and TG signals, each at three scales. This yields a 6-element feature vector at 12 orientations at each pixel. 3 Cue Combination Using Classifiers We would like to combine the cues given by the local feature vector in order to estimate the posterior probability of a boundary at each image location (x, y, θ). Previous work on learning boundary models includes [11, 7]. We consider several parametric and non-parametric models, covering a range of complexity and computational cost. The simplest are able to capture the complementary information in the 6 features. The more powerful classifiers have the potential to capture non-linear cue “gating” effects. For example, one may wish to ignore brightness edges inside high-contrast textures where OE is high and TG is low. These are the classifiers we use: Density Estimation: Adaptive bins are provided by vector quantization using k-means. Each centroid provides the density estimate of its Voronoi cell as the fraction of on-boundary samples in the cell. We use k=128 and average the estimates from 10 runs. 
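The half-disc texton comparison of Section 2.2 reduces, at a single pixel, to two histogram counts and a χ² distance. A sketch under illustrative names and parameters (the texton map itself would come from k-means over filter-bank responses):

```python
import numpy as np

def texture_gradient(textons, cx, cy, radius=8, theta=0.0, k=32):
    """Chi-squared distance between texton histograms of the two
    half-discs of `radius` centered at (cx, cy), split by a diameter
    at orientation `theta`. `textons` is an integer label map with
    labels in [0, k)."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = xs ** 2 + ys ** 2 <= radius ** 2
    # sign of the cross product with the diameter direction picks the side
    side = (xs * np.sin(theta) - ys * np.cos(theta)) > 0
    patch = textons[cy - radius:cy + radius + 1, cx - radius:cx + radius + 1]
    g = np.bincount(patch[disc & side], minlength=k).astype(float)
    h = np.bincount(patch[disc & ~side], minlength=k).astype(float)
    denom = g + h
    denom[denom == 0] = 1.0            # avoid 0/0 for unused textons
    return 0.5 * np.sum((g - h) ** 2 / denom)
```

With the diameter aligned along a texture boundary the two half-disc histograms diverge and TG is large; away from the boundary both halves see the same textons and TG is near zero.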
Classification Trees: The domain is partitioned hierarchically. Top-down axis-parallel splits are made so as to maximize the information gain. A 5% bound on the error of the density estimate is enforced by splitting cells only when both classes have 400 points present. Logistic Regression: This is the simplest of our classifiers, and the one perhaps most easily replicated by neurons in the visual cortex. Initialization is random, and convergence is fast and reliable by maximizing the likelihood. We also consider two variants: quadratic combinations of features, and boosting using the confidence-rated generalization of AdaBoost by Schapire and Singer [20]. No more than 10 rounds of boosting are required for this problem. Hierarchical Mixtures of Experts: The HME model of Jordan and Jacobs [6] is a mixture model where both the components and the mixing coefficients are fit by logistic functions. We consider small binary trees up to a depth of 3 (8 experts). The model is initialized in a greedy, top-down manner and fit with EM. Support Vector Machines: We use the SVM package libsvm [3] to do soft-margin classification using Gaussian kernels. The two optimized parameters were both 0.2.

1. Windowed parabolic fitting is known as 2nd-order Savitzky-Golay filtering. We also considered Gaussian derivative filters {G, G′, G″} to estimate {f, f′, f″}, with nearly identical results. 2. The fitted values are α = {0.1, 0.075, 0.013} and r = {2.1, 2.5, 3.1} for OE, and α = {.057, .016, .005} and r = {6.66, 9.31, 11.72} for TG; r is measured in pixels.

Figure 2: Performance of raw (left) and localized features (right). [Legend F-measures, raw: all .65, oe0 .59, oe1 .60, oe2 .61, tg0 .64, tg1 .64, tg2 .61; localized: all .67, oe0 .60, oe1 .62, oe2 .63, tg0 .65, tg1 .65, tg2 .63.] The precision and recall axes are described in Section 4. Curves towards the top (lower noise) and right (higher accuracy) are more desirable. Each curve is scored by the F-measure, the value of which is shown in the legend. In all the precision-recall graphs in this paper, the maximum F-measure occurs at a recall of approximately 75%. The left plot shows the performance of the raw OE and TG features using the logistic regression classifier. The right plot shows the performance of the features after applying the localization process of Equation 1. It is clear that the localization function greatly improves the quality of the individual features, especially the texture gradient. The top curve in each graph shows the performance of the features in combination. While tuning each feature's (α, r) parameters individually is suboptimal, overall performance still improves.

The ground truth boundary data is based on the dataset of [10], which provides 5-6 human segmentations for each of 1000 natural images from the Corel image database. We used 200 images for training and algorithm development. The 100 test images were used only to generate the final results for this paper. The authors of [10] show that the segmentations of a single image by the different subjects are highly consistent, so we consider all human-marked boundaries valid. We declare an image location (x, y, θ) to be on-boundary if it is within 2 pixels and 30 degrees of any human-marked boundary. The remainder are labeled off-boundary. This classification task is characterized by relatively low dimension, a large amount of data (100M samples for our 240x160-pixel images), and poor separability. The maximum feasible amount of data, uniformly sampled, is given to each classifier. This varies from 50M samples for density estimation to 20K samples for the SVM. 
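The localization transform of Section 2.3 can be sketched as below: a quadratic is least-squares fit in a sliding window (the 2nd-order Savitzky-Golay filtering of footnote 1), giving smoothed estimates of f, f′, f″, and the signal is divided by a stabilized distance-to-peak estimate. The exact stabilization and rectification used in the paper are not fully recoverable here, so the precise form below (including the `alpha` floor) is an assumption.

```python
import numpy as np

def localize(f, radius=3, alpha=0.1):
    """Sharpen a 1-D feature profile around its local maxima.

    Fits a * x^2 + b * x + c in a window of `radius` around each sample
    (2nd-order Savitzky-Golay), so f ~ c, f' ~ b, f'' ~ 2a, then divides
    by a stabilized first-order estimate |f'|/f'' of the distance to the
    nearest maximum.
    """
    x = np.arange(-radius, radius + 1, dtype=float)
    A = np.stack([x ** 2, x, np.ones_like(x)], axis=1)
    # rows of the pseudo-inverse act as convolution kernels for a, b, c
    P = np.linalg.pinv(A)
    pad = np.pad(f, radius, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(pad, 2 * radius + 1)
    a, b, c = (win @ P.T).T
    curv = np.maximum(-2.0 * a, 0.0)      # keep only concave (peak-like) points
    return np.maximum(c, 0.0) * curv / (np.abs(b) + alpha)
```

Applied to a broad bump, the output keeps the same peak location but with a much narrower profile, matching the qualitative effect shown in columns 4 and 6 of Figure 1.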
Note that a high degree of class overlap in any local feature space is inevitable, because the human subjects make use of both global constraints and high-level information to resolve locally ambiguous boundaries. 4 Results The output of each classifier is a set of oriented images, which provide the probability of a boundary at each image location (x, y, θ) based on local information. For several of the classifiers we consider, the image provides actual posterior probabilities, which is particularly appropriate for the local measurement model in higher-level vision applications. For the purpose of evaluation, we take the maximum over orientations.

Figure 3: Precision-recall curves for (a) different feature combinations, and (b) different classifiers. [Legend F-measures, (a): all .67, oe2+tg1 .67, tg* .66, oe* .63; (b): Density Estimation .68, Classification Tree .68, Logistic Regression .67, Quadratic LR .68, Boosted LR .68, Hier. Mix. of Experts .68, Support Vector Machine .66.] The left panel shows the performance of different combinations of the localized features using the logistic regression classifier: the 3 OE features (oe*), the 3 TG features (tg*), the best performing single OE and TG features (oe2+tg1), and all 6 features together. There is clearly independent information in each feature, but most of the information is captured by the combination of one OE and one TG feature. The right panel shows the performance of different classifiers using all 6 features. All the classifiers achieve similar performance, except for the SVM, which suffers from the poor separation of the data. Classification trees perform the best by a slim margin. Based on performance, simplicity, and low computational cost, we favor the logistic regression and its variants. 
In order to evaluate the boundary model against the human ground truth, we use the precision-recall framework, a standard evaluation technique in the information retrieval community [17]. It is closely related to the ROC curves used by [1] to evaluate boundary models. The precision-recall curve captures the trade-off between accuracy and noise as the detector threshold is varied. Precision is the fraction of detections which are true positives, while recall is the fraction of positives that are detected. These are computed using a distance tolerance of 2 pixels to allow for small localization errors in both the machine and human boundary maps. The precision-recall curve is particularly meaningful in the context of boundary detection when we consider applications that make use of boundary maps, such as stereo or object recognition. It is reasonable to characterize higher-level processing in terms of how much true signal is required to succeed, and how much noise can be tolerated. Recall provides the former and precision the latter. A particular application will define a relative cost between these quantities, which focuses attention at a specific point on the precision-recall curve. The F-measure, defined as F = PR/(αR + (1 − α)P), captures this trade-off. The location of the maximum F-measure along the curve provides the optimal threshold given α, which we set to 0.5 in our experiments. Figure 2 shows the performance of the raw and localized features. This provides a clear quantitative justification for the localization process described in Section 2.3. Figure 3a shows the performance of various linear combinations of the localized features. The combination of multiple scales improves performance, but the largest gain comes from using OE and TG together.

Figure 4: The left panel shows precision-recall curves for a variety of boundary detection schemes [legend F-measures: Human .75, Us .67, Nitzberg .65, Canny .57], along with the precision and recall of the human segmentations when compared with each other. The right panel shows the F-measure of each detector as the distance tolerance (1 to 3 pixels) for measuring precision and recall varies. We take the Canny detector as the baseline due to its widespread use. Our detector outperforms the learning-based Nitzberg detector proposed by Konishi et al. [7], but there is still a significant gap with respect to human performance.

The results presented so far use the logistic regression classifier. Figure 3b shows the performance of the 7 different classifiers on the complete feature set. The most obvious trend is that they all perform similarly. The simple non-parametric models, the classification tree and density estimation, perform the best, as they are most able to make use of the large quantity of training data to provide unbiased estimates of the posterior. The plain logistic regression model performs extremely well, with the variants of logistic regression (quadratic, boosted, and HME) performing only slightly better. The SVM is a disappointment because of its lower performance, high computational cost, and fragility. These problems result from the non-separability of the data, which requires 20% of the training examples to be used as support vectors. Balancing considerations of performance, model complexity, and computational cost, we favor the logistic model and its variants.3 Figure 4 shows the performance of our detector compared to two other approaches. Because of its widespread use, MATLAB's implementation of the classic Canny [2] detector forms the baseline. We also consider the Nitzberg detector [13, 7], since it is based on a similar supervised learning approach, and Konishi et al. [7] show that it outperforms previous methods. 
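The precision-recall and F-measure machinery of Section 4 reduces, in a simplified per-pixel form, to something like the sketch below. The paper additionally allows a 2-pixel distance tolerance when matching detections to human boundaries; that matching step is omitted here for brevity.

```python
import numpy as np

def pr_curve(prob, label, thresholds=None):
    """Precision-recall curve and best F-measure for a soft boundary map.

    `prob` holds per-pixel boundary probabilities, `label` the 0/1
    ground truth. F = PR / (alpha*R + (1-alpha)*P), which for
    alpha = 0.5 is the harmonic mean 2PR/(P+R).
    """
    if thresholds is None:
        thresholds = np.linspace(0.05, 0.95, 19)
    p, r = [], []
    for t in thresholds:
        det = prob >= t
        tp = np.logical_and(det, label == 1).sum()
        p.append(tp / max(det.sum(), 1))          # precision
        r.append(tp / max((label == 1).sum(), 1)) # recall
    p, r = np.array(p), np.array(r)
    alpha = 0.5
    f = (p * r) / np.maximum(alpha * r + (1 - alpha) * p, 1e-12)
    return p, r, float(f.max())
```

The maximum of F along the curve then picks the operating threshold for a given alpha, as described in the text.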
To make the comparisons fair, the parameters of both Canny and Nitzberg were optimized using the training data. For Canny, this amounts to choosing the optimal scale. The Nitzberg detector generates a feature vector containing eigenvalues of the 2nd moment matrix; we train a classifier on these 2 features using logistic regression. Figure 4 also shows the performance of the human data as an upper bound for the algorithms. The human precision-recall points are computed for each segmentation by comparing it to the other segmentations of the same image. The approach of this paper is a clear improvement over the state of the art in boundary detection, but it will take the addition of high-level and global information to close the gap between the machine and human performance. 3. The fitted coefficients for the logistic are {.088, −.029, .019} for OE and {.31, .26, .27} for TG, with an offset of −2.79. The features have been separately normalized to have unit variance. 5 Conclusion We have defined a novel set of brightness and texture cues appropriate for constructing a local boundary model. By using a very large dataset of human-labeled boundaries in natural images, we have formulated the task of cue combination for local boundary detection as a supervised learning problem. This approach models the true posterior probability of a boundary at every image location and orientation, which is particularly useful for higher-level algorithms. Based on a quantitative evaluation on 100 natural images, our detector outperforms existing methods. References [1] K. Bowyer, C. Kranenburg, and S. Dougherty. Edge detector evaluation using empirical ROC curves. Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 1999. [2] J. Canny. A computational approach to edge detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679–698, 1986. [3] C. Chang and C. Lin. LIBSVM: a library for support vector machines, 2001. 
Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm. [4] I. Fogel and D. Sagi. Gabor filters as texture discriminator. Bio. Cybernetics, 61:103–13, 1989. [5] D. J. Heeger and J. R. Bergen. Pyramid-based texture analysis/synthesis. In Proceedings of SIGGRAPH ’95, pages 229–238, 1995. [6] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6:181–214, 1994. [7] S. Konishi, A. L. Yuille, J. Coughlan, and S. C. Zhu. Fundamental bounds on edge detection: an information theoretic evaluation of different edge cues. Proc. IEEE Conf. Comput. Vision and Pattern Recognition, pages 573–579, 1999. [8] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. Int’l. Journal of Computer Vision, 43(1):7–27, June 2001. [9] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. J. Optical Society of America, 7(2):923–32, May 1990. [10] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th Int’l. Conf. Computer Vision, volume 2, pages 416–423, July 2001. [11] M. Meilă and J. Shi. Learning segmentation by random walks. In NIPS, 2001. [12] M.C. Morrone and D.C. Burr. Feature detection in human vision: a phase dependent energy model. Proc. R. Soc. Lond. B, 235:221–45, 1988. [13] M. Nitzberg, D. Mumford, and T. Shiota. Filtering, Segmentation and Depth. Springer-Verlag, 1993. [14] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In Proc. Int. Conf. Computer Vision, pages 52–7, Osaka, Japan, Dec 1990. [15] J. Puzicha, T. Hofmann, and J. Buhmann. Non-parametric similarity measures for unsupervised texture segmentation and image retrieval. In Computer Vision and Pattern Recognition, 1997. [16] X. Ren and J. Malik. 
A probabilistic multi-scale model for contour completion based on image statistics. Proc. 7th Europ. Conf. Comput. Vision, 2002. [17] C. Van Rijsbergen. Information Retrieval, 2nd ed. Dept. of Comp. Sci., Univ. of Glasgow, 1979. [18] J. Rivest and P. Cavanagh. Localizing contours defined by more than one attribute. Vision Research, 36(1):53–66, 1996. [19] Y. Rubner and C. Tomasi. Coalescing texture descriptors. ARPA Image Understanding Workshop, 1996. [20] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37(3):297–336, 1999. [21] Z. Tu, S. Zhu, and H. Shum. Image segmentation by data driven markov chain monte carlo. In Proc. 8th Int’l. Conf. Computer Vision, volume 2, pages 131–138, July 2001. [22] L.R. Williams and D.W. Jacobs. Stochastic completion fields: a neural model of illusory contour shape and salience. In Proc. 5th Int. Conf. Computer Vision, pages 408–15, June 1995.
2002
Concurrent Object Recognition and Segmentation by Graph Partitioning Stella X. Yu†‡, Ralph Gross† and Jianbo Shi† †Robotics Institute, Carnegie Mellon University; ‡Center for the Neural Basis of Cognition, 5000 Forbes Ave, Pittsburgh, PA 15213-3890 {stella.yu, rgross, jshi}@cs.cmu.edu Abstract Segmentation and recognition have long been treated as two separate processes. We propose a mechanism based on spectral graph partitioning that readily combines the two processes into one. A part-based recognition system detects object patches, supplies their partial segmentations as well as knowledge about the spatial configurations of the object. The goal of patch grouping is to find a set of patches that conform best to the object configuration, while the goal of pixel grouping is to find a set of pixels that have the best low-level feature similarity. Through pixel-patch interactions and between-patch competition encoded in the solution space, these two processes are realized in one joint optimization problem. The globally optimal partition is obtained by solving a constrained eigenvalue problem. We demonstrate that the resulting object segmentation eliminates false positives for the part detection, while overcoming occlusion and weak contours for the low-level edge detection. 1 Introduction A good image segmentation must single out meaningful structures such as objects from a cluttered scene. Most current segmentation techniques take a bottom-up approach [5], where image properties such as feature similarity (brightness, texture, motion etc), boundary smoothness and continuity are used to detect perceptually coherent units. Segmentation can also be performed in a top-down manner from object models, where object templates are projected onto an image and matching errors are used to determine the existence of the object [1]. Unfortunately, either approach alone has its drawbacks. 
Without utilizing any knowledge about the scene, image segmentation gets lost in poor data conditions: weak edges, shadows, occlusions and noise. Missed object boundaries can then hardly be recovered in subsequent object recognition. Gestaltists have long recognized this issue, circumventing it by adding a grouping factor called familiarity [6]. Without being subject to perceptual constraints imposed by low level grouping, an object detection process can produce many false positives in a cluttered scene [3]. One approach is to build a better part detector, but this has its own limitations, such as increase in the complexity of classifiers and the number of training examples required. Another approach, which we adopt in this paper, is based on the observation that the falsely detected parts are not perceptually salient (Fig. 1), thus they can be effectively pruned away by perceptual organization. Right arm: 7 Right leg: 3 Head: 4 Left arm: 4 Left leg: 9 Figure 1: Human body part detection. A total of 27 parts are detected, each labeled by one of the five part detectors for arms, legs and head. False positives cannot be validated on two grounds. First, they do not form salient structures based on low-level cues, e.g. the patch on the floor that is labeled left leg has the same features as its surroundings. Secondly, false positives are often incompatible with nearby parts, e.g. the patch on the treadmill that is labeled head has no other patches in the image to make up a whole human body. These two conditions, low-level image feature saliency and high-level part labeling consistency, are essential for the segmentation of objects from background. Both cues are encoded in our pixel and patch grouping respectively. We propose a segmentation mechanism that is coupled with the object recognition process (Fig. 2). There are three tightly coupled processes. 1) Top-level: the part-based object recognition process. 
It learns classifiers from training images to detect parts along with the segmentation patterns and their relative spatial configurations. A few approaches based on pattern classification have been developed for part detection [9, 3]. Recent work on object segmentation [1] uses image patches and their figure-ground labeling as building blocks for segmentation. 2) Bottom-level: the pixel-based segmentation process. This process finds perceptually coherent groups using pairwise local feature similarity. The local features we use here are contour cues. 3) Interactions: coupling object recognition with segmentation by linking patches with their corresponding pixels. With such a representation, we concurrently carry out the object recognition and image segmentation processes. The final output is an object segmentation where the object group consists of pixels with coherent low-level features and patches with compatible part configurations. We formulate our object segmentation task in a graph partitioning framework. We represent low-level grouping cues with a graph where each pixel is a node and edges between the nodes encode the affinity of pixels based on their feature similarity [4]. We represent high-level grouping cues with a graph where each detected patch is a node and edges between the nodes encode the labeling consistency based on prior knowledge of object part configurations. There are also edges connecting patch nodes with their supporting pixel nodes. We seek the optimal graph cut in this joint graph, which separates the desired patch and pixel nodes from the rest of the nodes. We build upon the computational framework of spectral graph partitioning [7], and achieve patch competition using the subspace constraint method proposed in [10]. We show that our formulation leads to a constrained eigenvalue problem, whose globally optimal solutions can be obtained efficiently. 2 Segmentation model We illustrate our method through a synthetic example shown in Fig. 3. 
Suppose we are interested in detecting a human-like configuration. Furthermore, we assume that some object recognition system has labeled a set of patches as object parts. Every patch has a local segmentation according to its part label. The recognition system has also learned the statistical distribution of the spatial configurations of object parts. Given such information, we need to address two issues. One is the cue evaluation problem, i.e. how to evaluate low-level pixel cues, high-level patch cues and their segmentation correspondence. The other is the integration problem, i.e. how to fuse partial and imprecise object knowledge with somewhat unreliable low-level cues to segment out the object of interest.

Figure 2: Model of object segmentation. Given an image, we detect edges using a set of oriented filter banks. The edge responses provide low-level grouping cues, and a graph can be constructed with one node for each pixel. Shown on the middle right are affinity patterns of five center pixels within a square neighbourhood, overlaid on the edge map. Dark means larger affinity. We detect a set of candidate body parts using learned classifiers. Body part labeling provides high-level grouping cues, and a consistency graph can be constructed with one node for each patch. Shown on the middle left are the connections between patches. Thicker lines mean better compatibility. Edges are noisy, while patches contain ambiguity in local segmentation and part labeling. Patches and pixels interact through expected local segmentations based on object knowledge, as shown in the middle image. A global partitioning on the coupled graph outputs an object segmentation that has both pixel-level saliency and patch-level consistency.

Figure 3: Given the image on the left, we want to detect the object on the right. The diagram relates the image and its edges, the detected patches, and the pixel-patch relations to the final object segmentation. 11 patches of various sizes are detected (middle top).
They are labeled as head (1), left-upper-arm (2, 9), left-lower-arm (3, 10), left-leg (11), left-upper-leg (4), left-lower-leg (5), right-arm (6), right-leg (7, 8). Each patch has a partial local segmentation as shown in the center image. Object pixels are marked black, background white and others gray. The image intensity itself has its natural organization, e.g. pixels across a strong edge (middle bottom) are likely to be in different regions. Our goal is to find the best patch-pixel combinations that conform to the object knowledge and data coherence.

2.1 Representations

We denote the graph in Fig. 2 by G = (V, E, W). Let N be the number of pixels and M the number of patches. Let A be the pixel-pixel affinity matrix, B the patch-patch affinity matrix, and C the pixel-patch association matrix. All these weights are assumed nonnegative. Let βB and βC be scalars reflecting the relative importance of B and C with respect to A. Then the node set and the weight matrix for the pairwise edge set E are:

V = {1, ..., N, N+1, ..., N+M},   W(A, B, C; βB, βC) = [ A (N x N),  βC · C (N x M);  βC · Cᵀ (M x N),  βB · B (M x M) ],   (1)

where the first N nodes are pixels and the last M nodes are patches. Object segmentation corresponds to a node bipartitioning problem, where V = V1 ∪ V2 and V1 ∩ V2 = ∅. We assume V1 contains the pixel and patch nodes that correspond to the object, and V2 contains the remaining background pixels and the patches that correspond to false positives and alternative labelings. Let X1 be an (N + M) x 1 vector, with X1(k) = 1 if node k ∈ V1 and 0 otherwise. It is convenient to introduce the indicator for V2: X2 = 1 − X1, where 1 is the vector of ones. We only need to process the image region enclosing all the detected patches. The remaining pixels are associated with a virtual background patch, which we denote as patch node N + M, in addition to the M − 1 detected object patches. Restricting segmentation to this region of interest (ROI) helps bind irrelevant background elements into one group [10].
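The block structure of the joint weight matrix in Eq. (1) is straightforward to assemble once the three cue matrices are available. The sketch below is an illustration rather than the authors' code; the function name and the N x M orientation of C are our assumptions:

```python
import numpy as np

def joint_weight_matrix(A, B, C, beta_B, beta_C):
    """Assemble the joint pixel-patch weight matrix of Eq. (1).

    A: (N, N) pixel-pixel affinities, B: (M, M) patch-patch affinities,
    C: (N, M) pixel-patch associations (one column per patch).
    """
    N, M = C.shape
    assert A.shape == (N, N) and B.shape == (M, M)
    W = np.zeros((N + M, N + M))
    W[:N, :N] = A            # pixel-pixel block
    W[:N, N:] = beta_C * C   # pixel-patch block
    W[N:, :N] = beta_C * C.T # patch-pixel block (keeps W symmetric)
    W[N:, N:] = beta_B * B   # patch-patch block
    return W
```

Symmetry of W is preserved by construction, which the spectral machinery of Section 2.5 relies on.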
2.2 Computing pixel-pixel similarity A

The pixel affinity matrix A measures low-level image feature similarity. In this paper, we choose intensity as our feature and calculate A based on edge detection results. We first convolve the image with quadrature pairs of oriented filters to extract the magnitude of edge responses OE [4]. Let x_i denote the location of pixel i. Pixel affinity A is inversely correlated with the maximum magnitude of edges crossing the line connecting two pixels: A(i, j) is low if i and j are on the two sides of a strong edge (Fig. 4):

A(i, j) = exp( −(1 / 2σ²) · [ max_{t ∈ (0,1)} OE(x_i + t · (x_j − x_i)) / max_k OE(x_k) ]² ).   (2)

Figure 4: Pixel-pixel similarity matrix A is computed based on intensity edge magnitudes; e.g. A(1, 3) ≈ 1 for two pixels with no intervening edge, while A(1, 2) ≈ 0 for two pixels separated by a strong edge.

2.3 Computing patch-patch compatibility B and competition

For object patches, we evaluate their position compatibility according to learned statistical distributions. For object part labels a and b, we can model their spatial displacement by a Gaussian, with mean μ_ab and covariance Σ_ab estimated from training data. Let p̂ denote the object part label of patch p and x_p the center location of patch p. For patches p and q, B(p, q) is low if p and q form a rare configuration for their part labels p̂ and q̂ (Fig. 5a):

B(p, q) = exp( −(1/2) (x_p − x_q − μ_{p̂q̂})ᵀ Σ_{p̂q̂}⁻¹ (x_p − x_q − μ_{p̂q̂}) ).   (3)

We manually set these values for our image examples. The virtual background patch node has affinity 1 only to itself. Patch compatibility measures alone do not prevent the desired pixel and patch group from including falsely detected patches and their pixels, nor do they favor keeping the true object pixels apart from unlabeled background pixels. We need further constraints to restrict feasible groupings. This is done by constraining the partition indicator X. In Fig. 5b, there are four pairs of patches with the same object part labels.
To encode mutual exclusion between patches, we enforce one winner among patch nodes in competition. For example, only one of patches 7 and 8 can be validated into the object group: X1(N+7) + X1(N+8) = 1. We also set an exclusion constraint between a reliable patch and the virtual background patch so that the desired object group stands out alone, without the unlabeled background pixels, e.g. X1(N+1) + X1(N+M) = 1. Formally, let S = {S_1, ..., S_|S|} be a collection of sets of mutually exclusive nodes, and let |·| denote the cardinality of a set. We have:

Σ_{k ∈ S_m} X1(k) = 1,   m = 1 : |S|.   (4)

Figure 5: a) Patch-patch compatibility matrix B is evaluated based on the statistical plausibility of part configurations; thicker lines denote larger affinity. b) Patches with the same object part label compete to enter the object group, e.g. patches 7 and 8 cannot both be part of the object; only one winner from each linked pair of patches can be validated.

2.4 Computing pixel-patch association C

Every object part label also projects an expected pixel segmentation within the patch window (Fig. 6). The pixel-patch association matrix C has one column for each patch:

C(i, p) = 1 if i is an object pixel of patch p, and 0 otherwise.   (5)

For the virtual background patch, its member pixels are those outside the ROI.

Figure 6: Pixel-patch association C for object patches: the head, arm and leg detectors each project an expected local segmentation. Object pixels are marked black, background white and others gray. A patch is associated with its object pixels in the given partial segmentation.

Finally, we choose βB to balance the total weights between pixel and patch grouping, so that M ≪ N does not render patch grouping insignificant, and we want βC to be large enough that the results of patch grouping bring along their associated pixels:

βB = 0.01 · (1ᵀA1) / (1ᵀB1),   βC = max C.   (6)
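The cue computations of Sections 2.2-2.4 can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the sampling density along the pixel-pixel line, the value of σ, and all function names are our assumptions.

```python
import numpy as np

def pixel_affinity(OE, i, j, sigma=0.1, n_samples=10):
    """Eq. (2) sketch: affinity is low if a strong edge crosses the
    straight line between pixel locations i and j.  OE is the map of
    maximal oriented-edge magnitudes; i, j are (row, col) pairs."""
    i, j = np.asarray(i, float), np.asarray(j, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = [(1 - t) * i + t * j for t in ts]
    crossing = max(OE[int(round(p[0])), int(round(p[1]))] for p in pts)
    rel = crossing / OE.max()  # normalize by the strongest edge in the image
    return np.exp(-rel**2 / (2 * sigma**2))

def patch_compatibility(p_pos, q_pos, mu, Sigma):
    """Eq. (3) sketch: Gaussian score of the displacement p - q for the
    two patches' part labels, with learned mean mu and covariance Sigma."""
    d = np.asarray(p_pos, float) - np.asarray(q_pos, float) - mu
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))
```

Two pixels on the same side of every edge get affinity near 1; a pair straddling a strong edge gets affinity near 0, exactly the behavior illustrated in Fig. 4.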
2.5 Segmentation as an optimization problem

We apply the normalized cuts criterion [7] to the joint pixel-patch graph in Eq. (1):

max ε(X1) = Σ_{t=1}^{2} (Xtᵀ W Xt) / (Xtᵀ D Xt),   s.t. Σ_{k ∈ S_m} X1(k) = 1, m = 1 : |S|,   (7)

where D is the diagonal degree matrix of W, D(i, i) = Σ_j W(i, j). Let x be a scaled version of the indicator X1. By relaxing the constraints into the linear form Lᵀx = 0 [10], Eq. (7) becomes a constrained eigenvalue problem [10], whose maximizer is given by the nontrivial leading eigenvector:

Q D⁻¹ W x* = λ x*,   (8)
s.t. Lᵀ x* = 0,   (9)
Q = I − D⁻¹ L (Lᵀ D⁻¹ L)⁻¹ Lᵀ.   (10)

Once we obtain the optimal eigenvector, we compare 10 thresholds uniformly distributed within its range and choose the discrete segmentation that yields the best criterion ε. Below is an overview of our algorithm.

1: Compute edge responses OE and calculate pixel affinity A, Eq. (2).
2: Detect parts and calculate patch affinity B, Eq. (3).
3: Formulate constraints S and L among competing patches, Eq. (4).
4: Set pixel-patch affinity C, Eq. (5).
5: Calculate weights βB and βC, Eq. (6).
6: Form W and calculate its degree matrix D, Eq. (1).
7: Solve Q D⁻¹ W x* = λ x*, Eqs. (8-10).
8: Threshold x* to get a discrete segmentation.

3 Results and conclusions

In Fig. 7, we show results on the 120 x 120 synthetic image. Image segmentation alone gets lost in a cluttered scene. With concurrent segmentation and recognition, regions forming the object of interest pop out, with unwanted edges (caused by occlusion) and weak edges (illusory contours) corrected in the final segmentation. It is also faster to compute the pixel-patch grouping, since the size of the solution space is greatly reduced: 44 seconds for segmentation alone versus 17 seconds for concurrent segmentation and recognition.

Figure 7: Eigenvectors (row 1) and their segmentations (row 2) for Fig. 3. On the right, we show the optimal eigenvector on both pixels and patches, the horizontal dotted line indicating the threshold. Computation times are obtained in MATLAB 6.0 on a PC.
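The constrained spectral step of Eqs. (8)-(10) reduces to an ordinary (non-symmetric) eigenproblem for Q D⁻¹ W. A minimal dense sketch, assuming a constraint matrix L whose columns encode the relaxed linear constraints Lᵀx = 0 (a toy illustration, not a scalable implementation):

```python
import numpy as np

def constrained_cut(W, L):
    """Sketch of Eqs. (8)-(10): leading nontrivial eigenvector of
    Q D^{-1} W, where Q projects out the constraint subspace L^T x = 0."""
    deg = W.sum(axis=1)
    D_inv = np.diag(1.0 / deg)                 # D^{-1}, degrees assumed > 0
    Q = np.eye(W.shape[0]) - D_inv @ L @ np.linalg.solve(L.T @ D_inv @ L, L.T)
    vals, vecs = np.linalg.eig(Q @ D_inv @ W)
    order = np.argsort(-vals.real)             # largest eigenvalue first
    x = vecs[:, order[0]].real
    return x / np.linalg.norm(x)
```

Any eigenvector with a nonzero eigenvalue automatically satisfies Lᵀx = 0, since LᵀQ = 0. On a toy graph with two tight clusters and the balancing constraint L = 1, the leading eigenvector takes opposite signs on the two clusters, which the thresholding step of the algorithm then converts into a discrete segmentation.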
We apply our method to human body detection in a single image. We manually label five body parts (both arms, both legs and the head) of a person walking on a treadmill in all 32 images of a complete gait cycle. Using the magnitude-thresholded edge orientations in the hand-labeled boxes as features, we train linear Fisher classifiers [2] for each body part. In order to account for the appearance changes of the limbs through the gait cycle, we use two separate models for each arm and each leg, bringing the total number of models to 9. Each individual classifier is trained to discriminate between the body part and a random image patch. We iteratively re-train the classifiers using false positives until optimal performance is reached over the training set. In addition, we train linear color-based classifiers for each body part to perform figure-ground discrimination at the pixel level. Alternatively, a general model of human appearance based on filter responses, as in [8], could be used. In Fig. 8, we show the results on the test image in Fig. 2. Though the pixel-patch affinity matrix C, derived from the color classifier, is neither precise nor complete, and the edges are weak at many object boundaries, the two processes complement each other in our pixel-patch grouping system and output a reasonably good object segmentation.

Figure 8: Eigenvectors and their segmentations for the 261 x 183 human body image in Fig. 2 (segmentation alone: 68 seconds; segmentation-recognition: 58 seconds).

Acknowledgments. We thank Shyjan Mahamud and anonymous referees for valuable comments. This research is supported by ONR N00014-00-1-0915 and NSF IRI-9817496.

References

[1] E. Borenstein and S. Ullman. Class-specific, top-down segmentation. In European Conference on Computer Vision, 2002.
[2] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, 1990.
[3] S. Mahamud, M. Hebert, and J. Lafferty. Combining simple discriminators for object discrimination.
In European Conference on Computer Vision, 2002.
[4] J. Malik, S. Belongie, T. Leung, and J. Shi. Contour and texture analysis for image segmentation. International Journal of Computer Vision, 2001.
[5] D. Marr. Vision. San Francisco, CA: Freeman, 1982.
[6] S. E. Palmer. Vision Science: From Photons to Phenomenology. MIT Press, 1999.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 731-737, June 1997.
[8] H. Sidenbladh and M. Black. Learning image statistics for Bayesian tracking. In International Conference on Computer Vision, 2001.
[9] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[10] S. X. Yu and J. Shi. Grouping with bias. In Neural Information Processing Systems, 2001.
Approximate Linear Programming for Average-Cost Dynamic Programming

Daniela Pucci de Farias, IBM Almaden Research Center, 650 Harry Road, San Jose, CA 95120, pucci@mit.edu
Benjamin Van Roy, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, bvr@stanford.edu

Abstract

This paper extends our earlier analysis on approximate linear programming as an approach to approximating the cost-to-go function in a discounted-cost dynamic program [6]. In this paper, we consider the average-cost criterion and a version of approximate linear programming that generates approximations to the optimal average cost and differential cost function. We demonstrate that a naive version of approximate linear programming prioritizes approximation of the optimal average cost and that this may not be well-aligned with the objective of deriving a policy with low average cost. To achieve that, the algorithm should aim at producing a good approximation of the differential cost function. We propose a two-phase variant of approximate linear programming that allows for external control of the relative accuracy of the approximation of the differential cost function over different portions of the state space via state-relevance weights. Performance bounds suggest that the new algorithm is compatible with the objective of optimizing performance and provide guidance on appropriate choices for state-relevance weights.

1 Introduction

The curse of dimensionality prevents application of dynamic programming to most problems of practical interest. Approximate linear programming (ALP) aims to alleviate the curse of dimensionality by approximation of the dynamic programming solution. In [6], we develop a variant of approximate linear programming for the discounted-cost case which is shown to scale well with problem size. In this paper, we extend that analysis to the average-cost criterion.
Originally introduced by Schweitzer and Seidmann [11], approximate linear programming combines the linear programming approach to exact dynamic programming [9] with approximation of the differential cost function (the cost-to-go function, in the discounted-cost case) by a linear architecture. More specifically, given a collection of basis functions
, mapping states in the system to be controlled to real numbers, approximate linear programming involves solution of a linear program that generates an approximation to the differential cost function as a weighted combination of the basis functions. Extension of approximate linear programming to the average-cost setting requires a different algorithm and additional analytical ideas. Specifically, our contribution can be summarized as follows: Analysis of the usual formulation of approximate linear programming for average-cost problems. We start with the observation that the most natural formulation of average-cost ALP, which follows immediately from taking limits in the discounted-cost formulation and can be found, for instance, in [1, 2, 4, 10], can be interpreted as an algorithm for approximating the optimal average cost. However, to obtain a good policy, one needs a good approximation to the differential cost function. We demonstrate through a counterexample that approximating the average cost and approximating the differential cost function so that it leads to a good policy are not necessarily aligned objectives. Indeed, the algorithm may lead to arbitrarily bad policies, even if the approximate average cost is very close to optimal and the basis functions have the potential to produce an approximate differential cost function leading to a reasonable policy. Proposal of a variant of average-cost ALP. A critical limitation of the average-cost ALP algorithm found in the literature is that it does not allow for external control of how the approximation to the differential cost function should be emphasized over different portions of the state space. In situations like the one described in the previous paragraph, when the algorithm produces a bad policy, there is little one can do to improve the approximation other than selecting new basis functions.
To address this issue, we propose a two-phase variant of average-cost ALP: the first phase is simply the average-cost ALP algorithm already found in the literature, which is used for generating an approximation for the optimal average cost. This approximation is used in the second phase of the algorithm for generating an approximation to the differential cost function. We show that the second phase selects an approximate differential cost function minimizing a weighted sum of the distance to the true differential cost function, where the weights (referred to as state-relevance weights) are algorithm parameters to be specified during implementation of the algorithm, and can be used to control which states should have more accurate approximations for the differential cost function. Development of bounds linking the quality of approximate differential cost functions to the performance of the policy associated with them. The observation that the usual formulation of ALP may lead to arbitrarily bad policies raises the question of how to design an algorithm for directly optimizing performance of the policy being obtained. With this question in mind, we develop bounds that relate the quality of approximate differential cost functions (i.e., their proximity to the true differential cost function) to the expected increase in cost incurred by using a greedy policy associated with them. The bound suggests using a weighted sum of the distance to the true differential cost function for comparing different approximate differential cost functions. Thus the objective of the second phase of our ALP algorithm is compatible with the objective of optimizing performance of the policy being obtained, and we also have some guidance on appropriate choices of state-relevance weights.

2 Stochastic Control Problems and the Curse of Dimensionality

We consider discrete-time stochastic control problems involving a finite state space S.
For each state x ∈ S, there is a finite set A_x of available actions. When the current state is x and action a ∈ A_x is taken, a cost g_a(x) is incurred. State transition probabilities P_a(x, y) represent, for each pair of states (x, y) and each action a ∈ A_x, the probability that the next state will be y given that the current state is x and the current action is a. A policy u is a mapping from states to actions. Given a policy u, the dynamics of the system follow a Markov chain with transition probabilities P_{u(x)}(x, y). For each policy u, we define a transition matrix P_u whose (x, y)th entry is P_{u(x)}(x, y), and a cost vector g_u whose xth entry is g_{u(x)}(x). We make the following assumption on the transition probabilities:

Assumption 1 (Irreducibility). For each pair of states (x, y) and each policy u, there is t > 0 such that (P_u^t)(x, y) > 0.

In stochastic control problems, we want to select a policy optimizing a given criterion. In this paper, we will employ as an optimality criterion the average cost

λ_u(x) = limsup_{T→∞} (1/T) E[ Σ_{t=0}^{T−1} g_u(x_t) | x_0 = x ].

Irreducibility implies that, for each policy u, this limit exists and λ_u(x) = λ_u for all x: the average cost is independent of the initial state in the system. We denote the minimal average cost by λ* = min_u λ_u. For any policy u, we define the associated dynamic programming operator T_u by

(T_u h)(x) = g_u(x) + Σ_y P_{u(x)}(x, y) h(y).

Note that T_u operates on vectors h ∈ R^{|S|} corresponding to functions on the state space S. We also define the dynamic programming operator T by

(T h)(x) = min_u (T_u h)(x).

A policy u is called greedy with respect to h if it attains the minimum in the definition of T. An optimal policy minimizing the average cost can be derived from the solution of Bellman's equation

λ e + h = T h,

where e is the vector of ones. We denote solutions to Bellman's equation by pairs (λ*, h*). The scalar λ* is unique and equal to the optimal average cost. The vector h* is called a differential cost function. The differential cost function is unique up to an additive constant; if h* solves Bellman's equation, then h* + c e is also a solution for all c, and all other solutions can be shown to be of this form. We can ensure uniqueness by imposing h*(x̄) = 0 for an arbitrary state x̄. Any policy that is greedy with respect to the differential cost function is optimal.

Solving Bellman's equation involves computing and storing the differential cost function for all states in the system. This is computationally infeasible in most problems of practical interest due to the explosion in the number of states as the number of state variables grows. We try to combat the curse of dimensionality by settling for the more modest goal of finding an approximation to the differential cost function. The underlying assumption is that, in many problems of practical interest, the differential cost function will exhibit some regularity, or structure, allowing for reasonable approximations to be stored compactly. We consider a linear approximation architecture: given a set of basis functions φ_k : S → R, k = 1, ..., K, we generate approximations of the form

h*(x) ≈ Σ_{k=1}^{K} r_k φ_k(x).   (1)

We define a matrix Φ ∈ R^{|S| x K} by Φ = [φ_1 ⋯ φ_K], i.e., each of the basis functions is stored as a column of Φ, and each row corresponds to the vector of basis functions evaluated at a distinct state x. We represent Σ_k r_k φ_k in matrix notation as Φr. In the remainder of the paper, we assume that (a manageable number of) basis functions are prespecified, and address the problem of choosing a suitable parameter vector r. For simplicity, we choose an arbitrary state, henceforth called state "0", for which we set h*(0) = 0; accordingly, we assume that the basis functions are such that φ_k(0) = 0, for all k.

3 Approximate Linear Programming

Approximate linear programming [11, 6] is inspired by the traditional linear programming approach to dynamic programming, introduced by [9]. Bellman's equation can be solved by the average-cost exact LP (ELP):

max_{λ,h} λ   s.t. λ e + h ≤ T h.   (2)

Note that the constraints λ e + h ≤ T h can be replaced by

λ + h(x) ≤ g_a(x) + Σ_y P_a(x, y) h(y), for all x and a ∈ A_x;

therefore we can think of problem (2) as an LP. In approximate linear programming, we reduce the generally intractable dimensions of the average-cost ELP by constraining h to be of the form Φr. This yields the first-phase approximate LP (ALP):

max_{λ,r} λ   s.t. λ e + Φr ≤ T Φr.   (3)

Problem (3) can be expressed as an LP by the same argument used for the exact LP. We denote its solution by (λ̄, r̄).
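For a small MDP with explicit transition matrices, the first-phase ALP (3) can be solved directly with an off-the-shelf LP solver. The sketch below is our illustration, not the paper's implementation; it assumes SciPy's linprog is available and enumerates one constraint per state-action pair, which is viable only for tiny state spaces:

```python
import numpy as np
from scipy.optimize import linprog

def first_phase_alp(P, g, Phi):
    """Sketch of the first-phase ALP (3).

    P[a] is the |S| x |S| transition matrix under action a, g[a] the
    corresponding cost vector, Phi the |S| x K basis matrix.  Decision
    variables are (lambda, r); for every action a we impose
    lambda*e + Phi r <= g_a + P_a Phi r, and maximize lambda.
    """
    nS, K = Phi.shape
    A_ub, b_ub = [], []
    for Pa, ga in zip(P, g):
        # lambda + ((I - Pa) Phi r)(x) <= g_a(x), one row per state x
        block = np.hstack([np.ones((nS, 1)), (np.eye(nS) - Pa) @ Phi])
        A_ub.append(block)
        b_ub.append(ga)
    res = linprog(c=np.r_[-1.0, np.zeros(K)],          # maximize lambda
                  A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  bounds=[(None, None)] * (K + 1))
    return res.x[0], res.x[1:]
```

When Φ spans all functions on S (e.g. the identity matrix on a two-state chain), the ALP coincides with the ELP and returns λ = λ*, consistent with Lemma 1 below.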
The following result is immediate.

Lemma 1. The solution (λ̄, r̄) of the first-phase ALP minimizes λ* − λ over the feasible region.

Proof: Maximizing λ in (3) is equivalent to minimizing λ* − λ. Since the first-phase ALP corresponds to the exact LP (2) with the extra constraint h = Φr, we have λ ≤ λ* for all feasible (λ, r). Hence λ* − λ̄ ≤ λ* − λ, and the claim follows.

Lemma 1 implies that the first-phase ALP can be seen as an algorithm for approximating the optimal average cost. Using this algorithm for generating a policy for the average-cost problem is based on the hope that approximation of the optimal average cost should also implicitly imply approximation of the differential cost function. Note that it is not unreasonable to expect that some approximation of the differential cost function should be involved in the minimization of λ* − λ; for instance, we know that λ̄ = λ* if h* = Φr for some r. The ALP has as many variables as the number of basis functions plus one, which will usually amount to a dramatically smaller number of variables than what we had in the ELP. However, the ALP still has as many constraints as the number of state-action pairs. This problem is also found in the discounted-cost formulation and there are several approaches in the literature for dealing with it, including constraint sampling [7] and exploitation of problem-specific structures for efficient elimination of redundant constraints [8, 10]. Our first step in the analysis of average-cost ALP is to demonstrate through a counterexample that it can produce arbitrarily bad policies, even if the approximation to the average cost is very accurate.

4 Performance of the first-phase ALP: a counterexample

We consider a Markov process with states
, each representing a possible number of jobs in a queue with a finite buffer. At interior states, the system state evolves according to

x_{t+1} = x_t + 1 with probability p (an arrival),   x_{t+1} = x_t − 1 with probability q (a departure).

From the empty state, transitions up and back to itself occur with probabilities p and 1 − p, respectively. From the full-buffer state, transitions down and back to itself occur with probabilities q and 1 − q, respectively. The arrival probability p is the same for all states. The action to be chosen in each state is the departure probability or service rate q, which takes values in a finite set
. The cost incurred at state x if action q is taken is the sum of a holding cost growing with x and a service cost growing with q. We use a small fixed set of basis functions
. For this example, the first-phase ALP yields an approximation λ̄ to the optimal average cost which is within 2% of the true value λ*. However, the average cost yielded by the greedy policy with respect to Φr̄ is 9842.2, and goes to infinity as we increase the buffer size. Figure 1 explains this behavior. Note that Φr̄ is a very good approximation to h* over the small states, and becomes progressively worse as x increases. The small states account for virtually all of the stationary probability under the optimal policy, hence it is not surprising that the first-phase ALP yields a very accurate approximation for λ*, as other states contribute very little to the optimal average cost. However, fitting the optimal average cost and the differential cost function over states visited often under the optimal policy is not sufficient for getting a good policy. Indeed, Φr̄ severely underestimates costs in large states, and the greedy policy drives the system to those states, yielding a very large average cost and ultimately making the system unstable when the buffer size goes to infinity. It is also troublesome to note that our choice of basis functions actually has the potential to lead to a reasonably good policy: for a different weight vector r, the greedy policy associated with Φr has an average cost that remains bounded regardless of the buffer size and is only moderately larger than the optimal average cost. Hence, even though the first-phase ALP is given a relatively good set of basis functions, it produces a bad approximate differential cost function, which cannot be improved unless different basis functions are selected.

5 Two-phase average-cost ALP

A striking difference between the first-phase average-cost ALP and discounted-cost ALP is the presence in the latter of state-relevance weights.
These are algorithm parameters that can be used to control the accuracy of the approximation to the cost-to-go function (the discounted-cost counterpart of the differential cost function) over different portions of the state space, and they have been shown in [6] to have a first-order impact on the performance of the policy being generated. For instance, in the example described in the previous section, in the discounted-cost formulation one might be able to improve the policy yielded by ALP by choosing state-relevance weights that put more emphasis on the large states. Inspired by this observation, we propose a two-phase algorithm in which state-relevance weights are present and can be used to control the quality of the differential cost function approximation. The first phase is simply the first-phase ALP introduced in Section 3, and is used for generating an approximation to the optimal average cost. The second phase consists of solving the second-phase ALP for finding approximations to the differential cost function:

max_r cᵀΦr   s.t. λ e + Φr ≤ T Φr.   (4)

The state-relevance weights c ≥ 0 and the scalar λ are algorithm parameters to be specified by the user, and cᵀ denotes the transpose of c. We denote the optimal solution of the second-phase ALP by r̃. We now demonstrate how the state-relevance weights c and the scalar λ can be used for controlling the quality of the approximation to the differential cost function. We first define, for any given λ, the function h_λ given by the unique solution to [3]

h = T h − λ e,   h(0) = 0.   (5)

If λ is our estimate for the optimal average cost, then h_λ can be seen as an estimate of the differential cost function h*. Our first result links the difference between h* and h_λ to the difference between λ* and λ, when λ ≤ λ*. For simplicity of notation, we implicitly drop from all vectors and matrices the rows and columns corresponding to state 0, so that, for instance, h* corresponds to the original vector h* without the row corresponding to state 0, and P_u corresponds to the original matrix P_u without the rows and columns corresponding to state 0.

Lemma 2. For all λ ≤ λ*, we have

h_λ ≤ h* + (λ* − λ)(I − P_{u*})⁻¹ e,

where u* denotes an optimal policy.

Proof: Equation (5), satisfied by h_λ, corresponds to Bellman's equation for the problem of finding the stochastic shortest path to state 0, when stage costs are given by g − λ e [3]. Hence h_λ corresponds to the vector of smallest expected lengths of paths until state 0. It follows that

h_λ ≤ (I − P_{u*})⁻¹ (g_{u*} − λ e) = (I − P_{u*})⁻¹ (g_{u*} − λ* e) + (λ* − λ)(I − P_{u*})⁻¹ e = h* + (λ* − λ)(I − P_{u*})⁻¹ e.

Note that if λ ≤ λ*, we also have h_λ ≥ h*, so that h_λ − h* ≤ (λ* − λ)(I − P_{u*})⁻¹ e. In the following theorem, we show that the second-phase ALP minimizes ||h_λ − Φr||_{1,c} over the feasible region. The weighted ℓ1 norm ||·||_{1,c}, which will be used in the remainder of the paper, is defined as ||h||_{1,c} = Σ_x c(x) |h(x)|, for any c ≥ 0.

Theorem 1. Let r̃ be the optimal solution to the second-phase ALP. Then it minimizes ||h_λ − Φr||_{1,c} over the feasible region of the second-phase ALP.

Proof: Maximizing cᵀΦr is equivalent to minimizing cᵀ(h_λ − Φr). It is a well-known result that, for all h such that T h − λ e ≥ h, we have h ≤ h_λ. It follows that Φr ≤ h_λ over the feasible region of the second-phase ALP, hence cᵀ(h_λ − Φr) = ||h_λ − Φr||_{1,c}, and Φr̃ minimizes ||h_λ − Φr||_{1,c}.

For any fixed choice of λ satisfying λ ≤ λ*, we have

||h* − Φr||_{1,c} ≤ ||h_λ − Φr||_{1,c} + (λ* − λ) cᵀ(I − P_{u*})⁻¹ e,   (6)

hence the second-phase ALP minimizes an upper bound on the weighted ℓ1 norm ||h* − Φr||_{1,c} of the error in the differential cost function approximation. Note that the state-relevance weights determine how errors over different portions of the state space are weighted in the decision of which approximate differential cost function to select, and can be used for balancing accuracy of the approximation over different states. In the next section, we will provide performance bounds that tie a certain weighted ℓ1 norm of the difference between h* and Φr̃ to the expected increase in cost incurred by using the greedy policy with respect to Φr̃. This demonstrates that the objective optimized by the second-phase ALP is compatible with the objective of optimizing performance of the policy being obtained, and it also provides some insight about appropriate choices of state-relevance weights.

We have not yet specified how to choose λ. An obvious choice is λ = λ̄, since λ̄ is the estimate for the optimal average cost yielded by the first-phase ALP and it satisfies λ̄ ≤ λ*, so that bound (6) holds. In practice, it may be advantageous to perform a line search over λ to optimize performance of the ultimate policy being generated. An important issue is whether the second-phase ALP is feasible for a given choice of λ; for λ = λ̄, this will always be the case. It can also be shown that, under certain conditions on the basis functions Φ, the second-phase ALP possesses multiple feasible solutions regardless of the choice of λ.

6 A performance bound

In this section, we present a bound on the performance of greedy policies associated with approximate differential cost functions. This bound provides some guidance on appropriate choices for state-relevance weights.

Theorem 2. Let Assumption 1 hold. For all h, let λ_u and π_u denote the average cost and stationary state distribution of the greedy policy u associated with h. Then, for all h such that h ≤ h*, we have

λ_u ≤ λ* + π_uᵀ(h* − h).

Proof: We have λ_u = π_uᵀ g_u = π_uᵀ(g_u + P_u h − h) = π_uᵀ(T h − h), where g_u and P_u denote the costs and transition matrix associated with the greedy policy u, and we have used π_uᵀ P_u = π_uᵀ in the first equality. Now if h ≤ h*, we have

λ_u = π_uᵀ(T h − h) ≤ π_uᵀ(T h* − h) = π_uᵀ(h* + λ* e − h) = λ* + π_uᵀ(h* − h).

Theorem 2 suggests that one approach to selecting state-relevance weights may be to run the second-phase ALP adaptively, using in each iteration weights corresponding to the stationary state distribution associated with the policy generated by the previous iteration.
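For a fixed policy, the estimate h_λ of Eq. (5) is the solution of a linear system: with the row and column for state 0 dropped, as in Lemma 2, it is the expected path cost to state 0 when stage costs are g − λ. The sketch below handles only this single-policy case (the controlled case would take a minimization over actions, e.g. by value iteration); the function name is our own:

```python
import numpy as np

def differential_estimate(P, g, lam):
    """Fixed-policy analogue of h_lambda in Eq. (5): expected cost to
    reach state 0 with stage costs g - lam.  h(0) = 0 by convention;
    the remaining entries solve (I - P~) h~ = g~ - lam, where P~, g~
    drop the row/column for state 0 as in Lemma 2."""
    n = len(g)
    Pt = P[1:, 1:]                      # transitions among non-zero states
    ht = np.linalg.solve(np.eye(n - 1) - Pt, g[1:] - lam)
    return np.r_[0.0, ht]
```

At λ = λ* this recovers the differential cost function of the evaluated policy itself, so sweeping λ below λ* shows directly how the overestimate h_λ − h* grows as Lemma 2 predicts.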
Alternatively, in some cases it may suffice to use rough guesses about the stationary state distribution of the MDP as the state-relevance weights. We revisit the example from Section 4 to illustrate this idea.

Example 1. Consider applying the second-phase ALP to the controlled queue described in Section 4. We use weights of the form c(x) ∝ ρ^x. This is similar to what is done in [6] and is motivated by the fact that, if the system runs under a "stabilizing" policy, there are exponential lower and upper bounds on the stationary state distribution [5]. Hence ρ^x is a reasonable guess for the shape of the stationary distribution. We also let λ̄ = λ̃. Figure 1 demonstrates the evolution of Φr2 as we increase ρ. Note that there is significant improvement in the shape of Φr2 relative to Φr1. The best of these policies incurs, regardless of the buffer size, an average cost only slightly higher than the optimal average cost.

7 Conclusions

We have extended the analysis of ALP to the case of minimization of average costs. We have shown how the ALP version commonly found in the literature may lead to arbitrarily bad policies even if the choice of basis functions is relatively good; the main problem is that this version of the algorithm, the first-phase ALP, prioritizes approximation of the optimal average cost, but does not necessarily yield a good approximation of the differential cost function. We propose a variant of approximate linear programming, the two-phase approximate linear programming method, that explicitly approximates the differential cost function. The main attraction of the algorithm is the presence of state-relevance weights, which can be used for controlling the relative accuracy of the differential cost function approximation over different portions of the state space. Many open issues must still be addressed. Perhaps most important of all is whether there is an automatic way of choosing state-relevance weights.
The performance bound in Theorem 2 suggests an iterative scheme, where the second-phase ALP is run multiple times and the state-relevance weights are updated in each iteration according to the stationary state distribution obtained with the policy generated by the algorithm in the previous iteration. It remains to be shown whether such a scheme converges. It is also important to note that, in principle, Theorem 2 holds only for Φr ≤ h*. If λ̄ ≠ λ*, this condition cannot be verified for Φr2, and the appropriateness of minimizing ∥h* − Φr2∥ is only speculative.

Figure 1: Controlled queue example: differential cost function approximations as a function of ρ. From top to bottom: the differential cost function h*, the approximations Φr2 (with ρ = 0.9, 0.8, 0.7), and the approximation Φr1.

References

[1] D. Adelman. A price-directed approach to stochastic inventory/routing. Preprint, 2002.
[2] D. Adelman. Price-directed replenishment of subsets: Methodology and its application to inventory routing. Preprint, 2002.
[3] D. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
[4] D. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[5] D. Bertsimas, D. Gamarnik, and J.N. Tsitsiklis. Performance of multiclass Markovian queueing networks via piecewise linear Lyapunov functions. Annals of Applied Probability, 11.
[6] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. To appear in Operations Research, 2001.
[7] D.P. de Farias and B. Van Roy. On constraint sampling in the linear programming approach to approximate dynamic programming. Conditionally accepted to Mathematics of Operations Research, 2001.
[8] C. Guestrin, D. Koller, and R. Parr. Efficient solution algorithms for factored MDPs. Submitted to Journal of Artificial Intelligence Research, 2001.
[9] A.S. Manne. Linear programming and sequential decisions. Management Science, 6(3):259–267, 1960.
[10] J.R. Morrison and P.R. Kumar. New linear program performance bounds for queueing networks. Journal of Optimization Theory and Applications, 100(3):575–597, 1999.
[11] P. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985.
|
2002
|
144
|
2,152
|
Learning Sparse Topographic Representations with Products of Student-t Distributions Max Welling and Geoffrey Hinton Department of Computer Science University of Toronto 10 King's College Road Toronto, M5S 3G5 Canada {welling,hinton}@cs.toronto.edu Simon Osindero Gatsby Unit University College London 17 Queen Square London WC1N 3AR, UK simon@gatsby.ucl.ac.uk Abstract We propose a model for natural images in which the probability of an image is proportional to the product of the probabilities of some filter outputs. We encourage the system to find sparse features by using a Student-t distribution to model each filter output. If the t-distribution is used to model the combined outputs of sets of neurally adjacent filters, the system learns a topographic map in which the orientation, spatial frequency and location of the filters change smoothly across the map. Even though maximum likelihood learning is intractable in our model, the product form allows a relatively efficient learning procedure that works well even for highly overcomplete sets of filters. Once the model has been learned it can be used as a prior to derive the "iterated Wiener filter" for the purpose of denoising images. 1 Introduction Historically, two different classes of statistical model have been used for natural images. "Energy-based" models assign to each image a global energy, E, that is the sum of a number of local contributions, and they define the probability of an image to be proportional to e^{−E}. This class of models includes Markov Random Fields, where combinations of nearby pixel values contribute local energies, Boltzmann Machines, in which binary pixels are augmented with binary hidden variables that learn to model higher-order statistical interactions, and Maximum Entropy methods, which learn the appropriate magnitudes for the energy contributions of heuristically derived features [5] [9]. It is difficult to perform maximum likelihood fitting on most energy-based models because of the normalization term (the partition function) that is required to convert e^{−E} to a probability. The normalization term is a sum over all possible images and its derivative w.r.t. the parameters is required for maximum likelihood fitting. The usual approach is to approximate this derivative by using Markov Chain Monte Carlo (MCMC) to sample from the model, but the large number of iterations required to reach equilibrium makes learning very slow. The other class of model uses a "causal" directed acyclic graph in which the lowest level nodes correspond to pixels and the probability distribution at a node (in the absence of any observations) depends only on its parents. When the graph is singly or very sparsely connected there are efficient algorithms for maximum likelihood fitting, but if nodes have many parents it is hard to perform maximum likelihood fitting because this requires the intractable posterior distribution over non-leaf nodes given the pixel values. There is much debate about which class of model is the most appropriate for natural images. Is a particular image best characterized by the states of some hidden variables in a causal generative model? Or is it best characterized by its peculiarities, i.e., by saying which of a very large set of normally satisfied constraints are violated? In this paper we treat violations of constraints as contributions to a global energy and we show how to learn a large set of constraints, each of which is normally satisfied fairly accurately but occasionally violated by a lot. The ability to learn efficiently without ever having to generate equilibrium samples from the model and without having to confront the intractable partition function removes a major obstacle to the use of energy-based models. 2 The Product of Student-t Model Products of Experts (PoE) are a restricted class of energy-based model [1]. The distribution represented by a PoE is simply the normalized product of all the distributions represented by the individual "experts":
p(x) = (1/Z(θ)) ∏_i g_i(x)  (1)

where the g_i are un-normalized experts and Z(θ) denotes the overall normalization constant. In the product of Student-t (PoT) model, the un-normalized experts have the following form,

g_i(x) = (1 + ½ (J_i^T x)²)^{−α_i}  (2)

where J_i is called a filter and is the i-th column of the filter matrix J. When properly normalized, this represents a Student-t distribution over the filtered random variable y_i = J_i^T x. An important feature of the Student-t distribution is its heavy tails, which makes it a suitable candidate for modelling constraints of the kind that are found in images. Defining y_i = J_i^T x, the energy of the PoT model becomes

E(x) = ∑_i α_i log(1 + ½ y_i²).  (3)

Viewed this way, the model takes the form of a maximum entropy distribution with weights α_i on real-valued "features" log(1 + ½ y_i²) of the image. Unlike previous maximum entropy models, however, we can fit both the weights and the features at the same time. When the number of input dimensions is equal to the number of experts, the normally intractable partition function becomes a determinant and the PoT model becomes equivalent to a noiseless ICA model with Student-t prior distributions [2]. In that case the rows of the inverse filter matrix J^{−1} will represent independent directions in input space. So noiseless ICA can be viewed as an energy-based model even though it is usually interpreted as a causal generative model in which the posterior over the hidden variables collapses to a point. However, when we consider more experts than input dimensions (i.e. an overcomplete representation), the energy-based view and the causal generative view lead to different generalizations of ICA. The natural causal generalization retains the independence of the hidden variables in the prior by assuming independent sources. In contrast, the PoT model simply multiplies together more experts than input dimensions and re-normalizes to get the total probability.

3 Training the PoT Model with Contrastive Divergence

When training energy-based models we need to shape the energy function so that observed images have low energy and empty regions in the space of all possible images have high energy. The maximum likelihood learning rule is given by,

Δθ ∝ −⟨∂E/∂θ⟩_{p0} + ⟨∂E/∂θ⟩_{p∞}  (4)

where p0 denotes the data distribution and p∞ the model's equilibrium distribution. It is the second term which causes learning to be slow and noisy because it is usually necessary to use MCMC to compute the average over the equilibrium distribution. A much more efficient way to fit the model is to use the data distribution itself to initialize a Markov chain which then starts moving towards the model's equilibrium distribution. After just a few steps, we observe how the chain is diverging from the data and adjust the parameters to counteract this divergence. This is done by lowering the energy of the data and raising the energy of the "confabulations" produced by a few steps of MCMC.

Δθ ∝ −⟨∂E/∂θ⟩_{p0} + ⟨∂E/∂θ⟩_{pk}  (5)

where pk denotes the distribution after k steps of MCMC initialized at the data. It can be shown that the above update rule approximately minimizes a new objective function called the contrastive divergence [1]. As it stands the learning rule will be inefficient if the Markov chain mixes slowly because the two terms in equation 5 will almost cancel each other out. To speed up learning we need a Markov chain that mixes rapidly so that the confabulations will be some distance away from the data. Rapid mixing can be achieved by alternately Gibbs sampling a set of hidden variables given the random variables under consideration and vice versa. Fortunately, the PoT model can be equipped with a number of hidden random variables u equal to the number of experts as follows,

p(x, u) ∝ ∏_i u_i^{α_i − 1} exp(−u_i (1 + ½ (J_i^T x)²)).  (6)

Integrating over the u variables results in the density of the PoT model, i.e. eqns. (1) and (2). Moreover, the conditional distributions are easy to identify and sample from, namely

p(u_i | x) = G(u_i; α_i, 1 + ½ (J_i^T x)²)  (7)

p(x | u) = N(x; 0, (J^T diag(u) J)^{−1})  (8)

where G denotes a Gamma distribution and N a normal distribution. From (8) we see that the variables u can be interpreted as precision variables in the transformed space y = Jx. In this respect our model resembles a "Gaussian scale mixture" (GSM) [8] which also multiplies a positive scaling variable with a normal variate. But GSM is a causal model while PoT is energy-based. The (in)dependency relations between the variables in a PoT model are depicted graphically in figure (1a,b). The hidden variables are independent given x, which allows them to be Gibbs-sampled in parallel. This resembles the way in which brief Gibbs sampling is used to fit binary "Restricted Boltzmann Machines" [1]. To learn the parameters of the PoT model we thus propose to iterate the following steps: 1) Sample the hidden variables u given the data x for every data-vector according to the Gamma distribution (7).

Figure 1: (a)-Undirected graph for the PoT model. (b)-Expanded graph where the deterministic relation (dashed lines) between the random variable and the activities of the filters
is made explicit. (c)-Graph for the PoT model including weights W. (d)-Filters with large (decreasing from left to right) weights into a particular top level unit. Top level units have learned to connect to filters similar in frequency, location and orientation.

2) Sample reconstructions of the data given the sampled values of u for every data-vector according to the normal distribution (8). 3) Update the parameters according to (5), where the "k-step samples" are now given by the reconstructions, the energy is given by (3), and the parameters are θ = (J, α).

4 Overcomplete Representations

The above learning rules are still valid for overcomplete representations. However, step 2 of the learning algorithm is much more efficient when the inverse of the filter matrix exists. In that case we simply draw standard normal random vectors z (one per data-vector) and set x = J^{−1} diag(u)^{−1/2} z. This is efficient because the data-dependent matrix diag(u)^{−1/2} is diagonal while the costly inverse J^{−1} is data independent. In contrast, in the overcomplete case we ought to perform a Cholesky factorization of J^T diag(u) J for each data-vector separately. We have, however, obtained good results by proceeding as in the complete case and replacing the inverse of the filter matrix with its pseudo-inverse. From experiments we have also found that in the overcomplete case we should fix the norm of the filters, ∥J_i∥, in order to prevent some of them from decaying to zero. This operation is done after every step of learning. Since controlling the norm removes the ability of the experts to adapt to scale it is necessary to whiten the data first.

4.1 Experiment: Overcomplete Representations for Natural Images

We randomly generated a large number of small patches from images of natural scenes1. The patches were centered and sphered using PCA and the DC component (eigen-vector with largest variance) was removed.
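The learning loop of Section 3 (Gamma-sample the precisions given the data, Gaussian-sample reconstructions given the precisions, then apply the contrastive divergence update) can be sketched in NumPy. This is a minimal illustration with random stand-in data and a complete, square filter matrix, not the authors' implementation; dimensions and the shape parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, N = 8, 8, 200                 # input dims, number of experts, data-vectors
alpha = np.full(M, 1.5)             # fixed Student-t shape parameters
J = 0.1 * rng.normal(size=(M, D))   # filter matrix (complete case)
X = rng.normal(size=(N, D))         # stand-in for whitened image patches

def energy_grad(J, X):
    # dE/dJ averaged over data, for E(x) = sum_i alpha_i log(1 + 0.5 (J_i x)^2)
    Y = X @ J.T
    C = alpha * Y / (1.0 + 0.5 * Y ** 2)
    return C.T @ X / len(X)

lr = 0.005
for step in range(20):
    # 1) sample precisions u | x from Gamma(alpha_i, rate = 1 + 0.5 y_i^2)
    Y = X @ J.T
    U = rng.gamma(alpha, 1.0 / (1.0 + 0.5 * Y ** 2))
    # 2) sample reconstructions x | u from N(0, (J^T diag(u) J)^{-1})
    Xr = np.empty_like(X)
    for n in range(N):
        Prec = J.T @ (U[n][:, None] * J) + 1e-9 * np.eye(D)  # jitter for stability
        L = np.linalg.cholesky(Prec)
        Xr[n] = np.linalg.solve(L.T, rng.normal(size=D))
    # 3) contrastive divergence: lower the energy of the data,
    #    raise the energy of the one-step reconstructions
    J -= lr * (energy_grad(J, X) - energy_grad(J, Xr))
```

In the overcomplete case, per the text, the per-datapoint Cholesky factorization above would be replaced by the cheaper pseudo-inverse shortcut and the filter norms fixed after each step.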
The algorithm for overcomplete representations using the pseudo-inverse was used to train a set of experts that is many times overcomplete. We fixed the weights α_i and constrained the filters to have a fixed 2-norm. A small weight decay term and a momentum term were included in the gradient updates of the filters. The learning rate was set so that initially the change in the filters was small. In figure (2a) we show a small subset of the inverse-filters given by the pseudo-inverse of the filter matrix composed with the sphering transformation. 1Collected from http://www.cis.hut.fi/projects/ica/data/images 5 Topographically Ordered Features In [6] it was shown that linear filtering of natural images is not enough to remove all higher order dependencies. In particular, it was argued that there are residual dependencies among the activities y_i² of the filtered inputs. It is therefore desirable to model those dependencies within the PoT model. By inspection of figure (1b) we note that these dependencies can be modelled through a non-negative weight matrix W, which connects the hidden variables u with the activities y_i². The resultant model is depicted in figure (1c). Depending on how many nonzero weights emanate from a hidden unit u_i (say k), each expert now occupies k input dimensions instead of just one. The expressions for these richer experts can be obtained from (2) by replacing y_i² with ∑_j W_ij y_j². We have found that learning is assisted by fixing the norm of the weights. Moreover, we have found that the sparsity of the weights can be controlled by the following generalization of the experts,
g_i(x) = (1 + ½ (∑_j W_ij y_j²)^β)^{−α_i}  (9)

The larger the value of β, the sparser the distribution of values. Joint and conditional distributions over hidden variables are obtained through similar replacements in eqns. (6) and (7) respectively. Sampling the reconstructions given the states of the hidden variables proceeds by first sampling from independent generalized Laplace distributions with precision parameters determined by W and u, which are subsequently transformed into reconstructions. Learning in this model therefore proceeds with only minor modifications to the algorithm described in the previous section. When we learn the weight matrix W from image data we find that a particular hidden variable u_i develops weights to the activities of filters similar in frequency, location and orientation. The u variables therefore integrate information from these filters and as a result develop certain invariances that resemble the behavior of complex cells. A similar approach was studied in [4] using a related causal model2 in which a number of scale variables generate correlated variances for conditionally Gaussian experts. This results in topography when the scale-generating variables are non-adaptive and connect to a local neighborhood of filters only. We will now argue that fixed local weights also give rise to topography in the PoT model. The reason is that averaging the squares of randomly chosen filter outputs (eqn. 9) produces an approximately Gaussian distribution which is a poor fit to the heavy-tailed experts. However, this "smoothing effect" may be largely avoided by averaging squared filter outputs that are highly correlated (i.e. ones that are similar in location, frequency and orientation). Since the averaging is local, this results in a topographic layout of the filters.

5.1 Experiment: Topographic Representations for Natural Images

For this experiment we collected image patches in the same way as described in section (4.1). The image data were sphered and reduced in dimensionality by removing the low variance and high variance (DC) directions. We learned an overcomplete representation with experts organized on a square grid. Each expert connects with a fixed weight to itself and all its neighbors, with periodic boundary conditions imposed for the experts on the boundary. 2Interestingly, the update equations for the filters presented in [4], which minimize a bound on the log-likelihood of a directed model, reduce to the same equations as our learning rules when the representation is complete and the filters orthogonal. Figure 2: (a)-Small subset of the learned filters from a highly overcomplete representation for natural image patches. (b)-Topographically ordered filters. The weights were fixed and connect to neighbors only, using periodic boundary conditions. Neighboring filters have learned to be similar in frequency, location and orientation. One can observe a pinwheel structure to the left of the low frequency cluster. We adapted the filters (with fixed norm) and used fixed values for the remaining parameters. The resulting inverse-filters are shown in figure (2b). We note that the weights have enforced a topographic ordering on the experts, where location, scale and frequency of the Gabor-like filters all change smoothly across the map. In another experiment we used the same data to train a complete representation in which we learned the weights (with fixed norm) and the filters (unconstrained), but with a fixed value of α. The weights were kept positive by adapting their logarithms. Since the weights can now connect to any other expert we do not expect topography. To study whether the weights were modelling the dependencies between the energies of the filter outputs we ordered the filters for each complex cell u_i according to the strength of the weights connecting to it. For a representative subset of the complex cells, we show the filters with the strongest connections to each cell in figure (1d).
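The fixed local pooling described above is simple to write down. The sketch below (the grid size and 8-neighbourhood are illustrative choices, not the paper's exact settings) builds a non-negative weight matrix W on a square grid of experts with periodic boundaries and computes the pooled, complex-cell-like energies ∑_j W_ij y_j²:

```python
import numpy as np

def topographic_weights(side):
    # W[i, j] > 0 iff expert j is expert i or one of its 8 grid neighbors,
    # with periodic (toroidal) boundary conditions; rows normalized to sum to 1.
    M = side * side
    W = np.zeros((M, M))
    for r in range(side):
        for c in range(side):
            i = r * side + c
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    j = ((r + dr) % side) * side + (c + dc) % side
                    W[i, j] = 1.0
    return W / W.sum(axis=1, keepdims=True)

W = topographic_weights(20)                    # 20x20 grid of experts
rng = np.random.default_rng(1)
Y = rng.standard_t(df=3, size=(W.shape[0],))   # heavy-tailed stand-in filter outputs
pooled = W @ Y ** 2                            # complex-cell-like pooled energies
```

Because each row of W only pools squared outputs from a local neighbourhood, experts whose filters become similar in location, frequency and orientation end up adjacent on the grid, which is the mechanism for topography argued above.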
Since the cells connect to similar filters we may conclude that the weights are indeed learning the dependencies between the activities of the filter outputs.

6 Denoising Images: The Iterated Wiener Filter

If the PoT model provides an accurate description of the statistics of natural image data it ought to be a good prior for cleaning up noisy images. In the following we will apply this idea to denoise images contaminated with Gaussian pixel noise. We follow the standard Bayesian approach, which states that the optimal estimate of the original image x is given by the maximum a posteriori (MAP) estimate given the noisy image z. For the PoT model this reduces to,

x̂ = argmin_x [ (1/(2σ²)) ∥z − x∥² + ∑_i α_i log(1 + ½ (J_i^T x)²) ]  (10)

Figure 3: (a)-Original "rock"-image. (b)-Rock-image with noise added. (c)-Denoised image using Wiener filtering. (d)-Denoised image using IWF.

To minimize this we follow a variational procedure where we upper bound the logarithm using log s ≤ s/λ + log λ − 1. The bound is saturated when λ = s. Applying this to every logarithm in the summation in eqn. (10) and iteratively minimizing this bound over the λ_i and x we find the following update equations,

u = α ∘ 1/(1 + ½ (Jx)²)  (11)

x = (σ^{−2} I + J^T diag(u) J)^{−1} σ^{−2} z  (12)

where ∘ denotes componentwise multiplication (the squaring and division in (11) are also componentwise). Since the second equation is just a Wiener filter with noise covariance σ²I and a Gaussian prior with covariance (J^T diag(u) J)^{−1}, we have named the above denoising equations the iterated Wiener filter (IWF). When the filters are orthonormal, the noise covariance isotropic and the weight matrix the identity, the minimization in (10) decouples into minimizations over the transformed variables s_i = J_i^T x. Defining t_i = J_i^T z, we can easily derive that each s_i is the solution of the following cubic equation (for which analytic solutions exist),

s³ − t s² + 2(1 + ασ²) s − 2t = 0.  (13)

We note however that constraining the filters to be orthogonal is a rather severe restriction if the data are not pre-whitened. On the other hand, if we decide to work with whitened data, the isotropic noise assumption seems unrealistic. Having said that, Hyvarinen's shrinkage method for ICA models [3] is based on precisely these assumptions and seems to give good results. The proposed method is also related to approaches based on the GSM [7].

6.1 Experiment: Denoising

To test the iterated Wiener filter, we trained a complete set of experts on the data described in section (4.1). The norm of the filters was unconstrained, the α_i were free to adapt, but we did not include any weights W. The image shown in figure (3a) was corrupted with Gaussian noise, which lowered its PSNR considerably (figure (3b)). We applied the adaptive Wiener filter from Matlab (wiener2.m) with an optimal neighborhood size and known noise variance; the denoised image using adaptive Wiener filtering is shown in figure (3c). IWF was run on every possible patch in the image, after which the results were averaged. Because the filters were trained on sphered data without a DC component, the same transformations have to be applied to the test patches before IWF is applied. The denoised image using IWF is shown in (3d) and achieves a significantly higher PSNR than Wiener filtering. It is our hope that the use of overcomplete representations and weights will further improve those results.

7 Discussion

It is well known that a wavelet transform de-correlates natural image data to a good approximation. In [6] it was found that in the marginal distribution the wavelet coefficients are sparsely distributed but that there are significant residual dependencies among their energies. In this paper we have shown that the PoT model can learn highly overcomplete filters with sparsely distributed outputs. With a second hidden layer that is locally connected, it captures the dependencies between filter outputs by learning topographic representations. Our approach improves upon earlier attempts (e.g. [4],[8]) in a number of ways. In the PoT model the hidden variables are conditionally independent so perceptual inference is very easy and does not require iterative settling even when the model is overcomplete. There is a fairly simple and efficient procedure for learning all the parameters, including the weights connecting top-level units to filter outputs. Finally, the model leads to an elegant denoising algorithm which involves iterating a Wiener filter.

Acknowledgements

This research was funded by NSERC, the Gatsby Charitable Foundation, and the Wellcome Trust. We thank Yee-Whye Teh for first suggesting a related model and Peter Dayan for encouraging us to apply products of experts to topography.

References

[1] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[2] G.E. Hinton, M. Welling, Y.W. Teh, and S. Osindero. A new view of ICA. In Int. Conf. on Independent Component Analysis and Blind Source Separation, 2001.
[3] A. Hyvarinen. Sparse code shrinkage: Denoising of nongaussian data by maximum likelihood estimation. Neural Computation, 11(7):1739–1768, 1999.
[4] A. Hyvarinen, P.O. Hoyer, and M. Inki.
Topographic independent component analysis. Neural Computation, 13(7):1525–1558, 2001. [5] S. Della Pietra, V.J. Della Pietra, and J.D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997. [6] E.P. Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc SPIE, 44th Annual Meeting, volume 3813, pages 188–195, Denver, 1999. [7] V. Strela, J. Portilla, and E. Simoncelli. Image denoising using a local Gaussian scale mixture model in the wavelet domain. In Proc. SPIE, 45th Annual Meeting, San Diego, 2000. [8] M.J. Wainwright and E.P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances Neural Information Processing Systems, volume 12, pages 855–861, 2000. [9] S.C. Zhu, Z.N. Wu, and D. Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627–1660, 1997.
|
2002
|
145
|
2,153
|
Margin Analysis of the LVQ Algorithm Koby Crammer kobics@cs.huji.ac.il Ran Gilad-Bachrach ranb@cs.huji.ac.il Amir Navot anavot@cs.huji.ac.il Naftali Tishby tishby@cs.huji.ac.il School of Computer Science and Engineering and Interdisciplinary Center for Neural Computation The Hebrew University, Jerusalem, Israel Abstract Prototype-based algorithms are commonly used to reduce the computational complexity of Nearest-Neighbour (NN) classifiers. In this paper we discuss theoretical and algorithmic aspects of such algorithms. On the theory side, we present margin-based generalization bounds that suggest that these kinds of classifiers can be more accurate than the 1-NN rule. Furthermore, we derive a training algorithm that selects a good set of prototypes using large margin principles. We also show that the 20-year-old Learning Vector Quantization (LVQ) algorithm emerges naturally from our framework. 1 Introduction Though fifty years have passed since the introduction of One Nearest Neighbour (1-NN) [1], it is still a popular algorithm. 1-NN is a simple and intuitive algorithm and at the same time achieves state of the art results [2]. However, on large, high-dimensional data sets it often becomes infeasible. One approach to this computational problem is to approximate the nearest neighbour [3] using various techniques. An alternative approach is to choose a small data set (a set of prototypes) which represents the original training sample, and apply the nearest neighbour rule only with respect to this small set. This solution maintains the "spirit" of the original algorithm, while making it feasible. Moreover, it might improve accuracy by reducing over-fitting to noise. In this setting, the goal of the learning stage is to choose the prototypes wisely, i.e., in a way that will yield good generalization1. In this paper we use the Maximal Margin principle [4, 5] for this purpose.
The training data is used to measure the margin of each proposed positioning of the prototypes. We combine these measurements to calculate a risk for each prototype set and select the prototypes that minimize the risk. Roughly speaking, margins measure the level of confidence a classifier has with respect to its decisions. This tool has become a primary method in machine learning during the last decade. Two of the most powerful algorithms in the field, Support Vector Machines (SVM) [4] and AdaBoost [5], are motivated and analyzed by margins. Since the introduction of these algorithms dozens of papers have been published on different aspects of margins in supervised learning [6, 7, 8]. 1Good generalization means that the probability of misclassifying a new example is small. Learning Vector Quantization (LVQ) [9] is a well-known algorithm that deals with the same problem of selecting prototypes. LVQ iterates over the training data and updates the prototype positions. Although it has been known for more than 20 years and in spite of its popularity, no adequate generalization bounds or theory have been suggested for this algorithm. In this paper we show that algorithms derived from the maximal margin principle contain LVQ as a special case. We use this result to present generalization bounds and insights for the LVQ algorithm. Buckingham and Geva [10] were the first to explore the relations between the maximal margin principle and LVQ. They presented a variant named LMVQ and analyzed it. As in most of the literature about LVQ, they look at the algorithm as trying to estimate a density function (or a function of the density) at each point. After estimating the density the Bayesian decision rule is used. We take a different point of view on the problem and look at the geometry of the decision boundary induced by the decision rule.
Note that in order to generate a good classification rule the only significant factor is where the decision boundary lies (it is a well known fact that classification is easier than density estimation [11]). Summary of the Results In section 2 we present the model and outline the LVQ family of algorithms. A discussion and definition of margin is provided in section 3. The two fundamental results are a bound on the generalization error and a theoretical grounding for the LVQ family of algorithms. In section 4 we present a bound on the gap between the empirical and the generalization accuracy. This provides a guarantee on the performance over unseen instances based on the empirical evidence. Although LVQ was designed as an approximation to nearest neighbour, the theorem suggests that the former is more accurate in many cases. Indeed, a simple experiment shows this prediction to be true. In section 5 we show how the LVQ family of algorithms emerges from the generalization bound. These algorithms minimize the bound using gradient descent. The different variants correspond to different tradeoffs between opposing quantities. In practice the tradeoff is controlled by loss functions. 2 Problem Setting and the LVQ algorithm The framework we are interested in is supervised learning for classification problems. In this framework the task is to find a map from Rn into a finite set of labels Y. We focus on classification functions of the following form: the classifiers are parameterized by a set of points µ1, . . . , µk ∈ Rn which we refer to as prototypes. Each prototype is associated with a label y ∈ Y. Given a new instance x ∈ Rn we predict that it has the same label as the closest prototype, similar to the 1-nearest-neighbour rule (1-NN). We denote the label predicted using a set of prototypes {µj}k j=1 by µ(x). The goal of the learning process in this model is to find a set of prototypes which will accurately predict the labels of unseen instances.
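The prototype decision rule µ(x), i.e. predicting the label of the nearest prototype, can be sketched directly (the prototypes below are toy values for illustration):

```python
import numpy as np

def predict(x, prototypes, labels):
    # 1-NN rule over the prototype set: return the label of the closest prototype.
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[np.argmin(d)]

prototypes = np.array([[0.0, 0.0],
                       [3.0, 3.0]])
labels = np.array([0, 1])
print(predict(np.array([0.5, 0.2]), prototypes, labels))  # -> 0
```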
The Learning Vector Quantization (LVQ) family of algorithms works in this model. The algorithm gets as input a labelled sample S = {(x_l, y_l)}_{l=1}^m, where x_l ∈ R^n and y_l ∈ Y, and uses it to find a good set of prototypes. All the variants of LVQ share the following common scheme. The algorithm maintains a set of prototypes, each assigned a predefined label which is kept constant during the learning process. It cycles through the training data S and on each iteration modifies the set of prototypes according to one instance (x_t, y_t). If the prototype µ_j has the same label as y_t it is attracted to x_t, but if the label of µ_j is different it is repelled from it. Hence LVQ updates the closest prototypes to x_t according to the rule:

µ_j ← µ_j ± α_t (x_t − µ_j) ,   (1)

where the sign is positive if the labels of x_t and µ_j agree, and negative otherwise. The parameter α_t is updated using a predefined scheme and controls the rate of convergence of the algorithm. The variants of LVQ differ in which prototypes they choose to update in each iteration and in the specific scheme used to modify α_t. For instance, LVQ1 and OLVQ1 update only the closest prototype to x_t in each iteration. Another example is LVQ2.1, which modifies the two closest prototypes µ_i and µ_j to x_t. It uses the same update rule (1) but applies it only if the following two conditions hold: 1. Exactly one of the prototypes has the same label as x_t, i.e. y_t. 2. The ratio of their distances from x_t falls in a window: 1/s ≤ ||x_t − µ_i|| / ||x_t − µ_j|| ≤ s, where s is the window size. More variants of LVQ can be found in [9]. 3 Margins Margin plays an important role in current machine learning research. It measures the confidence of a classifier with respect to its predictions. One approach is to define the margin as the distance between an instance and the decision boundary induced by the classification rule, as illustrated in figure 1(a).
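A minimal sketch of one LVQ1 iteration, i.e. update rule (1) applied to the single closest prototype (function and variable names are ours):

```python
import numpy as np

def lvq1_step(prototypes, labels, x_t, y_t, alpha_t):
    """One LVQ1 iteration: move the single closest prototype toward x_t if the
    labels agree, away from it otherwise (update rule (1) in the text)."""
    j = np.argmin(np.linalg.norm(prototypes - x_t, axis=1))
    sign = 1.0 if labels[j] == y_t else -1.0
    prototypes[j] += sign * alpha_t * (x_t - prototypes[j])
    return prototypes

protos = np.array([[0.0, 0.0], [3.0, 3.0]])
labs = np.array([0, 1])
lvq1_step(protos, labs, np.array([1.0, 1.0]), 0, alpha_t=0.5)
print(protos[0])  # attracted halfway toward (1, 1): [0.5 0.5]
```

OLVQ1 differs only in that each prototype keeps its own adaptively updated learning rate α_t.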
Support Vector Machines [4] are based on this definition of margin, which we refer to as the sample margin. However, an alternative definition, the hypothesis margin, exists. In this definition the margin is the distance that the classifier can travel without changing the way it labels any of the sample points. Note that this definition requires a distance measure between classifiers. This type of margin is used in AdaBoost [5] and is illustrated in figure 1(b). Figure 1: Sample margin (figure 1(a)) measures how far an instance can travel before it hits the decision boundary. On the other hand, hypothesis margin (figure 1(b)) measures how far the hypothesis can travel before it hits an instance. It is possible to apply these two types of margin in the context of LVQ. Recall that in our model a classifier is defined by a set of labeled prototypes. Such a classifier generates a decision boundary by Voronoi tessellation. Although using the sample margin is more natural as a first choice, it turns out that this type of margin is both hard to compute and numerically unstable in our context, since small relocations of the prototypes might lead to a dramatic change in the sample margin. Hence we focus on the hypothesis margin and thus have to define a distance measure between two classifiers. We choose to define it as the maximal distance between prototype pairs, as illustrated in figure 2. Formally, let µ = {µ_j}_{j=1}^k and µ̂ = {µ̂_j}_{j=1}^k define two classifiers; then

ρ(µ, µ̂) = max_{i=1..k} ||µ_i − µ̂_i||_2 .

Note that this definition is not invariant to permutations of the prototypes, but it upper bounds the invariant definition. Furthermore, the induced margin is easy to compute (lemma 1) and lower bounds the sample margin (lemma 2). Lemma 1 Let µ = {µ_j}_{j=1}^k be a set of prototypes and x a sample point. Then the hypothesis margin of µ with respect to x is

θ = (1/2) (||µ_j − x|| − ||µ_i − x||)

where µ_i (µ_j) is the closest prototype to x with the same (alternative) label.
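The formula of lemma 1 is directly computable from the distances to the prototypes; a small self-contained sketch (naming is ours):

```python
import numpy as np

def hypothesis_margin(x, y, prototypes, labels):
    """Hypothesis margin of a prototype classifier at (x, y), per lemma 1:
    theta = (||mu_j - x|| - ||mu_i - x||) / 2, where mu_i is the closest
    prototype with the same label as y and mu_j the closest with another label."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    same = dists[labels == y].min()    # ||mu_i - x||
    other = dists[labels != y].min()   # ||mu_j - x||
    return 0.5 * (other - same)

protos = np.array([[0.0], [4.0]])
labs = np.array([0, 1])
print(hypothesis_margin(np.array([1.0]), 0, protos, labs))  # (3 - 1) / 2 = 1.0
```

The margin is positive exactly when the point is correctly classified, and its magnitude is how far the prototypes may move before the label flips.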
Lemma 2 Let S = {x_l}_{l=1}^m be a sample and µ = (µ_1, ..., µ_k) a set of prototypes. Then

sample-margin_S(µ) ≥ hypothesis-margin_S(µ) .

Lemma 2 shows that if we find a set of prototypes with a large hypothesis margin then it has a large sample margin as well. 4 Margin Based Generalization Bound Figure 2: The distance measure on the LVQ hypothesis class. The distance between the white and black prototype sets is the maximal distance between prototype pairs. In this section we present a bound on the generalization error of LVQ-type classifiers. When a classifier is applied to training data it is natural to use the training error as a prediction of the generalization error (the probability of misclassifying an unseen instance). In a prototype-based hypothesis, the classifier assigns a confidence level, i.e. a margin, to its predictions. Taking the margin into account by counting instances with small margin as mistakes gives a better prediction and provides a bound on the generalization error. This bound is given in terms of the number of prototypes, the sample size, the margin and the margin-based empirical error. The following theorem states this result formally. Theorem 1 In the following setting:

• Let S = {x_i, y_i}_{i=1}^m ∈ {R^n × Y}^m be a training sample drawn from some underlying distribution D.
• Assume that ||x_i|| ≤ R for all i.
• Let µ be a set of prototypes with k prototypes from each class.
• Let 0 < θ < 1/2.
• Let α_S^θ(µ) = (1/m) |{i : margin_µ(x_i) < θ}|.
• Let e_D(µ) be the generalization error: e_D(µ) = Pr_{(x,y)~D}[µ(x) ≠ y].
• Let δ > 0.

Then with probability 1 − δ over the choice of the training data, for all µ:

e_D(µ) ≤ α_S^θ(µ) + sqrt( (8/m) ( d log^2(32m/θ^2) + log(4/δ) ) )   (2)

where d is the VC dimension:

d = min( n + 1, 64R^2/θ^2 ) 2k|Y| log(ek^2)   (3)

This theorem leads to a few observations. First, note that the bound is dimension free, in the sense that the generalization error is bounded independently of the input dimension (n), much as in SVM. Hence it makes sense to apply these algorithms with kernels.
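The margin-based empirical error α_S^θ(µ) of the theorem, the fraction of training points with hypothesis margin below θ, is straightforward to evaluate; a sketch under our own naming:

```python
import numpy as np

def theta_error(X, y, prototypes, labels, theta):
    """Margin-based empirical error alpha_S^theta: the fraction of training
    points whose hypothesis margin falls below theta (counted as mistakes)."""
    count = 0
    for x_l, y_l in zip(X, y):
        d = np.linalg.norm(prototypes - x_l, axis=1)
        margin = 0.5 * (d[labels != y_l].min() - d[labels == y_l].min())
        if margin < theta:
            count += 1
    return count / len(X)

protos = np.array([[0.0], [4.0]])
labs = np.array([0, 1])
X = np.array([[0.5], [1.9], [3.5]])
y = np.array([0, 0, 1])
print(theta_error(X, y, protos, labs, theta=0.5))  # only x = 1.9 has margin < 0.5
```

At θ = 0 this reduces to the usual training error; larger θ counts low-confidence points as mistakes, which is what the right-hand side of the bound is stated in terms of.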
Second, note that the VC dimension grows as the number of prototypes grows (3). This suggests that using too many prototypes might result in poor performance; there is therefore a non-trivial optimal number of prototypes. One should not be surprised by this result, as it is a realization of the Structural Risk Minimization (SRM) principle [4]. Indeed, a simple experiment supports this prediction. Hence prototype-based methods are not only faster than nearest neighbour, they can be more accurate as well. Due to space limitations, proofs are provided in the full version of this paper only. 5 Maximizing Hypothesis Margin Through Loss Functions Once the margin is properly defined, it is natural to ask for an algorithm that maximizes it. We will show that this is exactly what LVQ does. Before going any further we have to understand why maximizing the margin is a good idea. In theorem 1 we saw that the generalization error can be bounded by a function of the margin θ and the empirical θ-error (α). Figure 3: Different loss functions. SVM, LVQ1 and OLVQ1 use the "hinge" loss (1 − θ)+. LVQ2.1 uses the broken linear loss min(2, (1 − 2θ)+). AdaBoost uses the exponential loss e^{−θ}. Therefore it is natural to seek prototypes that obtain a small θ-error for a large θ. We are faced with two contradicting goals: small θ-error versus large θ. A natural way to resolve this conflict is through the use of a loss function. Loss functions are a common technique in machine learning for finding the right balance between opposing quantities [12]. The idea is to associate a margin-based loss (a "cost") with each hypothesis with respect to a sample. More formally, let L be a function such that: 1. For every θ: L(θ) ≥ 0. 2. For every θ < 0: L(θ) ≥ 1. We use L to compute the loss of a hypothesis with respect to one instance.
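The three margin losses discussed here are one-liners; a sketch following the forms shown in figure 3 (with the scaling factor β fixed to the values used there):

```python
import math

# Margin losses from figure 3: hinge (SVM, LVQ1/OLVQ1), broken linear (LVQ2.1),
# exponential (AdaBoost).
def hinge(theta):
    return max(0.0, 1.0 - theta)

def broken_linear(theta):
    return min(2.0, max(0.0, 1.0 - 2.0 * theta))

def exponential(theta):
    return math.exp(-theta)

# All three satisfy the two loss axioms: L(theta) >= 0 everywhere, and
# L(theta) >= 1 whenever theta < 0 (a misclassified instance costs at least 1).
for L in (hinge, broken_linear, exponential):
    assert L(-0.5) >= 1.0 and L(2.0) >= 0.0
print(hinge(0.25), broken_linear(0.25), round(exponential(0.0), 2))
```

The two axioms are exactly what guarantees that the summed loss upper bounds the empirical error, as noted in the next paragraph.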
When a training set is available we sum the loss over the instances: L(µ) = Σ_l L(θ_l), where θ_l is the margin of the l'th instance in the training data. The two axioms of loss functions guarantee that L(µ) bounds the empirical error. It is common to place further restrictions on the loss function, such as requiring that L be non-increasing; however, the only assumption we make here is that the loss function L is differentiable. Different algorithms use different loss functions [12]. AdaBoost uses the exponential loss function L(θ) = e^{−βθ}, while SVM uses the "hinge" loss L(θ) = (1 − βθ)+, where β > 0 is a scaling factor. See figure 3 for a demonstration of these loss functions. Once a loss function is chosen, the goal of the learning algorithm is to find a hypothesis that minimizes it. Gradient descent is a natural, simple choice for this task. Recall that in our case θ_l = (||x_l − µ_i|| − ||x_l − µ_j||)/2, where µ_j and µ_i are the closest prototypes to x_l with the correct and incorrect labels respectively. Hence we have that(2)

dθ_l/dµ_r = S_l(r) (x_l − µ_r) / ||x_l − µ_r||

where S_l(r) is a sign function such that

S_l(r) = +1 if µ_r is the closest prototype with the correct label,
         −1 if µ_r is the closest prototype with an incorrect label,
          0 otherwise.

(2: Note that if x_l = µ_j the derivative is not defined. This extreme case does not affect our conclusions, hence for the sake of clarity we avoid the treatment of such extreme cases in this paper.) Algorithm 1 Online Loss Minimization. Recall that L is a loss function, and γ_t decays to zero as the algorithm proceeds. 1. Choose initial positions for the prototypes {µ_j}_{j=1}^k. 2. For t = 1 : T (or ∞): (a) Receive a labelled instance (x_t, y_t). (b) Compute the closest correct and incorrect prototypes to x_t, µ_j and µ_i, and the margin of x_t, i.e.
θ_t = (1/2)(||x_t − µ_i|| − ||x_t − µ_j||) . (c) Apply the update rule for r = i, j:

µ_r ← µ_r − γ_t (dL(θ_t)/dθ) S_t(r) (x_t − µ_r) / ||x_t − µ_r||

Taking the derivative of L with respect to µ_r using the chain rule we obtain

dL/dµ_r = Σ_l (dL(θ_l)/dθ_l) S_l(r) (x_l − µ_r) / ||x_l − µ_r||   (4)

By setting the derivative to zero, we get that the optimal solution is achieved when µ_r = Σ_l w_l^r x_l, where α_l^r = (dL(θ_l)/dθ_l) S_l(r) / ||x_l − µ_r|| and w_l^r = α_l^r / Σ_l α_l^r. This leads to two conclusions. First, the optimal solution is in the span of the training instances. Furthermore, from its definition it is clear that w_l^r ≠ 0 only for the closest prototypes to x_l. In other words, w_l^r ≠ 0 if and only if µ_r is either the closest prototype to x_l which has the same label as x_l, or the closest prototype to x_l with an alternative label. Therefore the notion of support vectors [4] applies here as well. 5.1 Minimizing the Loss Using (4) we can find a local minimum of the loss function by a gradient descent algorithm. The iteration at time t computes:

µ_r(t + 1) ← µ_r(t) − γ_t Σ_l (dL(θ_l)/dθ) S_l(r) (x_l − µ_r(t)) / ||x_l − µ_r(t)||

where γ_t approaches zero as t increases. This computation can be done iteratively, where in each step we update µ_r with respect to only one sample point x_l. This leads to the following basic update step:

µ_r ← µ_r − γ_t (dL(θ_l)/dθ) S_l(r) (x_l − µ_r) / ||x_l − µ_r||

Note that S_l(r) differs from zero only for the closest correct and incorrect prototypes to x_l, therefore a simple online algorithm is obtained and presented as algorithm 1. 5.2 LVQ1 and OLVQ1 Online loss minimization (algorithm 1) is a general algorithm applicable with different choices of loss function. We will now apply it with a couple of loss functions and see how LVQ emerges. First let us consider the "hinge" loss function. Recall that the hinge loss is defined as L(θ) = (1 − βθ)+. The derivative(3) of this loss function is (3: The "hinge" loss has no derivative at the point θ = 1/β. Again, as in other cases in this paper, this fact is neglected.)
Figure 4: The "hinge" loss Σ_l (1 − θ_l)+ vs. number of iterations of OLVQ1. One can clearly see that it decreases.

dL(θ)/dθ = 0 if θ > 1/β, −β otherwise.

If β is chosen to be large enough, the update rule in the online loss minimization is

µ_r = µ_r ± γ_t β (x_t − µ_r) / ||x_t − µ_r||

This is the same update rule as in the LVQ1 and OLVQ1 algorithms [9], besides the extra factor of β/||x_t − µ_r||. However, this is a minor difference, since β/||x_t − µ_r|| is just a normalizing factor. A demonstration of the effect of OLVQ1 on the "hinge" loss function is provided in figure 4. We applied the algorithm to a simple toy problem consisting of three classes and a training set of 800 points. We allowed the algorithm 10 prototypes. As expected, the loss decreases as the algorithm proceeds. For this purpose we used the lvq_pak package [13]. 5.3 LVQ2.1 The idea behind the definition of margin, and especially the hypothesis margin, was that a minor change in the hypothesis cannot change the way it labels an instance which had a large margin. Hence when making small updates (i.e. small γ_t) one should focus only on the instances whose margins are close to zero. The same idea appeared in Freund's boost-by-majority algorithm [14]. Kohonen adapted this idea in his LVQ2.1 algorithm [9]. The major difference between the LVQ1 and LVQ2.1 algorithms is that LVQ2.1 updates µ_r only if the margin of x_t falls inside a certain window. The suitable loss function for LVQ2.1 is the broken linear loss function (see figure 3), defined as L(θ) = min(2, (1 − βθ)+). Note that for |θ| > 1/β the loss is constant (i.e. the derivative is zero); this causes the learning algorithm to overlook instances with too high or too low a margin. There exist several differences between LVQ2.1 and the online loss minimization presented here; however, these differences are minor.
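A runnable sketch of one step of online loss minimization with the hinge loss (β = 1), under our own naming. With this loss the step reduces to an LVQ1-style attract/repel update, applied only while the margin is below 1/β:

```python
import numpy as np

def online_step(protos, labels, x_t, y_t, gamma_t, beta=1.0):
    """One step of online loss minimization with the hinge loss (1 - beta*theta)+.
    Moves the closest correct prototype toward x_t and the closest incorrect
    one away from it, but only while the margin is below 1/beta (where the
    hinge derivative is -beta; elsewhere the derivative, and the step, is 0)."""
    d = np.linalg.norm(protos - x_t, axis=1)
    correct = np.where(labels == y_t)[0]
    incorrect = np.where(labels != y_t)[0]
    j = correct[np.argmin(d[correct])]      # closest prototype with the correct label
    i = incorrect[np.argmin(d[incorrect])]  # closest prototype with an incorrect label
    theta = 0.5 * (d[i] - d[j])
    if theta < 1.0 / beta:
        for r, sign in ((j, 1.0), (i, -1.0)):
            step = (x_t - protos[r]) / np.linalg.norm(x_t - protos[r])
            protos[r] += gamma_t * beta * sign * step
    return protos

protos = np.array([[0.0, 0.0], [2.0, 0.0]])
labs = np.array([0, 1])
online_step(protos, labs, np.array([1.0, 0.0]), 0, gamma_t=0.1)
print(protos)  # correct prototype pulled toward x_t, incorrect one pushed away
```

Swapping in the derivative of the broken linear loss (zero outside the window |θ| < 1/β) would give the LVQ2.1-style windowed behaviour discussed above.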
6 Conclusions and Further Research In this paper we used the maximal margin principle together with loss functions to derive algorithms for prototype positioning. We saw that LVQ can be considered a special case of this general algorithm. We also provided generalization bounds for any prototype-based classifier. This formulation allows the derivation of new algorithms in several different ways. The first is to use other loss functions, such as the exponential loss. A second way is to use another classification rule, such as k-NN or the Parzen window. The proper way to adapt the algorithm to the chosen rule is to define the margin accordingly, and modify the minimization process in the training stage. We have conducted some basic experiments using the k-NN rule. The performance of the modified classifier did not exceed that of the 1-NN rule. We suggest the following explanation of these results: usually the k-NN rule performs better than the 1-NN rule as it filters noise better, and in our setting the noise filtering is already achieved by using a small number of prototypes. Another extension is to use a different distance measure instead of the l2 norm. This may result in a more complicated formula for the derivative of the loss function, but may improve the results significantly in some cases. One specifically interesting distance measure is the tangent distance [2]. We also presented a generalization guarantee for prototype-based classifiers that is based on the margin training error. The bound is dimension free and thus a kernel version of the algorithm may yield good performance. This modification is straightforward, as the algorithm can be expressed as a function of inner products only. We performed preliminary experiments with a kernelized version of the algorithm. It seems that it improves the accuracy when used with a small number of prototypes. However, allowing the standard version more prototypes achieves the same improvement.
A possible explanation of this phenomenon is the following. Recall that a classifier is parametrized by a set of labelled prototypes that define a Voronoi tessellation. The decision boundary of such a classifier is built from some of the edges of the Voronoi tessellation. In the standard version these edges are straight lines; in the kernel version they are smooth non-linear curves. As the number of prototypes grows, the decision boundary consists of more, and shorter, segments. If we recall that any smooth curve can be approximated by a piecewise linear one, we come to the conclusion that any classifier that can be generated by the kernel version can be approximated by one generated by the standard version applied with more prototypes. Acknowledgement We thank Yoram Singer and Gal Chechik for their helpful remarks. References [1] E. Fix and J. Hodges. Discriminatory analysis. Nonparametric discrimination: Consistency properties. Technical Report 4, USAF School of Aviation Medicine, 1951. [2] P. Y. Simard, Y. A. Le Cun, and J. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems, volume 5, pages 50–58, 1993. [3] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the 30th ACM Symposium on the Theory of Computing, pages 604–613, 1998. [4] V. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, 1995. [5] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997. [6] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. Annals of Statistics, 1998. [7] L. Mason, P. Bartlett, and J. Baxter. Direct optimization of margins improves generalization in combined classifiers.
Advances in Neural Information Processing Systems, 11:288–294, 1999. [8] C. Campbell, N. Cristianini, and A. Smola. Query learning with large margin classifiers. In International Conference on Machine Learning, 2000. [9] T. Kohonen. Self-Organizing Maps. Springer-Verlag, 1995. [10] L. Buckingham and S. Geva. LVQ is a maximum margin algorithm. In Pacific Knowledge Acquisition Workshop (PKAW 2000), 2000. [11] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996. [12] Y. Singer and D. D. Lewis. Machine learning for information retrieval: Advanced techniques. Presented at ACM SIGIR 2000, 2000. [13] T. Kohonen, J. Hynninen, J. Kangas, J. Laaksonen, and K. Torkkola. LVQ_PAK: the learning vector quantization program package. http://www.cis.hut.fi/research/lvq_pak, 1995. [14] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
Feature Selection by Maximum Marginal Diversity Nuno Vasconcelos Department of Electrical and Computer Engineering University of California, San Diego nuno@media.mit.edu Abstract We address the question of feature selection in the context of visual recognition. It is shown that, besides being efficient from a computational standpoint, the infomax principle is nearly optimal in the minimum Bayes error sense. The concept of marginal diversity is introduced, leading to a generic principle for feature selection (the principle of maximum marginal diversity) of extreme computational simplicity. The relationships between infomax and the maximization of marginal diversity are identified, uncovering the existence of a family of classification procedures for which near-optimal (in the Bayes error sense) feature selection does not require combinatorial search. Examination of this family in light of recent studies on the statistics of natural images suggests that visual recognition problems are a subset of it. 1 Introduction It has long been recognized that feature extraction and feature selection are important problems in statistical learning. Given a classification or regression task in some observation space Z (typically high-dimensional), the goal is to find the best transform T into a feature space X (typically lower dimensional) where learning is easier (e.g. can be performed with less training data). While in the case of feature extraction there are few constraints on T, for feature selection the transformation is constrained to be a projection, i.e. the components of a feature vector in X are a subset of the components of the associated vector in Z. Both feature extraction and selection can be formulated as optimization problems where the goal is to find the transform that best satisfies a given criterion of "feature goodness".
In this paper we concentrate on visual recognition, a subset of the classification problem for which various optimality criteria have been proposed throughout the years. In this context, the best feature spaces are those that maximize discrimination, i.e. the separation between the different image classes to recognize. However, classical discriminant criteria such as linear discriminant analysis make very specific assumptions regarding class densities, e.g. Gaussianity, that are unrealistic for most problems involving real data. Recently, various authors have advocated the use of information-theoretic measures for feature extraction or selection [15, 3, 9, 11, 1]. These can be seen as instantiations of the infomax principle of neural organization(1) proposed by Linsker [7], which also encompasses information-theoretic approaches to independent component analysis and blind source separation [2]. In the classification context, infomax recommends the selection of the feature transform that maximizes the mutual information (MI) between features and class labels. While searching for the features that preserve the maximum amount of information about the class is, at an intuitive level, an appealing discriminant criterion, the infomax principle does not establish a direct connection to the ultimate measure of classification performance: the probability of error (PE). By noting that maximizing the MI between features and class labels is the same as minimizing the entropy of labels given features, it is possible to establish a connection through Fano's inequality: class-posterior entropy (CPE) is a lower bound on the PE [11, 4]. This connection is, however, weak in the sense that there is little insight on how tight the bound is, or whether minimizing it has any relationship to minimizing the PE. In fact, among all lower bounds on PE, it is not clear that CPE is the most relevant.
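For a discrete toy problem, the quantities in this discussion are easy to compute directly. A small sketch (the joint distribution below is an invented example, and all names are ours) showing that maximizing I(Y;X) is the same as minimizing the conditional entropy H(Y|X), alongside the Bayes error:

```python
import numpy as np

# Toy joint distribution P(X, Y): 3 feature values (rows) x 2 classes (cols).
P = np.array([[0.30, 0.05],
              [0.05, 0.30],
              [0.15, 0.15]])

def H(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

Px, Py = P.sum(axis=1), P.sum(axis=0)
H_Y = H(Py)
H_Y_given_X = sum(px * H(P[i] / px) for i, px in enumerate(Px))
I_XY = H_Y - H_Y_given_X                                 # mutual information I(Y;X)
bayes_error = 1.0 - sum(P[i].max() for i in range(3))    # 1 - E_x[max_i P(i|x)]

print(round(I_XY, 3), round(H_Y_given_X, 3), round(bayes_error, 3))
```

Since H(Y) is fixed by the class priors, any transform that increases I(Y;X) decreases H(Y|X) by the same amount, which is the infomax/CPE equivalence used throughout the section.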
An obvious alternative is the Bayes error (BE), which 1) is the tightest possible classifier-independent lower bound, 2) is an intrinsic measure of the complexity of the discrimination problem and 3) like the CPE, depends on the feature transformation and class labels alone. Minimizing the BE has recently been proposed for feature extraction in speech problems [10]. The main contribution of this paper is to show that the two strategies (infomax and minimum BE) are very closely related. In particular, it is shown that 1) the CPE is a lower bound on the BE and 2) this bound is tight, in the sense that the former is a good approximation to the latter. It follows that infomax solutions are near-optimal in the minimum BE sense. While for feature extraction both infomax and BE appear to be difficult to optimize directly, we show that infomax has clear computational advantages for feature selection, particularly in the context of the sequential procedures that are prevalent in the feature selection literature [6]. The analysis of some simple classification problems reveals that a quantity which plays an important role in infomax solutions is the marginal diversity: the average distance between each of the marginal class-conditional densities and their mean. This serves as inspiration for a generic principle for feature selection, the principle of maximum marginal diversity (MMD), which requires only marginal density estimates and can therefore be implemented with extreme computational simplicity. While heuristics close to the MMD principle have been proposed in the past, very little is known regarding their optimality. In this paper we summarize the main results of a theoretical characterization of the problems for which the principle is guaranteed to be optimal in the infomax sense (see [13] for further details). This characterization is interesting in two ways.
First, it shows that there is a family of classification problems for which a near-optimal solution, in the BE sense, can be achieved with a computational procedure that does not involve combinatorial search. This is a major improvement, from a computational standpoint, over previous solutions for which some guarantee of optimality (branch and bound search) or near-optimality (forward or backward search) is available [6]. Second, when combined with recent studies on the statistics of biologically plausible image transformations [8, 5], it suggests that in the context of visual recognition, MMD feature selection will lead to solutions that are optimal in the infomax sense. Given the computational simplicity of the MMD principle, this is quite significant. We present experimental evidence in support of these two properties of MMD. 2 Infomax vs minimum Bayes error In this section we show that, for classification problems, the infomax principle is closely related to the minimization of Bayes error. We start by defining these quantities. (1: Under the infomax principle, the optimal organization for a complex multi-layered perceptual system is one where the information that reaches each layer is processed so that the maximum amount of information is preserved for subsequent layers.)

Theorem 1 Given a classification problem with M classes in a feature space X, the decision function which minimizes the probability of classification error is the Bayes classifier g*(x) = argmax_i P_{Y|X}(i|x), where Y is a random variable that assigns x to one of the M classes, and i ∈ {1, ..., M}. Furthermore, the PE is lower bounded by the Bayes error

L* = 1 − E_x[ max_i P_{Y|X}(i|x) ] ,   (1)

where E_x means expectation with respect to P_X. Proof: All proofs are omitted due to space considerations. They can be obtained by contacting the author.

Principle 1 (infomax) Consider an M-class classification problem with observations drawn from a random variable Z, and the set of feature transformations T: Z → X. The best feature space is the one that maximizes the mutual information I(Y;X), where Y is the class indicator variable defined above, X = T(Z), and

I(Y;X) = Σ_i ∫ P_{X,Y}(x,i) log [ P_{X,Y}(x,i) / (P_X(x) P_Y(i)) ] dx

is the mutual information between X and Y. It is straightforward to show that I(X,Y) = H(Y) − H(Y|X), where H(X) = −∫ P_X(x) log P_X(x) dx is the entropy of X. Since the class entropy H(Y) does not depend on T, infomax is equivalent to the minimization of the CPE H(Y|X). We next derive a bound that plays a central role in the relationship between this quantity and the BE.

Lemma 1 Consider a probability mass function p = {p_1, ..., p_M} such that 0 ≤ p_i ≤ 1 for all i and Σ_i p_i = 1. Then

1 − max_i p_i ≥ (1/log M) [ H(p) − log(M−1) − 1 ] + 1 ,   (2)

where H(p) = −Σ_i p_i log p_i. Furthermore, the bound is tight in the sense that equality holds when

p_{i*} = (M−2)/(M−1) and p_j = 1/(M−1)^2, for all j ≠ i* .   (3)

The following theorem follows from this bound.

Theorem 2 The BE of an M-class classification problem with feature space X and class indicator variable Y is lower bounded by

L* ≥ (1/log M) [ H(Y|X) − log(M−1) − 1 ] + 1 ,   (4)

where X is the random vector from which features are drawn. When M is large (M → ∞) this bound reduces to L* ≥ (1/log M) H(Y|X). It is interesting to note the relationship between (4) and Fano's lower bound on the PE, (1/log M) H(Y|X) − 1/log M. The two bounds are equal up to an additive constant,
(1/log M) log(M/(M−1)), that quickly decreases to zero with the number of classes M. It follows that, at least when the number of classes is large, Fano's is really a lower bound on the BE, not only on the PE. Besides making this clear, Theorem 2 is a relevant contribution in two ways. First, since constants do not change the location of the bound's extrema, it shows that infomax minimizes a lower bound on the BE. Second, unlike Fano's bound, it sheds considerable insight on the relationship between the extrema of the bound and those of the BE. In fact, it is clear from the derivation of the theorem that the only reason why the right-hand side (RHS) and left-hand side (LHS) of (4) differ is the application of (2). Figure 1: Visualization of (2). Left: LHS and RHS versus p_1. Middle: contours of the LHS versus (p_1, p_2) for M = 3. Right: same, for the RHS. Figure 2: The LHS of (4) as an approximation to (1) for a two-class Gaussian problem whose class-conditional densities have means µ and −µ, µ = (µ_x, µ_y), and identity covariance. All plots are functions of µ. Left: surface plot of (1). Middle: surface plot of the LHS of (4). Right: contour plots of the two functions. Figure 1 shows plots of the RHS and LHS of this equation for M = 3, illustrating three interesting properties. First, bound (2) is tight in the sense defined in the lemma. Second, the maximum of the LHS is co-located with that of the RHS. Finally, (like the RHS) the LHS is a concave function of p and increasing (decreasing) when the RHS is. Due to these properties, the LHS is a good approximation to the RHS and, consequently, the LHS of (4) a good approximation to its RHS. It follows that infomax solutions will, in general, be very similar to those that minimize the BE. This is illustrated by a simple example in Figure 2. 3 Feature selection For feature extraction, both infomax and minimum BE are complicated problems that can only be solved up to approximations [9, 11, 10]. It is therefore not clear which of the two strategies will be more useful in practice. We now show that the opposite holds for feature selection, where the minimization of the CPE is significantly simpler than that of the BE. We start by recalling that, because the possible number of feature subsets in a feature selection problem is combinatorial, feature selection techniques rely on sequential search methods [6]. These methods proceed in a sequence of steps, each adding a set of features to the current best subset, with the goal of optimizing a given cost function(2). We denote the current subset by X_c, the added features by X_a and the new subset by X_n = (X_a, X_c).
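The sequential (forward) search just described can be sketched for discrete features by scoring each candidate through the reduction in the empirical H(Y|X) (equivalently, the increase in I(Y;X)); this is an illustration of the general scheme under invented toy data, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
# Feature 0 is a noisy copy of the label; features 1 and 2 are pure noise.
X = np.stack([np.where(rng.random(n) < 0.9, y, 1 - y),
              rng.integers(0, 2, n),
              rng.integers(0, 2, n)], axis=1)

def entropy(labels):
    _, c = np.unique(labels, return_counts=True)
    p = c / c.sum()
    return -np.sum(p * np.log2(p))

def cond_entropy(cols, y):
    """Empirical H(Y | X_cols) for a subset of discrete features."""
    groups = X[:, cols]
    h = 0.0
    for g in np.unique(groups, axis=0):
        mask = np.all(groups == g, axis=1)
        h += mask.mean() * entropy(y[mask])
    return h

# Greedy forward selection: repeatedly add the feature that most reduces H(Y|X).
selected = []
for _ in range(2):
    rest = [f for f in range(3) if f not in selected]
    best = min(rest, key=lambda f: cond_entropy(selected + [f], y))
    selected.append(best)
print(selected[0])  # the informative feature (0) is picked first
```

Each step re-scores only the remaining candidates given the current subset, which is exactly the place where the decoupling of equation (6) pays off.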
Theorem 3 Consider an M-class classification problem with observations drawn from a random variable Z, and a feature transformation T: Z → X. (2: These methods are called forward search techniques. There is also an alternative set of backward search techniques, where features are successively removed from an initial set containing all features. We ignore the latter for simplicity, even though all that is said can be applied to them as well.) X is an infomax feature space if and only if, for all feature transformations T': Z → X',

< KL[ P_{X|Y}(x|i) || P_X(x) ] > ≥ < KL[ P_{X'|Y}(x|i) || P_{X'}(x) ] > ,   (5)

where X = T(Z), X' = T'(Z), <f(i)> = Σ_i P_Y(i) f(i) denotes expectation with respect to the prior class probabilities, and KL[p || q] = ∫ p(x) log [ p(x)/q(x) ] dx is the Kullback-Leibler divergence between p and q. Furthermore, if X_n = (X_a, X_c), the infomax cost function decouples into two terms according to

< KL[ P_{X_n|Y}(x|i) || P_{X_n}(x) ] > = < KL[ P_{X_a|X_c,Y}(x_a|x_c, i) || P_{X_a|X_c}(x_a|x_c) ] > + < KL[ P_{X_c|Y}(x_c|i) || P_{X_c}(x_c) ] > .   (6)

Equation (5) exposes the discriminant nature of the infomax criterion. Noting that P_X(x) = Σ_i P_{X|Y}(x|i) P_Y(i), it clearly favors feature spaces where each class-conditional density is as distant as possible (in the KL sense) from the average among all classes. This is a sensible way to quantify the intuition that optimal discriminant transforms are the ones that best separate the different classes. Equation (6), in turn, leads to an optimal rule for finding the features X_a to merge with the current optimal solution X_c: the set which maximizes < KL[ P_{X_a|X_c,Y}(x_a|x_c, i) || P_{X_a|X_c}(x_a|x_c) ] >. The equation also leads to a straightforward procedure for updating the optimal cost once this set is determined. On the other hand, when the cost function is the BE, the equivalent expression is

E_{x_n}[ max_i P_{Y|X_n}(i|x_n) ] = E_{x_c} E_{x_a|x_c}[ max_i P_{X_a|X_c,Y}(x_a|x_c, i) P_{Y|X_c}(i|x_c) / P_{X_a|X_c}(x_a|x_c) ] .   (7)

Note that the non-linearity introduced by the max operator makes it impossible to express E_{x_n}[ max_i P_{Y|X_n}(i|x_n) ]
8! #" ? ;: as a function of 4 7 *9 ! #" = ;: . For this reason, infomax is a better principle for feature selection problems than direct minimization of BE. 4 Maximum marginal diversity To gain some intuition for infomax solutions, we next consider the Gaussian problem of Figure 3. Assuming that the two classes have equal prior probabilities j) m )$#m , the marginals &%('- r*)9" )p and +%
'# r*)" m* are equal and feature , d does not contain any useful information for classification. On the other hand, because the classes are clearly separated along the ) axis, feature , contains all the information available for discriminating between them. The different discriminating powers of the two variables are reflected by the infomax costs: while % ' *)V +%
'# r*)9" )p -%
'. r.)9" m leads to 0 7 +%
' K*)" j."Z" % ' *)fO:/ g , from %&0 .)f5y -% 0 *)" ) (y c+% 0 r*)" m* it follows that 0 7 +% 0 *)" j-"R" %&0 *)V;: g , and (5) recommends the selection of , . This is unlike energy-based criteria, such as principal component analysis, that would select , d . The key advantage of infomax is that it emphasizes marginal diversity. Definition 1 Consider a classification problem on a feature space , and a random vector D , d +.,-,.,#+ , ? from which feature vectors are drawn. Then, 132 , v o 0 7 +%(4 .)9" j."Z" % 4 *)V;:+ is the marginal diversity of feature , v . The intuition conveyed by the example above can be easily transformed into a generic principle for feature selection. Principle 2 (Maximum marginal diversity) The best solution for a feature selection problem is to select the subset of features that leads to a set of maximally diverse marginal densities. −5 −4 −3 −2 −1 0 1 2 3 4 5 −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 x1 x2 −50 −40 −30 −20 −10 0 10 20 30 40 50 0 0.005 0.01 0.015 0.02 0.025 x PX 1|Y(x|1) PX 1|Y(x|2) −2 −1.5 −1 −0.5 0 0.5 1 1.5 2 0 0.5 1 1.5 2 2.5 x PX 2|Y(x|1) PX 2|Y(x|2) Figure 3: Gaussian problem with two classes 6 , in the two-dimensions,
. Left: contours of
probability. Middle: marginals for . Right: marginals for
This principle has two attractive properties. First, it is inherently discriminant, recommending the elimination of the dimensions along which the projections of the class densities are most similar. Second, it is straightforward to implement with the following algorithm.

Algorithm 1 (MMD feature selection) For a classification problem with features X = (X_1, …, X_n), classes Y ∈ {1, …, M}, and class priors P_Y(i), the following procedure returns the top MMD features:
- for each feature k ∈ {1, …, n}:
  * for each class i ∈ {1, …, M}, compute a histogram estimate p_{k,i} of P_{X_k|Y}(x|i),
  * compute p_k = Σ_i P_Y(i) p_{k,i},
  * compute the marginal diversity md(X_k) = Σ_i P_Y(i) p_{k,i}ᵀ log(p_{k,i} ./ p_k), where both the log and the division ./ are performed element-wise,
- order the features by decreasing diversity, i.e. find {k_1, …, k_n} such that md(X_{k_j}) ≥ md(X_{k_{j+1}}), and return (X_{k_1}, …, X_{k_n}).

In general, there are no guarantees that MMD will lead to the infomax solution. In [13] we seek a precise characterization of the problems where MMD is indeed equivalent to infomax. Due to space limitations we present here only the main result of this analysis; see [13] for a detailed derivation.

Theorem 4 Consider a classification problem with class labels drawn from a random variable Y and features drawn from a random vector X = (X_1, …, X_n), and let X_* = (X_{k_1}, …, X_{k_m}) be the optimal feature subset of size m in the infomax sense. If

I(X_{k_j}; X_{prev(j)}) = I(X_{k_j}; X_{prev(j)} | Y), ∀ j,   (8)

where X_{prev(j)} = {X_{k_1}, …, X_{k_{j−1}}}, the set X_* is also the optimal subset of size m in the MMD sense. Furthermore,

⟨KL(P_{X_*|Y}(x|i) ‖ P_{X_*}(x))⟩_Y = Σ_j md(X_{k_j}).   (9)

The theorem states that the MMD and infomax solutions will be identical when the mutual information between features is not affected by knowledge of the class label. This is an interesting condition in light of various recent studies that have reported the observation of consistent patterns of dependence between the features of various biologically plausible image transformations [8, 5]. Even though the details of feature dependence will vary from one image class to the next, these studies suggest that the coarse structure of the patterns of dependence between such features follows universal statistical laws that hold for all types of images. The potential implications of this conjecture are quite significant.
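Algorithm 1 (MMD feature selection) can be sketched in a few lines of NumPy. The histogram estimator, bin count, and synthetic data below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def marginal_diversity(X, y, bins=32):
    """Sketch of Algorithm 1: rank features by marginal diversity.

    X: (samples, features) array; y: integer class labels.
    Returns the feature order (decreasing diversity) and the values
    md(X_k) = <KL(P_{Xk|Y} || P_Xk)>_Y estimated from histograms.
    """
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)
    md = np.zeros(X.shape[1])
    for k in range(X.shape[1]):
        edges = np.histogram_bin_edges(X[:, k], bins=bins)
        # histogram estimate of each class-conditional marginal
        p_ki = np.array([np.histogram(X[y == c, k], bins=edges)[0]
                         for c in classes], dtype=float)
        p_ki /= p_ki.sum(axis=1, keepdims=True)
        p_k = priors @ p_ki                    # mixture marginal
        eps = 1e-12                            # avoid log(0)
        kl = (p_ki * np.log((p_ki + eps) / (p_k + eps))).sum(axis=1)
        md[k] = priors @ kl                    # average KL over classes
    return np.argsort(-md), md

# two classes: feature 0 is well separated, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.vstack([np.c_[rng.normal(-2, 1, 500), rng.normal(0, 1, 500)],
               np.c_[rng.normal(+2, 1, 500), rng.normal(0, 1, 500)]])
y = np.repeat([0, 1], 500)
order, md = marginal_diversity(X, y)
print(order[0])  # -> 0 (the informative feature ranks first)
```

Note that the cost is linear in the number of features and requires only one-dimensional density estimates, which is the source of the computational advantage over combinatorial subset search.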
First, it implies that, in the context of visual processing, (8) will be approximately true, and the MMD principle will consequently lead to solutions that are very close to optimal in the minimum BE sense. Given the simplicity of MMD feature selection, this is quite remarkable. Second, it implies that when combined with such transformations, the marginal diversity is a close predictor for the CPE (and consequently the BE) achievable in a given feature space. This enables quantifying the goodness of the transformation without even having to build the classifier. See [13] for a more extensive discussion of these issues.

Figure 4: a) JZ score as a function of sample size for the two-class Gaussian problem discussed in the text, b) classification accuracy on Brodatz as a function of feature space dimension, and c) corresponding curves of cumulative marginal diversity (9). A linear trend was subtracted from all curves in c) to make the differences more visible.

5 Experimental results

In this section we present results showing that 1) MMD feature selection outperforms combinatorial search when (8) holds, and 2) in the context of visual recognition problems, marginal diversity is a good predictor of PE. We start by reporting results on a synthetic problem, introduced by Trunk to illustrate the curse of dimensionality [12], and used by Jain and Zongker (JZ) to evaluate various feature selection procedures [6]. It consists of two Gaussian classes of identity covariance and means ±(1, 1/√2, …, 1/√n), and is an interesting benchmark for feature selection because it has a clear optimal solution for the best subset of k features (the first k) for any k.
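Trunk's benchmark is easy to reproduce. The construction below assumes mean components ±1/√i, as in Trunk's original example [12]; the sample size and seed are arbitrary illustrative choices.

```python
import numpy as np

def trunk_data(n_samples, n_features, rng):
    """Two Gaussian classes with identity covariance and means +/-mu,
    where mu_i = 1/sqrt(i). Early features carry the most discriminative
    information, so the optimal size-k subset is always the first k."""
    mu = 1.0 / np.sqrt(np.arange(1, n_features + 1))
    X0 = rng.normal(+mu, 1.0, (n_samples, n_features))
    X1 = rng.normal(-mu, 1.0, (n_samples, n_features))
    return np.vstack([X0, X1]), np.repeat([0, 1], n_samples), mu

rng = np.random.default_rng(1)
X, y, mu = trunk_data(5000, 10, rng)
# empirical class-mean separation shrinks with feature index
sep = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
print(sep[0] > sep[-1])  # -> True
```

Because the discriminative content decreases monotonically with feature index, any sensible selection criterion should recover the first k features, which is what makes the benchmark's JZ score well defined.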
JZ exploited this property to propose an automated procedure for testing the performance of feature selection algorithms across variations in dimensionality of the feature space and sample size. We repeated their experiments, simply replacing the cost function they used (the Mahalanobis distance, MDist, between the means) by the marginal diversity. Figure 4 a) presents the JZ score obtained with MMD as a function of the sample size. A comparison with Figure 5 of [6] shows that these results are superior to all those obtained by JZ, including the ones relying on branch and bound. This is remarkable, since branch and bound is guaranteed to find the optimal solution and the MDist is inversely proportional to the PE for Gaussian classes. We believe that the superiority of MMD is due to the fact that it only requires estimates of the marginals, while the MDist requires estimates of joint densities and is therefore much more susceptible to the curse of dimensionality. Unfortunately, because in [6] all results are averaged over dimension, we have not been able to prove this conjecture yet. In any case, this problem is a good example of situations where, because (8) holds, MMD will find the optimal solution at a computational cost that is several orders of magnitude smaller than standard procedures based on combinatorial search (e.g. branch and bound). Figures 4 b) and c) show that, for problems involving commonly used image transformations, marginal diversity is indeed a good predictor of classification accuracy. The figures compare, for each space dimension, the recognition accuracy of a complete texture recognition system with the predictions provided by marginal diversity. Recognition accuracy was measured on the Brodatz texture database and a feature space consisting of the coefficients of a multiresolution decomposition computed over local pixel regions. 
Three transformations were considered: the discrete cosine transform, principal component analysis, and a three-level wavelet decomposition (see [14] for a detailed description of the experimental setup). The classifier was based on Gaussian mixtures and marginal diversity was computed with Algorithm 1. Note that the curves of cumulative marginal diversity are qualitatively very similar to those of recognition accuracy. References [1] S. Basu, C. Micchelli, and P. Olsen. Maximum Entropy and Maximum Likelihood Criteria for Feature Selection from Multivariate Data. In Proc. IEEE International Symposium on Circuits and Systems, Geneva, Switzerland, 2000. [2] A. Bell and T. Sejnowski. An Information Maximisation Approach to Blind Separation and Blind Deconvolution. Neural Computation, 7(6):1129–1159, 1995. [3] B. Bonnlander and A. Weigand. Selecting Input Variables using Mutual Information and Nonparametric Density Estimation. In Proc. IEEE International ICSC Symposium on Artificial Neural Networks, Tainan, Taiwan, 1994. [4] D. Erdogmus and J. Principe. Information Transfer Through Classifiers and its Relation to Probability of Error. In Proc. of the International Joint Conference on Neural Networks, Washington, 2001. [5] J. Huang and D. Mumford. Statistics of Natural Images and Models. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, Colorado, 1999. [6] A. Jain and D. Zongker. Feature Selection: Evaluation, Application, and Small Sample Performance. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(2):153–158, February 1997. [7] R. Linsker. Self-Organization in a Perceptual Network. IEEE Computer, 21(3):105–117, March 1988. [8] J. Portilla and E. Simoncelli. Texture Modeling and Synthesis using Joint Statistics of Complex Wavelet Coefficients. In IEEE Workshop on Statistical and Computational Theories of Vision, Fort Collins, Colorado, 1999. [9] J. Principe, D. Xu, and J. Fisher. Information-Theoretic Learning. In S. Haykin, editor, Unsupervised Adaptive Filtering, Volume 1: Blind-Source Separation. Wiley, 2000. [10] G. Saon and M. Padmanabhan. Minimum Bayes Error Feature Selection for Continuous Speech Recognition. In Proc. Neural Information Proc. Systems, Denver, USA, 2000. [11] K. Torkkola and W. Campbell. Mutual Information in Learning Feature Transforms. In Proc. International Conference on Machine Learning, Stanford, USA, 2000. [12] G. Trunk. A Problem of Dimensionality: a Simple Example. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1(3):306–307, July 1979. [13] N. Vasconcelos. Feature Selection by Maximum Marginal Diversity: Optimality and Implications for Visual Recognition. Submitted, 2002. [14] N. Vasconcelos and G. Carneiro. What is the Role of Independence for Visual Recognition? In Proc. European Conference on Computer Vision, Copenhagen, Denmark, 2002. [15] H. Yang and J. Moody. Data Visualization and Feature Selection: New Algorithms for Nongaussian Data. In Proc. Neural Information Proc. Systems, Denver, USA, 2000.
A Neural Edge-Detection Model for Enhanced Auditory Sensitivity in Modulated Noise Alon Fishbach and Bradford J. May Department of Biomedical Engineering and Otolaryngology-HNS Johns Hopkins University Baltimore, MD 21205 fishbach@northwestern.edu Abstract Psychophysical data suggest that temporal modulations of stimulus amplitude envelopes play a prominent role in the perceptual segregation of concurrent sounds. In particular, the detection of an unmodulated signal can be significantly improved by adding amplitude modulation to the spectral envelope of a competing masking noise. This perceptual phenomenon is known as “Comodulation Masking Release” (CMR). Despite the obvious influence of temporal structure on the perception of complex auditory scenes, the physiological mechanisms that contribute to CMR and auditory streaming are not well known. A recent physiological study by Nelken and colleagues has demonstrated an enhanced cortical representation of auditory signals in modulated noise. Our study evaluates these CMR-like response patterns from the perspective of a hypothetical auditory edge-detection neuron. It is shown that this simple neural model for the detection of amplitude transients can reproduce not only the physiological data of Nelken et al., but also, in light of previous results, a variety of physiological and psychoacoustical phenomena that are related to the perceptual segregation of concurrent sounds. 1 Introduction The temporal structure of a complex sound exerts strong influences on auditory physiology (e.g. [10, 16]) and perception (e.g. [9, 19, 20]). In particular, studies of auditory scene analysis have demonstrated the importance of the temporal structure of amplitude envelopes in the perceptual segregation of concurrent sounds [2, 7]. Common amplitude transitions across frequency serve as salient cues for grouping sound energy into unified perceptual objects. 
Conversely, asynchronous amplitude transitions enhance the separation of competing acoustic events [3, 4]. These general principles are manifested in perceptual phenomena as diverse as comodulation masking release (CMR) [13], modulation detection interference [22] and synchronous onset grouping [8]. Despite the obvious importance of timing information in psychoacoustic studies of auditory masking, the way in which the CNS represents the temporal structure of an amplitude envelope is not well understood. Certainly many physiological studies have demonstrated neural sensitivities to envelope transitions, but this sensitivity is only beginning to be related to the variety of perceptual experiences that are evoked by signals in noise. Nelken et al. [15] have suggested a correspondence between neural responses to time-varying amplitude envelopes and psychoacoustic masking phenomena. In their study of neurons in primary auditory cortex (A1), adding temporal modulation to background noise lowered the detection thresholds of unmodulated tones. This enhanced signal detection is similar to the perceptual phenomenon that is known as comodulation masking release [13]. Fishbach et al. [11] have recently proposed a neural model for the detection of “auditory edges” (i.e., amplitude transients) that can account for numerous physiological [14, 17, 18] and psychoacoustical [3, 21] phenomena. The encompassing utility of this edge-detection model suggests a common mechanism that may link the auditory processing and perception of auditory signals in a complex auditory scene. Here, it is shown that the auditory edge detection model can accurately reproduce the cortical CMR-like responses previously described by Nelken and colleagues. 2 The Model The model is described in detail elsewhere [11]. In short, the basic operation of the model is the calculation of the first-order time derivative of the log-compressed envelope of the stimulus. 
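In simplified form (ignoring the auditory-nerve front end and the spiking delay layer described next), this basic operation can be sketched as follows; the function names and the first-difference derivative are our illustrative choices, not the model's implementation.

```python
import math

def edge_response(envelope, dt, p0=2e-5):
    """Rectified time-derivative of the log-compressed envelope.

    envelope: pressure samples (Pa); dt: sample interval (s).
    Log compression uses the dB convention A*ln(1 + E/P0),
    with A = 20/ln(10) and P0 = 2e-5 Pa.
    """
    a = 20.0 / math.log(10.0)
    logenv = [a * math.log(1.0 + e / p0) for e in envelope]
    # first difference approximates d/dt; rectify (keep rises only)
    return [max(0.0, (l2 - l1) / dt) for l1, l2 in zip(logenv, logenv[1:])]

# a step onset produces a single large response; flat segments produce none
resp = edge_response([0.0, 0.0, 0.02, 0.02, 0.02], dt=1e-3)
print([r > 0 for r in resp])  # -> [False, True, False, False]
```

The rectification reflects the model's excitatory/inhibitory wiring: only rising amplitude transients ("auditory edges") drive the output neuron.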
A computational model [23] is used to convert the acoustic waveform to a physiologically plausible auditory nerve representation (Fig 1a). The simulated neural response has a medium spontaneous rate and a characteristic frequency that is set to the frequency of the target tone. To allow computation of the time derivative of the stimulus envelope, we hypothesize the existence of a temporal delay dimension, along which the stimulus is progressively delayed. The intermediate delay layer (Fig 1b) is constructed from an array of neurons with ascending membrane time constants (τ); each neuron is modeled by a conventional integrate-and-fire model (I&F, [12]). A higher membrane time constant induces a greater delay in the neuron’s response [1]. The output of the delay layer converges to a single output neuron (Fig. 1c) via a set of connections with various efficacies that reflect a receptive field of a gaussian derivative. This combination of excitatory and inhibitory connections carries out the time-derivative computation. Implementation details and parameters are given in [11]. The model has 2 adjustable and 6 fixed parameters; the former were used to fit the responses of the model to single-unit responses to a variety of stimuli [11]. The results reported here are not sensitive to these parameters. Figure 1: Schematic diagram of the model and a block diagram of the basic operation of each model component (shaded area). The stimulus is converted to a neural representation (a) that approximates the average firing rate of a medium spontaneous-rate AN fiber [23]. The operation of this stage can be roughly described as the log-compressed rms output of a bandpass filter. The neural representation is fed to a series of neurons with ascending membrane time constant (b). The kernel functions that are used to simulate these neurons are plotted for a few neurons along with the time constants used. 
The output of the delay-layer neurons converges to a single I&F neuron (c) using a set of connections with weights that reflect the shape of a gaussian derivative. Solid arrows represent excitatory connections and white arrows represent inhibitory connections. The absolute efficacy is represented by the width of the arrows.

3 Results

Nelken et al. [15] report that amplitude modulation can substantially modify the noise-driven discharge rates of A1 neurons in Halothane-anesthetized cats. Many cortical neurons show only a transient onset response to unmodulated noise but fire in synchrony (“lock”) to the envelope of modulated noise. A significant reduction in envelope-locked discharge rates is observed if an unmodulated tone is added to modulated noise. As summarized in Fig. 2, this suppression of envelope locking can reveal the presence of an auditory signal at sound pressure levels that are not detectable in unmodulated noise. It has been suggested that this pattern of neural responding may represent a physiological equivalent of CMR. Reproduction of CMR-like cortical activity can be illustrated by a simplified case in which the analytical amplitude envelope of the stimulus is used as the input to the edge-detector model. In keeping with the actual physiological approach of Nelken et al., the noise envelope is shaped by a trapezoid modulator for these simulations. Each cycle of modulation, E_N(t), is given by:

E_N(t) =
  (P/D)·t,            0 ≤ t < D
  P,                  D ≤ t < 3D
  P − (P/D)(t − 3D),  3D ≤ t < 4D
  0,                  4D ≤ t < 8D

where P is the peak pressure level and D is set to 12.5 ms.

Figure 2: Responses of an A1 unit to a combination of noise and tone at many tone levels, replotted from Nelken et al. [15]. (a) Unmodulated noise and (b) modulated noise. The noise envelope is illustrated by the thick line above each figure. 
Each row shows the response of the neuron to the noise plus the tone at the level specified on the ordinate. The dashed line in (b) indicates the detection threshold level for the tone. The detection threshold (as defined and calculated by Nelken et al.) in the unmodulated noise was not reached.

Since the basic operation of the model is the calculation of the rectified time-derivative of the log-compressed envelope of the stimulus, the expected noise-driven rate of the model can be approximated by:

M_N(t) = max(0, d/dt [A·ln(1 + E_N(t)/P_0)])

where A = 20/ln(10) and P_0 = 2e-5 Pa. The expected firing rate in response to the noise plus an unmodulated signal (tone) can be similarly approximated by:

M_{N+S}(t) = max(0, d/dt [A·ln(1 + (E_N(t) + P_S)/P_0)])

where P_S is the peak pressure level of the tone. Clearly, both M_N(t) and M_{N+S}(t) are identically zero outside the interval [0, D]. Within this interval it holds that:

M_N(t) = (A·P/D) / (P_0 + (P/D)·t),  0 ≤ t < D

and

M_{N+S}(t) = (A·P/D) / (P_0 + P_S + (P/D)·t),  0 ≤ t < D

and the ratio of the firing rates is:

M_N(t) / M_{N+S}(t) = 1 + P_S / (P_0 + (P/D)·t),  0 ≤ t < D.

Clearly, M_{N+S}(t) < M_N(t) on the interval [0, D] of each modulation cycle. That is, the addition of a tone reduces the responses of the model to the rising part of the modulated envelope. Higher tone levels (P_S) cause greater reduction in the model’s firing rate.

Figure 3: An illustration of the basic operation of the model on various amplitude envelopes. The simplified operation of the model includes log compression of the amplitude envelope (a and c) and rectified time-derivative of the log-compressed envelope (b and d). (a) A 30 dB SPL tone is added to a modulated envelope (peak level of 70 dB SPL) 300 ms after the beginning of the stimulus (as indicated by the horizontal line). 
The addition of the tone causes a great reduction in the time derivative of the log-compressed envelope (b). When the envelope of the noise is unmodulated (c), the time-derivative of the log-compressed envelope (d) shows a tiny spike when the tone is added (marked by the arrow).

Fig. 3 demonstrates the effect of a low-level tone on the time-derivative of the log-compressed envelope of a noise. When the envelope is modulated (Fig. 3a) the addition of the tone greatly reduces the derivative of the rising part of the modulation (Fig. 3b). In the absence of modulations (Fig. 3c), the tone presentation produces a negligible effect on the level derivative (Fig. 3d). Model simulations of neural responses to the stimuli used by Nelken et al. are plotted in Fig. 4. As illustrated schematically in Fig 3 (d), the presence of the tone does not cause any significant change in the responses of the model to the unmodulated noise (Fig. 4a). In the modulated noise, however, tones of relatively low levels reduce the responses of the model to the rising part of the envelope modulations.

Figure 4: Simulated responses of the model to a combination of a tone and unmodulated noise (a) and modulated noise (b). All conventions are as in Fig. 2.

4 Discussion

This report uses an auditory edge-detection model to simulate the actual physiological consequences of amplitude modulation on neural sensitivity in cortical area A1. The basic computational operation of the model is the calculation of the smoothed time-derivative of the log-compressed stimulus envelope. The ability of the model to reproduce cortical response patterns in detail across a variety of stimulus conditions suggests similar time-sensitive mechanisms may contribute to the physiological correlates of CMR. 
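The predicted suppression on the rising portion of the trapezoid can be checked numerically. This sketch assumes the rising-edge form M_N(t) = (A·P/D)/(P_0 + (P/D)t) derived above; the specific pressure values are arbitrary illustrative choices.

```python
import math

A = 20.0 / math.log(10.0)   # dB conversion constant
P0 = 2e-5                   # reference pressure (Pa)
P, D = 0.2, 12.5e-3         # peak noise pressure (Pa), rise time (s)
PS = 0.002                  # tone pressure (Pa)

def m_noise(t):
    """Model rate on the rising edge: d/dt A*ln(1 + (P/D)t / P0)."""
    return (A * P / D) / (P0 + (P / D) * t)

def m_noise_plus_tone(t):
    """Same, with the unmodulated tone added inside the log compression."""
    return (A * P / D) / (P0 + PS + (P / D) * t)

for t in (0.1 * D, 0.5 * D, 0.9 * D):
    ratio = m_noise(t) / m_noise_plus_tone(t)
    # the ratio equals 1 + PS / (P0 + (P/D) t) > 1: the tone suppresses
    assert abs(ratio - (1 + PS / (P0 + (P / D) * t))) < 1e-9
    assert m_noise_plus_tone(t) < m_noise(t)
print("tone suppresses the rising-edge response")
```

The asserts confirm both the closed-form ratio and the qualitative claim that higher tone pressure yields a lower envelope-locked response.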
These findings augment our previous observations that the simple edge-detection model can successfully predict a wide range of physiological and perceptual phenomena [11]. Former applications of the model to perceptual phenomena have been mainly related to auditory scene analysis, or more specifically the ability of the auditory system to distinguish multiple sound sources. In these cases, a sharp amplitude transition at stimulus onset (“auditory edge”) was critical for sound segregation. Here, it is shown that the detection of acoustic signals also may be enhanced through the suppression of ongoing responses to the concurrent modulations of competing background sounds. Interestingly, these temporal fluctuations appear to be a common property of natural soundscapes [15]. The model provides testable predictions regarding how signal detection may be influenced by the temporal shape of amplitude modulation. Carlyon et al. [6] measured CMR in human listeners using three types of noise modulation: square-wave, sine-wave and multiplied noise. From the perspective of the edge-detection model, these psychoacoustic results are intriguing because the different modulator types represent manipulations of the time derivative of masker envelopes. Square-wave modulation had the most sharply edged time derivative and produced the greatest masking release. Fig. 5 plots the responses of the model to a pure-tone signal in square-wave and sine-wave modulated noise. As in the psychoacoustical data of Carlyon et al., the simulated detection threshold was lower in the context of square-wave modulation. Our modeling results suggest that the sharply edged square wave evoked higher levels of noise-driven activity and therefore created a sensitive background for the suppressing effects of the unmodulated tone. 
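The claim that the square wave's sharper edges drive stronger noise-locked activity can be illustrated by comparing the peak rectified derivative of the log-compressed envelope for sine- and square-wave modulators. The sampling rate, modulation rate, and pressure level below are illustrative assumptions.

```python
import math

def peak_edge(envelope, dt, p0=2e-5):
    """Peak of the rectified first difference of the log-compressed envelope."""
    a = 20.0 / math.log(10.0)
    log_env = [a * math.log(1.0 + e / p0) for e in envelope]
    return max(max(0.0, (l2 - l1) / dt)
               for l1, l2 in zip(log_env, log_env[1:]))

fs, f_mod, p = 10_000, 10.0, 0.1       # sample rate (Hz), mod rate (Hz), peak Pa
t = [i / fs for i in range(fs // 10)]  # one 100 ms modulation cycle
sine = [0.5 * p * (1 + math.sin(2 * math.pi * f_mod * ti)) for ti in t]
square = [p if math.sin(2 * math.pi * f_mod * ti) > 0 else 0.0 for ti in t]

print(peak_edge(square, 1 / fs) > peak_edge(sine, 1 / fs))  # -> True
```

The abrupt 0-to-P step of the square wave produces a far larger log-envelope derivative than the smooth sinusoidal rise, consistent with the stronger envelope locking (and thus greater masking release) reported for square-wave modulators.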
Figure 5: Simulated responses of the model to a combination of a tone at various levels and a sine-wave modulated noise (a) or a square-wave modulated noise (b). Each row shows the response of the model to the noise plus the tone at the level specified on the abscissa. The shape of the noise modulator is illustrated above each figure. The 100 ms tone starts 250 ms after the noise onset. Note that the tone detection threshold (marked by the dashed line) is 10 dB lower for the square-wave modulator than for the sine-wave modulator, in accordance with the psychoacoustical data of Carlyon et al. [6].

Although the physiological basis of our model was derived from studies of neural responses in the cat auditory system, the key psychoacoustical observations of Carlyon et al. have been replicated in recent behavioral studies of cats (Budelis et al. [5]). These data support the generalization of human perceptual processing to other species and enhance the possible correspondence between the neuronal CMR-like effect and the psychoacoustical masking phenomena. Clearly, the auditory system relies on information other than the time derivative of the stimulus envelope for the detection of auditory signals in background noise. Further physiological and psychoacoustic assessments of CMR-like masking effects are needed not only to refine the predictive abilities of the edge-detection model but also to reveal the additional sources of acoustic information that influence signal detection in constantly changing natural environments. Acknowledgments This work was supported in part by a NIDCD grant R01 DC004841. References [1] Agmon-Snir H., Segev I. (1993). “Signal delay and input synchronization in passive dendritic structure”, J. Neurophysiol. 70, 2066-2085. [2] Bregman A.S. (1990). “Auditory scene analysis: The perceptual organization of sound”, MIT Press, Cambridge, MA. 
[3] Bregman A.S., Ahad P.A., Kim J., Melnerich L. (1994) “Resetting the pitch-analysis system. 1. Effects of rise times of tones in noise backgrounds or of harmonics in a complex tone”, Percept. Psychophys. 56 (2), 155-162. [4] Bregman A.S., Ahad P.A., Kim J. (1994) “Resetting the pitch-analysis system. 2. Role of sudden onsets and offsets in the perception of individual components in a cluster of overlapping tones”, J. Acoust. Soc. Am. 96 (5), 2694-2703. [5] Budelis J., Fishbach A., May B.J. (2002) “Behavioral assessments of comodulation masking release in cats”, Abst. Assoc. for Res. in Otolaryngol. 25. [6] Carlyon R.P., Buus S., Florentine M. (1989) “Comodulation masking release for three types of modulator as a function of modulation rate”, Hear. Res. 42, 37-46. [7] Darwin C.J. (1997) “Auditory grouping”, Trends in Cog. Sci. 1(9), 327-333. [8] Darwin C.J., Ciocca V. (1992) “Grouping in pitch perception: Effects of onset asynchrony and ear of presentation of a mistuned component”, J. Acoust. Soc. Am. 91, 3381-3390. [9] Drullman R., Festen H.M., Plomp R. (1994) “Effect of temporal envelope smearing on speech reception”, J. Acoust. Soc. Am. 95 (2), 1053-1064. [10] Eggermont J J. (1994). “Temporal modulation transfer functions for AM and FM stimuli in cat auditory cortex. Effects of carrier type, modulating waveform and intensity”, Hear. Res. 74, 51-66. [11] Fishbach A., Nelken I., Yeshurun Y. (2001) “Auditory edge detection: a neural model for physiological and psychoacoustical responses to amplitude transients”, J. Neurophysiol. 85, 2303–2323. [12] Gerstner W. (1999) “Spiking neurons”, in Pulsed Neural Networks, edited by W. Maass, C. M. Bishop, (MIT Press, Cambridge, MA). [13] Hall J.W., Haggard M.P., Fernandes M.A. (1984) “Detection in noise by spectrotemporal pattern analysis”, J. Acoust. Soc. Am. 76, 50-56. [14] Heil P. (1997) “Auditory onset responses revisited. II. 
Response strength”, J. Neurophysiol. 77, 2642-2660. [15] Nelken I., Rotman Y., Bar-Yosef O. (1999) “Responses of auditory cortex neurons to structural features of natural sounds”, Nature 397, 154-157. [16] Phillips D.P. (1988). “Effect of Tone-Pulse Rise Time on Rate-Level Functions of Cat Auditory Cortex Neurons: Excitatory and Inhibitory Processes Shaping Responses to Tone Onset”, J. Neurophysiol. 59, 1524-1539. [17] Phillips D.P., Burkard R. (1999). “Response magnitude and timing of auditory response initiation in the inferior colliculus of the awake chinchilla”, J. Acoust. Soc. Am. 105, 2731-2737. [18] Phillips D.P., Semple M.N., Kitzes L.M. (1995). “Factors shaping the tone level sensitivity of single neurons in posterior field of cat auditory cortex”, J. Neurophysiol. 73, 674-686. [19] Rosen S. (1992) “Temporal information in speech: acoustic, auditory and linguistic aspects”, Phil. Trans. R. Soc. Lond. B 336, 367-373. [20] Shannon R.V., Zeng F.G., Kamath V., Wygonski J, Ekelid M. (1995) “Speech recognition with primarily temporal cues”, Science 270, 303-304. [21] Turner C.W., Relkin E.M., Doucet J. (1994). “Psychophysical and physiological forward masking studies: probe duration and rise-time effects”, J. Acoust. Soc. Am. 96 (2), 795-800. [22] Yost W.A., Sheft S. (1994) “Modulation detection interference – across-frequency processing and auditory grouping”, Hear. Res. 79, 48-58. [23] Zhang X., Heinz M.G., Bruce I.C., Carney L.H. (2001). “A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression”, J. Acoust. Soc. Am. 109 (2), 648-670.
Concentration Inequalities for the Missing Mass and for Histogram Rule Error David McAllester Toyota Technological Institute at Chicago mcallester@tti-c.org Luis Ortiz University of Pennsylvania leo@cis.upenn.edu Abstract This paper gives distribution-free concentration inequalities for the missing mass and the error rate of histogram rules. Negative association methods can be used to reduce these concentration problems to concentration questions about independent sums. Although the sums are independent, they are highly heterogeneous. Such highly heterogeneous independent sums cannot be analyzed using standard concentration inequalities such as Hoeffding’s inequality, the Angluin-Valiant bound, Bernstein’s inequality, Bennett’s inequality, or McDiarmid’s theorem. 1 Introduction The Good-Turing missing mass estimator was developed in the 1940s to estimate the probability that the next item drawn from a fixed distribution will be an item not seen before. Since the publication of the Good-Turing missing mass estimator in 1953 [9], this estimator has been used extensively in language modeling applications [4, 6, 12]. Recently a large deviation accuracy guarantee was proved for the missing mass estimator [15, 14]. The main technical result is that the missing mass itself concentrates: [15] proves that the probability that the missing mass deviates from its expectation by more than ε is at most a fixed function of ε and the sample size, independent of the underlying distribution. Here we give a simpler proof of a stronger bound on this deviation probability. A histogram rule is defined by two things: a given clustering of objects into classes and a given training sample. In a classification setting the histogram rule defined by a given clustering and sample assigns to each cluster the label that occurred most frequently for that cluster in the sample. In a decision-theoretic setting, such as that studied by Ortiz and Kaelbling [16], the rule associates each cluster with the action choice of highest performance on the training data for that cluster. We show that the performance of a histogram rule (for a fixed clustering) concentrates near its expectation: the probability that the performance deviates from its expectation by more than ε is bounded by a fixed function of ε and the sample size,
independent of the clustering or the underlying data distribution.

2 The Exponential Moment Method

All of the results in this paper are based on the exponential moment method of proving concentration inequalities. The exponential moment was perhaps first used by Bernstein but was popularized by Chernoff. Let X be any real-valued random variable with finite mean. Let Z(β) be E[e^{βX}] if this expectation is finite, and ∞ otherwise. The following lemma is the central topic of Chernoff's classic paper [5].

Lemma 1 (Chernoff) For any real-valued variable X with finite mean we have the following for any ε ≥ 0, where the "entropy" S is defined as below.

P(X ≥ E[X] + ε) ≤ e^{−S(ε)}    (1)
S(ε) = sup_{β ≥ 0} [β(E[X] + ε) − ln Z(β)]    (2)
Z(β) = E[e^{βX}]    (3)

(A symmetric statement, taken over β ≤ 0, bounds the downward deviation P(X ≤ E[X] − ε).) Lemma 1 follows, essentially, from the observation that for β ≥ 0 we have the following.

P(X ≥ E[X] + ε) ≤ e^{−β(E[X]+ε)} E[e^{βX}] = e^{−(β(E[X]+ε) − ln Z(β))}    (4)

Lemma 1 is called the exponential moment method because of the first inequality in (4). The following two observations provide a simple general tool.

Observation 2 Let c be any positive constant satisfying ln Z(β) ≤ β E[X] + cβ² for all β ≥ 0. Formula (2) implies that for ε ≥ 0 we have P(X ≥ E[X] + ε) ≤ e^{−ε²/(4c)}.

Observation 3 If X₁, ..., Xₙ are independent then ln Z_{Σᵢ Xᵢ}(β) = Σᵢ ln Z_{Xᵢ}(β).

Some further observations also prove useful. Let X be an arbitrary real-valued random variable. For a discrete distribution the Gibbs distribution P_β can be defined as follows.

P_β(X = x) = (1/Z(β)) e^{βx} P(X = x)

There exists a unique largest open interval (β_min, β_max) (possibly with infinite endpoints) such that for β ∈ (β_min, β_max) we have that Z(β) is finite. For β ∈ (β_min, β_max) we define the expectation of f(X) at inverse temperature β as follows.

E_β[f(X)] = (1/Z(β)) E[f(X) e^{βX}]    (5)

Equation (5) can be taken as the definition of E_β for continuous distributions on X. For β ∈ (β_min, β_max) let σ²(X, β) be E_β[(X − E_β[X])²]. The quantity σ²(X, β) is the Gibbs variance at inverse temperature β. For β ∈ (β_min, β_max) we let D(P_β ‖ P) denote the KL-divergence from P_β to P, which can be written as follows.

D(P_β ‖ P) = β E_β[X] − ln Z(β)    (6)

Let (μ_min, μ_max) be the smallest open interval containing all values of the form E_β[X] for β ∈ (β_min, β_max). If the open interval (μ_min, μ_max) is not empty then E_β[X] is a monotonically increasing function of β ∈ (β_min, β_max). For μ ∈ (μ_min, μ_max) define β(μ) to be the unique value β satisfying E_β[X] = μ. For any continuous function f we now define the double integral of f (from the base point E[X]) to be the function g satisfying g(E[X]) = 0, g′(E[X]) = 0, and g″(μ) = f(μ), where g′ and g″ are the first and second derivatives of g respectively. We now have the following general theorem.

Theorem 4 For any real-valued variable X and any μ ∈ (μ_min, μ_max) we have the following, where S(μ) denotes sup_β [βμ − ln Z(β)], so that S(E[X] + ε) is the entropy S(ε) of Lemma 1.

S(μ) = μ β(μ) − ln Z(β(μ))    (7)
S(μ) = D(P_{β(μ)} ‖ P)    (8)
S(μ) = ∫_{E[X]}^{μ} ∫_{E[X]}^{ν} dz dν / σ²(X, β(z))    (9)
ln Z(β) = E[X] β + ∫_0^{β} ∫_0^{β′} σ²(X, β″) dβ″ dβ′    (10)
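As a concrete check on Lemma 1 (an illustrative computation, not from the paper; the Bernoulli-sum example and the grid-search range are arbitrary choices), one can evaluate the entropy S(ε) numerically and confirm that e^{−S(ε)} dominates the exact binomial tail:

```python
import math

def log_Z(beta, n, p):
    # ln E[e^{beta X}] for X a sum of n independent Bernoulli(p) variables.
    return n * math.log(1.0 - p + p * math.exp(beta))

def entropy(eps, n, p, steps=20000):
    # S(eps) = sup_{beta >= 0} [beta (E[X] + eps) - ln Z(beta)], via grid search.
    mu = n * p
    return max(
        (5.0 * i / steps) * (mu + eps) - log_Z(5.0 * i / steps, n, p)
        for i in range(steps + 1)
    )

def binomial_tail(t, n, p):
    # Exact P(X >= t) for X ~ Binomial(n, p) and integer t.
    return sum(math.comb(n, j) * p**j * (1.0 - p)**(n - j) for j in range(t, n + 1))

n, p, t = 100, 0.3, 40
bound = math.exp(-entropy(t - n * p, n, p))
tail = binomial_tail(t, n, p)
print(bound, tail)   # the Chernoff bound dominates the exact tail
```

Because the grid search only under-approximates the supremum, the computed bound is always a valid (slightly loose) upper bound on the tail.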
Formula (9) can be clarified by noting that for |μ − E[X]| small we have the following.

S(μ) ≈ (μ − E[X])² / (2 σ²(X, 0))

Formula (7) is proved by showing that β(μ) is the optimal β in (2). Up to sign conventions, (7) is the equation for physical entropy in statistical mechanics. Equation (8) follows from (7) and (6). Equations (9) and (10) then follow from well known equations of statistical mechanics. An implicit derivation of (9) and (10) can be found in section six of Chernoff's original paper [5]. As a simple example of the use of (9), we derive Hoeffding's inequality. Consider a sum S = Σᵢ Xᵢ where the Xᵢ are independent and each Xᵢ is bounded to an interval of width wᵢ. Note that each Xᵢ remains bounded to this interval at all values of β. Hence σ²(Xᵢ, β) ≤ wᵢ²/4. We then have that σ²(S, β) ≤ (1/4) Σᵢ wᵢ². Hoeffding's inequality now follows from (1) and (9).

3 Negative Association

The analyses of the missing mass and of histogram rule error involve sums of variables that are not independent. However, these variables are negatively associated — an increase in one variable is associated with decreases in the other variables. Formally, a set of real-valued random variables X₁, ..., Xₙ is negatively associated if for any two disjoint subsets I and J of the integers {1, ..., n}, and any two non-decreasing, or any two non-increasing, functions f from R^{|I|} to R and g from R^{|J|} to R, we have the following.

E[f(Xᵢ, i ∈ I) g(Xⱼ, j ∈ J)] ≤ E[f(Xᵢ, i ∈ I)] E[g(Xⱼ, j ∈ J)]

Dubhashi and Ranjan [8] give a survey of methods for establishing and using negative association. This section states some basic facts about negative association.

Lemma 5 Let X₁, ..., Xₙ be any set of negatively associated variables. Let X′₁, ..., X′ₙ be independent shadow variables, i.e., independent variables such that X′ᵢ is distributed identically to Xᵢ. Let S = Σᵢ Xᵢ and S′ = Σᵢ X′ᵢ. For any set of negatively associated variables we have Z_S(β) ≤ Z_{S′}(β), where Z_S denotes the function Z computed for the variable S.

Lemma 6 Let S be any sample of n items (ball throws) drawn IID from a fixed distribution on the integers (bins) {1, ..., k}. Let n(i) be the number of times integer i occurs in the sample. The variables n(1), ..., n(k) are negatively associated.

Lemma 7 For any negatively associated variables X₁, ..., Xₙ, and any non-decreasing functions f₁, ..., fₙ, we have that the quantities f₁(X₁), ..., fₙ(Xₙ) are negatively associated. This also holds if the functions fᵢ are non-increasing.

Lemma 8 Let X₁, ..., Xₙ be a negatively associated set of variables. Let Y₁, ..., Yₙ be 0-1 (Bernoulli) variables such that Yᵢ is a stochastic function of Xᵢ, i.e., P(Yᵢ = 1 | X₁, ..., Xₙ) = P(Yᵢ = 1 | Xᵢ). If P(Yᵢ = 1 | Xᵢ) is a non-decreasing function of Xᵢ then Y₁, ..., Yₙ are negatively associated. This also holds if P(Yᵢ = 1 | Xᵢ) is non-increasing.

4 The Missing Mass

Suppose that we draw words (or any objects) independently from a fixed distribution over a countable (but possibly infinite) set of words. We let the probability of drawing word w be denoted as p_w. For a sample S of n draws the missing mass of S, denoted M₀, is the total probability mass of the items not occurring in the sample, i.e., M₀ = Σ_{w ∉ S} p_w.
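The negative association of bin counts asserted in Lemma 6 can be seen exactly in a tiny example (an illustration, not from the paper): for multinomial counts the covariance Cov(Nᵢ, Nⱼ) equals −n pᵢ pⱼ ≤ 0, which the following brute-force enumeration verifies.

```python
from itertools import product

def exact_count_covariance(n, probs, i, j):
    # Enumerate all assignments of n balls to bins and compute Cov(N_i, N_j) exactly.
    k = len(probs)
    e_i = e_j = e_ij = 0.0
    for outcome in product(range(k), repeat=n):
        pr = 1.0
        for b in outcome:
            pr *= probs[b]
        ni = outcome.count(i)
        nj = outcome.count(j)
        e_i += pr * ni
        e_j += pr * nj
        e_ij += pr * ni * nj
    return e_ij - e_i * e_j

n, probs = 4, [0.5, 0.3, 0.2]
cov = exact_count_covariance(n, probs, 0, 1)
print(cov)   # mathematically equals -n * p_0 * p_1 = -0.6
```

A large count in one bin forces smaller counts elsewhere; negative association is the formal strengthening of this negative correlation that makes the reduction to independent sums possible.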
Theorem 9 For the missing mass as defined above, and for ε ≥ 0, we have the following.

P(M₀ ≤ E[M₀] − ε) ≤ e^{−(e/2) n ε²}    (11)
P(M₀ ≥ E[M₀] + ε) ≤ e^{−n ε²}    (12)

To prove Theorem 9 let X_w be a Bernoulli variable which is 1 if word w does not occur in the sample and 0 otherwise. The missing mass can now be written as M₀ = Σ_w p_w X_w. The variables X_w are monotonic functions of the word counts, so by Lemmas 6 and 7 we have that the X_w are negatively associated. By Lemma 5 we can then assume that the variables X_w are independent. The analysis of this independent sum uses the following general concentration inequalities for independent sums of bounded variables.

Lemma 10 Let S = Σᵢ cᵢ Xᵢ where X₁, ..., Xₙ are independent random variables with Xᵢ ∈ [0, 1] and each cᵢ is a non-negative constant. Let p̄ᵢ be E[Xᵢ]. For ε ≥ 0 we have the following.

P(S ≤ E[S] − ε) ≤ exp(−ε² / (2 Σᵢ cᵢ² p̄ᵢ))    (13)
P(S ≥ E[S] + ε) ≤ exp(−ε² / (Σᵢ cᵢ² / ln(1/p̄ᵢ)))    (14)

Before proving (13) and (14) we first show how (13) and (14) imply (11) and (12) respectively.
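To see why the heterogeneous form of Lemma 10 matters, the following sketch (all weights, probabilities, and the deviation ε are arbitrary toy choices mimicking the missing-mass setting) compares the exponents delivered by (13) and (14) with the Hoeffding exponent 2ε²/Σᵢcᵢ², which ignores the means p̄ᵢ:

```python
import math

# A missing-mass-style weighted Bernoulli sum: one heavy word plus many light words.
n_draws = 50
word_probs = [0.9] + [0.001] * 100                        # the weights c_i of Lemma 10
miss_probs = [(1.0 - c) ** n_draws for c in word_probs]   # p-bar_i = P(word i is missing)

eps = 0.05
lower_exp = eps**2 / (2.0 * sum(c * c * p for c, p in zip(word_probs, miss_probs)))
upper_exp = eps**2 / sum(c * c / math.log(1.0 / p) for c, p in zip(word_probs, miss_probs))
hoeffding_exp = 2.0 * eps**2 / sum(c * c for c in word_probs)

print(lower_exp, upper_exp, hoeffding_exp)
```

The heavy word dominates Σᵢcᵢ², making the Hoeffding exponent tiny, while (13) and (14) exploit the fact that a heavy word is almost never missing (its p̄ᵢ is astronomically small) and so give far stronger exponents.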
For the missing mass M₀ = Σ_w p_w X_w we have the following.

E[X_w] = P(X_w = 1) = (1 − p_w)^n ≤ e^{−n p_w}

To prove (11) we note that formula (13) implies the following, where we use the fact that for x ≥ 0 we have x e^{−nx} ≤ 1/(en).

P(M₀ ≤ E[M₀] − ε) ≤ exp(−ε² / (2 Σ_w p_w² e^{−n p_w})) ≤ exp(−ε² / (2/(en))) = e^{−(e/2) n ε²}

To prove (12) we note that formula (14) implies the following, using ln(1/E[X_w]) = −n ln(1 − p_w) ≥ n p_w.

P(M₀ ≥ E[M₀] + ε) ≤ exp(−ε² / (Σ_w p_w² / ln(1/E[X_w]))) ≤ exp(−ε² / (1/n)) = e^{−n ε²}

We now compare (13) and (14) to other well known bounds. Hoeffding's inequality [11] yields the following.

P(S ≥ E[S] + ε) ≤ exp(−2ε² / Σᵢ cᵢ²)    (15)

In the missing mass application Σᵢ cᵢ² can be of constant order (e.g., when a single word carries probability 1/2), which fails to yield (12). The Srivastav–Stangier bound [17], which is itself an improvement on the Angluin–Valiant bound [1, 10], yields the following for ε ≤ E[S], where c_max is maxᵢ cᵢ.

P(S ≥ E[S] + ε) ≤ exp(−ε² / (3 c_max Σᵢ cᵢ p̄ᵢ))    (16)

It is possible to show that in the missing mass application c_max Σᵢ cᵢ p̄ᵢ can also be of constant order, so this bound does not handle the missing mass. A weaker version of the lower-deviation inequality (13) can be derived from Bernstein's inequality [3] (see [7]). However, neither Bernstein's inequality nor Bennett's inequality [2] can handle the upward deviation of the missing mass. To prove (13) and (14) we first note the following lemma.

Lemma 11 Let X be a random variable with X ∈ [0, 1] and let Y ∈ {0, 1} be a Bernoulli variable with P(Y = 1) = E[X]. For any such variables X and Y, any β, and any constant c we have ln Z_{cX}(β) ≤ ln Z_{cY}(β).

This lemma follows from the observation that for any convex function f on the interval [0, 1] we have that f(x) is at most (1 − x) f(0) + x f(1), and so we have the following.

E[e^{βcX}] ≤ E[(1 − X) + X e^{βc}] = (1 − E[X]) + E[X] e^{βc} = E[e^{βcY}]

Lemma 11 and equation (2) now imply the following, which shows that in the proofs of (13) and (14) we can assume without loss of generality that the variables Xᵢ are Bernoulli.

Lemma 12 Let S = Σᵢ cᵢ Xᵢ with Xᵢ ∈ [0, 1] and with the variables Xᵢ independent. Let S′ = Σᵢ cᵢ Yᵢ where the Yᵢ ∈ {0, 1} are independent with E[Yᵢ] = E[Xᵢ]. For any such S and S′ we have Z_S(β) ≤ Z_{S′}(β).
To prove (13) let S = Σᵢ cᵢ Xᵢ where the Xᵢ are independent Bernoulli variables with E[Xᵢ] = p̄ᵢ. For β ≤ 0 we have the following.

σ²(cᵢ Xᵢ, β) ≤ cᵢ² P_β(Xᵢ = 1) ≤ cᵢ² p̄ᵢ

So we have σ²(S, β) ≤ Σᵢ cᵢ² p̄ᵢ. Formula (13) now follows from (9). Formula (14) follows from Observations 2 and 3 and the following lemma of Kearns and Saul [13].

Lemma 13 (Kearns & Saul) For a Bernoulli variable X we have the following, where p is P(X = 1).

ln Z(β) ≤ pβ + (1 − 2p) β² / (4 ln((1 − p)/p))    (17)
ln Z(β) ≤ pβ + β² / (4 ln(1/p))    (18)
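The Kearns–Saul bound (17) can be sanity-checked numerically (a brute-force sketch over an arbitrary grid of p and β; p = 1/2 is excluded because (17) degenerates to the limit β²/8 there):

```python
import math

def log_Z(beta, p):
    # ln E[e^{beta X}] for X a Bernoulli(p) variable.
    return math.log(1.0 - p + p * math.exp(beta))

def ks_rhs(beta, p):
    # Right hand side of (17): p*beta + (1 - 2p) beta^2 / (4 ln((1 - p)/p)).
    return p * beta + (1.0 - 2.0 * p) * beta * beta / (4.0 * math.log((1.0 - p) / p))

worst = float("-inf")
for pi in range(1, 50):              # p in {0.01, ..., 0.49}
    p = pi / 100.0
    for bi in range(-100, 101):
        beta = bi / 10.0             # beta in [-10, 10]
        worst = max(worst, log_Z(beta, p) - ks_rhs(beta, p))
print(worst)    # stays <= 0: the bound holds everywhere on the grid
```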
5 Histogram Rule Error

Now we consider the problem of learning a histogram rule from an IID sample of pairs ⟨x, y⟩ drawn from a fixed distribution on such pairs. The problem is to find a rule f mapping x to the two-element set {0, 1} so as to minimize the expectation of the loss ℓ(f(x), y), where ℓ is a given loss function into the interval [0, 1]. In the classification setting one typically takes y ∈ {0, 1} and ℓ to be the 0-1 loss. In the decision-theoretic setting y is the hidden state, which can be arbitrarily complex, and ℓ(f(x), y) is the cost of taking action f(x) in the presence of hidden state y. In the general case (covering both settings) we assume only f(x) ∈ {0, 1} and ℓ(f(x), y) ∈ [0, 1]. We are interested in histogram rules with respect to a fixed clustering. We assume a given cluster function c mapping x to the integers from 1 to k. We consider a sample S of n pairs drawn IID from a fixed distribution on pairs ⟨x, y⟩. For any cluster index j, we define S_j to be the subset of the sample consisting of pairs ⟨x, y⟩ such that c(x) = j. We define n(j) to be |S_j|. For any cluster index j and b ∈ {0, 1} we define ℓ(j, b) and ℓ̂(j, b) as follows.

ℓ(j, b) = E[ℓ(b, y) | c(x) = j]
ℓ̂(j, b) = (1/n(j)) Σ_{⟨x, y⟩ ∈ S_j} ℓ(b, y)

If n(j) = 0 then we define ℓ̂(j, b) to be 1. We now define the rules f* and f̂ from class index to labels as follows.

f*(j) = argmin_{b ∈ {0, 1}} ℓ(j, b)        f̂(j) = argmin_{b ∈ {0, 1}} ℓ̂(j, b)

Ties are broken stochastically with each outcome equally likely, so that the rule f̂ is a random variable only partially determined by the sample S. We are interested in the generalization loss of the empirical rule f̂.

ℓ(f̂) = E[ℓ(f̂(c(x)), y)]
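The empirical rule f̂ is simple to compute; the following sketch (with a made-up clustering, 0-1 loss, and arbitrary class-conditional probabilities — none of it from the paper) fits f̂ from a sample by picking the empirically better label per cluster:

```python
import random

def fit_histogram_rule(sample, k, rng):
    # sample: list of (cluster_index, y) pairs with y in {0, 1}; 0-1 loss,
    # so the empirically better label is the majority label.
    rule = {}
    for j in range(k):
        ys = [y for (c, y) in sample if c == j]
        if not ys:
            rule[j] = rng.choice([0, 1])   # empty cluster: tie by definition
            continue
        ones = sum(ys)
        zeros = len(ys) - ones
        if ones != zeros:
            rule[j] = 1 if ones > zeros else 0
        else:
            rule[j] = rng.choice([0, 1])   # ties broken stochastically
    return rule

rng = random.Random(1)
k = 5
# Made-up distribution: cluster j is uniform; P(y = 1 | cluster j) = j / (k - 1).
def draw():
    j = rng.randrange(k)
    return (j, 1 if rng.random() < j / (k - 1) else 0)

sample = [draw() for _ in range(2000)]
rule = fit_histogram_rule(sample, k, rng)
print(rule)
```

With this sample size the fitted rule agrees with f* on the clusters where the label distribution is far from balanced, which is the regime that the concentration analysis below quantifies.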
Theorem 14 For ℓ(f̂) defined as above we have the following for positive ε, where c₁ and c₂ are absolute constants whose values can be extracted from the proof.

P(ℓ(f̂) ≤ E[ℓ(f̂)] − ε) ≤ e^{−nε²/c₁}    (19)
P(ℓ(f̂) ≥ E[ℓ(f̂)] + ε) ≤ e^{−nε²/c₂}    (20)

To prove this we need some additional terminology. For each class label j define π_j to be the probability, over the selection of a pair ⟨x, y⟩, that c(x) = j. Define Δ_j to be ℓ(j, 1 − f*(j)) − ℓ(j, f*(j)). In other words, Δ_j is the additional loss on class j when f̂ assigns the wrong label to this class. Define the random variable W_j to be 1 if f̂(j) ≠ f*(j) and 0 otherwise. The variable W_j represents the statement that the empirical rule is "wrong" (non-optimal) on class j. We can now express the generalization loss of f̂ as follows.

ℓ(f̂) = ℓ(f*) + Σ_j π_j Δ_j W_j    (21)

The variable W_j is a monotone stochastic function of the count n(j) — the probability of error declines monotonically in the count of the class. By Lemma 8 we then have that the variables W_j are negatively associated, so we can treat them as independent. To prove Theorem 14 we start with an analysis of P(W_j = 1).

Lemma 15 P(W_j = 1) ≤ 3 e^{−n π_j Δ_j² / 8}

Proof: To prove this lemma we consider the threshold x = n π_j / 2 and show the following.

P(W_j = 1) ≤ P(n(j) ≤ x) + P(W_j = 1 | n(j) ≥ x)    (22)
P(n(j) ≤ x) ≤ e^{−n π_j / 8}    (23)
P(W_j = 1 | n(j) ≥ x) ≤ 2 e^{−x Δ_j² / 2}    (24)

Formula (23) follows by the Angluin–Valiant bound [1, 7].¹ To prove (24) we note that if W_j = 1 then either ℓ̂(j, f*(j)) ≥ ℓ(j, f*(j)) + Δ_j/2 or ℓ̂(j, 1 − f*(j)) ≤ ℓ(j, 1 − f*(j)) − Δ_j/2. By a combination of Hoeffding's inequality and the union bound, the probability that one of these two conditions holds is bounded by the right hand side of (24). Lemma 15 now follows by combining (22)–(24) and noting that Δ_j ≤ 1. ∎

We now prove (19) using Lemma 15 and (10). For μ ≤ E[Σ_j π_j Δ_j W_j] we have β(μ) ≤ 0, and for β ≤ 0 we have the following.

σ²(Σ_j π_j Δ_j W_j, β) ≤ Σ_j π_j² Δ_j² P_β(W_j = 1) ≤ Σ_j π_j² Δ_j² P(W_j = 1) ≤ 3 Σ_j π_j² Δ_j² e^{−n π_j Δ_j² / 8}

Since each W_j is bounded to the interval [0, 1], we also have that σ²(π_j Δ_j W_j, β) is bounded by π_j² Δ_j² / 4.
By (10), these variance bounds yield (19) by an argument parallel to the proof of (11): using the fact that x e^{−x} is monotonically decreasing for x ≥ 1, the sum Σ_j π_j² Δ_j² e^{−n π_j Δ_j²/8} is of order 1/n, and optimizing over β ≤ 0 gives a bound of the form e^{−nε²/c₁} for an explicit absolute constant c₁. The proof of (20) is similar but uses (18): by (18) and Observation 3 the log moment generating function of Σ_j π_j Δ_j W_j is bounded by its mean times β plus a quadratic term with coefficient Σ_j π_j² Δ_j² / (4 ln(1/P(W_j = 1))); since Lemma 15 gives ln(1/P(W_j = 1)) ≥ n π_j Δ_j²/8 − ln 3, formula (20), with an explicit absolute constant c₂, follows from Observation 2.

¹The downward-deviation Angluin–Valiant bound used here follows from (9) and the observation that for a Bernoulli variable X and β ≤ 0 we have σ²(X, β) ≤ P_β(X = 1) ≤ P(X = 1).

References

[1] D. Angluin and L. Valiant. Fast probabilistic algorithms for Hamiltonian circuits and matchings. Journal of Computer and System Sciences, 18:155–193, 1979.
[2] G. Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57:33–45, 1962.
[3] S. Bernstein. The Theory of Probabilities. Gastehizdat Publishing House, Moscow, 1946.
[4] Stanley Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University, August 1998.
[5] H. Chernoff. A measure of the asymptotic efficiency of tests of a hypothesis based on the sum of observations. Annals of Mathematical Statistics, 23:493–507, 1952.
[6] Kenneth W. Church and William A. Gale. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19–54, 1991.
[7] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[8] Devdatt P. Dubhashi and Desh Ranjan. Balls and bins: A study in negative dependence. Random Structures and Algorithms, 13(2):99–124, 1998.
[9] I. J. Good. The population frequencies of species and the estimation of population parameters. Biometrika, 40:237–264, December 1953.
[10] T. Hagerup and C. Rüb. A guided tour of Chernoff bounds. Information Processing Letters, 33:305–309, 1989.
[11] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[12] Slava M. Katz. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400–401, March 1987.
[13] Michael Kearns and Lawrence Saul. Large deviation methods for approximate probabilistic inference, with rates of convergence. In UAI-98, pages 311–319. Morgan Kaufmann, 1998.
[14] Samuel Kutin. Algorithmic Stability and Ensemble-Based Learning. PhD thesis, University of Chicago, 2002.
[15] David McAllester and Robert Schapire. On the convergence rate of Good-Turing estimators. In COLT-2000, 2000.
[16] Luis E. Ortiz and Leslie Pack Kaelbling. Sampling methods for action selection in influence diagrams. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 378–385, 2000.
[17] Anand Srivastav and Peter Stangier. Integer multicommodity flows with reduced demands. In European Symposium on Algorithms, pages 360–371, 1993.
Efficient Learning Equilibrium*

Ronen I. Brafman
Computer Science Department
Ben-Gurion University
Beer-Sheva, Israel
email: brafman@cs.bgu.ac.il

Moshe Tennenholtz
Computer Science Department
Stanford University
Stanford, CA 94305
e-mail: moshe@robotics.stanford.edu

Abstract

We introduce efficient learning equilibrium (ELE), a normative approach to learning in non-cooperative settings. In ELE, the learning algorithms themselves are required to be in equilibrium. In addition, the learning algorithms arrive at a desired value after polynomial time, and deviations from a prescribed ELE become irrational after polynomial time. We prove the existence of an ELE in the perfect monitoring setting, where the desired value is the expected payoff in a Nash equilibrium. We also show that an ELE does not always exist in the imperfect monitoring case. Yet, it exists in the special case of common-interest games. Finally, we extend our results to general stochastic games.

1 Introduction

Reinforcement learning in the context of multi-agent interaction has attracted the attention of researchers in cognitive psychology, experimental economics, machine learning, artificial intelligence, and related fields for quite some time [8, 4]. Much of this work uses repeated games [3, 5] and stochastic games [10, 9, 7, 1] as models of such interactions. The literature on learning in games in game theory [5] is mainly concerned with the understanding of learning procedures that, if adopted by the different agents, will converge in the end to an equilibrium of the corresponding game. The game itself may be known; the idea is to show that simple dynamics lead to rational behavior, as prescribed by a Nash equilibrium. The learning algorithms themselves are not required to satisfy any rationality requirement; it is what they converge to, if adopted by all agents, that should be in equilibrium.
This is quite different from the classical perspective on learning in Artificial Intelligence, where the main motivation for learning stems from the fact that the model of the environment is unknown. For example, consider a Markov Decision Process (MDP). If the rewards and transition probabilities are known then one can find an optimal policy using dynamic programming. The major motivation for learning in this context stems from the fact that the model (i.e. rewards and transition probabilities) is initially unknown. When facing uncertainty about the game that is played, game-theorists appeal to a Bayesian approach, which is completely different from a learning approach; the typical assumption in that approach is that there exists a probability distribution on the possible games, which is common knowledge. The notion of equilibrium is extended to this context of games with incomplete information, and is treated as the appropriate solution concept. In this context, agents are assumed to be rational agents adopting the corresponding (Bayes-)Nash equilibrium, and learning is not an issue. In this work we present an approach to learning in games, where there is no known distribution on the possible games that may be played - an approach that appears to be much more reflective of the setting studied in machine learning and AI and in the spirit of work on on-line algorithms in computer science. Adopting the framework of repeated games, we consider a situation where the learning algorithm is a strategy for an agent in a repeated game.

*The second author's permanent address is: Faculty of Industrial Engineering and Management, Technion, Haifa 32000, Israel. This work was supported in part by the Israel Science Foundation under Grant #91/02-1. The first author is partially supported by the Paul Ivanier Center for Robotics and Production Management.
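For contrast with the game-theoretic setting, the known-model MDP case mentioned above really is solvable by dynamic programming alone; here is a minimal value-iteration sketch (the two-state MDP and all of its numbers are arbitrary illustration choices):

```python
def value_iteration(P, R, gamma=0.9, iters=500):
    # P[s][a][s2]: transition probability; R[s][a]: expected immediate reward.
    n_states, n_actions = len(P), len(P[0])
    V = [0.0] * n_states
    for _ in range(iters):
        V = [max(R[s][a] + gamma * sum(P[s][a][s2] * V[s2] for s2 in range(n_states))
                 for a in range(n_actions)) for s in range(n_states)]
    policy = [max(range(n_actions),
                  key=lambda a: R[s][a] + gamma * sum(P[s][a][s2] * V[s2]
                                                      for s2 in range(n_states)))
              for s in range(n_states)]
    return V, policy

# Two states, two actions: action 1 moves toward state 1, which pays reward 1.
P = [[[1.0, 0.0], [0.2, 0.8]],
     [[1.0, 0.0], [0.0, 1.0]]]
R = [[0.0, 0.0], [0.0, 1.0]]
V, policy = value_iteration(P, R)
print(V, policy)
```

The point of the contrast: once P and R are known, no interaction with other agents (and no equilibrium reasoning) is needed; the difficulty studied in this paper arises precisely because the game, unlike this MDP, is initially unknown and populated by self-interested agents.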
This strategy takes an action at each stage based on its previous observations, and initially has no information about the identity of the game being played. Given the above, the following are natural requirements for the learning algorithms provided to the agents:

1. Individual Rationality: The learning algorithms themselves should be in equilibrium. It should be irrational for each agent to deviate from its learning algorithm, as long as the other agents stick to their algorithms, regardless of what the actual game is.

2. Efficiency: (a) A deviation from the learning algorithm by a single agent (while the others stick to their algorithms) will become irrational (i.e. will lead to a situation where the deviator's payoff is not improved) after polynomially many stages. (b) If all agents stick to their prescribed learning algorithms then the expected payoff obtained by each agent within a polynomial number of steps will be (close to) the value it could have obtained in a Nash equilibrium, had the agents known the game from the outset.

A tuple of learning algorithms satisfying the above properties for a given class of games is said to be an Efficient Learning Equilibrium (ELE). Notice that the learning algorithms should satisfy the desired properties for every game in a given class despite the fact that the actual game played is initially unknown. Such assumptions are typical of work in machine learning. What we borrow from the game theory literature is the criterion for rational behavior in multi-agent systems. That is, we take individual rationality to be associated with the notion of equilibrium. We also take the equilibrium of the actual (initially unknown) game to be our benchmark for success; we wish to obtain a corresponding value although we initially do not know which game is played. In the remaining sections we formalize the notion of efficient learning equilibrium, and present it in a self-contained fashion.
We also prove the existence of an ELE (satisfying all of the above desired properties) for a general class of games (repeated games with perfect monitoring), and show that it does not exist for another. Our results on ELE can be generalized to the context of Pareto-ELE (where we wish to obtain maximal social surplus), and to general stochastic games. These will be mentioned only very briefly, due to space limitations. The discussion of these and other issues, as well as proofs of theorems, can be found in the full paper [2]. Technically speaking, the results we prove rely on a novel combination of the so-called folk theorems in economics, and a novel efficient algorithm for the punishment of deviators (in games which are initially unknown).

2 ELE: Definition

In this section we develop a definition of efficient learning equilibrium. For ease of exposition, our discussion will center on two-player repeated games in which the agents have an identical set of actions A. The generalization to n-player repeated games with different action sets is immediate, but requires a little more notation. The extension to stochastic games is fully discussed in the full paper [2]. A game is a model of multi-agent interaction. In a game, we have a set of players, each of whom performs some action from a given set of actions. As a result of the players' combined choices, some outcome is obtained which is described numerically in the form of a payoff vector, i.e., a vector of values, one for each of the players. A common description of a (two-player) game is as a matrix. This is called a game in strategic form. The rows of the matrix correspond to player 1's actions and the columns correspond to player 2's actions. The entry in row i and column j in the game matrix contains the rewards obtained by the players if player 1 plays his ith action and player 2 plays his jth action. In a repeated game (RG) the players play a given game G repeatedly.
We can view a repeated game, with respect to a game G, as consisting of an infinite number of iterations, at each of which the players have to select an action of the game G. After playing each iteration, the players receive the appropriate payoffs, as dictated by that game's matrix, and move to a new iteration. For ease of exposition we normalize both players' payoffs in the game G to be nonnegative reals between 0 and some positive constant Rmax. We denote this interval (or set) of possible payoffs by P = [0, Rmax]. In a perfect monitoring setting, the set of possible histories of length t is (A² × P²)ᵗ, and the set of possible histories, H, is the union of the sets of possible histories for all t ≥ 0, where (A² × P²)⁰ is the empty history. Namely, the history at time t consists of the history of actions that have been carried out so far, and the corresponding payoffs obtained by the players. Hence, in a perfect monitoring setting, a player can observe the actions selected and the payoffs obtained in the past, but does not know the game matrix to start with. In an imperfect monitoring setup, all that a player can observe following the performance of its action is the payoff it obtained and the action selected by the other player. The player cannot observe the other player's payoff. The definition of the possible histories for an agent naturally follows. Finally, in a strict imperfect monitoring setting, the agent cannot observe the other agents' payoffs or their actions. Given an RG, a policy for a player is a mapping from H, the set of possible histories, to the set of possible probability distributions over A. Hence, a policy determines the probability of choosing each particular action for each possible history. A learning algorithm can be viewed as an instance of a policy. We define the value for player 1 (resp.
2) of a policy profile (π, ρ), where π is a policy for player 1 and ρ is a policy for player 2, using the expected average reward criterion as follows. Given an RG M and a natural number T, we denote the expected T-step undiscounted average reward of player 1 (resp. 2) when the players follow the policy profile (π, ρ) by U₁(M, π, ρ, T) (resp. U₂(M, π, ρ, T)). We define Uᵢ(M, π, ρ) = lim inf_{T→∞} Uᵢ(M, π, ρ, T) for i = 1, 2. Let M denote a class of repeated games. A policy profile (π, ρ) is a learning equilibrium w.r.t. M if for all π′, ρ′, and M ∈ M, we have that U₁(M, π′, ρ) ≤ U₁(M, π, ρ) and U₂(M, π, ρ′) ≤ U₂(M, π, ρ). In this paper we mainly treat the class M of all repeated games with some fixed action profile (i.e., in which the set of actions available to all agents is fixed). However, in Section 4 we consider the class of common-interest repeated games. We shall stick to the assumption that both agents have a fixed, identical set A of k actions. Our first requirement, then, is that learning algorithms will be treated as strategies. In order to be individually rational they should be the best response for one another. Our second requirement is that they rapidly obtain a desired value. The definition of this desired value may be a parameter, the most natural candidate - though not the only candidate - being the expected payoffs in a Nash equilibrium of the game. Another appealing alternative will be discussed later. Formally, let G be a (one-shot) game, let M be the corresponding repeated game, and let n(G) be a Nash equilibrium of G. Then, denote the expected payoff of agent i in n(G) by NVᵢ(n(G)).
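For stationary (history-independent) mixed policies, the expected average reward above reduces to the one-shot bilinear form; a small helper makes this concrete (illustrative only; the matching-pennies payoffs are an arbitrary example, not from the paper):

```python
def profile_value(G1, G2, x, y):
    # Expected payoffs (U1, U2) when player 1 mixes with x and player 2 with y;
    # for stationary policies this is also the expected average reward per step.
    u1 = sum(x[i] * y[j] * G1[i][j] for i in range(len(x)) for j in range(len(y)))
    u2 = sum(x[i] * y[j] * G2[i][j] for i in range(len(x)) for j in range(len(y)))
    return u1, u2

# Matching-pennies-style payoff matrices (arbitrary numbers).
G1 = [[1.0, 0.0], [0.0, 1.0]]
G2 = [[0.0, 1.0], [1.0, 0.0]]
print(profile_value(G1, G2, [0.5, 0.5], [0.5, 0.5]))   # -> (0.5, 0.5)
```

General policies depend on the whole history, so the lim inf in the definition matters; for stationary profiles like this one the T-step average is the same for every T.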
A policy profile (π, ρ) is an efficient learning equilibrium with respect to the class of games M if for every ε > 0 and 0 < δ < 1, there exists some T > 0, where T is polynomial in 1/ε, 1/δ, and k, such that with probability of at least 1 − δ: (1) For every t ≥ T and for every repeated game M ∈ M (and its corresponding one-shot game, G), Uᵢ(M, π, ρ, t) ≥ NVᵢ(n(G)) − ε for i = 1, 2, for some Nash equilibrium n(G), and (2) if player 1 (resp. 2) deviates from π to π′ (resp. from ρ to ρ′) in iteration ℓ, then U₁(M, π′, ρ, ℓ + t) ≤ U₁(M, π, ρ, ℓ + t) + ε (resp. U₂(M, π, ρ′, ℓ + t) ≤ U₂(M, π, ρ, ℓ + t) + ε) for every t ≥ T. Notice that a deviation is considered irrational if it does not increase the expected payoff by more than ε. This is in the spirit of ε-equilibrium in game theory. This is done mainly for ease of mathematical exposition. One can replace this part of the definition, while getting similar results, with the requirement of "standard" equilibrium, where a deviation will not improve the expected payoff, and even with the notion of strict equilibrium, where a deviation will lead to a decreased payoff. This will require, however, that we restrict our attention to games where there exists a Nash equilibrium in which the agents' expected payoffs are higher than their probabilistic maximin values. The definition of ELE captures the insight of a normative approach to learning in non-cooperative settings. We assume that initially the game is unknown, but the agents will have learning algorithms that will rapidly lead to the values the players would have obtained in a Nash equilibrium had they known the game. Moreover, as mentioned earlier, the learning algorithms themselves should be in equilibrium. Notice that each agent's behavior should be the best response against the other agents' behaviors, and deviations should be irrational, regardless of what the actual (one-shot) game is.
3 Efficient Learning Equilibrium: Existence

Let M be a repeated game in which G is played at each iteration. Let A = {a₁, ..., a_k} be the set of possible actions for both agents. Finally, let there be an agreed-upon ordering over the actions. The basic idea behind the algorithm is as follows. The agents collaborate in exploring the game. This requires k² moves. Next, each agent computes a Nash equilibrium of the game and follows it. If more than one equilibrium exists, then the first one according to the natural lexicographic ordering is used.¹ If one of the agents does not collaborate in the initial exploration phase, the other agent "punishes" this agent. We will show that efficient punishment is feasible. Otherwise, the agents have chosen a Nash equilibrium, and it is irrational for them to deviate from this equilibrium unilaterally. This idea combines the so-called folk theorems in economics [6], and a technique for learning in zero-sum games introduced in [1]. Folk theorems in economics deal with a technique for obtaining some desired behavior by making a threat of employing a punishing strategy against a deviator from that behavior. When both agents are equipped with corresponding punishing strategies, the desired behavior will be obtained in equilibrium (and the threat will not be materialized, as a deviation becomes irrational). In our context, however, when an agent deviates in the exploration phase, then the game is not fully known, and hence punishment is problematic; moreover, we wish the punishment strategy to be an efficient algorithm (both computationally, and in the time until a punishment materializes and makes deviations irrational). These are addressed by having an efficient punishment algorithm that guarantees that the other agent will not obtain more than its maximin value, after polynomial time, although the game is initially unknown to the punishing agent. The latter is based on the ideas of our R-max algorithm, introduced in [1].
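The "compute a Nash equilibrium and break ties lexicographically" step can be sketched for pure-strategy equilibria as follows (an illustrative helper, not the paper's full procedure, which must also handle mixed equilibria; the coordination-game payoffs are arbitrary):

```python
def pure_nash_equilibria(G1, G2):
    # All pure profiles (i, j) from which neither player gains by deviating,
    # listed in lexicographic order.
    k = len(G1)
    eqs = []
    for i in range(k):
        for j in range(k):
            if (all(G1[i][j] >= G1[i2][j] for i2 in range(k)) and
                    all(G2[i][j] >= G2[i][j2] for j2 in range(k))):
                eqs.append((i, j))
    return eqs

# A coordination game (arbitrary payoffs), as revealed by the exploration phase.
G1 = [[2, 0], [0, 1]]
G2 = [[2, 0], [0, 1]]
eqs = pure_nash_equilibria(G1, G2)
print(eqs[0])   # lexicographically first equilibrium: (0, 0)
```

Because both agents run the same deterministic selection on the same revealed matrix, they coordinate on the same equilibrium without any further communication.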
More precisely, consider the following algorithm, termed the ELE algorithm.

The ELE algorithm: Player 1 performs action aᵢ one time after the other for k times, for all i = 1, 2, ..., k. In parallel, player 2 performs the sequence of actions (a₁, ..., a_k) k times. If both players behaved according to the above, then a Nash equilibrium of the corresponding (revealed) game is computed, and the players behave according to the corresponding strategies from that point on. If several Nash equilibria exist, one is selected based on a pre-determined lexicographic ordering.¹

If one of the players deviated from the above, we shall call this player the adversary and the other player the agent. Let G be the Rmax-sum game in which the adversary's payoff is identical to his payoff in the original game, and where the agent's payoff is Rmax minus the adversary's payoff. Let M denote the corresponding repeated game. Thus, G is a constant-sum game where the agent's goal is to minimize the adversary's payoff. Notice that some of these payoffs will be unknown (because the adversary did not cooperate in the exploration phase). The agent now plays according to the following algorithm:

Initialize: Construct the following model M′ of the repeated game M, where the game G is replaced by a game G′ in which all the entries in the game matrix are assigned the rewards (Rmax, 0).² In addition, we associate a boolean-valued variable, valued in {assumed, known}, with each joint action. This variable is initialized to the value assumed.

Repeat:
Compute and Act: Compute the optimal probabilistic maximin of G′ and execute it.
Observe and update: Following each joint action do as follows: Let a be the action the agent performed and let a′ be the adversary's action. If (a, a′) is performed for the first time, update the reward associated with (a, a′) in G′, as observed, and mark it known.

¹In particular, the agents can choose the equilibrium selected by a fixed shared algorithm.
Recall: the agent takes its payoff to be complementary to the (observed) adversary's payoff. We can show that the policy profile in which both agents use the ELE algorithm is indeed an ELE. Thus:

Theorem 1 Let M be a class of repeated games. Then, there exists an ELE w.r.t. M given perfect monitoring.

The proof of the above theorem, contained in the full paper, is non-trivial. It rests on the ability of the agent to "punish" the adversary quickly, making it irrational for the adversary to deviate from the ELE algorithm.

4 Imperfect monitoring

In the previous section we discussed the existence of an ELE in the context of the perfect monitoring setup. This result allows us to show that our concepts provide not only a normative, but also a constructive approach to learning in general non-cooperative environments. An interesting question is whether one can go beyond that and show the existence of an ELE in the imperfect monitoring case as well. Unfortunately, when considering the class M of all games, this is not possible.

Theorem 2 There exist classes of games for which an ELE does not exist given imperfect monitoring.

^2 The value 0 given to the adversary does not play an important role here.

Proof (sketch): We consider the class of all 2 x 2 games and show that an ELE does not exist for this class under imperfect monitoring. Consider the following games, where rows are player 1's actions, columns are player 2's actions, and each entry gives (player 1's payoff, player 2's payoff):

1. G1:
       (6, 5)      (0, 100)
       (-100, 0)   (1, 500)

2. G2:
       (6, 9)      (0, 1)
       (-100, 11)  (1, 10)

Notice that the payoffs obtained for a joint action in G1 and G2 are identical for player 1 and different for player 2. The only equilibrium of G1 is where both players play the second action, leading to (1, 500). The only equilibrium of G2 is where both players play the first action, leading to (6, 9). (These equilibria are unique since they are obtained by removal of strictly dominated strategies.) Now, assume that an ELE exists, and look at the corresponding policies of the players in that equilibrium.
Notice that in order to have an ELE, we must visit the entry (6, 9) most of the time if the game is G2, and visit the entry (1, 500) most of the time if the game is G1; otherwise, player 1 (resp. player 2) will not obtain a high enough value in G2 (resp. G1), since its other payoffs in G2 (resp. G1) are lower than that. Given the above, it is rational for player 2 to deviate, pretend that the game is always G1, and behave according to what the suggested equilibrium policy tells it to do in that case. Since the game might actually be G1, and player 1 cannot tell the difference, player 2 will be able to lead both players to play the second action most of the time also when the game is G2, increasing its payoff from 9 to 10 and contradicting ELE. ∎

The above result demonstrates that without additional assumptions, one cannot provide an ELE under imperfect monitoring. However, for certain restricted classes of games, we can provide an ELE under imperfect monitoring, as we now show. A game is called a common-interest game if for every joint action, all agents receive the same reward. We can show:

Theorem 3 Let M_ci be the class of common-interest repeated games in which the number of actions each agent has is a. There exists an ELE for M_ci under strict imperfect monitoring.

Proof (sketch): The agents use the following algorithm: for m rounds, each agent randomly selects an action. Following this, each agent plays the action that yielded the best reward. If multiple actions led to the best reward, the one that was used first is selected. Here m is selected so that with probability 1 - δ every joint action will be selected. Using the Chernoff bound we can choose an m that is polynomial in the size of the game (which is a^k, where k is the number of agents) and in 1/δ. ∎

This result improves previous results in this area, such as the combination of Q-learning and fictitious play used in [3].
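The uniqueness claims for G1 and G2, and the profitability of player 2's deviation, can be checked mechanically by enumerating pure-strategy Nash equilibria. A small sketch (the payoff matrices follow the uniqueness and payoff claims in the proof above; treat them as illustrative):

```python
# Payoff matrices as (player 1, player 2) pairs; rows = player 1's action.
G1 = [[(6, 5), (0, 100)], [(-100, 0), (1, 500)]]
G2 = [[(6, 9), (0, 1)], [(-100, 11), (1, 10)]]

def pure_nash(game):
    """All pure-strategy Nash equilibria of a two-player bimatrix game."""
    rows, cols = len(game), len(game[0])
    eq = []
    for i in range(rows):
        for j in range(cols):
            p1, p2 = game[i][j]
            best1 = all(game[i2][j][0] <= p1 for i2 in range(rows))
            best2 = all(game[i][j2][1] <= p2 for j2 in range(cols))
            if best1 and best2:
                eq.append((i, j))
    return eq
```

G1's unique equilibrium is (second, second) with payoffs (1, 500) and G2's is (first, first) with payoffs (6, 9); dragging play in G2 to the (second, second) entry raises player 2's payoff from 9 to 10, which is the deviation the proof exploits.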
Not only does it provably converge in polynomial time, it is also guaranteed, with probability 1 - δ, to converge to the optimal Nash equilibrium of the game rather than to an arbitrary (and possibly non-optimal) Nash equilibrium.

5 Conclusion

We defined the concept of an efficient learning equilibrium, a normative criterion for learning algorithms. We showed that given perfect monitoring a learning algorithm satisfying ELE exists, while this is not the case under imperfect monitoring. In the full paper [2] we discuss related solution concepts, such as Pareto ELE. A Pareto ELE is similar to a (Nash) ELE, except that the requirement of attaining the expected payoffs of a Nash equilibrium is replaced by that of maximizing social surplus. We show that there exists a Pareto ELE for any perfect monitoring setting, and that a Pareto ELE does not always exist in an imperfect monitoring setting. In the full paper we also extend our discussion from repeated games to infinite-horizon stochastic games under the average reward criterion. We show that under perfect monitoring, there always exists a Pareto ELE in this setting. Please refer to [2] for additional details and the full proofs.

References
[1] R. I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. In IJCAI'01, 2001.
[2] R. I. Brafman and M. Tennenholtz. Efficient learning equilibrium. Technical Report 02-06, Dept. of Computer Science, Ben-Gurion University, 2002.
[3] C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multi-agent systems. In Proc. Workshop on Multi-Agent Learning, pages 602-608, 1997.
[4] I. Erev and A. E. Roth. Predicting how people play games: Reinforcement learning in games with unique strategy equilibrium. American Economic Review, 88:848-881, 1998.
[5] D. Fudenberg and D. Levine. The Theory of Learning in Games. MIT Press, 1998.
[6] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[7] J. Hu and M. P. Wellman. Multi-agent reinforcement learning: Theoretical framework and an algorithm. In Proc. 15th ICML, 1998.
[8] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of AI Research, 4:237-285, 1996.
[9] M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proc. 11th ICML, pages 157-163, 1994.
[10] L. S. Shapley. Stochastic games. In Proc. Nat. Acad. Sci. USA, volume 39, pages 1095-1100, 1953.
| 2002 | 15 | 2,158 |
Learning Sparse Multiscale Image Representations

Phil Sallee
Department of Computer Science and Center for Neuroscience, UC Davis
1544 Newton Ct., Davis, CA 95616
sallee@cs.ucdavis.edu

Bruno A. Olshausen
Department of Psychology and Center for Neuroscience, UC Davis
1544 Newton Ct., Davis, CA 95616
baolshausen@ucdavis.edu

Abstract

We describe a method for learning sparse multiscale image representations using a sparse prior distribution over the basis function coefficients. The prior consists of a mixture of a Gaussian and a Dirac delta function, and thus encourages coefficients to have exact zero values. Coefficients for an image are computed by sampling from the resulting posterior distribution with a Gibbs sampler. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Denoising using the learned image model is demonstrated for some standard test images, with results that compare favorably with other denoising methods.

1 Introduction

Increasing interest has been given to the use of overcomplete representations for natural scenes, where the number of basis functions exceeds the number of image pixels. One reason for this is that overcompleteness allows for more stable, and thus arguably more meaningful, representations in which common image features can be well described by only a few coefficients, regardless of where they are located in the image, how they are rotated, or how large they are [8, 6]. This may translate into gains in coding efficiency for image compression, and improved accuracy for tasks such as denoising. Overcomplete representations have been shown to reduce Gibbs-like artifacts common to thresholding methods employing critically sampled wavelets [4, 3, 9]. Common wavelet denoising approaches generally apply either a hard- or soft-thresholding function to coefficients which have been obtained by filtering an image with the basis functions.
One can view these thresholding methods as a means of selecting coefficients for an image based on an assumed sparse prior on the coefficients [1, 2]. This statistical framework provides a principled means of selecting an appropriate thresholding function. When such thresholding methods are applied to overcomplete representations, however, problems arise due to the dependencies between coefficients. Choosing optimal thresholds for a non-orthogonal basis is still an unsolved problem. In one approach, orthogonal subgroups of an overcomplete shift-invariant expansion are thresholded separately and then the results are combined by averaging [4, 3]. In addition, if the coefficients are obtained by filtering the noisy image, there will be correlations in the noise that should be taken into account. Here we address two major issues regarding the use of overcomplete representations for images. First, current methods make use of various overcomplete wavelet bases. What is the optimal basis to use for a specific class of data? To help answer this question, we describe how to adapt an overcomplete wavelet basis to the statistics of natural images. Secondly, we address the problem of properly inferring the coefficients for an image when the basis is overcomplete. We avoid problems associated with thresholding by using the wavelet basis as part of a generative model, rather than a simple filtering mechanism. We then sample the coefficients from the resulting posterior distribution by simulating a Markov process known as a Gibbs-sampler. Our previous work in this area made use of a prior distribution peaked at zero and tapering away smoothly to obtain sparse coefficients [7]. However, we encountered a number of significant limitations with this method. First, the smooth priors do not force inactive coefficients to have values exactly equal to zero, resulting in decreased coding efficiency. 
Efficiency may be partially regained by thresholding the near-zero coefficients, but due to the non-orthogonality of the representation this will produce sub-optimal results, as previously mentioned. The maximum a posteriori (MAP) estimate also introduced biases in the learning process. These effects can be partially compensated for by renormalizing the basis functions, but other parameters of the model, such as those of the prior, could not be learned. Finally, the gradient ascent method has convergence problems due to the power spectrum of natural images and the overcompleteness of the representation. Here we resolve these problems by using a prior distribution which is composed of a mixture of a Gaussian and a Dirac delta function, so that inactive coefficients are encouraged to have exact zero values. Similar models employing a mixture of two Gaussians have been used for classifying wavelet coefficients into active (high variance) and inactive (low variance) states [2, 5]. Such a classification should be even more advantageous if the basis is overcomplete. A method for performing Gibbs sampling for the Delta-plus-Gaussian prior in the context of an image pyramid is derived, and demonstrated to be effective at obtaining very sparse representations which match the form of the imposed prior. Biases in the learning are overcome by sampling instead of using a MAP estimate.

2 Wavelet image model

Each observed image I is assumed to be generated by a linear superposition of basis functions, which are the columns of an N by M weight matrix W, with the addition of Gaussian noise ν:

I = W a + ν,    (1)

where I is an N-element vector of image pixels and a is an M-element vector of basis coefficients.
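Equation (1) together with the Delta-plus-Gaussian prior defines a complete generative model, so sampling an image from it is straightforward. A toy sketch (random W, a plain Bernoulli stand-in for the binary-state prior, and all sizes and parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: N image pixels, M basis functions (illustrative values).
N, M = 16, 32
W = rng.normal(size=(N, M))      # basis functions as columns of W
lam_N = 100.0                    # noise precision, var(nu) = 1/lam_N
lam_a = np.full(M, 4.0)          # Gaussian precision of active coefficients
p_active = 0.1                   # P(s_i = 1): stand-in for the state prior

# Draw binary states, then coefficients: exactly zero when inactive,
# Gaussian with precision lam_a when active (delta-plus-Gaussian prior).
s = rng.random(M) < p_active
a = np.where(s, rng.normal(size=M) / np.sqrt(lam_a), 0.0)

# I = W a + nu, with Gaussian i.i.d. noise of variance 1/lam_N.
I = W @ a + rng.normal(size=N) / np.sqrt(lam_N)
```

Note the inactive coefficients are exact zeros, which is precisely the coding-efficiency property the smooth priors of the earlier work could not deliver.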
In order to achieve a practical implementation which can be seamlessly scaled to any size image, we assume that the basis function matrix W is composed of a small set of spatially localized mother wavelet functions ψ_i(x, y), which are shifted to each position in the image and rescaled by factors of two. Unlike typical wavelet transforms which use a single 1-D mother wavelet function to generate 2-D functions by inner product, we do not constrain the functions ψ_i(x, y) to be 1-D separable. The functions ψ_i(x, y) provide an efficient way to perform computations involving W by means of convolutions. Basis functions of coarser scales are produced by upsampling the ψ_i(x, y) functions and blurring with a low-pass filter φ(x, y), also known as the scaling function. The image model above may be re-expressed to make these parameters explicit:

I(x, y) = g_0(x, y) + ν(x, y)    (2)

g_l(x, y) = { (g_{l+1}(x, y) ↑2) ∗ φ(x, y) + Σ_i a_i^l(x, y) ∗ ψ_i(x, y),   l < L−1
            { a^l(x, y),                                                    l = L−1    (3)

where the coefficients a_i^l(x, y) are indexed by their position (x, y), band (i) and level of resolution (l) within the pyramid (l = 0 is the highest resolution level). The symbol ∗ denotes convolution, and ↑2 denotes upsampling by two, defined as

f(x, y) ↑2 ≡ { f(x/2, y/2),   x even and y even
             { 0,             otherwise    (4)

The probability of generating an image I, given coefficients a and parameters θ, assuming Gaussian i.i.d. noise ν (with variance 1/λ_N), is

P(I|a, θ) = (1/Z_{λ_N}) exp( −(λ_N/2) |I − W a|² ).    (5)

The prior probability over each coefficient a_i is modeled as a mixture of a Gaussian distribution and a Dirac delta function δ(a_i). A binary state variable s_i for each coefficient indicates whether the coefficient a_i is active (any real value) or inactive (zero). The probability of a coefficient vector a given a binary state vector s and model parameters θ = {W, λ_N, λ_a, Λ_s} is defined as

P(a|s, θ) = Π_i P(a_i|s_i, θ)    (6)

P(a_i|s_i, θ) = { δ(a_i),                                   s_i = 0
               { (1/Z_{λ_{a_i}}) exp( −(λ_{a_i}/2) a_i² ),  s_i = 1    (7)

where λ_a is a vector with elements λ_{a_i}.
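Equations (2)-(4) amount to a recursive "upsample, blur, add detail" synthesis. A minimal numpy sketch of the ↑2 operator and one level of the synthesis (filter shapes and values are illustrative, not the paper's learned filters):

```python
import numpy as np
from scipy.signal import convolve2d

def upsample2(f):
    """f(x, y) -> f(x/2, y/2) at even coordinates, 0 elsewhere (eq. 4)."""
    out = np.zeros((2 * f.shape[0], 2 * f.shape[1]))
    out[::2, ::2] = f
    return out

def synthesize(g_coarser, phi, detail_coeffs, psis):
    """One level of eq. (3): upsample the coarser level, blur with the
    scaling function phi, and add each band's coefficient map convolved
    with its mother wavelet psi_i."""
    g = convolve2d(upsample2(g_coarser), phi, mode="same")
    for a_l_i, psi in zip(detail_coeffs, psis):
        g += convolve2d(a_l_i, psi, mode="same")
    return g
```

Applying `synthesize` from the coarsest level l = L−1 down to l = 0 and adding noise reproduces equation (2).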
The probability of a binary state s is

P(s|θ) = (1/Z_{Λ_s}) exp( −(1/2) sᵀ Λ_s s ).    (8)

The matrix Λ_s is assumed to be diagonal (for now), with nonzero elements λ_{s_i}. The form of the prior is shown graphically in figure 1. Note that the parameters W, λ_a, and Λ_s are themselves parameterized by a much smaller set of parameters. Only the mother wavelet function ψ_i(x, y) and a single λ_{s_i} and λ_{a_i} parameter need to be learned for each wavelet band, since we are assuming translation invariance. The total image probability is obtained by marginalizing over the possible coefficient and state values:

P(I|θ) = Σ_s P(s|θ) ∫ P(I|a, θ) P(a|s, θ) da    (9)

3 Sampling and Inference

We show how to sample from the posterior distribution P(a, s|I, θ) for an image I using a Gibbs sampler.

[Figure 1: Prior distribution (dashed), and histogram of samples taken from the posterior distribution (solid) plotted for a single coefficient. The y-axis is plotted on a log scale.]

For each coefficient and state variable pair (a_i, s_i), we sample from the posterior distribution conditioned on the image and the remaining coefficients a_{¬i} (we write a_{¬i} for all coefficients other than a_i, and similarly s_{¬i}): P(a_i, s_i | I, a_{¬i}, s_{¬i}, θ). After all coefficients (and state variables) have been updated, this process is repeated until the system has reached equilibrium. To infer an optimal representation a for an image I (for coding or denoising purposes), we can either average a number of samples to estimate the posterior mean, or, with a minor adjustment, locate a posterior maximum by raising the posterior distribution to a power (1/T) and annealing T to zero. To sample from P(a_i, s_i | I, a_{¬i}, s_{¬i}, θ), we first draw a value for s_i from P(s_i | I, a_{¬i}, s_{¬i}, θ), then draw a_i from P(a_i | s_i, I, a_{¬i}, s_{¬i}, θ).
For P(s_i | I, a_{¬i}, s_{¬i}, θ) we have:

P(s_i | I, a_{¬i}, s_{¬i}, θ) ∝ P(s_i | s_{¬i}, θ) ∫ P(I | a_i, a_{¬i}, θ) P(a_i | s_i, θ) da_i    (10)

where

P(s_i | s_{¬i}, θ) = (1/Z_{s_i|s_{¬i}}) exp( −(λ_{s_i}/2) s_i ),    (11)

P(I | a_i, a_{¬i}, θ) = (1/Z_{λ_{n_i}}) exp( −(λ_{n_i}/2) (a_i − b_i)² ),    (12)

and

λ_{n_i} = λ_N |W_i|²,    b_i = W_i · (I − W a_{i=0}) / |W_i|².    (13)

The notation W_i denotes column i of matrix W, |W_i| is the length of vector W_i, and a_{i=0} denotes the current coefficient vector a except with a_i set to zero. Thus, b_i denotes the value for a_i which minimizes the reconstruction error (while holding a_{¬i} constant). Since s_i can only take on two values, we can compute equation 10 for s_i = 0 and s_i = 1, integrating over the possible coefficient values. This yields the following sigmoidal activation rule as a function of b_i:

P(s_i = 1 | I, a_{¬i}, s_{¬i}, θ) = 1 / (1 + exp( −β_i (b_i² − t_i) ))    (14)

where

β_i = (1/2) λ_{n_i}² / (λ_{n_i} + λ_{a_i}),    t_i = ((λ_{n_i} + λ_{a_i}) / λ_{n_i}²) ( λ_{s_i} − log( λ_{a_i} / (λ_{n_i} + λ_{a_i}) ) ).    (15)

For P(a_i | s_i, I, a_{¬i}, s_{¬i}, θ) we have:

P(a_i | s_i, I, a_{¬i}, s_{¬i}, θ) = { δ(a_i),                                                       s_i = 0
                                     { N( λ_{n_i} b_i / (λ_{n_i} + λ_{a_i}), 1/(λ_{n_i} + λ_{a_i}) ),  s_i = 1    (16)

To perform this procedure on a wavelet pyramid, the inner product computations necessary to compute b_i can be performed efficiently by means of convolutions with the mother wavelet functions ψ_i(x, y). The λ_N, λ_{s_i} and λ_{a_i} parameters may be adapted to a specific image during the inference process by use of the update rules described in the next section. This method was found to be particularly useful for denoising, when the variance of the noise was assumed to be unknown.
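The per-coefficient Gibbs draw of equations (13)-(16) can be sketched directly; the following is illustrative (not the authors' code), with one coefficient treated at a time and the pyramid convolutions replaced by explicit inner products:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_update(Wi, I, Wa_rest, lam_N, lam_a, lam_s):
    """One Gibbs draw of (s_i, a_i) given everything else (eqs. 13-16).

    Wi:      column i of W
    Wa_rest: the reconstruction W a with a_i set to zero
    """
    lam_n = lam_N * (Wi @ Wi)                         # eq. 13
    b = Wi @ (I - Wa_rest) / (Wi @ Wi)                # eq. 13
    beta = 0.5 * lam_n**2 / (lam_n + lam_a)           # eq. 15
    t = (lam_n + lam_a) / lam_n**2 * (lam_s - np.log(lam_a / (lam_n + lam_a)))
    p_active = 1.0 / (1.0 + np.exp(-beta * (b**2 - t)))   # eq. 14
    if rng.random() >= p_active:
        return 0, 0.0                                 # spike: exact zero
    mean = lam_n * b / (lam_n + lam_a)                # eq. 16
    return 1, rng.normal(mean, np.sqrt(1.0 / (lam_n + lam_a)))
```

Sweeping this update over all (i, l) positions, and averaging the resulting samples, gives the posterior-mean estimate used for denoising in section 5.2.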
4 Learning

Our objective for learning is to adjust the parameters θ to maximize the average log-likelihood of images under the model:

θ̂ = arg max_θ ⟨log P(I|θ)⟩    (17)

The parameters are updated by gradient ascent on this objective, which results in the following update rules:

Δλ_{s_i} ∝ (1/2) ⟨ 1/(1 + exp((1/2) λ_{s_i})) − s_i ⟩_{P(a,s|I,θ)}    (18)

Δλ_{a_i} ∝ (1/2) ⟨ s_i (1/λ_{a_i} − a_i²) ⟩_{P(a,s|I,θ)}    (19)

Δψ_i(x, y) ∝ λ_N ⟨ ⟨ e(x, y) ⋆ a_i(x, y) ⟩_{P(a,s|I,θ)} ⟩    (20)

where ⋆ denotes cross-correlation and e(x, y) is the reconstruction error computed by e = I − W a. Only a center portion of the cross-correlation, with the extent of the ψ_i(x, y) functions, is computed to update the parameters. The outer brackets denote averaging over many images. The notation ⟨⟩_{P()} denotes averaging the quantity in brackets while sampling from the specified distribution.

5 Results

The image model was trained on 22 512x512-pixel grayscale natural images (not whitened). These images were generated from color images taken from a larger database of photographic images.^1 Smaller images (64x64 pixels) were selected randomly for sampling during training. To simplify the learning procedure, sampling was performed on a single spatial frequency scale. Each image was bandpass filtered for an octave range before sampling from the posterior for that scale. The λ_{a_i} and λ_{s_i} parameters were constrained to be the same for all orientation bands and were adapted over many images with λ_N fixed. Shown in figure 2 are the learned ψ_i(x, y) which parameterize W, with their corresponding 2D spectra. Three different degrees of overcompleteness were tested.

^1 Images were downloaded from philip.greenspun.com with permission from Philip Greenspun.

[Figure 2: (a) Mother wavelet functions ψ_i(x, y) adapted for 2, 4 and 6 bands, and corresponding power spectra showing power as a function of spatial frequency in the 2D Fourier plane. (b) Equivalent mother wavelets and spectra for the 4-band Steerable Pyramid.]
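The update in equation (18) has a simple fixed point: it vanishes when the posterior activation frequency ⟨s_i⟩ matches the prior's activation probability 1/(1 + e^{λ_si/2}), so the stationary λ_si for a given activity level can be read off in closed form. A small illustrative check (helper names are ours, not the paper's):

```python
import math

def lam_s_fixed_point(mean_s):
    """Solve 1/(1 + exp(lam_s/2)) = mean_s for lam_s, i.e. the stationary
    point of the eq. (18) update: lam_s = 2*log(1/mean_s - 1)."""
    return 2.0 * math.log(1.0 / mean_s - 1.0)

def delta_lam_s(lam_s, mean_s):
    """Right-hand side of eq. (18), up to the learning rate."""
    return 0.5 * (1.0 / (1.0 + math.exp(0.5 * lam_s)) - mean_s)

# Sparse codes have small <s_i>, which forces a large lam_s; an activity
# level of 1/2 corresponds to lam_s = 0 (no preference for either state).
lam = lam_s_fixed_point(0.05)
```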
The results are shown for 2-band, 4-band and 6-band wavelet bases. As the degree of overcompleteness increases, the resulting functions show tighter tuning to orientation. The basis filters for a 4-band Steerable Pyramid [10] are also shown for comparison, to illustrate the similarity to the learned functions.

[Figure 3: Sparsity comparison between the learned basis (top) and the steerable basis (bottom). The y-axis represents the signal-to-noise ratio (SNR) in dB achieved by each method for a given percentage of nonzeros.]

5.1 Sparsity

We evaluated the sparsity of the representations obtained with the four-band learned functions and the sampling method against those obtained using the same sampling method and the four-band Steerable Pyramid filters [10]. In order to explore the SNR curves for each basis, we used a variety of values for λ_s so as to obtain different levels of sparsity. The same images were used for both bases. The results are given in figure 3. Each dot on the line represents a different value of λ_s. The results were similar, with the learned basis yielding slightly higher SNR (about 0.5 dB) for the same number of active coefficients.

5.2 Denoising

We evaluated our inference method and learned basis functions by denoising images containing known amounts of additive i.i.d. Gaussian noise. Denoising was accomplished by averaging samples taken from the posterior distribution for each image via Gibbs sampling to approximate the posterior mean. Gibbs sampling was performed on a four-level pyramid using the 6-band learned wavelet basis, and also using the 6-band Steerable basis. The λ_N, λ_{s_i} and λ_{a_i} parameters were adapted to each noisy image during sampling for blind denoising, in which the noise variance was assumed to be unknown.
We compared these results to the wiener2 function in MATLAB, and also to BayesCore [9], a Bayesian method for computing an optimal soft-thresholding, or coring, function for a generalized Laplacian prior. For wiener2, the best neighborhood size was used for each image. Table 1 gives the SNR results for each method when applied to some standard test images for three different levels of i.i.d. Gaussian noise with standard deviation σ. Figure 4 shows a cropped subregion of the results for the "Einstein" image with σ = 10.

6 Summary and Conclusions

We have shown that a wavelet basis and a mixture prior composed of a Dirac delta function and a Gaussian can be adapted to natural images, resulting in very sparse image representations. The resulting basis is very similar to a Steerable basis, both in appearance and in the sparsity of the resulting image representations. It appears that the Steerable basis may be nearly optimal for producing sparse representations of natural scenes. Denoising results indicate that using a sparse prior and an inference method that properly accounts for the non-orthogonality of the representation may yield a significant improvement over wavelet coring methods that use filtered coefficients. More work needs to be done to determine whether the coding gains achieved are due to the choice of prior versus the basis or inference/estimation method used.

Acknowledgments

Supported by NIMH R29-MH057921. Phil Sallee's work was also supported in part by a United States Department of Education Government Assistance in Areas of National Need (DOE-GAANN) grant #P200A980307.
Image      σ      noisy    wiener2   BayesCore S6   D+G S6   D+G L6
Einstein   10     12.40    15.80     16.36          16.47    16.19
           20      6.40    12.61     13.44          13.80    13.79
           30      2.89    10.95     11.81          12.28    12.29
Lena       10     13.61    19.05     19.91          20.37    20.21
           20      7.59    15.51     16.88          17.46    17.54
           30      4.07    13.25     14.99          15.48    15.55
Goldhill   10     13.86    17.56     18.14          18.10    17.90
           20      7.83    14.32     15.18          15.41    15.41
Fruit      10     16.25    21.87     22.09          22.78    22.38
           20     10.24    18.15     18.97          19.61    19.42
           30      6.70    15.97     17.21          17.72    17.66
           30      4.28    12.64     13.61          13.92    13.95

Table 1: SNR values (in dB) for noisy and denoised images contaminated with additive i.i.d. Gaussian noise of std. dev. σ. "D+G" means Delta-plus-Gaussian prior, "S6" means 6-band Steerable basis, and "L6" means 6-band learned basis. (The Goldhill σ = 30 row appears last above.)

[Figure 4: Denoising example. A cropped subregion of the Einstein image and denoised images for each noise reduction method, for noise std. dev. σ = 10. Panels: original; noisy, SNR = 12.3983; wiener2, SNR = 15.8033; BayesCore steer6, SNR = 16.3591; D+G steer6, SNR = 16.4714; D+G learned6, SNR = 16.1939.]

References
[1] Abramovich F, Sapatinas T, Silverman B (1996) Wavelet thresholding via a Bayesian approach, preprint.
[2] Chipman H, Kolaczyk E, McCulloch R (1997) Adaptive Bayesian wavelet shrinkage, J. Amer. Statist. Assoc. 92(440): 1413-1421.
[3] Chang SG, Yu B, Vetterli M (2000) Spatially adaptive wavelet thresholding with context modelling for image denoising. IEEE Trans. on Image Proc., 9(9): 1522-1531.
[4] Coifman RR, Donoho DL (1995) Translation-invariant de-noising, in Wavelets and Statistics, A. Antoniadis and G. Oppenheim, Eds. Berlin, Germany: Springer-Verlag.
[5] Crouse MS, Nowak RD, Baraniuk RG (1998) Wavelet-based statistical signal processing using hidden Markov models, IEEE Trans. Signal Proc., 46(4): 886-902.
[6] Freeman WT, Adelson EH (1991) The design and use of steerable filters. IEEE Trans. Patt. Anal. and Machine Intell., 13(9): 891-906.
[7] Olshausen BA, Sallee P, Lewicki MS (2001) Learning sparse image codes using a wavelet pyramid architecture, Adv. in Neural Inf. Proc. Sys., 13: 887-893.
[8] Simoncelli EP, Freeman WT, Adelson EH, Heeger DJ (1992) Shiftable multiscale transforms, IEEE Transactions on Information Theory, 38(2): 587-607.
[9] Simoncelli EP, Adelson EH (1996) Noise removal via Bayesian wavelet coring, presented at: 3rd IEEE International Conf. on Image Proc., Lausanne, Switzerland.
[10] Simoncelli EP, Freeman WT (1995) The Steerable Pyramid: a flexible architecture for multi-scale derivative computation, IEEE Int. Conf. on Image Processing.
| 2002 | 150 | 2,159 |
The Effect of Singularities in a Learning Machine when the True Parameters Do Not Lie on Such Singularities

Sumio Watanabe
Precision and Intelligence Laboratory, Tokyo Institute of Technology
4259 Nagatsuta, Midori-ku, Yokohama, 226-8503 Japan
E-mail: swatanab@pi.titech.ac.jp

Shun-ichi Amari
Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute
Hirosawa, 2-1, Wako-shi, Saitama, 351-0198, Japan
E-mail: amari@brain.riken.go.jp

Abstract

Many learning machines with hidden variables used in information science have singularities in their parameter spaces. At singularities, the Fisher information matrix becomes degenerate, so that the learning theory of regular statistical models does not hold. Recently, it was proven that, if the true parameter is contained in the singularities, then the coefficient of the Bayes generalization error is equal to the pole of the zeta function of the Kullback information. In this paper, under the condition that the true parameter is almost but not contained in the singularities, we show two results. (1) If the dimension of the parameter from inputs to hidden units is not larger than three, then there exists a region of true parameters where the generalization error is larger than that of regular models; otherwise, for any true parameter, the generalization error is smaller than that of regular models. (2) The symmetry between the generalization error and the training error does not hold in singular models in general.

1 Introduction

Many learning machines with hidden parts, such as multi-layer perceptrons [8], Gaussian mixtures [2], Boltzmann machines, and Bayesian networks with latent variables [4], are non-identifiable statistical models. In such learning machines, the mapping from the parameter to the probability distribution is not one-to-one. Moreover, they have complex singularities.
In this paper, a parameter w of a parametric probability density function p(x|w) is called a singularity if and only if det I(w) = 0, where I(w) is the Fisher information matrix at w. If a learning machine has singularities, then neither the maximum likelihood estimator nor the Bayes a posteriori distribution converges to the normal distribution in general [1][5]. Recently, despite the mathematical difficulty of such learning machines, the asymptotic Bayes generalization error has been clarified using an algebraic geometrical method [5][6]. The Bayes generalization error G(n), which is defined as the average Kullback distance from the true distribution to the Bayes predictive distribution, is equal to

G(n) = λ/n + o(1/n)

where n is the number of training samples and (−λ) is the rational number equal to the largest pole of the zeta function of the Kullback information and the prior [6][7]. If the true parameter is not a singular point, then λ = d/2, where d is the dimension of the parameter space, whereas if the set of true parameters consists of singularities, then λ is different from d/2 [6][8]. In almost all learning machines, singularities of the parameter space correspond to smaller models contained in the parametric model. However, in practical applications, the true distribution is seldom contained completely in a finite model, and it often happens that the true parameter is almost but not completely contained in the singularities. In this paper, in order to clarify the effect of singularities when the true parameter lies in the neighborhood of singularities, we propose a new scaling method by which the Kullback distance from the singularities to the true distribution is equal to c/n, where n is the number of training samples and c is a controlling parameter. This scaling method, which is often used in comparing the powers of statistical hypothesis testing algorithms, enables us to clarify the effect of singularities. We show two results.
(1) If the number of parameters from inputs to hidden units is not larger than three, then there exists c > 0 such that the generalization error is larger than that of the corresponding regular model. Otherwise, for an arbitrary c ≥ 0, the generalization error is made smaller by the singularities. (2) The symmetry between the generalization error and the training error does not hold in non-identifiable learning machines in general.

2 A Singular Model

Since singularities in learning machines with hidden variables have quite complex geometrical structures in general, treating them in a general manner requires advanced methods of modern algebraic geometry [6]. In this paper, we study a simple hierarchical model. Even in this simple model, a universal phenomenon caused by singularities can be found. Let us consider a learning problem:

Learner:  p(y|x, a, b) = (1/√(2π)) exp( −(1/2) (y − a f(b, x))² ),    (1)

True:     q(y|x) = (1/√(2π)) exp( −(1/2) (y − (a_0/√n) f(b_0, x))² ),    (2)

where y ∈ R¹ is an output and x ∈ R^M is an input with probability distribution q(x). The parameter space is {(a, b) ∈ R¹ × R^N}. The Kullback distance from q(y|x) to p(y|x, a, b) at a = 0 is equal to (1/(2n)) a_0² E_x[f(b_0, x)²], where E_x denotes the expectation over x. If f(0, x) ≡ 0, then every point in {a = 0} ∪ {b = 0} is a singularity. We assume that the a priori distribution φ(a, b) is a C¹-class function and that ψ(b) ≡ φ(0, b) has a compact support. Let D_n = {(x_i, y_i); i = 1, 2, ..., n} be a set of training samples independently drawn from q(x)q(y|x). The Bayes a posteriori distribution p(a, b|D_n) and the Bayes predictive distribution p(y|x, D_n) are respectively defined by

p(a, b|D_n) = (1/C_n) φ(a, b) Π_{i=1}^n p(y_i|x_i, a, b),

p(y|x, D_n) = ∫ p(y|x, a, b) p(a, b|D_n) da db,

where C_n is a normalizing constant.
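For two unit-variance Gaussians the Kullback distance is half the squared mean difference, so the distance from the true distribution to the model at a = 0 reduces to (1/(2n)) a_0² E_x[f(b_0, x)²]. A small numerical sanity check, with an illustrative choice f(b, x) = tanh(b·x) that is not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setup: f(b, x) = tanh(b . x), q(x) standard normal.
n = 100
a0, b0 = 1.5, np.array([0.7, -0.3])
x = rng.normal(size=(5000, 2))            # samples from q(x)
f0 = np.tanh(x @ b0)                      # f(b0, x_j) for each sample

# KL( N(mu1, 1) || N(mu2, 1) ) = (mu1 - mu2)^2 / 2, applied per x and
# averaged; here mu1 = (a0/sqrt(n)) f(b0, x) and mu2 = 0 (model at a = 0).
kl = np.mean(0.5 * (a0 / np.sqrt(n) * f0) ** 2)
closed_form = a0**2 * np.mean(f0**2) / (2 * n)
# The two expressions agree exactly on the same sample of x, and both
# scale as c/n, which is the scaling used throughout the paper.
```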
The generalization error G(n) and the training error T(n) are respectively defined by

Generalization error:  G(n) = E[ log( q(y_{n+1}|x_{n+1}) / p(y_{n+1}|x_{n+1}, D_n) ) ],

Training error:  T(n) = E[ (1/n) Σ_{k=1}^n log( q(y_k|x_k) / p(y_k|x_k, D_n) ) ],

where E denotes the expectation over all sets of training samples D_n and the testing sample (x_{n+1}, y_{n+1}). If the learning machine is a regular statistical model, then both G(n) = d/(2n) + o(1/n) and T(n) = −d/(2n) + o(1/n) hold, where d is the dimension of the parameter space; hence the coefficient d does not depend on the true parameter. In this paper, we show that this property does not hold in a singular learning machine. We assume that the learning machine satisfies the condition

f(b, x) = Σ_{j=1}^J f_j(b) e_j(x)    (3)

where {e_j(x)} is a set of orthonormal functions, E_x[e_i(x)e_j(x)] = δ_ij. It then follows that ‖f(b)‖² ≡ Σ_{j=1}^J f_j(b)² = E_x[f(b, x)²]. We have the following theorem.

Theorem 1 The Bayes generalization and training errors can be asymptotically expanded as

G(n) = λ(a_0, b_0)/(2n) + o(1/n),    T(n) = μ(a_0, b_0)/(2n) + o(1/n).

Here λ(a_0, b_0) and μ(a_0, b_0) are constants (independent of n) defined by

λ(a_0, b_0) = 1 + a_0² ‖f(b_0)‖² − E_g[ Σ_{j=1}^J a_0 f_j(b_0) (1/Z(g)) ∂Z/∂g_j ],

μ(a_0, b_0) = λ(a_0, b_0) − E_g[ Σ_{j=1}^J 2 g_j (1/Z(g)) ∂Z/∂g_j ],

where g = (g_j) is the J-dimensional Gaussian random vector whose mean and covariance matrix are respectively zero and the identity, E_g denotes the expectation over g, and

Z(g) = ∫ exp[ (1/(2‖f(b)‖²)) { Σ_{j=1}^J (g_j + a_0 f_j(b_0)) f_j(b) }² ] (ψ(b)/‖f(b)‖) db.

Proof of Theorem 1. We use the rescaled parameter α = √n a and define the average ⟨S(α, b)⟩ of a function S(α, b) by

⟨S(α, b)⟩ = ∫ exp(−L(α, b)) S(α, b) φ(α/√n, b) dα db / ∫ exp(−L(α, b)) φ(α/√n, b) dα db

where we use the notations d(α, b, x) = α f(b, x) − a_0 f(b_0, x) and

L(α, b) = (1/n) Σ_{i=1}^n L_i(α, b),    L_i(α, b) = (1/2) d(α, b, x_i)² − √n ε_i d(α, b, x_i).

Here ε_i ≡ y_i − a_0 f(b_0, x_i)/√n is a sample from the standard normal distribution.
The Bayes generalization and training errors are respectively equal to

$G(n) = E\!\left[-\log\Big\langle \exp\Big\{-\dfrac{L_{n+1}(\alpha,b)}{n}\Big\}\Big\rangle\right], \qquad T(n) = E\!\left[-\dfrac{1}{n}\sum_{k=1}^n \log\Big\langle \exp\Big\{-\dfrac{L_k(\alpha,b)}{n}\Big\}\Big\rangle\right].$

As $n \to \infty$, the central limit theorem ensures the convergences, in probability and in law respectively,

$\dfrac{1}{n}\sum_{i=1}^n e_j(x_i)\, e_k(x_i) \to \delta_{jk}, \qquad \dfrac{1}{\sqrt{n}}\sum_{i=1}^n \epsilon_i\, e_j(x_i) \to g_j,$

where $g = (g_j)$ is subject to the normal distribution whose mean and covariance matrix are respectively equal to zero and the identity. Then, by using $\log(1-t) = -t + t^2/2 + o(t^2)$ for small $t$, it follows that

$\lim_{n\to\infty} 2n\, G(n) = \sum_{j=1}^J E_g\!\left[\Big\{\dfrac{1}{Z}\dfrac{\partial Z}{\partial g_j} - a_0 f_j(b_0)\Big\}^2\right],$

$\lim_{n\to\infty} 2n\, T(n) = \lim_{n\to\infty} 2n\, G(n) - 2\, E_g\!\left[\sum_{j=1}^J g_j\, \dfrac{1}{Z}\dfrac{\partial Z}{\partial g_j}\right],$

where $E_g$ denotes the expectation value over the random variable $g$ and

$Z(g) = \int \exp\!\left[-\dfrac{1}{2}\sum_{j=1}^J \alpha^2 f_j(b)^2 + \sum_{j=1}^J \alpha f_j(b)\,(g_j + a_0 f_j(b_0))\right] \psi(b)\, d\alpha\, db.$

By using the identity

$\Big\{\dfrac{1}{Z}\dfrac{\partial Z}{\partial g_j}\Big\}^2 = \dfrac{1}{Z}\dfrac{\partial^2 Z}{\partial g_j^2} - \dfrac{\partial}{\partial g_j}\Big\{\dfrac{1}{Z}\dfrac{\partial Z}{\partial g_j}\Big\},$

and $E_g[(\partial/\partial g_j) f(g)] = E_g[g_j f(g)]$ for an arbitrary function $f(g)$, we obtain Theorem 1. (End of proof of Theorem 1.)

Theorem 1 shows that if $a_0 = 0$ then $\lambda(a_0,b_0) = 1$, which coincides with the general theory for the case when the true parameter is contained in the singularities [6]. In fact, if $a_0 = 0$, the zeta function of the Kullback information,

$\zeta(z) = \int a^{2z}\|b\|^{2z}\, \varphi(a,b)\, da\, db,$

has its largest pole at $z = -1/2$. The new point of this paper is that the learning coefficient $\lambda(a_0,b_0)$ for $a_0 \neq 0$, $b_0 \neq 0$ is obtained. Unfortunately, it cannot be represented by any simple function.

3 The Effect of Singularities

In order to study the effect of singularities, we adopt the simple learning machine

$a f(b,x) = \sum_{j=1}^N a\, b_j\, e_j(x)$  (4)

where $a \in \mathbb{R}^1$, $b \in \mathbb{R}^N$, $x \in \mathbb{R}^M$ $(N > 1)$. We also assume that $\psi(b)$ depends only on the norm $\|b\|$, that is to say, $\psi(b)$ can be rewritten as $\psi(\|b\|)$. In this learning machine, if the true regression function is $y = 0$, then the set of true parameters is $\{(a,b);\ a = 0 \text{ or } b = 0\}$. Remark.
By using the re-parameterization $w_j = a b_j$, the learning machine eq.(4) becomes

$p(y|x,w) = \dfrac{1}{\sqrt{2\pi}} \exp\!\left(-\dfrac{1}{2}\Big(y - \sum_{j=1}^N w_j e_j(x)\Big)^2\right).$

This learner is a regular statistical model, hence both $G(n) = N/(2n) + o(1/n)$ and $T(n) = -N/(2n) + o(1/n)$ hold. Therefore, by comparing $\lambda(a_0,b_0)$ and $-\mu(a_0,b_0)$ with $N$, let us clarify the effect of singularities.

Theorem 2  Let us consider the learning machine and the true distribution given by eq.(1) and eq.(2), restricted as in eq.(4). If $N \geq 2$, then the coefficients of the Bayes generalization and training errors are respectively given by

$\lambda(a_0,b_0) = 1 + E_g\!\left[(a_0^2\|b_0\|^2 + a_0\, b_0 \cdot g)\, \dfrac{Y_N(g)}{Y_{N-2}(g)}\right]$  (5)

$\mu(a_0,b_0) = 1 - 2N + E_g\!\left[(a_0^2\|b_0\|^2 + 3 a_0\, b_0 \cdot g + 2\|g\|^2)\, \dfrac{Y_N(g)}{Y_{N-2}(g)}\right]$  (6)

where

$Y_N(g) = \int_0^{\pi/2} d\theta\, \sin^N\!\theta\, \exp\!\left(-\dfrac{1}{2}\|a_0 b_0 + g\|^2 \sin^2\!\theta\right).$

Proof of Theorem 2. We introduce the general polar coordinates $b = (r, \Omega)$. The function $Z(g)$ in Theorem 1 is given by

$Z(g) = \int dr \int d\Omega\, \exp\!\left\{\dfrac{\big((g + a_0 b_0)\cdot\Omega\big)^2}{2}\right\} \psi(r)\, r^{N-2}.$

Since $Z(g)$ is independent of the direction of $g + a_0 b_0$, we can assume $g + a_0 b_0 = \|g + a_0 b_0\| \times (1, 0, \ldots, 0)$ without loss of generality. By representing $\Omega = b/r$ as

$b_i/r = \sin\theta_1 \cdots \sin\theta_{i-1}\cos\theta_i\ (1 \leq i \leq N-1), \qquad b_N/r = \sin\theta_1 \cdots \sin\theta_{N-1},$

we obtain

$Z(g) = \mathrm{const.} \int_0^{\pi/2} \sin^{N-2}\!\theta_1\, \exp\!\left(\dfrac{\|a_0 b_0 + g\|^2}{2}\cos^2\!\theta_1\right) d\theta_1,$

which completes the proof. (End of proof of Theorem 2.)

Unfortunately, the function $\lambda(a_0,b_0)$ in eq.(5) cannot be represented by any classical analytic function. Figure 1 shows the value of $\lambda(a_0,b_0)$ given by eq.(5), computed numerically, for the cases $N = 2, 3, \ldots, 6$. The horizontal and vertical axes respectively show $|a_0|\|b_0\|$ and $\lambda(a_0,b_0)/N$.
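The curves of Figure 1 can be reproduced numerically from eq.(5). Below is a minimal sketch (our illustration, not the authors' code): it estimates the expectation over $g$ by Monte Carlo and evaluates $Y_N(g)$ by a midpoint quadrature rule; the sample size, the quadrature grid, and the test values of $a_0$ and $b_0$ are arbitrary choices.

```python
import numpy as np

def Y(N, c, thetas):
    # Y_N(g) from Theorem 2; it depends on g only through c = ||a0*b0 + g||.
    # Midpoint-rule quadrature over theta in (0, pi/2).
    s = np.sin(thetas)[None, :]
    vals = s ** N * np.exp(-0.5 * (c[:, None] ** 2) * s ** 2)
    return vals.mean(axis=1) * (np.pi / 2)

def lam(a0, b0, N, n_samples=10000, seed=0):
    # Monte Carlo estimate of lambda(a0, b0) in eq.(5):
    # 1 + E_g[(a0^2 ||b0||^2 + a0 b0.g) * Y_N(g) / Y_{N-2}(g)], g ~ N(0, I_N).
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_samples, N))
    c = np.linalg.norm(a0 * b0 + g, axis=1)
    thetas = (np.arange(300) + 0.5) * (np.pi / 2) / 300
    ratio = Y(N, c, thetas) / Y(N - 2, c, thetas)
    term = (a0 ** 2 * (b0 @ b0) + a0 * (g @ b0)) * ratio
    return 1.0 + term.mean()

b0 = np.array([10.0, 0.0, 0.0, 0.0])
print(lam(0.0, b0, N=4))  # exactly 1 when a0 = 0 (the bracketed factor vanishes)
print(lam(1.0, b0, N=4))  # approaches N = 4 for large |a0| ||b0||
```

The two printed values illustrate the two regimes discussed in the text: at the singularity $a_0 = 0$ the coefficient is 1, and far from the singularities it approaches the dimension $N$.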
[Figure 1: Coefficients of the generalization error, $\lambda(a_0,b_0)/N$, plotted against $a_0\|b_0\|$ for $N = 2, \ldots, 6$.]

[Figure 2: Coefficients of the training error, $\mu(a_0,b_0)/N$, plotted against $a_0\|b_0\|$ for $N = 2, \ldots, 6$.]

The generalization error is smaller than that of the corresponding regular statistical model if and only if $\lambda(a_0,b_0)/N < 1$. For all cases $2 \leq N \leq 6$, $\lambda(a_0,b_0)$ converges to the dimension $N$ as $|a_0|\|b_0\| \to \infty$. For $N = 2$ and $N = 3$, $\lambda(a_0,b_0)$ becomes larger than $N$ if the true parameter mismatches the singularities: when $N = 2$, $\lambda(a_0,b_0) > N$ in the region $|a_0|\|b_0\| > 2.8$; when $N = 3$, $\lambda(a_0,b_0) > N$ only in the interval $3.8 < |a_0|\|b_0\| < 6.8$. On the other hand, if $N \geq 4$, the learning coefficient $\lambda(a_0,b_0)$ is always smaller than $N$, even if the true parameter is not contained in the singularities. If the dimension of the parameter is large, then singularities make the Bayes generalization error smaller than that of regular statistical models, independently of the position of the true parameter. This result can be analyzed more precisely by an asymptotic expansion.

Theorem 3  The coefficients can be asymptotically expanded as $|a_0|\|b_0\| \to \infty$:

$\lambda(a_0,b_0) = N - \dfrac{(N-1)(N-3)}{a_0^2\|b_0\|^2} + o\!\left(\dfrac{1}{a_0^2\|b_0\|^2}\right),$

$\mu(a_0,b_0) = -N + \dfrac{(N-1)^2}{a_0^2\|b_0\|^2} + o\!\left(\dfrac{1}{a_0^2\|b_0\|^2}\right).$

In this theorem, $a_0^2\|b_0\|^2/2$ is equal to the Kullback distance from the singularities to the true distribution. It should be emphasized that the symmetry relation $\lambda(a_0,b_0) + \mu(a_0,b_0) = 0$ does not hold near the singularities. In the generalization error, the coefficient of $1/(a_0^2\|b_0\|^2)$ is positive if $N = 2$, negative if $N \geq 4$, and equal to zero if $N = 3$.
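The sign pattern and the broken symmetry claimed in Theorem 3 can be checked by a short arithmetic sketch over the leading-order terms (the value of $x = a_0^2\|b_0\|^2$ is an arbitrary choice for illustration):

```python
# Leading-order terms of Theorem 3 at an arbitrary x = a0^2 * ||b0||^2.
x = 25.0
for N in (2, 3, 4, 5, 6):
    lam = N - (N - 1) * (N - 3) / x   # generalization coefficient
    mu = -N + (N - 1) ** 2 / x        # training coefficient
    # lam + mu = 2(N-1)/x > 0: the symmetry lam + mu = 0 fails near singularities.
    print(N, lam, mu, lam + mu)
```

Note that the $1/x$ correction to $\lambda$ is $+1/x$ for $N=2$, $0$ for $N=3$, and negative for $N \geq 4$, matching the behavior seen in Figure 1.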
Proof of Theorem 3. The function $Y_N(g)$ in Theorem 2 can be rewritten, with $s = \|a_0 b_0 + g\|$, as

$Y_N(g) = \dfrac{1}{s^{N+1}} \int_0^{s} \dfrac{x^N}{\sqrt{1 - x^2/s^2}}\, \exp\!\left(-\dfrac{x^2}{2}\right) dx.$

Then, by using

$\dfrac{1}{\sqrt{1 - x^2/s^2}} \cong 1 + \dfrac{x^2}{2 s^2},$

we have the asymptotic expansion

$\lambda(a_0,b_0) = 1 + E_g\!\left[(a_0^2\|b_0\|^2 + a_0\, b_0\cdot g)\, \dfrac{\dfrac{C_N}{s^{N+1}} + \dfrac{C_{N+2}}{2 s^{N+3}}}{\dfrac{C_{N-2}}{s^{N-1}} + \dfrac{C_N}{2 s^{N+1}}}\right],$

where $C_N = 2^{(N-1)/2}\,\Gamma\!\left(\frac{N+1}{2}\right)$. The training error can be obtained in the same way. (End of proof of Theorem 3.)

4 Discussion

Let us briefly discuss three points. Firstly, in this paper we compared a simple layered model with a regular statistical model. If we employ the linear learner

$y = \sum_{j=1}^N b_j\, e_j(x),$

then we can expect more precise statistical estimation by turning it into the hierarchical model

$y = \sum_{j=1}^N a\, b_j\, e_j(x),$

provided $N \geq 4$ and Bayesian estimation is applied. Secondly, Bayesian model selection is usually carried out by minimizing the stochastic complexity

$F(D_n) = -\log \int \prod_{i=1}^n p(y_i|x_i,a,b)\, \varphi(a,b)\, da\, db.$

Let us consider the model selection problem between the model $y = 0$ and the model in eq.(1). If the Kullback distance from the singularities to the true parameter is equal to $c/n$ and $n$ is sufficiently large, then for an arbitrary $c$, $y = 0$ is selected with probability one. Theoretically speaking, this fact shows that the minimum stochastic complexity criterion is not equivalent to the minimum generalization error criterion. Lastly, we have shown that if the true parameter is in the neighborhood of singularities, then the symmetry between the generalization error and the training error does not hold; therefore, the generalization error cannot be estimated from the training error by the conventional method. These three points are important problems for future study.

5 Conclusion

We clarified the effect of singularities when the true parameter mismatches them: singularities make the Bayes generalization error small if the dimension of the inputs to the hidden units is large.
We expect that this research will be a base for clarifying the reason why neural information processing systems need hierarchical structures. This work was supported by the Ministry of Education, Science, Sports, and Culture in Japan, Grant-in-aid for scientific research 12680370. References [1] Amari, S., Park, H., and Ozeki, T. (2002) Geometrical singularities in the neuromanifold of multilayer perceptrons. Advances in Neural Information Processing Systems, Vol.14. [2] Hartigan, J.A. (1985) A failure of likelihood asymptotics for normal mixtures. Proceedings of the Berkeley Conference in Honor of J. Neyman and J. Kiefer, Vol.2, pp.807-810. [3] Hironaka, H. (1964) Resolution of singularities of an algebraic variety over a field of characteristic zero. Annals of Mathematics, 79, 109-326. [4] Rusakov, D., Geiger, D. (2002) Asymptotic model selection for naive Bayesian networks. Proc. of UAI-02. [5] Watanabe, S. (1999) Algebraic analysis for singular statistical estimation. Lecture Notes in Computer Science, 1720, 39-50. [6] Watanabe, S. (2001) Algebraic analysis for nonidentifiable learning machines. Neural Computation, 13(4), pp.899-933. [7] Watanabe, S. (2001) Algebraic information geometry for learning machines with singularities. Advances in Neural Information Processing Systems, Vol.13, 329-336. [8] Watanabe, S. (2001) Algebraic geometrical methods for hierarchical learning machines. Neural Networks, Vol.14, No.8, 1049-1060. [9] Watanabe, S., & Amari, S.-I. (2003) Learning coefficients of layered models when the true distribution mismatches the singularities. Neural Computation, to appear.
Learning about Multiple Objects in Images: Factorial Learning without Factorial Search Christopher K. I. Williams and Michalis K. Titsias School of Informatics, University of Edinburgh, Edinburgh EH1 2QL, UK c.k.i.williams@ed.ac.uk M.Titsias@sms.ed.ac.uk Abstract We consider data which are images containing views of multiple objects. Our task is to learn about each of the objects present in the images. This task can be approached as a factorial learning problem, where each image must be explained by instantiating a model for each of the objects present with the correct instantiation parameters. A major problem with learning a factorial model is that as the number of objects increases, there is a combinatorial explosion of the number of configurations that need to be considered. We develop a method to extract object models sequentially from the data by making use of a robust statistical method, thus avoiding the combinatorial explosion, and present results showing successful extraction of objects from real images. 1 Introduction In this paper we consider data which are images containing views of multiple objects. Our task is to learn about each of the objects present in the images. Previous work (discussed in more detail below) has approached this as a factorial learning problem, where each image must be explained by instantiating a model for each of the objects present with the correct instantiation parameters. A serious concern with the factorial learning problem is that as the number of objects increases, there is a combinatorial explosion of the number of configurations that need to be considered. Suppose there are $L$ possible objects, and that there are $J$ possible values that the instantiation parameters of any one object can take on; we will then need to consider $J^L$ combinations to explain any image. In contrast, in our approach we find one object at a time, thus avoiding the combinatorial explosion.
In unsupervised learning we aim to identify regularities in data such as images. One fairly simple unsupervised learning model is clustering, which can be viewed as a mixture model where there are a finite number of types of object, and data is produced by choosing one of these objects and then generating the data conditional on this choice. As a model of objects in images, standard clustering approaches are limited as they do not take into account the variability that can arise due to the transformations that can take place, described by instantiation parameters such as translation, rotation etc. of the object. Suppose that there are $T$ different instantiation parameters; then a single object will sweep out a $T$-dimensional manifold in the image space. Learning about objects taking this regularity into account has been called transformation-invariant clustering by Frey and Jojic (1999, 2002). However, this work is still limited to finding a single object in each image. A more general model for data is one where the observations are explained by multiple causes; in our example this will be that in each image there are multiple objects. The approach of Frey and Jojic (1999, 2002) can be extended to this case by explicitly considering the simultaneous instantiation of all objects (Jojic and Frey, 2001). However, this gives rise to a large search problem over the instantiation parameters of all objects simultaneously, and approximations such as variational methods are needed to carry out the inference. In our method, by contrast, we discover the objects one at a time using a robust statistical method. Sequential object discovery is possible because multiple objects combine by occluding each other. The general problem of factorial learning has a longer history; see, for example, Barlow (1989), Hinton and Zemel (1994), and Ghahramani (1995).
However, Frey and Jojic made the important step for image analysis problems of using explicit transformations of object models, which allows the incorporation of prior knowledge about these transformations and leads to good interpretability of the results. A related line of research is that concerned with discovering part decompositions of objects. Lee and Seung (1999) described a non-negative matrix factorization method addressing this problem, although their work does not deal with parts undergoing transformations. There is also work on learning parts by Shams and von der Malsburg (1999), which is compared and contrasted with our work in section 4. The structure of the remainder of this paper is as follows. In section 2 we describe the model, first for images containing only a single object (section 2.1) and then for images containing multiple objects (section 2.2). In section 3 we present experimental results for up to five objects appearing against stationary and non-stationary backgrounds. We conclude with a discussion in section 4.

2 Theory

2.1 Learning one object

In this section we consider the problem of learning about one object which can appear at various locations in an image. The object is in the foreground, with a background behind it. This background can either be fixed for all training images, or vary from image to image. The two key issues that we must deal with are (i) the notion of a pixel being modelled as foreground or background, and (ii) the problem of transformations of the object. We consider first the foreground/background issue. Consider an image $x$ containing $P$ pixels, arranged as a length-$P$ vector. Our aim is to learn appearance-based representations of the foreground $f$ and the background $b$. As the object will be smaller than $P$ pixels, we will need to specify which pixels belong to the background and which to the foreground; this is achieved by a vector of binary latent variables $s$, one for each pixel. Each binary variable in $s$ is drawn independently from the corresponding entry in a vector of probabilities $\pi$. For pixel $i$, if $\pi_i \simeq 0$, then the pixel will be ascribed to the background with high probability, and if $\pi_i \simeq 1$, it will be ascribed to the foreground with high probability. We sometimes refer to $\pi$ as a mask. Each pixel $x_i$ is modelled by a mixture distribution:

$p(x_i) = \pi_i\, N(x_i;\ f_i, \sigma_f^2) + (1 - \pi_i)\, N(x_i;\ b_i, \sigma_b^2)$  (1)

where $\sigma_f^2$ and $\sigma_b^2$ are respectively the foreground and background variances. Thus, ignoring transformations, we obtain $p(x) = \prod_{i=1}^P \big[\pi_i\, N(x_i;\ f_i, \sigma_f^2) + (1 - \pi_i)\, N(x_i;\ b_i, \sigma_b^2)\big]$.

The second issue that we must deal with is that of transformations. Below we consider only translations, although the ideas can be extended to deal with other transformations such as scaling and rotation (see e.g. Jojic and Frey (2001)). Each possible transformation (e.g. translations in units of one pixel) is represented by a corresponding transformation matrix, so that matrix $T_j$ corresponds to transformation $j$ and $T_j f$ is the transformed foreground model. In our implementation the translations use wrap-around, so that each $T_j$ is in fact a permutation matrix. The semantics of foreground and background mean that the mask must also be transformed, so that we obtain

$p(x \mid j) = \prod_{i=1}^P \big[(T_j\pi)_i\, N(x_i;\ (T_j f)_i, \sigma_f^2) + (1 - (T_j\pi)_i)\, N(x_i;\ b_i, \sigma_b^2)\big]$  (2)

Notice that the foreground and mask are transformed by $T_j$, but the background is not. In order for equation 2 to make sense, each element of $T_j\pi$ must be a valid probability (lying in $[0,1]$). This is certainly true for the case when $T_j$ is a permutation matrix (and can be true more generally). To complete the model we place a prior probability on each transformation $j$; this is taken to be uniform over all $J$ possibilities so that $P(j) = 1/J$. Given a data set $\{x^n\}$, $n = 1, \ldots, N$, we can adapt the parameters $\theta = (f, b, \pi, \sigma_f^2, \sigma_b^2)$ by maximizing the log likelihood $L(\theta) = \sum_n \log p(x^n \mid \theta)$. This can be achieved through using the EM algorithm to handle the missing data, which is the transformation $j$ and the binary variables $s$. The model developed in this section is similar to Jojic and Frey (2001), except that our mask has probabilistic semantics, which means that an exact M-step can be used as opposed to the generalized M-step used by Jojic and Frey.

2.2 Coping with multiple objects

If there are $L$ foreground objects, one natural approach is to consider models with $L$ latent variables, each taking on the values of the $J$ possible transformations. We also need to account for object occlusions. By assuming that the objects can arbitrarily occlude one another (and this occlusion ordering can change in different images), there are $L!$ possible arrangements. A model that accounts for multiple objects is described in Jojic and Frey (2001), where the occlusion ordering of the objects is taken as fixed, since they assume that each object is ascribed to a global layer. A full search over the parameters (assuming an unknown occlusion ordering for each image) must consider $J^L L!$ possibilities, which scales exponentially with $L$. An alternative is to consider approximations; Ghahramani (1995) suggests mean field and Gibbs sampling approximations and Jojic and Frey (2001) use approximate variational inference. Our goal is to find one object at a time in the images. We describe two methods for doing this.
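The single-object generative model of section 2.1 is easy to simulate. The sketch below is our illustration, not the authors' code; the image size, appearance values, noise levels and the use of `np.roll` for wrap-around translation are all assumptions made for the example. It composes a translated foreground over a fixed background through per-pixel Bernoulli mask variables:

```python
import numpy as np

rng = np.random.default_rng(1)
H = W = 16
b = rng.uniform(0.0, 1.0, size=(H, W))       # background appearance
f = np.zeros((H, W)); f[4:9, 4:9] = 0.8      # foreground appearance: a bright square
pi = np.zeros((H, W)); pi[4:9, 4:9] = 0.95   # mask probabilities

def sample_image(dy, dx, sigma_f=0.02, sigma_b=0.02):
    # A translation with wrap-around is a permutation of the pixel vector;
    # on the 2-D grid it is a cyclic shift of f and pi (b is NOT shifted).
    tf = np.roll(f, (dy, dx), axis=(0, 1))
    tpi = np.roll(pi, (dy, dx), axis=(0, 1))
    s = rng.random((H, W)) < tpi             # binary latent variables, one per pixel
    return np.where(s, tf + sigma_f * rng.standard_normal((H, W)),
                       b + sigma_b * rng.standard_normal((H, W)))

x = sample_image(dy=3, dx=-2)                # square now near rows 7:12, cols 2:7
```

Because the background is not shifted, only the object sweeps across the image as the transformation varies, which is exactly the asymmetry eq. (2) encodes.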
The first uses random initializations, and on different runs can find different objects; we denote this RANDOM STARTS. The second method (denoted GREEDY) removes objects found in earlier iterations and looks for as-yet-undiscovered objects in what remains. For both methods we need to adapt the model presented in section 2.1. The problem is that occlusion can occur of both the foreground and the background. For a foreground pixel, a different object to the one being modelled may be interposed between the camera and our object, thus perturbing the pixel value. This can be modelled with a mixture distribution as

$p_f(x_i) = \alpha_f\, N(x_i;\ f_i, \sigma_f^2) + (1 - \alpha_f)\, U(x_i),$

where $\alpha_f$ is the fraction of times a foreground pixel is not occluded and the robustifying component $U(x_i)$ is a uniform distribution common for all image pixels. Such robust models have been used for image matching tasks by a number of authors, notably Black and colleagues (Black and Jepson, 1996). Similarly for the background, a different object from the one being modelled may be interposed between the background and the camera, so that we again have a mixture model

$p_b(x_i) = \alpha_b\, N(x_i;\ b_i, \sigma_b^2) + (1 - \alpha_b)\, U(x_i),$

with similar semantics for the parameter $\alpha_b$. (If the background has high variability then this robustness may not be required, but it will be in the case that the background is fixed while the objects move.)

2.2.1 Finding the first object

With this robust model we can now apply the RANDOM STARTS algorithm by maximizing the likelihood of a set of images with respect to the model using the EM algorithm. The expected complete data log likelihood is given by

$\mathcal{Q}(\theta) = \sum_{n=1}^N \sum_j Q(j_n{=}j)\, \Big\{ (s^n_j)^{\mathrm T}\big[\log T_j\pi + \log p_f(x^n)\big] + (\mathbf{1} - s^n_j)^{\mathrm T}\big[\log(\mathbf{1} - T_j\pi) + \log p_b(x^n)\big] \Big\}$  (3)

where the logarithms are taken element-wise, $p_f(x^n)$ and $p_b(x^n)$ denote the vectors of robust foreground and background densities, and $\mathbf{1}$ denotes the $P$-dimensional vector containing ones. The expected values of several latent variables are as follows: $Q(j_n{=}j)$ is the transformation responsibility; $s^n_j$ is a $P$-dimensional vector associated with the binary variables $s$, with each element storing the probability

$s^n_{j,i} = \dfrac{(T_j\pi)_i\, p_f(x^n_i)}{(T_j\pi)_i\, p_f(x^n_i) + (1 - (T_j\pi)_i)\, p_b(x^n_i)};$

$r^n_j$ is the vector containing the robust responsibilities for the foreground on image $x^n$ using transformation $j$, so that its $i$th element is equal to

$r^n_{j,i} = \dfrac{\alpha_f\, N(x^n_i;\ (T_j f)_i, \sigma_f^2)}{\alpha_f\, N(x^n_i;\ (T_j f)_i, \sigma_f^2) + (1 - \alpha_f)\, U(x^n_i)},$

and similarly the vector $r^n_b$ defines the robust responsibilities of the background. Note that the latter responsibilities do not depend on the transformation, since the background is not transformed. All of the above expected values of the missing variables are estimated in the E-step using the current parameter values. In the M-step we maximise the $\mathcal{Q}$ function with respect to the model parameters $f$, $b$, $\pi$, $\sigma_f^2$ and $\sigma_b^2$. We do not have space to show all of the updates, but for example

$f = \Big[\sum_{n} \sum_j Q(j_n{=}j)\, T_j^{\mathrm T}\big(s^n_j \odot r^n_j \odot x^n\big)\Big] \oslash \Big[\sum_{n} \sum_j Q(j_n{=}j)\, T_j^{\mathrm T}\big(s^n_j \odot r^n_j\big)\Big]$  (4)

where $\odot$ denotes the element-wise product and $\oslash$ the element-wise division between two vectors. This update is quite intuitive. Consider the case when $Q(j_n{=}j^*) = 1$ for some $j^*$ and $0$ otherwise. For pixels which are ascribed to the foreground (i.e. $s^n_{j^*,i} \simeq 1$ and $r^n_{j^*,i} \simeq 1$), the values in $x^n$ are transformed by $T_{j^*}^{\mathrm T}$ (which is $T_{j^*}^{-1}$, as the transformations are permutation matrices). This removes the effect of the transformation and thus allows the foreground pixels found in each training image to be averaged to produce $f$. On different runs we hope to discover different objects. However, this is rather inefficient, as the basins of attraction for the different objects may be very different in size given the initialization. Thus we describe the GREEDY algorithm next.
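For pure translations, the E-step over transformations amounts to scoring every cyclic shift of the foreground and mask against the image. A minimal non-robust sketch (our illustration, not the authors' code) evaluates the log of eq. (2) for every shift and normalizes; the uniform prior $P(j) = 1/J$ cancels in the normalization:

```python
import numpy as np

def transformation_posterior(x, f, pi, b, sigma_f, sigma_b):
    # Posterior Q(j | x) over all H*W cyclic shifts for the (non-robust)
    # single-object model of eq. (2).
    H, W = x.shape
    logs = np.empty((H, W))
    for dy in range(H):
        for dx in range(W):
            tf = np.roll(f, (dy, dx), axis=(0, 1))      # transformed foreground
            tpi = np.roll(pi, (dy, dx), axis=(0, 1))    # transformed mask
            pf = np.exp(-0.5 * (x - tf) ** 2 / sigma_f ** 2) / np.sqrt(2 * np.pi * sigma_f ** 2)
            pb = np.exp(-0.5 * (x - b) ** 2 / sigma_b ** 2) / np.sqrt(2 * np.pi * sigma_b ** 2)
            logs[dy, dx] = np.log(tpi * pf + (1 - tpi) * pb).sum()
    logs -= logs.max()                                  # stabilize before exponentiating
    q = np.exp(logs)
    return q / q.sum()
```

An image composed at a known shift should put essentially all posterior mass on that shift, which is the behavior exploited by footnote 1 in the GREEDY algorithm below.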
2.2.2 The GREEDY algorithm

We assume that we have run the RANDOM STARTS algorithm and have learned a foreground model $f^1$ and mask $\pi^1$. We wish to remove from consideration the pixels of the learned object (in each training image) in order to find a new object by applying the same algorithm. For each example image we can use the responsibilities to find the most likely transformation $j^*$.1 Now note that the transformed mask $T_{j^*}\pi^1$ obtains values close to 1 for all object pixels; however, some of these pixels might be occluded by other not-yet-discovered objects, and we do not wish to remove those from consideration. Thus we consider the vector $\nu^n = T_{j^*}\pi^1 \odot r^n_{j^*}$. According to the semantics of the robust foreground responsibilities $r^n_{j^*}$, $\nu^n$ will roughly give values close to 1 only for the non-occluded object pixels. To further explain all pixels having $\nu^n_i \simeq 0$ we introduce a new foreground model $f^2$ and mask $\pi^2$; then for each transformation $j_2$ of model 2, we obtain

$p(x^n_i) = \nu^n_i\, N\big(x^n_i;\ (T_{j^*} f^1)_i, \sigma_1^2\big) + (1 - \nu^n_i)\big[(T_{j_2}\pi^2)_i\, p_{f^2}(x^n_i) + (1 - (T_{j_2}\pi^2)_i)\, p_b(x^n_i)\big]$  (5)

Note that we have dropped the robustifying component $U(x_i)$ from model 1, since the parameters of this object have been learned. By summing out over the possible transformations we can maximize the likelihood with respect to $f^2$, $\pi^2$, $\sigma_2^2$, and $\sigma_b^2$. The above expression says that each image pixel $x_i$ is modelled by a three-component mixture distribution: the pixel can belong to the first object with probability $\nu^n_i$; it does not belong to the first object and belongs to the second one with probability $(1 - \nu^n_i)(T_{j_2}\pi^2)_i$; while with the remaining probability it is background. Thus, the search for a new object involves only the pixels that are not accounted for by model 1 (i.e. those for which $\nu^n_i \simeq 0$). This process can be continued, so that after finding a second model, the remaining background is searched for a third model, and so on. The formula for $k$ objects becomes

$p(x^n_i) = \sum_{l=1}^{k-1}\Big[\prod_{m=1}^{l-1}(1 - \nu^{m,n}_i)\Big]\, \nu^{l,n}_i\, N\big(x^n_i;\ (T_{j^*_l} f^l)_i, \sigma_l^2\big) + \Big[\prod_{l=1}^{k-1}(1 - \nu^{l,n}_i)\Big]\big[(T_{j_k}\pi^k)_i\, p_{f^k}(x^n_i) + (1 - (T_{j_k}\pi^k)_i)\, p_b(x^n_i)\big]$  (6)

This is a $(k+1)$-component mixture at each pixel, where the $(k+1)$th "object" is the background. If $l = 1$, then the term $\prod_{m=1}^{l-1}(1 - \nu^{m,n}_i)$ is defined to be equal to 1. Note that all parameters of the first $k-1$ components are kept fixed (learned in previous stages). We always deal with only one object at a time and thus with one transformation latent variable. This approach can be viewed as approximating the full factorial model by sequentially learning each factor (object). A crucial point is that the algorithm is not assumed to extract layers in images, ordered from the nearest layer to the furthest one. In fact, in the next section we show a two-object example of a video sequence where we learn the occluded object first. Space limitations do not permit us to show the $\mathcal{Q}$ function and updates for the parameters, but these are very similar to those of RANDOM STARTS, since we also learn only the parameters of one object plus the background while keeping fixed all the parameters of previously discovered objects.

1It would be possible to make a "softer" version of this, where the transformations are weighted by their posterior probabilities, but in practice we have found that these probabilities are usually 1 for the best-fitting transformation and 0 otherwise after learning $f$ and $\pi$.

[Figure 1: Learning two objects against a stationary background. Panel (a) displays some frames of the training images, and (b) shows, for each of the two objects, the learned mask and the element-wise product of foreground and mask, together with the background found by the GREEDY algorithm.]

3 Experiments

We describe three experiments extracting objects from images including up to five movable objects, using stationary as well as non-stationary backgrounds. In these experiments the uniform distribution $U(x_i)$ is based on the maximum and minimum pixel values of all training image pixels. In all the experiments reported below, $\alpha_f$ and $\alpha_b$ were fixed to a common value. Also, we assume that the total number $L$ of objects that appear in the images is known; thus the GREEDY algorithm terminates when we discover the $L$th object.
The learning algorithm also requires the initialization of the foreground and background appearances $f$ and $b$, the mask $\pi$ and the variances $\sigma_f^2$ and $\sigma_b^2$. Each element of the mask is initialised to 0.5, the background appearance to the mean of the training images, and the variances $\sigma_f^2$ and $\sigma_b^2$ are initialized to equal large values (larger than the overall variance of all image pixels). For the foreground appearance we compute the pixelwise mean of the training images and add independent Gaussian noise with equal variances at each pixel, where the variance is set to be large enough so that the range of pixel values found in the training images can be explored. In the GREEDY algorithm, each time we add a new object $k$ the parameters $f^k$, $\pi^k$, $b$, $\sigma_k^2$ and $\sigma_b^2$ are initialized as described above. This means that the background is reset to the mean of the training images; this is done to avoid local maxima, since the background found by considering only some of the objects in the images can be very different from the true background. Figure 1 illustrates the detection of two objects against a stationary background.2 Some examples of the 44 training images (excluding the black border) are shown in Figure 1(a) and results are shown in Figure 1(b). For both objects we show both the learned mask and the element-wise product of the learned foreground and mask. In most runs the person with the lighter shirt (Jojic) is discovered first, even though he is occluded and the person with the striped shirt (Frey) is not. Video sequences of the raw data and the extracted objects can be viewed at http://www.dai.ed.ac.uk/homes/s0129556/lmo.html. In Figure 2 five objects are learned against a stationary background, using a dataset of 7 images. Notice the large amount of occlusion in some of the training images shown in Figure 2(a). Results are shown in Figure 2(b) for the GREEDY algorithm.

2These data are used in Jojic and Frey (2001). We thank N. Jojic and B. Frey for making available these data via http://www.psi.toronto.edu/layers.html.

[Figure 2: Learning five objects against a stationary background. Panel (a) displays some of the training images and (b) shows the objects (mask and foreground element-wise mask for each) learned by the GREEDY algorithm.]

[Figure 3: Two objects are learned from a set of images with non-stationary background. Panel (a) displays some examples of the training images, and (b) shows the objects found by the GREEDY algorithm.]

In Figure 3 we consider learning objects against a non-stationary background.
Actually, three different backgrounds were used, as can be seen in the example images shown in Figure 3(a); the training set contained images against each of the three backgrounds. Using the RANDOM STARTS algorithm the CD was found in 9 out of 10 runs. The results with the GREEDY algorithm are shown in Figure 3(b). The background found is approximately the average of the three backgrounds. Overall we conclude that the RANDOM STARTS algorithm is not very effective at finding multiple objects in images; it needs many runs from different initial conditions, and sometimes fails entirely to find all objects. In contrast, the GREEDY algorithm is very effective. 4 Discussion Shams and von der Malsburg (1999) obtained candidate parts by matching images in a pairwise fashion, trying to identify corresponding regions in the two images. These candidate image patches were then clustered to compensate for the effect of occlusions. We make four observations: (i) instead of directly learning the models, they match each image against all others (with complexity quadratic in the number of training images $N$), as compared to the linear scaling with $N$ in our method; (ii) in their method the background must be removed, otherwise it would give rise to large match regions; (iii) they do not define a probabilistic model for the images (with all its attendant benefits); (iv) their data (although based on realistic CAD-type models) is synthetic, and designed to focus learning on shape-related features by eliminating complicating factors such as background, surface markings etc. In our work the model for each pixel is a mixture of Gaussians. There is some previous work on pixelwise mixtures of Gaussians (see, e.g., Rowe and Blake 1995) which can, for example, be used to achieve background subtraction and highlight moving objects against a stationary background. Our work extends beyond this by gathering the foreground pixels into objects, and also allows us to learn objects in the more difficult non-stationary background case.
For the stationary background case, pixelwise mixtures of Gaussians might be a useful way to create candidate objects. The GREEDY algorithm has shown itself to be an effective factorial learning algorithm for image data. We are currently investigating issues such as dealing with richer classes of transformations, detecting the number of objects automatically, and allowing objects not to appear in all images. Furthermore, although we have described this work in relation to image modelling, it can be applied to other domains. For example, one can make a model for sequence data by having hidden Markov models (HMMs) for a "foreground" pattern and the "background". Faced with sequences containing multiple foreground patterns, one could extract these patterns sequentially using a similar algorithm to that described above. It is true that for sequence data it would be possible to train a compound HMM consisting of several HMM components simultaneously, but there may be severe local minima problems in the search space, so that the sequential approach might be preferable. Acknowledgements: CW thanks Geoff Hinton for helpful discussions concerning the idea of learning one object at a time. References Barlow, H. (1989). Unsupervised Learning. Neural Computation, 1:295–311. Black, M. J. and Jepson, A. (1996). EigenTracking: Robust matching and tracking of articulated objects using a view-based representation. In Buxton, B. and Cipolla, R., editors, Proceedings of the Fourth European Conference on Computer Vision, ECCV'96, pages 329–342. Springer-Verlag. Frey, B. J. and Jojic, N. (1999). Estimating mixture models of images and inferring spatial transformations using the EM algorithm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 1999. IEEE Computer Society Press. Ft. Collins, CO. Frey, B. J. and Jojic, N. (2002). Transformation Invariant Clustering and Linear Component Analysis Using the EM Algorithm. Revised manuscript under review for IEEE PAMI. Ghahramani, Z. (1995).
Factorial Learning and the EM Algorithm. In Tesauro, G., Touretzky, D. S., and Leen, T. K., editors, Advances in Neural Information Processing Systems 7, pages 617–624. Morgan Kaufmann, San Mateo, CA. Hinton, G. E. and Zemel, R. S. (1994). Autoencoders, minimum description length, and Helmholtz free energy. In Cowan, J., Tesauro, G., and Alspector, J., editors, Advances in Neural Information Processing Systems 6. Morgan Kaufmann. Jojic, N. and Frey, B. J. (2001). Learning Flexible Sprites in Video Layers. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2001. IEEE Computer Society Press. Kauai, Hawaii. Lee, D. D. and Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791. Rowe, S. and Blake, A. (1995). Statistical Background Modelling For Tracking With A Virtual Camera. In Pycock, D., editor, Proceedings of the 6th British Machine Vision Conference, volume 2, pages 423–432. BMVA Press. Shams, L. and von der Malsburg, C. (1999). Are object shape primitives learnable? Neurocomputing, 26-27:855–863.
Distance Metric Learning, with Application to Clustering with Side-Information Eric P. Xing, Andrew Y. Ng, Michael I. Jordan and Stuart Russell University of California, Berkeley Berkeley, CA 94720 {epxing,ang,jordan,russell}@cs.berkeley.edu Abstract Many algorithms rely critically on being given a good metric over their inputs. For instance, data can often be clustered in many “plausible” ways, and if a clustering algorithm such as K-means initially fails to find one that is meaningful to a user, the only recourse may be for the user to manually tweak the metric until sufficiently good clusters are found. For these and other applications requiring good metrics, it is desirable that we provide a more systematic way for users to indicate what they consider “similar.” For instance, we may ask them to provide examples. In this paper, we present an algorithm that, given examples of similar (and, if desired, dissimilar) pairs of points in R^n, learns a distance metric over R^n that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allows us to give efficient, local-optima-free algorithms. We also demonstrate empirically that the learned metrics can be used to significantly improve clustering performance. 1 Introduction The performance of many learning and data mining algorithms depends critically on their being given a good metric over the input space. For instance, K-means, nearest-neighbors classifiers and kernel algorithms such as SVMs all need to be given good metrics that reflect reasonably well the important relationships between the data.
This problem is particularly acute in unsupervised settings such as clustering, and is related to the perennial problem of there often being no “right” answer for clustering: If three algorithms are used to cluster a set of documents, and one clusters according to the authorship, another clusters according to topic, and a third clusters according to writing style, who is to say which is the “right” answer? Worse, if an algorithm were to have clustered by topic, and if we instead wanted it to cluster by writing style, there are relatively few systematic mechanisms for us to convey this to a clustering algorithm, and we are often left tweaking distance metrics by hand. In this paper, we are interested in the following problem: Suppose a user indicates that certain points in an input space (say, R^n) are considered by them to be “similar.” Can we automatically learn a distance metric over R^n that respects these relationships, i.e., one that assigns small distances between the similar pairs? For instance, in the documents example, we might hope that, by giving it pairs of documents judged to be written in similar styles, it would learn to recognize the critical features for determining style. One important family of algorithms that (implicitly) learn metrics are the unsupervised ones that take an input dataset, and find an embedding of it in some space. This includes algorithms such as Multidimensional Scaling (MDS) [2], and Locally Linear Embedding (LLE) [9]. One feature distinguishing our work from these is that we will learn a full metric over the input space, rather than focusing only on (finding an embedding for) the points in the training set. Our learned metric thus generalizes more easily to previously unseen data.
More importantly, methods such as LLE and MDS also suffer from the “no right answer” problem: For example, if MDS finds an embedding that fails to capture the structure important to a user, it is unclear what systematic corrective actions would be available. (Similar comments also apply to Principal Components Analysis (PCA) [7].) As in our motivating clustering example, the methods we propose can also be used in a pre-processing step to help any of these unsupervised algorithms to find better solutions. In the supervised learning setting, for instance nearest neighbor classification, numerous attempts have been made to define or learn either local or global metrics for classification. In these problems, a clear-cut, supervised criterion—classification error—is available and can be optimized for. (See also [11], for a different way of supervising clustering.) This literature is too wide to survey here, but some relevant examples include [10, 5, 3, 6], and [1] also gives a good overview of some of this work. While these methods often learn good metrics for classification, it is less clear whether they can be used to learn good, general metrics for other algorithms such as K-means, particularly if the information available is less structured than the traditional, homogeneous training sets expected by them. In the context of clustering, a promising approach was recently proposed by Wagstaff et al. [12] for clustering with similarity information. If told that certain pairs are “similar” or “dissimilar,” they search for a clustering that puts the similar pairs into the same, and dissimilar pairs into different, clusters. This gives a way of using similarity side-information to find clusters that reflect a user’s notion of meaningful clusters. But similar to MDS and LLE, the (“instance-level”) constraints that they use do not generalize to previously unseen data whose similarity/dissimilarity to the training set is not known. 
We will later discuss this work in more detail, and also examine the effects of using the methods we propose in conjunction with these methods.

2 Learning Distance Metrics

Suppose we have some set of points {x_i}_{i=1}^m ⊆ R^n, and are given information that certain pairs of them are “similar”:

S: (x_i, x_j) ∈ S if x_i and x_j are similar. (1)

How can we learn a distance metric d(x, y) between points x and y that respects this; specifically, so that “similar” points end up close to each other? Consider learning a distance metric of the form

d(x, y) = d_A(x, y) = ||x − y||_A = ((x − y)^⊤ A (x − y))^{1/2}. (2)

To ensure that this be a metric—satisfying non-negativity and the triangle inequality—we require that A be positive semi-definite, A ⪰ 0.¹ Setting A = I gives Euclidean distance; if we restrict A to be diagonal, this corresponds to learning a metric in which the different axes are given different “weights”; more generally, A parameterizes a family of Mahalanobis distances over R^n.² Learning such a distance metric is also equivalent to finding a rescaling of the data that replaces each point x with A^{1/2}x and applying the standard Euclidean metric to the rescaled data; this will later be useful in visualizing the learned metrics.

¹Technically, this also allows pseudometrics, where d_A(x, y) = 0 does not imply x = y.
²Note that, by putting the original dataset through a non-linear basis function φ and considering (φ(x) − φ(y))^⊤ A (φ(x) − φ(y)), non-linear distance metrics can also be learned.

A simple way of defining a criterion for the desired metric is to demand that pairs of points in S have, say, small squared distance between them: min_A Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A. This is trivially solved with A = 0, which is not useful, and we add the constraint Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||_A ≥ 1 to ensure that A does not collapse the dataset into a single point. Here, D can be a set of pairs of points known to be “dissimilar” if such information is explicitly available; otherwise, we may take it to be all pairs not in S. This gives the optimization problem:

min_A Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A (3)
s.t. Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||_A ≥ 1 (4)
A ⪰ 0. (5)

The choice of the constant 1 in the right hand side of (4) is arbitrary but not important, and changing it to any other positive constant c results only in A being replaced by c²A. Also, this problem has an objective that is linear in the parameters A, and both of the constraints are also easily verified to be convex. Thus, the optimization problem is convex, which enables us to derive efficient, local-minima-free algorithms to solve it. We also note that, while one might consider various alternatives to (4), “Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||²_A ≥ 1” would not be a good choice despite its giving a simple linear constraint. It would result in A always being rank 1 (i.e., the data are always projected onto a line).³

2.1 The case of diagonal A

In the case that we want to learn a diagonal A = diag(A_11, A_22, ..., A_nn), we can derive an efficient algorithm using the Newton-Raphson method. Define

g(A) = g(A_11, ..., A_nn) = Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A − log( Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||_A ).

It is straightforward to show that minimizing g (subject to A ⪰ 0) is equivalent, up to a multiplication of A by a positive constant, to solving the original problem (3–5). We can thus use Newton-Raphson to efficiently optimize g.⁴

2.2 The case of full A

In the case of learning a full matrix A, the constraint that A ⪰ 0 becomes slightly trickier to enforce, and Newton’s method often becomes prohibitively expensive (requiring O(n⁶) time to invert the Hessian over the n² parameters).
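The diagonal case can be sketched numerically in a few lines. The snippet below is a toy illustration rather than the paper's implementation: the two-class data and the pair sets S and D are hypothetical, and plain projected gradient descent on g stands in for the Newton-Raphson step.

```python
import numpy as np

def g(a, S, D):
    # g(A) = sum_S ||xi - xj||_A^2 - log( sum_D ||xi - xj||_A ), with A = diag(a)
    return (sum(d @ (a * d) for d in S)
            - np.log(sum(np.sqrt(d @ (a * d)) for d in D)))

def grad_g(a, S, D):
    gS = sum(d * d for d in S)                       # gradient of the S term
    dD = sum(np.sqrt(d @ (a * d)) for d in D)
    gD = sum((d * d) / (2 * np.sqrt(d @ (a * d)) + 1e-12) for d in D)
    return gS - gD / dD                              # gradient of g w.r.t. a

rng = np.random.default_rng(0)
# toy data: axis 0 separates the two classes, axis 1 is pure noise
X0 = rng.normal([0, 0], [0.1, 2.0], size=(20, 2))
X1 = rng.normal([1, 0], [0.1, 2.0], size=(20, 2))
S = [x - y for x in X0 for y in X0[:5]] + [x - y for x in X1 for y in X1[:5]]
D = [x - y for x in X0 for y in X1[:5]]

a = np.ones(2)                                       # A = diag(a)
for _ in range(500):
    a = np.maximum(a - 0.01 * grad_g(a, S, D), 0)    # projected gradient step

print(a)  # the informative axis 0 should receive a much larger weight
```

Minimizing g drives the weight of the noisy axis toward zero, while the log term (playing the role of constraint (4)) keeps the metric from collapsing entirely.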
Using gradient descent and the idea of iterative projections (e.g., [8]) we derive a different algorithm for this setting.

³The proof is reminiscent of the derivation of Fisher’s linear discriminant. Briefly, consider maximizing (Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||²_A) / (Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A) = tr(X_D A) / tr(X_S A), where X_{D/S} = Σ_{(x_i, x_j) ∈ D/S} (x_i − x_j)(x_i − x_j)^⊤. Decomposing A as A = Σ_j α_j u_j u_j^⊤ (always possible since A ⪰ 0), this gives (Σ_j α_j u_j^⊤ X_D u_j) / (Σ_j α_j u_j^⊤ X_S u_j), which we recognize as a Rayleigh-quotient like quantity whose solution is given by (say) solving the generalized eigenvector problem X_D u_1 = λ X_S u_1 for the principal eigenvector, and setting α_2 = ··· = α_n = 0.
⁴To ensure that A ⪰ 0, which is true iff the diagonal elements A_jj are non-negative, we actually replace the Newton update Δ_A by t·Δ_A, where t is a step-size parameter optimized via a line-search to give the largest downhill step subject to A_jj ≥ 0.

Iterate
  Iterate
    A := arg min_{A′} { ||A′ − A||_F : A′ ∈ C₁ }
    A := arg min_{A′} { ||A′ − A||_F : A′ ∈ C₂ }
  until A converges
  A := A + α (∇_A g(A))_{⊥ ∇_A f}
until convergence

Figure 1: Gradient ascent + iterative projection algorithm. Here, ||·||_F is the Frobenius norm on matrices (||M||_F = (Σ_i Σ_j M²_ij)^{1/2}).

We pose the equivalent problem:

max_A g(A) = Σ_{(x_i, x_j) ∈ D} ||x_i − x_j||_A (6)
s.t. f(A) = Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A ≤ 1 (7)
A ⪰ 0. (8)

We will use a gradient ascent step on g(A) to optimize (6), followed by the method of iterative projections to ensure that the constraints (7) and (8) hold. Specifically, we will repeatedly take a gradient step A := A + α ∇_A g(A), and then repeatedly project A into the sets C₁ = {A : Σ_{(x_i, x_j) ∈ S} ||x_i − x_j||²_A ≤ 1} and C₂ = {A : A ⪰ 0}. This gives the algorithm shown in Figure 1.⁵ The motivation for the specific choice of the problem formulation (6–8) is that projecting A onto C₁ or C₂ can be done inexpensively. Specifically, the first projection step A := arg min_{A′ ∈ C₁} ||A′ − A||²_F involves minimizing a quadratic objective subject to a single linear constraint; the solution to this is easily found by solving (in O(n²) time) a sparse system of linear equations. The second projection step onto C₂, the space of all positive semi-definite matrices, is done by first finding the diagonalization A = U^⊤ Λ U, where Λ = diag(λ₁, ..., λ_n) is a diagonal matrix of A’s eigenvalues and U contains A’s corresponding eigenvectors, and taking A′ = U^⊤ Λ′ U, where Λ′ = diag(max(0, λ₁), ..., max(0, λ_n)). (E.g., see [4].)

3 Experiments and Examples

We begin by giving some examples of distance metrics learned on artificial data, and then show how our methods can be used to improve clustering performance.

3.1 Examples of learned distance metrics

Consider the data shown in Figure 2(a), which is divided into two classes (shown by the different symbols and, where available, colors). Suppose that points in each class are “similar” to each other, and we are given S reflecting this.⁶ Depending on whether we learn a diagonal or a full A, we obtain:

A_diagonal = diag(1.036, 1.007, …),
A_full = [3.245 3.286 0.081; 3.286 3.327 0.082; 0.081 0.082 0.002].

⁵The algorithm shown in the figure includes a small refinement: the gradient step is taken in the direction of the projection of ∇_A g onto the orthogonal subspace of ∇_A f, so that it will “minimally” disrupt the constraint (7). Empirically, this modification often significantly speeds up convergence.
⁶In the experiments with synthetic data, S was a randomly sampled 1% of all pairs of similar points.

Figure 2: (a) Original data, with the different classes indicated by the different symbols (and colors, where available). (b) Rescaling of data corresponding to learned diagonal A. (c) Rescaling corresponding to full A.

Figure 3: (a) Original data. (b) Rescaling corresponding to learned diagonal A. (c) Rescaling corresponding to full A.

To visualize this, we can use the fact discussed earlier that learning ||·||_A is equivalent to finding a rescaling of the data x ↦ A^{1/2}x that hopefully “moves” the similar pairs together.
Figure 2(b,c) shows the result of plotting A^{1/2}x. As we see, the algorithm has successfully brought together the similar points, while keeping dissimilar ones apart. Figure 3 shows a similar result for a case of three clusters whose centroids differ only in the x and y directions. As we see in Figure 3(b), the learned diagonal metric correctly ignores the z direction. Interestingly, in the case of a full A, the algorithm finds a surprising projection of the data onto a line that still maintains the separation of the clusters well.

3.2 Application to clustering

One application of our methods is “clustering with side-information,” in which we learn a distance metric using similarity information, and cluster data using that metric. Specifically, suppose we are given S, and told that each pair (x_i, x_j) ∈ S means x_i and x_j belong to the same cluster. We will consider four algorithms for clustering:

1. K-means using the default Euclidean metric ||x_i − μ||²₂ between points x_i and cluster centroids μ to define distortion (and ignoring S).
2. Constrained K-means: K-means but subject to points (x_i, x_j) ∈ S always being assigned to the same cluster [12].⁷
3. K-means + metric: K-means but with distortion defined using the distance metric ||x_i − μ||²_A learned from S.
4. Constrained K-means + metric: Constrained K-means using the distance metric learned from S.

⁷This is implemented as the usual K-means, except if (x_i, x_j) ∈ S, then during the step in which points are assigned to cluster centroids μ_k, we assign both x_i and x_j to cluster arg min_k ( ||x_i − μ_k||² + ||x_j − μ_k||² ). More generally, if we imagine drawing an edge between each pair of points in S, then all the points in each resulting connected component c are constrained to lie in the same cluster, which we pick to be arg min_k Σ_{x_i ∈ c} ||x_i − μ_k||².

Figure 4: (a) Original dataset. (b) Data scaled according to learned metric. (The result for one learned A is shown; the diagonal and full metrics gave visually indistinguishable results.) 1. K-means: Accuracy = 0.4975. 2. Constrained K-means: Accuracy = 0.5060. 3. K-means + metric: Accuracy = 1. 4. Constrained K-means + metric: Accuracy = 1.

Let ĉ_i (i = 1, ..., m) be the cluster to which point x_i is assigned by an automatic clustering algorithm, and let c_i be some “correct” or desired clustering of the data. Following [?], in the case of 2-cluster data, we will measure how well the ĉ_i’s match the c_i’s according to

Accuracy = Σ_{i>j} 1{ 1{c_i = c_j} = 1{ĉ_i = ĉ_j} } / (0.5 m(m − 1)),

where 1{·} is the indicator function (1{true} = 1, 1{false} = 0). This is equivalent to the probability that for two points x_i, x_j drawn randomly from the dataset, our clustering ĉ agrees with the “true” clustering c on whether x_i and x_j belong to the same or different clusters.⁸

As a simple example, consider Figure 4, which shows a clustering problem in which the “true clusters” (indicated by the different symbols/colors in the plot) are distinguished by one coordinate, but where the data in its original space seems to cluster much better according to a different coordinate. As shown by the accuracy scores given in the figure, both K-means and constrained K-means failed to find good clusterings. But by first learning a distance metric and then clustering according to that metric, we easily find the correct clustering separating the true clusters from each other. Figure 5 gives another example showing similar results.

Figure 5: (a) Original dataset. (b) Data scaled according to learned metric. (Again, the diagonal and full metrics gave visually indistinguishable results.) 1. K-means: Accuracy = 0.4993. 2. Constrained K-means: Accuracy = 0.5701. 3. K-means + metric: Accuracy = 1. 4. Constrained K-means + metric: Accuracy = 1.

We also applied our methods to 9 datasets from the UC Irvine repository. Here, the “true clustering” is given by the data’s class labels. In each, we ran one experiment using “little” side-information S, and one with “much” side-information.⁹ The results are given in Figure 6. We see that, in almost every problem, using a learned diagonal or full metric leads to significantly improved performance over naive K-means. In most of the problems, using a learned metric with constrained K-means (the 5th bar for diagonal A, the 6th bar for full A) also outperforms using constrained K-means alone (4th bar), sometimes by a very large margin. Not surprisingly, we also see that having more side-information in S typically leads to metrics giving better clusterings. Figure 7 also shows two typical examples of how the quality of the clusterings found increases with the amount of side-information. For some problems (e.g., wine), our algorithm learns good diagonal and full metrics quickly with only a very small amount of side-information; for some others (e.g., protein), the distance metric, particularly the full metric, appears harder to learn and provides less benefit over constrained K-means.

⁸In the case of many (> 2) clusters, this evaluation metric tends to give inflated scores, since almost any clustering will correctly predict that most pairs are in different clusters. In this setting, we therefore modified the measure, averaging not over x_i, x_j drawn uniformly at random, but over pairs drawn from the same cluster (as determined by c) with chance 0.5 and from different clusters with chance 0.5, so that “matches” and “mis-matches” are given the same weight. All results reported here used K-means with multiple restarts, and are averages over at least 20 trials (except for wine, 10 trials).
⁹S was generated by picking a random subset of all pairs of points sharing the same class c_i. In the case of “little” side-information, the size of the subset was chosen so that the resulting number of connected components K_c (see footnote 7) would be very roughly 90% of the size of the original dataset. In the case of “much” side-information, this was changed to 70%.

Figure 6: Clustering accuracy on 9 UCI datasets: Boston housing (N=506, C=3, d=13, K_c=447/354), ionosphere (N=351, C=2, d=34, K_c=269/187), Iris plants (N=150, C=3, d=4, K_c=133/116), wine (N=168, C=3, d=12, K_c=153/127), balance (N=625, C=3, d=4, K_c=548/400), breast cancer (N=569, C=2, d=30, K_c=482/358), soy bean (N=47, C=4, d=35, K_c=41/34), protein (N=116, C=6, d=20, K_c=92/61), diabetes (N=768, C=2, d=8, K_c=694/611). In each panel, the six bars on the left correspond to an experiment with “little” side-information S, and the six on the right to “much” side-information. From left to right, the six bars in each set are respectively K-means, K-means + diagonal metric, K-means + full metric, Constrained K-means (C-Kmeans), C-Kmeans + diagonal metric, and C-Kmeans + full metric. Also shown are N: size of dataset; C: number of classes/clusters; d: dimensionality of data; K_c: mean number of connected components for the little/much settings (see footnotes 7, 9). 1 s.e. bars are also shown.

Figure 7: Plots of accuracy vs. amount of side-information for the protein and wine datasets. Here, the x-axis gives the fraction of all pairs of points in the same class that are randomly sampled to be included in S.

4 Conclusions

We have presented an algorithm that, given examples of similar pairs of points in R^n, learns a distance metric that respects these relationships. Our method is based on posing metric learning as a convex optimization problem, which allowed us to derive efficient, local-optima-free algorithms. We also showed examples of diagonal and full metrics learned from simple artificial examples, and demonstrated on artificial and on UCI datasets how our methods can be used to improve clustering performance.

References
[1] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. AI Review, 1996.
[2] T. Cox and M. Cox. Multidimensional Scaling. Chapman & Hall, London, 1994.
[3] C. Domeniconi and D. Gunopulos. Adaptive nearest neighbor classification using support vector machines. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Univ. Press, 1996.
[5] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18:607–616, 1996.
[6] T. S. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Proc. of Tenth Conference on Advances in Neural Information Processing Systems, 1999.
[7] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1989.
[8] R. Rockafellar. Convex Analysis. Princeton Univ. Press, 1970.
[9] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding.
Science 290: 2323-2326. [10] B. Scholkopf and A. Smola. Learning with Kernels. In Press, 2001. [11] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Allerton Conference on Communication, Control and Computing, 1999. [12] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained k-means clustering with background knowledge. In Proc. 18th International Conference on Machine Learning, 2001.
Location Estimation with a Differential Update Network Ali Rahimi and Trevor Darrell Artificial Intelligence Laboratory Massachusetts Institute of Technology Cambridge, MA 02139 {ali,trevor}@mit.edu Abstract Given a set of hidden variables with an a priori Markov structure, we derive an online algorithm which approximately updates the posterior as pairwise measurements between the hidden variables become available. The update is performed using Assumed Density Filtering: to incorporate each pairwise measurement, we compute the optimal Markov structure which represents the true posterior and use it as a prior for incorporating the next measurement. We demonstrate the resulting algorithm by calculating globally consistent trajectories of a robot as it navigates along a 2D trajectory. To update a trajectory of length t, the update takes O(t). When all conditional distributions are linear-Gaussian, the algorithm can be thought of as a Kalman Filter which simplifies the state covariance matrix after incorporating each measurement. 1 Introduction Consider a hidden Markov chain. Given a sequence of pairwise measurements between the elements of the chain (for example, their differences, corrupted by noise), we are asked to refine our estimate of their values online, as these pairwise measurements become available. We propose the Differential Update Network as a mechanism for solving this problem. We use this mechanism to recover the trajectory of a robot given noisy measurements of its movement between points in its trajectory. These pairwise displacements are thought of as noise-corrupted measurements between the true but unknown poses to be recovered. The recovered trajectories are consistent in the sense that when the camera returns to an already visited position, its estimated pose is consistent with the pose recovered on the earlier visit.
Pose change measurements between two points on the trajectory are obtained by bringing images of the environment acquired at each pose into registration with each other. The required transformation to effect the registration is the pose change measurement. There is a rich literature on computing pose changes from a pair of scans from an optical sensor: 2D [5, 6] and 3D transformations [7, 8, 9] from monocular cameras, or 3D transformations from range imagery [10, 11, 12] are a few examples. These have been used by [1, 2] in 3D model acquisition and by [3, 4] in robot navigation. The trajectory of the robot is defined as the unknown pose from which each frame was acquired, and is maintained in a state vector which is updated as pose changes are measured.

Figure 1: Independence structure of a differential update network.

An alternative method estimates the pose of the robot with respect to fixed features in the world. These methods represent the world as a set of features, such as corners, lines, and other geometric shapes in 3D [13, 14, 15], and match features between a scan at the current pose and the acquired world representation. However, measurements are still pairwise, since they depend on a feature and the poses of the camera. Because both the feature list and the poses are maintained in the state vector, the Differential Update framework can be applied to both scan-based methods and feature-based methods. Our algorithm incorporates each pose change measurement by updating the pose associated with every frame encountered. To ensure that each update can happen in time linear in the length of the trajectory, the correlation structure of the state vector is approximated with a simpler Markov chain after each measurement. This scheme can be thought of as an instance of Assumed Density Filtering (ADF) [16, 17]. The Differential Update Network presented here assumes a linear Gaussian system, but our derivation is general and can accommodate any distribution.
For example, we are currently experimenting with discrete distributions. In addition, we focus on frame-based trajectory estimation due to the ready availability of pose change estimators, and to avoid the complexity of maintaining an explicit feature map. The following section describes the model in a Bayesian framework. Sections 3 and 4 sketch existing batch and online methods for obtaining globally consistent trajectories. Section 5 derives the update rules for our algorithm, which is then applied to 2D trajectory estimation in section 6.

2 Dynamics and Measurement Models

Figure 1 depicts the network. We assume the hidden variables x_t have a Markov structure with known transition densities:

p(X) = ∏_{t=1}^{T} p(x_t | x_{t−1}).

Pairwise measurements appear on the chain one by one. Conditioned on the hidden variables, these measurements are assumed to be independent:

p(Y | X) = ∏_{(s,t) ∈ M} p(y_s^t | x_s, x_t),

where M is the set of pairs of hidden variables which have been measured. To apply this network to robot localization, let X = {x_t}_{t=1..T} be the trajectory of the robot up to time T, with each x_t denoting its pose at time t. These poses can be represented using any parametrization of pose, for example as 3D rotations and translations, 2D translations (which is what we use in section 6), or even non-rigid deformations such as affine. The conditional distribution between adjacent x’s is assumed to follow:

p(x_{t+1} | x_t) = N(x_{t+1} | x_t, Λ_{x|x}). (1)

As the robot moves, the pose change estimator computes the motion y_s^t of the robot from two scans of the environment. Given the true poses, we assume that these measurements are independent of each other even when they share a common scan. We model each y_s^t as being drawn from a Gaussian centered around x_t − x_s:

p(y_s^t | x_s, x_t) = N(y_s^t | x_t − x_s, Λ_{y|xx}). (2)

The online global estimation problem requires us to update p(X|Y) as each y_s^t in Y becomes available.
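As a concrete sketch of this generative model (with assumed 2D translational poses and hypothetical noise covariances, not the experimental setup of section 6):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
Lam_x = 0.10 * np.eye(2)   # transition noise Λ_{x|x} (assumed value)
Lam_y = 0.05 * np.eye(2)   # measurement noise Λ_{y|xx} (assumed value)

# Markov prior: x_{t+1} ~ N(x_t, Λ_{x|x}) -- a 2D random-walk trajectory
X = np.cumsum(rng.multivariate_normal(np.zeros(2), Lam_x, size=T), axis=0)

def measure(s, t):
    # pairwise measurement y_s^t ~ N(x_t - x_s, Λ_{y|xx})
    return rng.multivariate_normal(X[t] - X[s], Lam_y)

# consecutive-frame measurements plus one "loop closure" back to the start
M = [(t - 1, t) for t in range(1, T)] + [(T - 1, 0)]
Y = {(s, t): measure(s, t) for (s, t) in M}
```

The set M here contains T measured pairs; the online estimation problem is to refresh p(X|Y) after each new entry of Y arrives.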
The following section reviews a batch solution for computing p(X|Y) using this model. Section 4 discusses a recursive approach with a similar running time as the batch version. Section 5 presents our approach, which performs these updates much faster by simplifying the output of the recursive solution after incorporating each measurement.

3 Batch Linear Gaussian Solution

Equation (1) dictates a Gaussian prior p(X) with mean m_X and covariance Λ_X. Because the pose dynamics are Markovian, the inverse covariance Λ_X^{−1} is tri-diagonal. According to equation (2), the observations are drawn from y_s^t = A_{s,t}X + ω_{s,t} = x_t − x_s + ω_{s,t}, with ω_{s,t} white and Gaussian with covariance λ_{s,t}. Stacking up the A_{s,t} and λ_{s,t} into A and Λ_{Y|X} respectively, we know that the posterior mean and covariance of X|Y are [21]:

m_{X|Y} = m_X + Λ_X A^⊤ (A Λ_X A^⊤ + Λ_{Y|X})^{−1} (Y − A m_X) (3)
Λ_{X|Y} = Λ_X − Λ_X A^⊤ (A Λ_X A^⊤ + Λ_{Y|X})^{−1} A Λ_X, (4)

or alternatively,

Λ_{X|Y}^{−1} = Λ_X^{−1} + A^⊤ Λ_{Y|X}^{−1} A (5)
m_{X|Y} = Λ_{X|Y} (Λ_X^{−1} m_X + A^⊤ Λ_{Y|X}^{−1} Y). (6)

If there are M measurements and T hidden variables, this computation will take O(T²M) if performed naively. Note that if M > T, as is the case in the robot mapping problem, the alternate equations (5) and (6) can be used to obtain a running time of O(T³).

4 Online Linear Gaussian Solution

Lu and Milios [3] proposed a recursive update for updating the trajectory X|Y_old after obtaining a new measurement y_s^t. Because each measurement is independent of past measurements given the X’s, the update is (by Bayes’ rule):

p(X | Y_old, y_s^t) ∝ p(y_s^t | X) p(X | Y_old). (7)

Using equations (3) and (4) to perform this update for one y_s^t takes O(T²). After integrating M measurements, this yields the same final cost as the batch update. One way to lower this cost is to reduce the number of hidden variables x_t by fixing some of them, thus reducing T [23].
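On a small scalar instance, the information-form solution (5)-(6) takes only a few lines. The sketch below uses assumed values throughout (scalar poses, hypothetical noise variances and measurement values) and anchors the first pose with a unit-information prior so that Λ_X is well defined:

```python
import numpy as np

T = 5
q, r = 0.1, 0.05          # transition / measurement variances (assumed)

# tri-diagonal prior information matrix Λ_X^{-1} of the Markov chain,
# with a unit-information prior on the first pose
J = np.zeros((T, T))
J[0, 0] = 1.0
for t in range(T - 1):
    J[t, t] += 1 / q
    J[t + 1, t + 1] += 1 / q
    J[t, t + 1] -= 1 / q
    J[t + 1, t] -= 1 / q

# pairwise measurements Y = A X + ω, one row of A per measured pair (s, t)
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # includes a loop closure
A = np.zeros((len(pairs), T))
for k, (s, t) in enumerate(pairs):
    A[k, t], A[k, s] = 1.0, -1.0
y = np.array([0.3, 0.2, -0.1, 0.1, -0.5])          # hypothetical measurements

# equations (5)-(6) with m_X = 0:
# Λ_{X|Y}^{-1} = Λ_X^{-1} + A^T Λ_{Y|X}^{-1} A,  m_{X|Y} = Λ_{X|Y} A^T Λ_{Y|X}^{-1} Y
J_post = J + A.T @ A / r
m_post = np.linalg.solve(J_post, A.T @ y / r)
```

J_post is still sparse here, but each measured pair (s, t) contributes a rank-1 term coupling x_s and x_t, so richer measurement sets gradually densify it.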
It is also possible to take advantage of the sparseness of the covariance structure of X|Y_old by using the updates (6) and (5):

Λ_{X|new}^{−1} m_{X|new} = Λ_{X|old}^{−1} m_{X|old} + A_{s,t}^⊤ λ_{s,t}^{−1} y_s^t (8)
Λ_{X|new}^{−1} = Λ_{X|old}^{−1} + A_{s,t}^⊤ λ_{s,t}^{−1} A_{s,t} (9)

Figure 2: The measurement (left) correlates the hidden variables (middle), whose correlation is then simplified (right), and is ready to accept a new measurement.

Because Λ_{X|new}^{−1} has a sparse structure (see equation (9)), m_{X|new} can be found using a sparse linear system solver [23]. Unfortunately, as measurements are incorporated, Λ_{X|new}^{−1} becomes denser due to the accumulation of the rank 1 terms in equation (9), rendering this approach less effective. In the linear Gaussian case, the Differential Update Network addresses this problem by projecting Λ_{X|new} onto the closest covariance matrix which has a tri-diagonal inverse. Hence, in solving (8), Λ_{X|new}^{−1} is always tri-diagonal, so m_{X|new} is easy to compute.

5 Approximate Online Solution

To implement this idea in the general case, we resort to Assumed Density Filtering (ADF) [16]: we approximate p(X|Y_old) with a simpler distribution q(X|Y_old). To incorporate a new measurement y_s^t, we apply the update (by Bayes’ rule)

p(X|Y_new) ∝ p(y_s^t | x_s, x_t) q(X|Y_old). (10)

This new p(X|Y_new) has a more complicated independence structure than q(X|Y_old), so incorporating subsequent measurements would require more work and the resulting posterior would be even hairier. So we approximate it again with a q(X|Y_new) that has a simpler independence structure. Subsequent measurements can again be incorporated easily using this new q. Specifically, we force q to always obey Markovian independence. Figure 5 summarizes this process. The following section discusses how to find a Markovian q so as to minimize the KL divergence between p and q. Section 5.2 shows how to incorporate a pairwise measurement on the resulting Markov chain using equation (10).
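In the Gaussian case, the projection onto the closest covariance with a tri-diagonal inverse has a simple construction: rebuild the chain from the distribution's own adjacent conditionals p(x_t | x_{t−1}), which is the KL-optimal Markov approximation derived in section 5.1. A small sketch, with a hypothetical 3 × 3 posterior covariance:

```python
import numpy as np

def nearest_markov(S):
    # Gaussian Markov-chain approximation: keep each marginal p(x_t) and each
    # adjacent pair marginal p(x_{t-1}, x_t); longer-range correlations become
    # whatever the chain implies. J is the (tri-diagonal) information matrix.
    T = len(S)
    J = np.zeros((T, T))
    J[0, 0] = 1 / S[0, 0]
    for t in range(1, T):
        b = S[t, t - 1] / S[t - 1, t - 1]   # regression of x_t on x_{t-1}
        v = S[t, t] - b * S[t, t - 1]       # conditional variance
        J[t, t] += 1 / v
        J[t - 1, t - 1] += b * b / v
        J[t - 1, t] -= b / v
        J[t, t - 1] -= b / v
    return np.linalg.inv(J)

Sigma = np.array([[2.0, 1.0, 0.9],
                  [1.0, 2.0, 1.0],
                  [0.9, 1.0, 2.0]])         # a dense posterior covariance
Sigma_chain = nearest_markov(Sigma)
```

The approximation reproduces the variances and adjacent covariances of Sigma exactly; only the long-range entry Sigma[0, 2] is altered, which is exactly the information the Markov structure discards.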
5.1 Simplifying the independence structure

We would like to approximate an arbitrary distribution which factors according to p(X) = ∏_t p_t(x_t | Pa[x_t]) using one which factors as q(X) = ∏_t q_t(x_t | Qa[x_t]). Here, Pa[x_t] are the parents of node x_t in the graph prescribed by p(X), and Qa[x_t] = x_{t−1} are the parents of node x_t as prescribed by q(X). The objective is to minimize

q* = argmin_q KL( ∏_t p_t ‖ ∏_t q_t ) = argmin_q ∫_X p(X) ln [ p(X) / ∏_t q_t(x_t | Qa[x_t]) ] dX.   (11)

After some manipulation, it can be shown that

q*_t = p(x_t | Qa[x_t]).   (12)

This says that the best conditional q_t is built up from the corresponding p_t by marginalizing out the conditioning variables that were removed from the graph. This is not an easy operation to perform in general, but the following section shows how to do it in our case.

5.2 Computing posterior transitions on a graph with a single loop

This result suggests a simplification to the update of equation (10). Because the ultimate goal is to compute q(X|Y_new), not p(X|Y_new), we only need to compute the posterior transitions p(x_t | x_{t−1}, Y_new). Thus we circumvent having to first find p and then project it onto q. We propose computing these transitions in three steps: one for the transitions to the left of x_s, another for the loop, and a third for the transitions to the right of x_t.

5.2.1 Finding p(x_τ | x_{τ−1}, y) for τ = s..t

For every s < τ < t, notice that

p(y, x_{τ−1}, x_t) p(x_τ | x_{τ−1}, x_t) = p(y, x_{τ−1}, x_τ, x_t),   (13)

because, according to figure 5, p(x_τ | x_{τ−1}, x_t) = p(x_τ | x_{τ−1}, x_t, y). If we could find this joint distribution for all τ, we could find p(x_τ | x_{τ−1}, y) by marginalizing out x_t and normalizing. We could also find p(x_τ | y) by marginalizing out both x_t and x_{τ−1}, then normalizing. Finally, we could compute p(y, x_τ, x_t) for the next τ in the iteration. So there are two missing pieces. The first is p(y, x_s, x_t) for starting the recursion. Computing this term is easy, because p(y | x_s, x_t) is the given measurement model, and p(x_s, x_t) can be obtained easily from the prior by successively applying the total probability theorem. The second missing piece is p(x_τ | x_{τ−1}, x_t). Note that this quantity does not depend on the measurements and could be computed offline if we wanted to. The recursion for calculating it is:

p(x_τ | x_{τ−1}, x_t) ∝ p(x_t | x_τ) p(x_τ | x_{τ−1})   (14)
p(x_t | x_τ) = ∫ dx_{τ+1} p(x_t | x_{τ+1}) p(x_{τ+1} | x_τ)   (15)

The second equation describes a recursion which starts from t and goes down to s.
It computes the influence of node τ on node t. Equation (14) is coupled to this recursion and uses its output; it applies Bayes' rule to compute a function of three variables. Because of the backward nature of (15), p(x_τ | x_{τ−1}, x_t) has to be computed in a pass which runs in the direction opposite to the process of (13).

5.2.2 Finding p(x_τ | x_{τ−1}, y) for τ = 1..s

Starting from τ = s − 1, compute

p(y | x_τ) = ∫ dx_{τ+1} p(y | x_{τ+1}) p(x_{τ+1} | x_τ)
p(x_τ | y) ∝ p(y | x_τ) p(x_τ)
p(x_τ | x_{τ−1}, y) ∝ p(y | x_τ) p(x_τ | x_{τ−1})

The recursion first computes the influence of x_τ on the observation, then computes the marginal and the transition probability.

5.2.3 Finding p(x_τ | x_{τ−1}, y) for τ = t..T

Starting from τ = t, compute

p(x_τ | y) = ∫ dx_{τ−1} p(x_τ | x_{τ−1}, y) p(x_{τ−1} | y)
p(x_τ | x_{τ−1}, y) = p(x_τ | x_{τ−1})

The second identity follows from the independence structure to the right of the observed nodes.

6 Results

We manually navigated a camera rig along two trajectories. The camera faced upward and recorded the ceiling. The rig took about 3 minutes to trace each path, producing about 6000 frames of data for each experiment. The trajectory was pre-marked on the floor so we could revisit specific locations (see the rightmost diagrams of figures 6(a,b)); this makes the evaluation of the results simpler. The trajectory estimation worked at frame rate, although it was run offline to simplify data acquisition. In these experiments, the pose parameters were (x, y) locations on the floor. All experiments assume the same Brownian motion dynamics. For each new frame, pose changes were computed with respect to at most three base frames, selected by a measure of appearance similarity between the current frame and all past frames. The pose change estimator was a Lucas-Kanade optical flow tracker [24]. To compute pose displacements, we computed a robust average of the flow vectors using an iterative outlier rejection scheme.
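The robust averaging step can be sketched as a generic iterative outlier rejection; this is an illustrative version, not necessarily the exact scheme the authors used:

```python
import numpy as np

def robust_average(flow, n_iters=5, k=3.0):
    """Average 2D flow vectors while iteratively rejecting outliers:
    vectors farther than k times the median distance from the current
    inlier mean are dropped, and the mean is recomputed on the rest."""
    flow = np.asarray(flow, dtype=float)
    inliers = np.ones(len(flow), dtype=bool)
    for _ in range(n_iters):
        mean = flow[inliers].mean(axis=0)
        dist = np.linalg.norm(flow - mean, axis=1)
        cutoff = k * np.median(dist) + 1e-9
        new_inliers = dist <= cutoff
        if new_inliers.sum() == 0 or np.array_equal(new_inliers, inliers):
            break
        inliers = new_inliers
    return flow[inliers].mean(axis=0), int(inliers.sum())
```

The returned inlier count is the kind of quantity the next paragraph uses as a crude precision estimate for the pose change measurement.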
We used the number of inlier flow vectors as a crude estimate of the precision of p(y^t_s | x_s, x_t). Figures 6(a,b) compare the algorithm presented in this paper against two others. The middle plots compare our algorithm (blue) against the batch algorithm which uses equations (5) and (6) (black). Although our recovered trajectories don't coincide exactly with the batch solutions, like the batch solutions, ours are smooth and consistent. In contrast, more naive methods of reconstructing trajectories do not exhibit these two desiderata. Estimating the motion of each frame with respect to only the previous base frame yields an unsmooth trajectory (green). Furthermore, loops can't be closed correctly (for example, the robot is not found to return to the origin). The simplest method of taking into account multiple base frames also fails to meet our requirements. The red trajectory shows what happens when we assume individual poses are independent. This corresponds to using a diagonal matrix to represent the correlation between the poses (instead of the tri-diagonal inverse covariance matrix our algorithm uses). Notice that the resulting trajectory is not smooth, and loops are not well closed. By taking into account a minimal amount of correlation between frame poses, our algorithm closes loops correctly and recovers a smooth trajectory.

7 Conclusion

We have presented a method for approximately computing the posterior distribution of a set of variables for which only pairwise measurements are available. We call the resulting structure a Differential Update Network and showed how to use Assumed Density Filtering to update the posterior as pairwise measurements become available. The two key insights were 1) how to approximate the posterior at each step so as to minimize KL divergence, and 2) how to compute transition densities on a graph with a single loop in closed form. We showed how to estimate globally consistent trajectories for a camera using this framework.
In this linear-Gaussian context, our algorithm can be thought of as a Kalman Filter which projects the state information matrix down to a tri-diagonal representation while minimizing the KL divergence between the truth and the obtained estimate. Although the example used pose change measurements between scans of the environment, our framework can be applied to feature-based mapping and localization as well.

References

[1] A. Stoddart and A. Hilton. Registration of multiple point sets. In IJCV, pages B40–44, 1996.

(a) (b) Figure 3: Left: naive accumulation (green) and projection of the trajectory to a diagonal covariance (red). Loops are not closed well, and the trajectory is not smooth. The zoomed areas show that in both naive approaches there are large jumps in the trajectory, and the pose estimate is incorrect at revisited locations. Right: Differential Update Network (blue) and exact solution (black). Like the batch solution, our solution generates smooth and consistent trajectories.

[2] Y. Chen and G. Medioni. Object modelling by registration of multiple range images. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2724–2728, 1991. [3] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 4:333–349, 1997. [4] J. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA), 2000. [5] Harpreet S. Sawhney, Steve Hsu, and Rakesh Kumar. Robust video mosaicing through topology inference and local to global alignment. In Proc ECCV 2, pages 103–119, 1998. [6] H.-Y. Shum and R. Szeliski. Construction of panoramic mosaics with global and local alignment. In IJCV, pages 101–130, February 2000. [7] A. Shashua. Trilinearity in visual recognition by alignment. In ECCV, pages 479–484, 1994. [8] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization approach.
International Journal of Computer Vision, 9(2):137–154, 1992. [9] Olivier Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Cambridge, Massachusetts, 1993. [10] M. Harville, A. Rahimi, T. Darrell, G.G. Gordon, and J. Woodfill. 3d pose tracking with linear depth and brightness constraints. In ICCV99, pages 206–213, 1999. [11] Feng Lu and E. Milios. Robot pose estimation in unknown environments by matching 2d range scans. Robotics and Autonomous Systems, 22(2):159–178, 1997. [12] P. J. Besl and N. D. McKay. A method for registration of 3-d shapes. IEEE Trans. Patt. Anal. Machine Intell., 14(2):239–256, February 1992. [13] N. Ayache and O. Faugeras. Maintaining representations of the environment of a mobile robot. IEEE Trans. Robot. Automat., 5(6):804–819, 1989. [14] Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, and S. Thrun. Using EM to learn 3D models of indoor environments with mobile robots. In IEEE International Conference on Machine Learning (ICML), 2001. [15] R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships in robotics. In Uncertainty in Artificial Intelligence, 1988. [16] T.P. Minka. Expectation propagation for approximate bayesian inference. In UAI, 2001. [17] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In Uncertainty in Artificial Intelligence, 1998. [18] T.P. Minka. Independence diagrams. Technical report, Media Lab, http://www.stat.cmu.edu/~minka/papers/diagrams.html, 1998. [19] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1997. [20] A. Rahimi, L-P. Morency, and T. Darrell. Reducing drift in parametric motion tracking. In ICCV, volume 1, pages 315–322, June 2001. [21] T. Kailath, A. H. Sayed, and B. Hassibi. Linear Estimation. Prentice Hall, 2000. [22] E. Sudderth. Embedded trees: Estimation of gaussian processes on graphs with cycles. Master's thesis, MIT, 2002. [23] Philip F. McLauchlan.
A batch/recursive algorithm for 3d scene reconstruction. Conf. Computer Vision and Pattern Recognition, 2:738–743, 2000. [24] B. D. Lucas and Takeo Kanade. An iterative image registration technique with an application to stereo vision. In International Joint Conference on Artificial Intelligence, pages 674–679, 1981. [25] Andrew W. Fitzgibbon and Andrew Zisserman. Automatic camera recovery for closed or open image sequences. In ECCV, pages 311–326, 1998.
| 2002 | 154 | 2,163 |
Hyperkernels Cheng Soon Ong, Alexander J. Smola, Robert C. Williamson Research School of Information Sciences and Engineering The Australian National University Canberra, 0200 ACT, Australia {Cheng.Ong, Alex.Smola, Bob.Williamson}@anu.edu.au Abstract We consider the problem of choosing a kernel suitable for estimation using a Gaussian Process estimator or a Support Vector Machine. A novel solution is presented which involves defining a Reproducing Kernel Hilbert Space on the space of kernels itself. By utilizing an analog of the classical representer theorem, the problem of choosing a kernel from a parameterized family of kernels (e.g. of varying width) is reduced to a statistical estimation problem akin to the problem of minimizing a regularized risk functional. Various classical settings for model or kernel selection are special cases of our framework. 1 Introduction Choosing suitable kernel functions for estimation using Gaussian Processes and Support Vector Machines is an important step in the inference process. To date, there are few if any systematic techniques to assist in this choice. Even the restricted problem of choosing the "width" of a parameterized family of kernels (e.g. Gaussian) has not had a simple and elegant solution. A recent development [1] which solves the above problem in a restricted sense involves the use of semidefinite programming to learn an arbitrary positive semidefinite matrix K, subject to minimization of criteria such as the kernel target alignment [1], the maximum of the posterior probability [2], the minimization of a learning-theoretic bound [3], or subject to cross-validation settings [4]. The restriction mentioned is that the methods work with the kernel matrix, rather than the kernel itself. Furthermore, whilst demonstrably improving the performance of estimators to some degree, they require clever parameterization and design to make the method work in particular situations.
There are still no general principles to guide the choice of a) which family of kernels to choose, b) efficient parameterizations over this space, and c) suitable penalty terms to combat overfitting. (The last point is particularly an issue when we have a very large set of semidefinite matrices at our disposal.) Whilst not yet providing a complete solution to these problems, this paper presents a framework that allows the optimization within a parameterized family relatively simply, and, crucially, intrinsically captures the tradeoff between the size of the family of kernels and the sample size available. Furthermore, the solution presented optimizes kernels themselves, rather than the kernel matrix as in [1]. Other approaches to learning the kernel include boosting [5] and bounding the Rademacher complexity [6].

Outline of the Paper. We show (Section 2) that for most kernel-based learning methods there exists a functional, the quality functional¹, which plays a similar role to the empirical risk functional, and subsequently (Section 3) that the introduction of a kernel on kernels, a so-called hyperkernel, in conjunction with regularization on the Reproducing Kernel Hilbert Space formed on kernels, leads to a systematic way of parameterizing function classes whilst managing overfitting. We give several examples of hyperkernels (Section 4) and show (Section 5) how they can be used practically. Due to space constraints we only consider Support Vector classification.

2 Quality Functionals

Let X = {x_1, ..., x_m} denote the set of training data and Y = {y_1, ..., y_m} the set of corresponding labels, jointly drawn iid from some probability distribution P(x, y) on 𝒳 × 𝒴. Furthermore, let X_test and Y_test denote the corresponding test sets (drawn from the same P(x, y)). We introduce a new class of functionals Q on data which we call quality functionals. Their purpose is to indicate, given a kernel k and the training data (X, Y), how suitable the kernel is for explaining the training data.

Definition 1 (Empirical Quality Functional) Given a kernel k, and data X, Y, define Q_emp(k, X, Y) to be an empirical quality functional if it depends on k only via k(x_i, x_j) with x_i, x_j ∈ X; i.e. if there exists a function f such that Q_emp(k, X, Y) = f(K, Y), where K = [k(x_i, x_j)]_{i,j} is the kernel matrix.

The basic idea is that Q_emp could be used to adapt k in a manner such that Q_emp is minimized, based on this single dataset X, Y. Given a sufficiently rich class 𝒦 of kernels k, it is in general possible to find a kernel k* ∈ 𝒦 that attains arbitrarily small values of Q_emp(k*, X, Y) for any training set. However, it is very unlikely that Q_emp(k*, X_test, Y_test) would be similarly small in general. Analogously to the standard methods of statistical learning theory, we aim to minimize the expected quality functional:

Definition 2 (Expected Quality Functional) Suppose Q_emp is an empirical quality functional. Then

Q(k) = E_{X,Y}[ Q_emp(k, X, Y) ]   (1)

is the expected quality functional, where the expectation is taken with respect to X, Y.

Note the similarity between Q_emp(k, X, Y) and the empirical risk of an estimator, R_emp(f, X, Y) = (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) (where l is a suitable loss function): in both cases we compute the value of a functional which depends on some sample drawn from P(x, y) and a function, and in both cases we have

Q(k) = E_{X,Y}[ Q_emp(k, X, Y) ]  and  R(f) = E_{X,Y}[ R_emp(f, X, Y) ].   (2)

Here R(f) is known as the expected risk. We now present some examples of quality functionals, and derive their exact minimizers whenever possible.

Example 1 (Kernel Target Alignment) This quality functional was introduced in [7] to assess the "alignment" of a kernel with training labels. It is defined by

Q_alignment(k, X, Y) = 1 − (y⊤Ky) / (‖K‖_F ‖y‖₂²)   (3)

where y denotes the vector of elements of Y, ‖y‖₂ denotes the ℓ₂ norm of y, and ‖K‖_F is the Frobenius norm: ‖K‖_F² = Σ_{i,j} K_{ij}². Note that the definition in [7] looks somewhat different, yet it is algebraically identical to (3). By decomposing K into its eigensystem, one can see that (3) is minimized if K = yy⊤, in which case

Q_alignment(k*, X, Y) = 1 − (y⊤yy⊤y) / (‖yy⊤‖_F ‖y‖₂²) = 1 − ‖y‖₂⁴ / (‖y‖₂² ‖y‖₂²) = 0.   (4)

It is clear that one cannot expect that Q_alignment(k*, X_test, Y_test) = 0 for data other than the set chosen to determine k*.

¹We actually mean badness, since we are minimizing this functional.

Example 2 (Regularized Risk Functional) If H is the Reproducing Kernel Hilbert Space (RKHS) associated with the kernel k, the regularized risk functionals have the form
R_reg(f, X, Y) = (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) + (λ/2) ‖f‖²_H   (5)

where ‖f‖²_H is the RKHS norm of f. By virtue of the representer theorem (see e.g. [4, 8]) we know that the minimizer over f ∈ H of (5) can be written as a kernel expansion. For a given loss l this leads to the quality functional

Q_regrisk(k, X, Y) = min_{α ∈ ℝ^m} [ (1/m) Σ_{i=1}^m l(x_i, y_i, (Kα)_i) + (λ/2) α⊤Kα ].   (6)

The minimizer of (6) is more difficult to find, since we have to carry out a double minimization over K and α. Note, however, that degenerate kernels can drive (6) arbitrarily close to zero: scaling up a kernel matrix that interpolates the labels shrinks the penalty term α⊤Kα while keeping the loss term at its minimum, so for a sufficiently rich kernel class Q_regrisk can be made arbitrarily small. Even if we disallow such scaling, a kernel concentrated entirely on the training points lets each coordinate of the expansion be chosen independently so as to minimize its own loss-plus-penalty term. The proof that this is the global minimizer of the quality functional is omitted for brevity.

Example 3 (Negative Log-Posterior) In Gaussian processes, this functional is similar to R_reg(f, X, Y), since it includes a regularization term (the negative log prior) and a loss term (the negative log-likelihood). In addition, it also includes the log-determinant of K, which measures the size of the space spanned by K. The quality functional is

Q_logpost(k, X, Y) = min_{f ∈ ℝ^m} [ −Σ_{i=1}^m ln p(y_i | f_i) + (1/2) f⊤K^{−1}f + (1/2) ln det K ].   (7)

Note that any K which does not have full rank will send (7) to −∞, and thus such cases need to be excluded. When we fix det K = 1 to exclude the above case, we can set

K = β yy⊤/‖y‖₂² + β^{−1/(m−1)} (I − yy⊤/‖y‖₂²),   (8)

which leads to det K = 1. Under the assumption that the minimum of −Σ_i ln p(y_i | f_i) with respect to f_i is attained at f_i = y_i, we can see that β → ∞ still leads to the overall minimum of Q_logpost.

Other examples, such as cross-validation, leave-one-out estimators, the Luckiness framework, and the Radius-Margin bound, also have empirical quality functionals which can be arbitrarily minimized. The above examples illustrate how many existing methods for assessing the quality of a kernel fit within the quality functional framework. We also saw that, given a rich enough class of kernels 𝒦, optimization of Q_emp over 𝒦 would result in a kernel that would be useless for prediction purposes. This is yet another example of the danger of optimizing too much: there is (still) no free lunch.

3 A Hyper Reproducing Kernel Hilbert Space

We now introduce a method for optimizing quality functionals in an effective way. The method we propose involves the introduction of a Reproducing Kernel Hilbert Space on the kernel k itself, a "Hyper"-RKHS. We begin with the basic properties of an RKHS (see Def 2.9 and Thm 4.2 in [8] and citations for more details).

Definition 3 (Reproducing Kernel Hilbert Space) Let 𝒳 be a nonempty set (often called the index set) and denote by H a Hilbert space of functions f: 𝒳 → ℝ. Then H is called a reproducing kernel Hilbert space endowed with the dot product ⟨·,·⟩ (and the norm ‖f‖ = √⟨f, f⟩) if there exists a function k: 𝒳 × 𝒳 → ℝ satisfying, for all x, x' ∈ 𝒳:

1. k has the reproducing property ⟨f, k(x, ·)⟩ = f(x) for all f ∈ H; in particular, ⟨k(x, ·), k(x', ·)⟩ = k(x, x').
2. k spans H, i.e. H = span{ k(x, ·) | x ∈ 𝒳 }, where the bar denotes the completion of the space.

The advantage of optimization in an RKHS is that under certain conditions the optimal solutions can be found as the linear combination of a finite number of basis functions, regardless of the dimensionality of the space H, as can be seen in the theorem below.

Theorem 4 (Representer Theorem) Denote by Ω: [0, ∞) → ℝ a strictly monotonic increasing function, by 𝒳 a set, and by l: (𝒳 × ℝ²)^m → ℝ ∪ {∞} an arbitrary loss function. Then each minimizer f ∈ H of the regularized risk

l((x₁, y₁, f(x₁)), ..., (x_m, y_m, f(x_m))) + Ω(‖f‖_H)   (9)

admits a representation of the form f(x) = Σ_{i=1}^m α_i k(x_i, x).

The above definition allows us to define an RKHS on kernels 𝒳 × 𝒳 → ℝ, simply by introducing X̄ := 𝒳 × 𝒳 and by treating kernels k as functions k: X̄ → ℝ:

Definition 5 (Hyper Reproducing Kernel Hilbert Space) Let 𝒳 be a nonempty set and let X̄ := 𝒳 × 𝒳 (the compounded index set). Then the Hilbert space H̄ of functions k: X̄ → ℝ, endowed with a dot product ⟨·,·⟩ (and the norm ‖k‖ = √⟨k, k⟩), is called a Hyper Reproducing Kernel Hilbert Space if there exists a hyperkernel k̄: X̄ × X̄ → ℝ with the following properties:

1. k̄ has the reproducing property ⟨k, k̄(x̄, ·)⟩ = k(x̄) for all k ∈ H̄; in particular, ⟨k̄(x̄, ·), k̄(x̄', ·)⟩ = k̄(x̄, x̄').
2. k̄ spans H̄, i.e. H̄ = span{ k̄(x̄, ·) | x̄ ∈ X̄ }.
3. For any fixed x̄ ∈ X̄, the hyperkernel k̄ is a kernel in its second argument, i.e. the function k(x, x') := k̄(x̄, (x, x')) with x, x' ∈ 𝒳 is a kernel.

What distinguishes H̄ from a normal RKHS is the particular form of its index set (X̄ = 𝒳²) and the additional condition on k̄ to be a kernel in its second argument for any fixed first argument. This condition somewhat limits the choice of possible kernels. On the other hand, it allows for simple optimization algorithms which consider kernels k ∈ H̄ lying in the convex cone of k̄. Analogously to the definition of the regularized risk functional (5), we define the regularized quality functional:

Q_reg(k, X, Y) := Q_emp(k, X, Y) + (λ_Q/2) ‖k‖²_H̄   (10)

where λ_Q > 0 is a regularization constant and ‖k‖²_H̄ denotes the RKHS norm in H̄. Minimization of Q_reg is less prone to overfitting than minimizing Q_emp, since the regularization term (λ_Q/2)‖k‖²_H̄ effectively controls the complexity of the class of kernels under consideration. Regularizers other than (λ_Q/2)‖k‖²_H̄ are also possible. The question arising immediately from (10) is how to minimize the regularized quality functional efficiently. In the following we show that the minimum can be found as a linear combination of hyperkernels.

Corollary 6 (Representer Theorem for Hyper-RKHS) Let H̄ be a hyper-RKHS and denote by Ω: [0, ∞) → ℝ a strictly monotonic increasing function, by 𝒳 a set, and by Q an arbitrary quality functional. Then each minimizer k ∈ H̄ of the regularized quality functional

Q(k, X, Y) + Ω(‖k‖_H̄)   (11)

admits a representation of the form k(x, x') = Σ_{i,j=1}^m β_{ij} k̄((x_i, x_j), (x, x')).

Proof All we need to do is rewrite (11) so that it satisfies the conditions of Theorem 4. Let x̄_{ij} := (x_i, x_j). Then Q(k, X, Y) has the properties of a loss function, as it only depends on k via its values at the x̄_{ij}. Furthermore, Ω(‖k‖_H̄) is an RKHS regularizer, so the representer theorem applies and the expansion of k follows.

This result shows that even though we are optimizing over an entire (potentially infinite dimensional) Hilbert space of kernels, we are able to find the optimal solution by choosing among a finite dimensional subspace. The dimension required (m²) is, not surprisingly, significantly larger than the number of kernels required in a kernel function expansion, which makes a direct approach possible only for small problems. However, sparse expansion techniques, such as [9, 8], can be used to make the problem tractable in practice.

4 Examples of Hyperkernels

Having introduced the theoretical basis of the Hyper-RKHS, we need to answer the question whether practically useful k̄ exist which satisfy the conditions of Definition 5. We address this question by giving a set of general recipes for building such kernels.

Example 4 (Power Series Construction) Denote by k a positive semidefinite kernel, and by g: ℝ → ℝ a function with positive Taylor expansion coefficients g(ξ) = Σ_{i=0}^∞ c_i ξ^i and convergence radius R. Then for k(x, x')² < R we have that

k̄((x, x'), (x'', x''')) := g(k(x, x') k(x'', x''')) = Σ_{i=0}^∞ c_i (k(x, x') k(x'', x'''))^i   (12)

is a hyperkernel: for any fixed (x, x'), the function k̄((x, x'), ·) is a sum of kernel functions, hence it is a kernel itself (since k^i is a kernel if k is). To show that k̄ is a kernel in its own right, note that ⟨k̄(x̄, ·), k̄(x̄', ·)⟩ = ⟨Φ(x̄), Φ(x̄')⟩, where Φ(x, x') =
(√c₀, √c₁ k(x, x'), √c₂ k(x, x')², ...).

Example 5 (Harmonic Hyperkernel) A special case of (12) is the harmonic hyperkernel. Denote by k a kernel with k: 𝒳 × 𝒳 → [0, 1] (e.g., RBF kernels satisfy this property), and set c_i := (1 − λ_h) λ_h^i for some 0 < λ_h < 1. Then we have

k̄((x, x'), (x'', x''')) = (1 − λ_h) Σ_{i=0}^∞ (λ_h k(x, x') k(x'', x'''))^i = (1 − λ_h) / (1 − λ_h k(x, x') k(x'', x''')).   (13)

Example 6 (Gaussian Harmonic Hyperkernel) For the Gaussian kernel k(x, x') = exp(−σ‖x − x'‖²),

k̄((x, x'), (x'', x''')) = (1 − λ_h) / (1 − λ_h exp(−σ(‖x − x'‖² + ‖x'' − x'''‖²))).   (14)

For λ_h → 1, k̄ converges towards a delta function; correspondingly, the expression ‖k‖_H̄ converges to the Frobenius norm of k on X × X.

Table 1: Examples of hyperkernels, listing functions g(ξ) with their power series expansions and radii of convergence.

We can find further hyperkernels simply by consulting tables of power series of functions; Table 1 contains a list of suitable expansions. Recall that expansions such as (12) were mainly chosen for computational convenience, in particular whenever it is not clear which particular class of kernels would be useful for the expansion.

Example 7 (Explicit Construction) If we know or have a reasonable guess as to which kernels could be potentially relevant (e.g., a range of scales of kernel width, polynomial degrees, etc.), we may begin with a set of candidate kernels, say k₁, ..., k_n, and define

k̄((x, x'), (x'', x''')) := Σ_{i=1}^n c_i k_i(x, x') k_i(x'', x'''),  with c_i ≥ 0.   (15)

Clearly k̄ is a hyperkernel, since ⟨k̄(x̄, ·), k̄(x̄', ·)⟩ = ⟨Φ(x̄), Φ(x̄')⟩, where Φ(x, x') = (√c₁ k₁(x, x'), ..., √c_n k_n(x, x')).

5 An Application: Minimization of the Regularized Risk

Recall that in the case of the Regularized Risk functional, the regularized quality optimization problem takes on the form

min_{k ∈ H̄} min_{f ∈ H} (1/m) Σ_{i=1}^m l(x_i, y_i, f(x_i)) + (λ/2) ‖f‖²_H + (λ_Q/2) ‖k‖²_H̄.   (16)

For f = Σ_i α_i k(x_i, ·), the second term ‖f‖²_H is a linear function of k. Given a convex loss function l, the regularized quality functional (16) is convex in k. The corresponding regularized quality functional is:
Q_regrisk,reg(k, X, Y) = Q_regrisk,emp(k, X, Y) + (λ_Q/2) ‖k‖²_H̄.   (17)

For fixed k, the problem can be formulated as a constrained minimization problem in f, and subsequently expressed in terms of the Lagrange multipliers α. However, this minimum depends on k, and for efficient minimization we would like to compute the derivatives with respect to k. The following lemma, an extension of a result in [3] whose proof we omit for brevity, tells us how. Consider the convex, parameterized problem

minimize_x f(x, θ)  subject to  c_i(x) ≤ 0 for all i;   (18)

Lemma 7 states that the derivative of its optimal value with respect to the parameter θ equals the partial derivative of the objective with respect to θ, evaluated at the minimizer x(θ).

Since the minimizer of (17) can be written as a kernel expansion (by the representer theorem for Hyper-RKHS), the optimal regularized quality functional can be written, using the soft margin loss l(x, y, f(x)) = max(0, 1 − y f(x)) and writing k̄_{ij,pq} := k̄((x_i, x_j), (x_p, x_q)), as

Q(β, α) = (1/m) Σ_{i=1}^m max(0, 1 − y_i Σ_{j=1}^m α_j K_{ij}) + (λ/2) Σ_{i,j=1}^m α_i α_j K_{ij} + (λ_Q/2) Σ_{i,j,p,q=1}^m β_{ij} β_{pq} k̄_{ij,pq},   (19)

where K_{ij} = Σ_{p,q=1}^m β_{pq} k̄_{pq,ij}. Minimization of (19) is achieved by alternating between minimization over α for fixed β (this is a quadratic optimization problem), and subsequently minimization over β (with β_{ij} ≥ 0 to ensure positivity of the kernel matrix) for fixed α.

Low Rank Approximation. While being finite in the number of parameters (despite the optimization over two possibly infinite dimensional Hilbert spaces H and H̄), (19) still presents a formidable optimization problem in practice (we have m² coefficients for β). For an explicit expansion of type (15) we can optimize in the expansion coefficients c_i k_i(x, x') directly, which means that we simply have a quality functional with an ℓ₂ penalty on the expansion coefficients. Such an approach is recommended if there are few terms in (15). In the general case, we resort to a low-rank approximation, as described in [9, 8]. This means that we pick from the m² terms k̄(x̄_{ij}, ·) a small fraction which approximate k on X × X sufficiently well.

6 Experimental Results and Summary

Experimental Setup. To test our claims of kernel adaptation via regularized quality functionals we performed preliminary tests on datasets from the UCI repository (Pima, Ionosphere, Wisconsin diagnostic breast cancer) and the USPS database of handwritten digits ('6' vs. '9'). The datasets were split into
Using the same non-optimized parameters for different data sets we achieved results comparable to other recent work on classification such as boosting, optimized SVMs, and kernel target alignment [10, 11, 7] (note that we use a much smaller part of the data for training: X +-b 9 +-b Best in Tuned Data(size) Train Test Train Test [10, 11] SVM pima(768) 25.2 2.0 26.2 3.3 22.2 1.4 23.2 2.0 23.5 22.9 2.0 ionosph(351) 13.4 2.0 16.5 3.4 10.9 1.5 13.4 2.4 6.2 6.1 1.9 wdbc(569) 5.7 0.8 5.7 1.3 2.1 0.6 2.7 1.0 3.2 2.5 0.9 usps(1424) 2.1 3.4 1.5 2.8 NA 2.5 Table 2: Training and test error in percent only
rather than ). Results based on 9M+-b are comparable to hand tuned SVMs (right most column), except for the ionosphere data. We suspect that this is due to the small training sample. Summary and Outlook The regularized quality functional allows the systematic solution of problems associated with the choice of a kernel. Quality criteria that can be used include target alignment, regularized risk and the log posterior. The regularization implicit in our approach allows the control of overfitting that occurs if one optimizes over a too large a choice of kernels. A very promising aspect of the current work is that it opens the way to theoretical analyses of the price one pays by optimizing over a larger set N of kernels. Current and future research is devoted to working through this analysis and subsequently developing methods for the design of good hyperkernels. Acknowledgements This work was supported by a grant of the Australian Research Council. The authors thank Grace Wahba for helpful comments and suggestions. References [1] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. In ICML. Morgan Kaufmann, 2002. [2] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to linear prediction and beyond. In M. I. Jordan, editor, Learning and Inference in Graphical Models. Kluwer Academic, 1998. [3] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing kernel parameters for support vector machines. Machine Learning, 2002. Forthcoming. [4] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990. [5] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In Advances in Neural Information Processing Systems 15, 2002. In press. [6] O. Bousquet and D. Herrmann. On the complexity of learning the kernel matrix. 
In Advances in Neural Information Processing Systems 15, 2002. In press. [7] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment. Technical Report NC2-TR-2001-087, NeuroCOLT, http://www.neurocolt.com, 2001. [8] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002. [9] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representation. Technical report, IBM Watson Research Center, New York, 2000. [10] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156. Morgan Kaufmann Publishers, 1996. [11] G. Rätsch, T. Onoda, and K. R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001.
|
2002
|
155
|
2,164
|
Prediction and Semantic Association Thomas L. Griffiths & Mark Steyvers Department of Psychology Stanford University, Stanford, CA 94305-2130 {gruffydd,msteyver}@psych.stanford.edu Abstract We explore the consequences of viewing semantic association as the result of attempting to predict the concepts likely to arise in a particular context. We argue that the success of existing accounts of semantic representation comes as a result of indirectly addressing this problem, and show that a closer correspondence to human data can be obtained by taking a probabilistic approach that explicitly models the generative structure of language. 1 Introduction Many cognitive capacities, such as memory and categorization, can be analyzed as systems for efficiently predicting aspects of an organism's environment [1]. Previously, such analyses have been concerned with memory for facts or the properties of objects, where the prediction task involves identifying when those facts might be needed again, or what properties novel objects might possess. However, one of the most challenging tasks people face is linguistic communication. Engaging in conversation or reading a passage of text requires retrieval of a variety of concepts from memory in response to a stream of information. This retrieval task can be facilitated by predicting which concepts are likely to be needed from their context, having efficiently abstracted and stored the cues that support these predictions. In this paper, we examine how understanding the problem of predicting words from their context can provide insight into human semantic association, exploring the hypothesis that the association between words is at least partially affected by their statistical relationships. Several researchers have argued that semantic association can be captured using high-dimensional spatial representations, with the most prominent such approach being Latent Semantic Analysis (LSA) [5]. 
We will describe this procedure, which indirectly addresses the prediction problem. We will then suggest an alternative approach which explicitly models the way language is generated and show that this approach provides a better account of human word association data than LSA, although the two approaches are closely related. The great promise of this approach is that it illustrates how we might begin to relax some of the strong assumptions about language made by many corpus-based methods. We will provide an example of this, showing results from a generative model that incorporates both sequential and contextual information. 2 Latent Semantic Analysis Latent Semantic Analysis addresses the prediction problem by capturing similarity in word usage: seeing a word suggests that we should expect to see other words with similar usage patterns. Given a corpus containing W words and D documents, the input to LSA is a W x D word-document co-occurrence matrix F in which f_{wd} corresponds to the frequency with which word w occurred in document d. This matrix is transformed to a matrix G via some function involving the term frequency f_{wd} and its total frequency across documents f_{w·}. Many applications of LSA in cognitive science use the transformation

H_w = -\frac{1}{\log D} \sum_{d=1}^{D} \frac{f_{wd}}{f_{w\cdot}} \log \frac{f_{wd}}{f_{w\cdot}}, \qquad g_{wd} = \log(f_{wd} + 1)\,(1 - H_w) \qquad (1)

where H_w is the normalized entropy of the distribution over documents for each word. Singular value decomposition (SVD) is applied to G to extract a lower dimensional linear subspace that captures much of the variation in usage across words. The output of LSA is a vector for each word, locating it in the derived subspace. The association between two words is typically assessed using the cosine of the angle between their vectors, a measure that appears to produce psychologically accurate results on a variety of tasks [5].
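As a concrete illustration, the log-entropy transform of Equation 1 followed by the SVD step can be sketched in NumPy as below. This is an illustrative reconstruction, not the authors' code; the function names, variable names, and toy count matrix are our own.

```python
import numpy as np

def lsa_vectors(F, k):
    """Log-entropy transform (Equation 1) followed by a rank-k SVD.
    F is a W x D word-document count matrix; returns a k-dim vector per word."""
    W, D = F.shape
    fw = F.sum(axis=1, keepdims=True)            # f_{w.}: total frequency of each word
    P = np.divide(F, fw, out=np.zeros_like(F), where=fw > 0)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    H = -(P * logP).sum(axis=1) / np.log(D)      # normalized entropy H_w
    G = np.log(F + 1.0) * (1.0 - H)[:, None]     # g_wd = log(f_wd + 1)(1 - H_w)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U[:, :k] * s[:k]                      # word locations in the subspace

def cosine(x, y):
    """Association between two words: cosine of the angle between their vectors."""
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
```

The association between any two words is then just the cosine of their two rows of the returned matrix.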
For the tests presented in this paper, we ran LSA on a subset of the TASA corpus, which contains excerpts from texts encountered by children between first grade and the first year of college. Our subset used all D = 37651 documents, and the W = 26414 words that occurred at least ten times in the whole corpus, with stop words removed. From this we extracted a 500 dimensional representation, which we will use throughout the paper.¹ 3 The topic model Latent Semantic Analysis gives results that seem consistent with human judgments and extracts information relevant to predicting words from their contexts, although it was not explicitly designed with prediction in mind. This relationship suggests that a closer correspondence to human data might be obtained by directly attempting to solve the prediction task. In this section, we outline an alternative approach that involves learning a probabilistic model of the way language is generated. One generative model that has been used to outperform LSA on information retrieval tasks views documents as being composed of sets of topics [2, 4]. If we assume that the words that occur in different documents are drawn from T topics, where each topic is a probability distribution over words, then we can model the distribution over words in any one document as a mixture of those topics

P(w_i) = \sum_{j=1}^{T} P(w_i | z_i = j)\, P(z_i = j) \qquad (2)

where z_i is a latent variable indicating the topic from which the ith word was drawn and P(w_i | z_i = j) is the probability of the ith word under the jth topic. The words likely to be used in a new context can be determined by estimating the distribution over topics for that context, corresponding to P(z_i). Intuitively, P(w | z = j) indicates which words are important to a topic, while P(z) is the prevalence of those topics within a document. For example, imagine a world where the only topics of conversation are love and research.
We could then express the probability distribution over words with two topics, one relating to love and the other to research. (¹The dimensionality of the representation is an important parameter for both models in this paper. LSA performed best on the word association task with around 500 dimensions, so we used the same dimensionality for the topic model.) The content of the topics would be reflected in P(w | z = j): the love topic would give high probability to words like JOY, PLEASURE, or HEART, while the research topic would give high probability to words like SCIENCE, MATHEMATICS, or EXPERIMENT. Whether a particular conversation concerns love, research, or the love of research would depend upon its distribution over topics, P(z), which determines how these topics are mixed together in forming documents. Having defined a generative model, learning topics becomes a statistical problem. The data consist of words w = {w_1, ..., w_n}, where each w_i belongs to some document d_i, as in a word-document co-occurrence matrix. For each document we have a multinomial distribution over the T topics, with parameters θ^(d), so for a word in document d_i, P(z_i = j) = θ_j^(d_i). The jth topic is represented by a multinomial distribution over the W words in the vocabulary, with parameters φ^(j), so P(w_i | z_i = j) = φ_{w_i}^(j). To make predictions about new documents, we need to assume a prior distribution on the parameters θ. Existing parameter estimation algorithms make different assumptions about θ, with varying results [2, 4]. Here, we present a novel approach to inference in this model, using Markov chain Monte Carlo with a symmetric Dirichlet(α) prior on θ^(d_i) for all documents and a symmetric Dirichlet(β) prior on φ^(j) for all topics.
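To make Equation 2 concrete, here is a small numeric check using the two-topic love/research world; the vocabulary and all probability values below are invented for illustration.

```python
# Toy instance of Equation 2: P(w) = sum_j P(w | z = j) P(z = j).
vocab = ["joy", "heart", "science", "experiment"]
phi = {                                  # P(w | z = j); each row sums to 1
    "love":     [0.50, 0.40, 0.05, 0.05],
    "research": [0.05, 0.05, 0.50, 0.40],
}
theta = {"love": 0.8, "research": 0.2}   # P(z = j) for one document

# Mix the topics to get the document's distribution over words.
p_w = [sum(phi[j][i] * theta[j] for j in phi) for i in range(len(vocab))]
```

With this document leaning heavily toward the love topic, "joy" comes out most probable (0.41) and "experiment" least probable (0.12), and the mixture is still a proper distribution.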
In this approach we do not need to explicitly represent the model parameters: we can integrate out θ and φ, defining the model simply in terms of the assignments of words to topics indicated by the z_i. Markov chain Monte Carlo is a procedure for obtaining samples from complicated probability distributions, allowing a Markov chain to converge to the target distribution and then drawing samples from the states of that chain (see [3]). We use Gibbs sampling, where each state is an assignment of values to the variables being sampled, and the next state is reached by sequentially sampling all variables from their distribution when conditioned on the current values of all other variables and the data. We will sample only the assignments of words to topics, z_i. The conditional posterior distribution for z_i is given by

P(z_i = j | z_{-i}, w) \propto \frac{n^{(w_i)}_{-i,j} + \beta}{n^{(\cdot)}_{-i,j} + W\beta} \cdot \frac{n^{(d_i)}_{-i,j} + \alpha}{n^{(d_i)}_{-i,\cdot} + T\alpha} \qquad (3)

where z_{-i} is the assignment of all z_k such that k ≠ i, n^{(w_i)}_{-i,j} is the number of words assigned to topic j that are the same as w_i, n^{(·)}_{-i,j} is the total number of words assigned to topic j, n^{(d_i)}_{-i,j} is the number of words from document d_i assigned to topic j, and n^{(d_i)}_{-i,·} is the total number of words in document d_i, all not counting the assignment of the current word w_i. α and β are free parameters that determine how heavily these distributions are smoothed. We applied this algorithm to our subset of the TASA corpus, which contains n = 5628867 word tokens. Setting α = 0.1, β = 0.01 we obtained 100 samples of 500 topics, with 10 samples from each of 10 runs with a burn-in of 1000 iterations and a lag of 100 iterations between samples.² Each sample consists of an assignment of every word token to a topic, giving a value to each z_i. A subset of the 500 topics found in a single sample are shown in Table 1. For each sample we can compute (²Random numbers were generated with the Mersenne Twister, which has an extremely deep period [6].)
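A minimal collapsed Gibbs sampler for this model can be sketched as below. This is an illustrative reconstruction of the update in Equation 3, not the authors' implementation; note that the document-length denominator n^{(d_i)}_{-i,·} + Tα is constant across topics j and therefore cancels in the normalization, so it is dropped.

```python
import random

def gibbs_lda(words, docs, W, T, alpha, beta, iters, seed=0):
    """Collapsed Gibbs sampling for the topic model.
    words[i], docs[i] give the word id and document id of token i."""
    rng = random.Random(seed)
    D = max(docs) + 1
    z = [rng.randrange(T) for _ in words]        # random initial assignments
    nwt = [[0] * T for _ in range(W)]            # word-topic counts n_j^(w)
    ndt = [[0] * T for _ in range(D)]            # document-topic counts n_j^(d)
    nt = [0] * T                                 # tokens per topic n_j^(.)
    for i, (w, d) in enumerate(zip(words, docs)):
        nwt[w][z[i]] += 1; ndt[d][z[i]] += 1; nt[z[i]] += 1
    for _ in range(iters):
        for i, (w, d) in enumerate(zip(words, docs)):
            j = z[i]                             # remove token i from the counts
            nwt[w][j] -= 1; ndt[d][j] -= 1; nt[j] -= 1
            # Equation 3, up to a factor constant in the topic index
            p = [(nwt[w][k] + beta) / (nt[k] + W * beta) * (ndt[d][k] + alpha)
                 for k in range(T)]
            r, acc = rng.uniform(0, sum(p)), 0.0
            for k in range(T):                   # draw the new topic
                acc += p[k]
                if r <= acc:
                    break
            z[i] = k
            nwt[w][k] += 1; ndt[d][k] += 1; nt[k] += 1
    return z, nwt, nt
```

Each pass over the tokens is one Gibbs sweep; after burn-in, the state z is one sample of the assignments of words to topics.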
(For each run, the initial state of the Markov chain was found using an on-line version of Equation 3.)

FEEL FEELINGS FEELING ANGRY WAY THINK SHOW FEELS PEOPLE FRIENDS THINGS MIGHT HELP HAPPY FELT LOVE ANGER BEING WAYS FEAR
MUSIC PLAY DANCE PLAYS STAGE PLAYED BAND AUDIENCE MUSICAL DANCING RHYTHM PLAYING THEATER DRUM ACTORS SHOW BALLET ACTOR DRAMA SONG
BALL GAME TEAM PLAY BASEBALL FOOTBALL PLAYERS GAMES PLAYING FIELD PLAYED PLAYER COACH BASKETBALL SPORTS HIT BAT TENNIS TEAMS SOCCER
SCIENCE STUDY SCIENTISTS SCIENTIFIC KNOWLEDGE WORK CHEMISTRY RESEARCH BIOLOGY MATHEMATICS LABORATORY STUDYING SCIENTIST PHYSICS FIELD STUDIES UNDERSTAND STUDIED SCIENCES MANY
WORKERS WORK LABOR JOBS WORKING WORKER WAGES FACTORY JOB WAGE SKILLED PAID CONDITIONS PAY FORCE MANY HOURS EMPLOYMENT EMPLOYED EMPLOYERS
FORCE FORCES MOTION BODY GRAVITY MASS PULL NEWTON OBJECT LAW DIRECTION MOVING REST FALL ACTING MOMENTUM DISTANCE GRAVITATIONAL PUSH VELOCITY

Table 1: Each column shows the 20 most probable words in one of the 500 topics obtained from a single sample. The organization of the columns and use of boldface displays the way in which polysemy is captured by the model.

the posterior predictive distribution (and posterior mean for φ^(j)):

P(w | z = j, z, w) = \int P(w | z = j, \phi^{(j)})\, P(\phi^{(j)} | z, w)\, d\phi^{(j)} = \frac{n^{(w)}_j + \beta}{n^{(\cdot)}_j + W\beta} \qquad (4)

4 Predicting word association We used both LSA and the topic model to predict the association between pairs of words, comparing these results with human word association norms collected by Nelson, McEvoy and Schreiber [7]. These word association norms were established by presenting a large number of participants with a cue word and asking them to name an associated word in response. A total of 4544 of the words in these norms appear in the set of 26414 taken from the TASA corpus. 4.1 Latent Semantic Analysis In LSA, the association between two words is usually measured using the cosine of the angle between their vectors.
We ordered the associates of each word in the norms by their frequencies, making the first associate the word most commonly given as a response to the cue. For example, the first associate of NEURON is BRAIN. We evaluated the cosine between each word and the other 4543 words in the norms, and then computed the rank of the cosine of each of the first ten associates, or all of the associates for words with fewer than ten. The results are shown in Figure 1. Small ranks indicate better performance, with a rank of one meaning that the target word had the highest cosine. The median rank of the first associate was 32, and LSA correctly predicted the first associate for 507 of the 4544 words.

[Figure 1: Performance of different methods of prediction on the word association task (LSA cosine, LSA inner product, and the topic model, plotted by associate number). Error bars show one standard error, estimated with 1000 bootstrap samples.]

4.2 The topic model The probabilistic nature of the topic model makes it easy to predict the words likely to occur in a particular context. If we have seen word w_1 in a document, then we can determine the probability that word w_2 occurs in that document by computing P(w_2 | w_1). The generative model allows documents to contain multiple topics, which is extremely important to capturing the complexity of large collections of words and computing the probability of complete documents. However, when comparing individual words it is more effective to assume that they both come from a single topic. This assumption gives us

P_1(w_2 | w_1) \propto \sum_{z} P(w_2 | z)\, P(w_1 | z)\, P(z) \qquad (5)

where we use Equation 4 for P(w | z) and P(z) is uniform, consistent with the symmetric prior on θ, and the subscript in P_1(w_2 | w_1) indicates the restriction to a single topic. This estimate can be computed for each sample separately, and an overall estimate obtained by averaging over samples.
We computed P_1(w_2 | w_1) for the 4544 words in the norms, and then assessed the rank of the associates in the resulting distribution using the same procedure as for LSA. The results are shown in Figure 1. The median rank for the first associate was 32, with 585 of the 4544 first associates exactly correct. The probabilistic model performed better than LSA, with the improved performance becoming more apparent for the later associates. 4.3 Discussion The central problem in modeling semantic association is capturing the interaction between word frequency and similarity of word usage. Word frequency is an important factor in a variety of cognitive tasks, and one reason for its importance is its predictive utility. A higher observed frequency means that a word should be predicted to occur more often. However, this effect of frequency should be tempered by the relationship between a word and its semantic context. The success of the topic model is a consequence of naturally combining frequency information with semantic similarity: when a word is very diagnostic of a small number of topics, semantic context is used in prediction. Otherwise, word frequency plays a larger role. The effect of word frequency in the topic model can be seen in the rank-order correlation of the predicted ranks of the first associates with the ranks predicted by word frequency alone, which is ρ = 0.49. In contrast, the cosine is used in LSA because it explicitly removes the effect of word frequency, with the corresponding correlation being ρ = -0.01. The cosine is purely a measure of semantic similarity, which is useful in situations where word frequency is misleading, such as in tests of English fluency or other linguistic tasks, but not necessarily consistent with human performance. This choice of measure has its basis in the origins of LSA in information retrieval, but other measures that do incorporate word frequency have been used for modeling psychological data.
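The single-topic conditional used in these comparisons can be computed directly from one sample's topic-word counts. The sketch below is our own (the paper averages the estimate over samples; the counts in the usage example are toy values): φ is estimated from the counts as in Equation 4, and the uniform P(z) cancels in the normalization.

```python
def p1(w2, w1, nwt, nt, beta, W):
    """Single-topic conditional P1(w2 | w1) with uniform P(z).
    phi_w^(j) is estimated as (n_j^(w) + beta) / (n_j + W*beta), as in Eq. 4."""
    T = len(nt)
    phi = lambda w, j: (nwt[w][j] + beta) / (nt[j] + W * beta)
    # joint[w] is proportional to sum_j phi_w^(j) phi_{w1}^(j);
    # normalizing over all candidate words w gives the conditional.
    joint = [sum(phi(w, j) * phi(w1, j) for j in range(T)) for w in range(W)]
    return joint[w2] / sum(joint)
```

Averaging this quantity over samples would give the overall estimate used to rank the associates of each cue word.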
We consider one such measure in the next section. 5 Relating LSA and the topic model The decomposition of a word-document co-occurrence matrix provided by the topic model can be written in a matrix form similar to that of LSA. Given a word-document co-occurrence matrix F, we can convert the columns into empirical estimates of the distribution over words in each document by dividing each column by its sum. Calling this matrix P, the topic model approximates it with the nonnegative matrix factorization P ≈ φθ, where column j of φ gives φ^(j), and column d of θ gives θ^(d). The inner product matrix PP^T is proportional to the empirical estimate of the joint distribution over words P(w_1, w_2). We can write PP^T ≈ φθθ^Tφ^T, corresponding to P(w_1, w_2) = \sum_{z_1, z_2} P(w_1 | z_1) P(w_2 | z_2) P(z_1, z_2), with θθ^T an empirical estimate of P(z_1, z_2). The theoretical distribution for P(z_1, z_2) is proportional to I + α, where I is the identity matrix, so θθ^T should be close to diagonal. The single topic assumption removes the off-diagonal elements, replacing θθ^T with I to give P_1(w_1, w_2) ∝ φφ^T. By comparison, LSA transforms F to a matrix G via Equation 1, then the SVD gives G ≈ UDV^T for some low-rank diagonal D. The locations of the words along the extracted dimensions are X = UD. If the column sums do not vary extensively, the empirical estimate of the joint distribution over words specified by the entries in G will be approximately P(w_1, w_2) ∝ GG^T. The properties of the SVD guarantee that XX^T, the matrix of inner products among the word vectors, is the best low-rank approximation to GG^T in terms of squared error. The transformations in Equation 1 are intended to reduce the effects of word frequency in the resulting representation, making XX^T more similar to φφ^T. We used the inner product between word vectors to predict the word association norms, exactly as for the cosine. The results are shown in Figure 1.
The inner product initially shows worse performance than the cosine, with a median rank of 34 for the first associate and 500 exactly correct, but performs better for later associates. The rank-order correlation with the predictions of word frequency for the first associate was ρ = 0.46, similar to that for the topic model. The rank-order correlation between the ranks given by the inner product and the topic model was ρ = 0.81, while the cosine and the topic model correlate at ρ = 0.69. The inner product and P_1(w_2 | w_1) in the topic model seem to give quite similar results, despite being obtained by very different procedures. This similarity is emphasized by choosing to assess the models with separate ranks for each cue word, since this measure does not discriminate between joint and conditional probabilities. While the inner product is related to the joint probability of w_1 and w_2, P_1(w_2 | w_1) is a conditional probability and thus allows reasonable comparisons of the probability of w_2 across choices of w_1, as well as having properties like asymmetry that are exhibited by word association.
"syntax":
HE YOU THEY I SHE WE IT PEOPLE EVERYONE OTHERS SCIENTISTS SOMEONE WHO NOBODY ONE SOMETHING ANYONE EVERYBODY SOME THEN
ON AT INTO FROM WITH THROUGH OVER AROUND AGAINST ACROSS UPON TOWARD UNDER ALONG NEAR BEHIND OFF ABOVE DOWN BEFORE
BE MAKE GET HAVE GO TAKE DO FIND USE SEE HELP KEEP GIVE LOOK COME WORK MOVE LIVE EAT BECOME
SAID ASKED THOUGHT TOLD SAYS MEANS CALLED CRIED SHOWS ANSWERED TELLS REPLIED SHOUTED EXPLAINED LAUGHED MEANT WROTE SHOWED BELIEVED WHISPERED

"semantics":
MAP NORTH EARTH SOUTH POLE MAPS EQUATOR WEST LINES EAST AUSTRALIA GLOBE POLES HEMISPHERE LATITUDE PLACES LAND WORLD COMPASS CONTINENTS
DOCTOR PATIENT HEALTH HOSPITAL MEDICAL CARE PATIENTS NURSE DOCTORS MEDICINE NURSING TREATMENT NURSES PHYSICIAN HOSPITALS DR SICK ASSISTANT EMERGENCY PRACTICE

Table 2: Each column shows the 20 most probable words in one of the 48 "syntactic" states of the hidden Markov model (four columns on the left) or one of the 150 "semantic" topics (two columns on the right) obtained from a single sample.

6 Exploring more complex generative models The topic model, which explicitly addresses the problem of predicting words from their contexts, seems to show a closer correspondence to human word association than LSA. A major consequence of this analysis is the possibility that we may be able to gain insight into some of the associative aspects of human semantic memory by exploring statistical solutions to this prediction problem. In particular, it may be possible to develop more sophisticated generative models of language that can capture some of the important linguistic distinctions that influence our processing of words. The close relationship between LSA and the topic model makes the latter a good starting point for an exploration of semantic association, but perhaps the greatest potential of the statistical approach is that it illustrates how we might go about relaxing some of the strong assumptions made by both of these models.
One such assumption is the treatment of a document as a "bag of words", in which sequential information is irrelevant. Semantic information is likely to influence only a small subset of the words used in a particular context, with the majority of the words playing functional syntactic roles that are consistent across contexts. Syntax is just as important as semantics for predicting words, and may be an effective means of deciding if a word is context-dependent. In a preliminary exploration of the consequences of combining syntax and semantics in a generative model for language, we applied a simple model combining the syntactic structure of a hidden Markov model (HMM) with the semantic structure of the topic model. Specifically, we used a third-order HMM with 50 states in which one state marked the start or end of a sentence, 48 states each emitted words from a different multinomial distribution, and one state emitted words from a document-dependent multinomial distribution corresponding to the topic model with T = 150. We estimated parameters for this model using Gibbs sampling, integrating out the parameters for both the HMM and the topic model and sampling a state and a topic for each of the 11821091 word tokens in the corpus.³ Some of the state and topic distributions from a single sample after 1000 iterations are shown in Table 2. The states of the HMM accurately picked out many of the functional classes of English syntax, while the state corresponding to the topic model was used to capture the context-specific distributions over nouns. (³This larger number is a result of including low-frequency and stop words.) Combining the topic model with the HMM seems to have advantages for both: no function words are absorbed into the topics, and the HMM does not need to deal with the context-specific variation in nouns.
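The composite generative process can be illustrated with a toy first-order version: an HMM walks over states, and one designated state defers to the document's topic mixture instead of a state-specific distribution. All distributions in the usage example are invented, and the paper's actual model is third-order with 50 states and T = 150.

```python
import random

def generate(trans, emit, topic_words, theta, start, sem_state, length, seed=0):
    """Sample `length` words; state `sem_state` emits from a document-level
    mixture of topics instead of a state-specific distribution."""
    rng = random.Random(seed)
    def draw(dist):                       # sample a key from a {key: prob} dict
        r, acc = rng.random(), 0.0
        for item, p in dist.items():
            acc += p
            if r <= acc:
                return item
        return item                       # guard against rounding shortfall
    state, out = start, []
    for _ in range(length):
        state = draw(trans[state])        # syntactic transition
        if state == sem_state:
            out.append(draw(topic_words[draw(theta)]))   # semantic emission
        else:
            out.append(draw(emit[state]))                # syntactic emission
    return out
```

Because function words come from the state-specific distributions and content words from the topics, the two components divide the labor exactly as described above.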
The model also seems to do a good job of generating topic-specific text - we can clamp the distribution over topics to pick out those of interest, and then use the model to generate phrases. For example, we can generate phrases on the topics of research ("the chief wicked selection of research in the big months" , "astronomy peered upon your scientist's door", or "anatomy established with principles expected in biology"), language ("he expressly wanted that better vowel"), and the law ("but the crime had been severely polite and confused" , or "custody on enforcement rights is plentiful"). While these phrases are somewhat nonsensical, they are certainly topical. 7 Conclusion Viewing memory and categorization as systems involved in the efficient prediction of an organism's environment can provide insight into these cognitive capacities. Likewise, it is possible to learn about human semantic association by considering the problem of predicting words from their contexts. Latent Semantic Analysis addresses this problem, and provides a good account of human semantic association. Here, we have shown that a closer correspondence to human data can be obtained by taking a probabilistic approach that explicitly models the generative structure of language, consistent with the hypothesis that the association between words reflects their probabilistic relationships. The great promise of this approach is the potential to explore how more sophisticated statistical models of language, such as those incorporating both syntax and semantics, might help us understand cognition. Acknowledgments This work was generously supported by the NTT Communications Sciences Laboratories. We used Mersenne Twister code written by Shawn Cokus, and are grateful to Touchstone Applied Science Associates for making available the TASA corpus, and to Josh Tenenbaum for extensive discussions on this topic. References [1] J. R. Anderson. The Adaptive Character of Thought. Erlbaum, Hillsdale, NJ, 1990. 
[2] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. In T. G. Dietterich, S. Becker, and Z. Ghahramani, eds, Advances in Neural Information Processing Systems 14, 2002. [3] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, eds. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk, 1996. [4] T. Hofmann. Probabilistic Latent Semantic Indexing. In Proceedings of the Twenty-Second Annual International SIGIR Conference, 1999. [5] T. K. Landauer and S. T. Dumais. A solution to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240, 1997. [6] M. Matsumoto and T. Nishimura. Mersenne twister: A 623-dimensionally equidistributed uniform pseudorandom number generator. ACM Transactions on Modeling and Computer Simulation, 8:3–30, 1998. [7] D. L. Nelson, C. L. McEvoy, and T. A. Schreiber. The University of South Florida word association norms. http://www.usf.edu/FreeAssociation, 1999.
|
2002
|
156
|
2,165
|
A Maximum Entropy Approach To Collaborative Filtering in Dynamic, Sparse, High-Dimensional Domains Dmitry Y. Pavlov NEC Laboratories America 4 Independence Way Princeton, NJ 08540, dpavlov@nec-labs.com David M. Pennock Overture Services, Inc. 74 N. Pasadena Ave., 3rd floor Pasadena, CA 91103, david.pennock@overture.com Abstract We develop a maximum entropy (maxent) approach to generating recommendations in the context of a user’s current navigation stream, suitable for environments where data is sparse, high-dimensional, and dynamic— conditions typical of many recommendation applications. We address sparsity and dimensionality reduction by first clustering items based on user access patterns so as to attempt to minimize the apriori probability that recommendations will cross cluster boundaries and then recommending only within clusters. We address the inherent dynamic nature of the problem by explicitly modeling the data as a time series; we show how this representational expressivity fits naturally into a maxent framework. We conduct experiments on data from ResearchIndex, a popular online repository of over 470,000 computer science documents. We show that our maxent formulation outperforms several competing algorithms in offline tests simulating the recommendation of documents to ResearchIndex users. 1 Introduction Recommender systems attempt to automate the process of “word of mouth” recommendations within a community. Typical application environments are dynamic in many respects: users come and go, users’ preferences and goals change, items are added and removed, and user navigation itself is a dynamic process. Recommendation domains are also often high dimensional and sparse, with tens or hundreds of thousands of items, among which very few are known to any particular user.
Consider, for instance, the problem of generating recommendations within ResearchIndex (a.k.a. CiteSeer),¹ an online digital library of computer science papers, receiving thousands of user accesses per hour. The site automatically locates computer science papers found on the Web, indexes their full text, allows browsing via the literature citation graph, and isolates the text around citations, among other services [8]. (¹http://www.researchindex.com) The archive contains over 470,000 documents including the full text of each document, citation links between documents, and a wealth of user access data. With so many documents, and only seven accesses per user on average, the user-document data matrix is exceedingly sparse and thus challenging to model. In this paper, we work with the ResearchIndex data, since it is an interesting application domain, and is typical of many recommendation application areas [14]. There are two conceptually different ways of making recommendations. A content filtering approach is to recommend solely based on the features of a document (e.g., showing documents written by the same author(s), or textually similar documents to the current one). These methods have been shown to be good predictors [3]. Another possibility is to perform collaborative filtering [13] by assessing the similarities between the documents requested by the current user and the users who interacted with ResearchIndex in the past. Once the users with browsing histories similar to that of a given user are identified, an assumption is made that the future browsing patterns will be similar as well, and the prediction is made accordingly. Common measures of similarity between users include the Pearson correlation coefficient [13], mean squared error [16], and vector similarity [1]. More recent work includes application of statistical machine learning techniques, such as Bayesian networks [1], dependency networks [6], singular value decomposition [14] and latent class models [7, 12].
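As a concrete example of the memory-based similarity measures just listed, the Pearson correlation coefficient between two users over their co-rated items can be sketched as follows. This is a generic sketch with invented data, not ResearchIndex code.

```python
from math import sqrt

def pearson(ratings_a, ratings_b):
    """Pearson correlation between two users over items both have rated.
    ratings_* map an item id to a rating (or access count)."""
    common = sorted(set(ratings_a) & set(ratings_b))
    if len(common) < 2:
        return 0.0                      # not enough overlap to correlate
    a = [ratings_a[i] for i in common]
    b = [ratings_b[i] for i in common]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0
```

The most similar users under this measure would then supply the predicted browsing patterns for the active user.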
Most of these recommendation algorithms are context and order independent: that is, the rank of recommendations does not depend on the context of the user’s current navigation or on recency effects (past viewed items receive as much weight as recently viewed items). Currently, ResearchIndex mostly employs fairly simple content-based recommenders. Our objective was to design a superior (or at least complementary) model-based recommendation algorithm that (1) is tuned for a particular user at hand, and (2) takes into account the identity of the currently viewed document, so as not to lead the user too far astray from his or her current search goal. To overcome the sparsity and high dimensionality of the data, we cluster the documents with an objective of maximizing the likelihood that recommendable items co-occur in the same cluster. By marrying the clustering technique with the end goal of recommendation, our approach appears to do a good job at maintaining high recall (sensitivity). Similar ideas in the context of maxent were proposed recently by Goodman in [5]. We explicitly model time: each user is associated with a set of sessions, and each session is modeled as a time sequence of document accesses. We present a maxent model that effectively estimates the probability of the next visited document ID (DID) given the most recently visited DID (“bigrams”) and past indicative DIDs (“triggers”). To our knowledge, this is the first application of maxent for collaborative filtering, and one of the few published formulations that makes accurate recommendations in the context of a dynamic user session [3, 15]. We perform offline empirical tests of our recommender and compare it to competing models. The comparison shows our method is quite accurate, outperforming several other less-expressive models. The rest of the paper is organized as follows. In Section 2, we describe the log data from ResearchIndex and how we preprocessed it.
Section 3 presents the greedy algorithm for clustering the documents and discusses how the clustering helps to decompose the original prediction task. In Section 4, we give a high-level description of our maxent model and the features we used for its learning. Experimental results and comparisons with other models are discussed in Section 5. In Section 6, we draw conclusions and describe directions for future work. 2 Preprocessing the ResearchIndex data Each document indexed in ResearchIndex is assigned a unique document ID (DID). Whenever a user accesses the site with a cookie-enabled browser, (s)he is identified as a new or returning user and all activity is recorded on the server side with a unique user ID (UID) and a time stamp (TID). We obtained a log file that recorded approximately 3 months' worth of ResearchIndex data that can roughly be viewed as a series of (UID, DID, TID) requests. In the first processing step, we aggregated the requests by UID and broke them into sessions. For a fixed UID, a session is defined as a sequence of document requests, with no two consecutive requests more than a fixed timeout apart. In our experiments we chose a timeout of 300 seconds
a timeout of 300 seconds, so that if a user was inactive for more than 300 seconds, his next request was considered to mark the start of a new session. The next processing step applied several heuristics: identifying and discarding the sessions belonging to robots (they obviously contaminate the browsing patterns of human users), collapsing consecutive accesses to the same DID into a single instance of this DID (our objective was to predict what interests the user beyond the currently requested document), discarding all DIDs that occurred fewer than two times in the log (for two or fewer occurrences, it is hard to reliably train the system to predict them and evaluate performance), and finally discarding sessions containing only one document.

3 Dimensionality Reduction Via Clustering

Even after the log is processed, the data remains high-dimensional (62,240 documents) and sparse, and hence hard to model. To solve these problems we clustered the documents. Since our objective was to predict the instantaneous user interests, among the many possible ways of performing the clustering we chose to cluster based on user navigation patterns. We scanned the processed log once and for each ordered pair of documents accumulated the number of times the second was requested immediately after the first; in other words, we computed the first-order Markov statistics or bigrams. Based on the user navigation patterns encoded in the bigrams, the greedy clustering is done as shown in the following pseudocode:

Input:  bigram counts n(i, j); number of clusters C.
Output: set S of C clusters.

0.  c := 0;
1.  n := max_{i,j} n(i, j)                    // max number of transitions
2.  for all docs i, j such that n(i, j) = n do
3.      if (i.clusterID = NONE and j.clusterID = NONE and c < C)
4.          S[c].add(i);
5.          S[c].add(j);
6.          i.clusterID := j.clusterID := c;
7.          c := c + 1;                       // new cluster for i and j
8.      else if (i.clusterID != NONE and j.clusterID = NONE)
9.          S[i.clusterID].add(j);
10.         j.clusterID := i.clusterID;       // j goes to cluster of i
11.     else if (i.clusterID = NONE and j.clusterID != NONE)
12.         S[j.clusterID].add(i);
13.         i.clusterID := j.clusterID;       // i goes to cluster of j
14.     end if
15.     mark n(i, j) as processed;
16. end for
17. if (n > 0) goto 1
18. Return S

Table 1: Top features for some of the clusters. [The feature terms themselves are not recoverable from the source; the table lists the highest-scoring terms for eight of the clusters.]

The algorithm starts with empty clusters and then cycles through all documents, picking the pairs of documents that have the current highest joint visitation frequency as given by the bigram counts (lines 1 and 2). If both documents in the selected pair are unassigned, a new cluster is allocated for them (lines 3 through 7). If one of the documents in the selected pair has been assigned to one of the previous clusters, the second document is assigned to the same cluster (lines 8 through 14). The algorithm repeats for a lower frequency n, as long as n > 0. After the clustering, we can assume that if the user requests a document from the i-th cluster S[i], he is considerably more likely to prefer a next document from S[i] rather than from S[j], j ≠ i; i.e., the probability that the next requested document lies in the same cluster as the current one is close to 1. This assumption is reasonable because by construction clusters represent densely connected (in terms of traffic) components, and the traffic across the clusters is small compared to the traffic within each cluster. In view of this observation, we broke individual user sessions down into subsessions, where each subsession consisted of documents belonging to the same cluster. The problem was thus reduced to a series of prediction problems, one for each cluster. We studied the clusters by trying to find out if the documents within a cluster are topically related.
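The greedy clustering can be sketched in Python (a simplification of the pseudocode: instead of the explicit outer loop over decreasing frequencies n, pairs are simply processed in decreasing order of bigram count, which visits them in the same order; names are our own):

```python
def greedy_cluster(bigrams, n_clusters):
    """Greedy bigram clustering sketch.

    `bigrams` maps a document pair (i, j) to the number of times j was
    requested right after i.  Pairs are processed in decreasing order of
    frequency; a fully unassigned pair opens a new cluster while room
    remains, and a half-assigned pair pulls the free document into the
    already-existing cluster.
    """
    clusters = []            # list of sets of document IDs
    assignment = {}          # doc -> index of its cluster
    for (i, j), count in sorted(bigrams.items(), key=lambda kv: -kv[1]):
        if count <= 0:
            continue
        if i not in assignment and j not in assignment:
            if len(clusters) < n_clusters:   # new cluster for i and j
                clusters.append({i, j})
                assignment[i] = assignment[j] = len(clusters) - 1
        elif i in assignment and j not in assignment:
            clusters[assignment[i]].add(j)   # j goes to cluster of i
            assignment[j] = assignment[i]
        elif j in assignment and i not in assignment:
            clusters[assignment[j]].add(i)   # i goes to cluster of j
            assignment[i] = assignment[j]
    return clusters
```

Documents that never co-occur with an already-clustered document simply remain unassigned once the cluster budget is exhausted, mirroring the pseudocode's behavior.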
We ran code previously developed at NEC Labs [4] that uses information gain to find the top features that distinguish each cluster from the rest. Table 1 shows the top features for some of the created clusters. The top features are quite consistent descriptors, suggesting that in one session a ResearchIndex user is typically interested in searching among topically related documents.

4 Trigger MaxEnt

In this paper, we model p(w | h(u), data) as a maxent distribution, where w is the identity of the document that will be next requested by the user u, given the history h(u) of the current session and the available data for all other users. This choice of the maxent model is natural since our intuition is that all of the previously requested documents in the user session influence the identity of w. It is also clear that we cannot afford to build a high-order model, because of the sparsity and high dimensionality of the data, so we need to restrict ourselves to models that can be reliably estimated from low-order statistics. Bigrams provide one type of such statistics. In order to introduce a long-term dependence of w on the documents that occurred in the history of the session, we define a trigger as a pair of documents (a, w) in a given cluster such that p(w | a ∈ h) is substantially different from p(w). To measure the quality of triggers and in order to rank them, we computed the mutual information between the events w and a ∈ h. The set of features, together with maxent as an objective function, can be shown to lead to the following form of the conditional maxent model:

p(w | h) = (1 / Z(h)) exp( Σ_i λ_i f_i(w, h) ),   (1)

where Z(h) is a normalization constant ensuring that the distribution sums to 1. The set of parameters λ needs to be found from the following set of equations, which restrict the distribution p(w | h) to have the same expected value for each feature as seen in the training data T:

Σ_{h∈T} Σ_w p(w | h) f_i(w, h) = Σ_{(h,w)∈T} f_i(w, h),   i = 1, ..., F,   (2)

where the LHS represents the expectation (up to a normalization factor) of the feature f_i(w, h) with respect to the distribution p(w | h) and the RHS is the actual frequency (up to the same normalization factor) of this feature in the training data. There exist efficient algorithms for finding the parameters (e.g. improved iterative scaling [11]) that are known to converge if the constraints imposed on p are consistent. Under fairly general assumptions, the maxent model can also be shown to be a maximum likelihood model [11]. Employing a Gaussian prior with zero mean on the parameters yields a maximum a posteriori solution that has been shown to be more accurate than the related maximum likelihood solution and other smoothing techniques for maxent models [2]. We use Gaussian smoothing in our experiments with a maxent model.

5 Experimental Results and Comparisons

We compared the trigger maxent model with the following models: mixture of Markov models (1 and 25 components), mixture of multinomials (1 and 25 components) and the correlation method [1]. The definitions of the models can be found in [9]. The maxent model came in two flavors: unsmoothed, and smoothed with a Gaussian prior with zero mean and fixed variance.

Table 2: Average number of hits U and average height h of predictions across the clusters for five nested ranges of heights (bins 1-5, ordered by increasing height range) and using various models. The boxed numbers are the best values across all models.

Model            bin 1    bin 2    bin 3    bin 4    bin 5
Mult., 1 c.   U  48.78    67.94    80.94    90.93    98.54
              h  1.437    2.947    4.390    5.773    7.026
Mult., 25 c.  U  95.49   120.52   132.07   138.89   143.33
              h  1.421    2.503    3.312    3.975    4.528
Mark., 1 c.   U  91.39   115.68   123.44   126.26   127.57
              h  1.959    3.007    3.571    3.875    4.063
Mark., 25 c.  U  89.75   114.49   122.57   125.61   127.14
              h  1.959    3.047    3.646    3.972    4.191
Maxent, no sm. U 111.95   130.35   138.18   142.56   145.55
              h  1.510    2.296    2.858    3.303    3.694
Maxent, w. sm. U 112.68   130.86   138.53   142.85   145.78
              h  1.476    2.258    2.810    3.248    3.633
Corr.          U 111.02   132.87   140.96   144.99   147.34
              h  1.973    2.801    3.340    3.726    4.021

Table 3: Average time per 1000 predictions and average memory used by various models across 1000 clusters.

Model            Time, s   Memory, KBytes
Mult., 1         0.0049    0.5038
Mult., 25        0.0559    12.58
Markov, 1        0.0024    1.53
Markov, 25       0.0311    68.23
Maxent, no sm.   0.0746    90.12
Maxent, w. sm.   0.0696    90.12
Correlation      7.2013    17.26
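Evaluating the conditional maxent model (1) for given feature functions and weights amounts to an exponentiated weighted feature sum, normalized over the candidate documents of the cluster. A minimal sketch (the function name and binary-feature convention are our own):

```python
import math

def maxent_prob(w, history, features, lam, vocab):
    """Evaluate the conditional maxent model (1):
    p(w | h) = exp(sum_i lam_i * f_i(w, h)) / Z(h),
    where Z(h) normalizes over all candidate documents `vocab` in the
    cluster.  `features` is a list of functions f_i(w, h) -> 0/1
    (e.g. bigram and trigger indicators); `lam` are their weights.
    """
    def score(cand):
        return math.exp(sum(l * f(cand, history) for l, f in zip(lam, features)))
    z = sum(score(c) for c in vocab)   # normalization constant Z(h)
    return score(w) / z
```

For instance, with a single bigram indicator feature of weight 1 that fires when the previous document is 'a' and the candidate is 'b', the model assigns 'b' probability e/(e+1) over a two-document cluster.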
We did not optimize the adjustable parameters of the models (such as the number of components for the mixtures or the variance of the prior for the maxent models) or the number of clusters (1000). We chronologically partitioned the log into roughly 8 million training requests (covering 82 days) and 2 million test requests (covering 17 days). We used the average height of predictions on the test data as the main evaluation criterion. The height of a prediction is defined as follows. Assuming that the probability estimates p(w | h) are available from a model for a fixed history h and all possible values of w, we first sort them in descending order of p and then find the distance, in terms of the number of documents, from the top of this sorted list to the actually requested document (which we know from the test data). The height tells us how deep into the list the user must go in order to see the document that actually interests him. The height of a perfect prediction is 0; the maximum (worst) height for a given cluster equals the number of documents in this cluster. Since heights greater than 20 are of little practical interest, we binned the heights of predictions for each cluster. For binning purposes we used a sequence of nested height ranges with increasing upper bounds. Within each bin we also computed the average height of predictions. Thus, the best performing model would place most of the predictions inside the bins with low upper bounds, and within those bins the average heights would be as low as possible. Table 2 reports the average number of hits each model makes in each of the bins, as well as the average height of predictions within the bin. The smoothed maxent model has the best average height of predictions across the bins and scores roughly the same number of hits in each of the bins as the correlation method. The mixture of Markov models with 25 components evidently overfits on the training data and fails to outperform the 1-component mixture.
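The height metric can be computed directly from a model's probability estimates; a small sketch (our own helper, assuming `probs` maps candidate DIDs to p(w | h)):

```python
def prediction_height(probs, actual):
    """Height of a prediction: how many candidates rank strictly above
    the actually requested document when candidates are sorted by
    decreasing model probability.  A perfect prediction has height 0.
    `probs` maps candidate DID -> p(w | h); `actual` is the requested DID.
    """
    ranked = sorted(probs, key=lambda d: -probs[d])
    return ranked.index(actual)
```

Averaging this quantity over the test requests of a cluster, and binning by height range, yields the entries reported in Table 2.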
The mixture of multinomials is quite close in quality to, but still not as good as, the maxent model with respect to both the number of hits and the heights of predictions in each of the bins. In Table 3, we present a comparison of the various models with respect to the average time taken and memory required to make a prediction. The table clearly illustrates that the maxent model (i.e., the model-based approach) is substantially more time efficient than the correlation method (i.e., the memory-based approach), despite the fact that the model takes more memory on average. In particular, our maxent approach is roughly two orders of magnitude faster than the correlation method. 6 Conclusions and Future Work We have described a maxent approach to generating document recommendations in ResearchIndex. We addressed the problem of sparse, high-dimensional data by introducing a clustering of the documents based on the user navigation patterns. A particular advantage of our clustering is that by its definition the traffic across the clusters is small compared to the traffic within each cluster. This advantage allowed us to decompose the original prediction problem into a set of problems corresponding to the clusters. We also demonstrated that our clustering produces highly interpretable clusters: each cluster can be assigned a topical name based on the top extracted features. We presented a number of models that can be used to solve the document prediction problem within a cluster. We showed that the maxent model that combines zero- and first-order Markov terms as well as the triggers with high information content provides the best average out-of-sample performance. Gaussian smoothing improved results even further. There are several important directions in which to extend the work described in this paper. First, we plan to perform "live" testing of the clustering approach and the various models in ResearchIndex.
Secondly, our recent work [10] suggests that for difficult prediction problems improvement beyond the plain maxent models can be sought by employing mixtures of maxent models. We also plan to look at different clustering methods for documents (e.g., based on the content or the link structure) and try to combine prediction results for different clusterings. Our expectation is that such combining could yield better accuracy at the expense of longer running times. Finally, one could think of a (quite involved) EM algorithm that performs the clustering of the documents in a manner that would make prediction within the resulting clusters easier. Acknowledgements We would like to thank Steve Lawrence for making available the ResearchIndex log data, Eric Glover for running his naming code on our clusters, Kostas Tsioutsiouliklis and Darya Chudova for many useful discussions, and the anonymous reviewers for helpful suggestions. References [1] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of UAI-1998, pages 43–52. San Francisco, CA: Morgan Kaufmann Publishers, 1998. [2] S. Chen and R. Rosenfeld. A Gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, Carnegie Mellon University, 1999. [3] D. Cosley, S. Lawrence, and D. Pennock. An open framework for practical testing of recommender systems using ResearchIndex. In International Conference on Very Large Databases (VLDB'02), 2002. [4] E. Glover, D. Pennock, S. Lawrence, and R. Krovetz. Inferring hierarchical descriptions. Technical Report NECI TR 2002-035, NEC Research Institute, 2002. [5] J. Goodman. Classes for fast maximum entropy training. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 2001. [6] D. Heckerman, D. Chickering, C. Meek, R. Rounthwaite, and C. Kadie. Dependency networks for density estimation, collaborative filtering, and data visualization.
Journal of Machine Learning Research, 1:49–75, 2000. [7] T. Hofmann and J. Puzicha. Latent class models for collaborative filtering. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 688–693, 1999. [8] S. Lawrence, C. L. Giles, and K. Bollacker. Digital libraries and Autonomous Citation Indexing. IEEE Computer, 32(6):67–71, 1999. [9] D. Pavlov and D. Pennock. A maximum entropy approach to collaborative filtering in dynamic, sparse, high-dimensional domains. Technical Report NECI TR, NEC Research Institute, 2002. [10] D. Pavlov, A. Popescul, D. Pennock, and L. Ungar. Mixtures of conditional maximum entropy models. Technical Report NECI TR, NEC Research Institute, 2002. [11] S. D. Pietra, V. D. Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, April 1997. [12] A. Popescul, L. Ungar, D. Pennock, and S. Lawrence. Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 437–444, 2001. [13] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of ACM 1994 Conference on Computer Supported Cooperative Work, pages 175–186, Chapel Hill, North Carolina, 1994. ACM. [14] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Analysis of recommender algorithms for e-commerce. In Proceedings of the 2nd ACM Conference on Electronic Commerce, pages 158–167, 2000. [15] G. Shani, R. Brafman, and D. Heckerman. An MDP-based recommender system. In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 453–460, 2002. [16] U. Shardanand and P. Maes. Social information filtering: Algorithms for automating "word of mouth".
In Proceedings of ACM CHI’95 Conference on Human Factors in Computing Systems, volume 1, pages 210–217, 1995.
Annealing and the Rate Distortion Problem Albert E. Parker Department of Mathematical Sciences Montana State University Bozeman, MT 59771 parker@math.montana.edu Tomáš Gedeon Department of Mathematical Sciences Montana State University gedeon@math.montana.edu Alexander G. Dimitrov Center for Computational Biology Montana State University alex@nervana.montana.edu Abstract In this paper we introduce methodology to determine the bifurcation structure of optima for a class of similar cost functions from Rate Distortion Theory, Deterministic Annealing, Information Distortion and the Information Bottleneck Method. We also introduce a numerical algorithm which uses the explicit form of the bifurcating branches to find optima at a bifurcation point. 1 Introduction This paper analyzes a class of optimization problems max_{q∈∆} G(q) + βD(q) (1) where ∆ is a linear constraint space, G and D are continuous, real valued functions of q, smooth in the interior of ∆, and max_{q∈∆} G(q) is known. Furthermore, G and D are invariant under the group of symmetries S_N. The goal is to solve (1) for β = B ∈ [0, ∞). This type of problem, which appears to be NP hard, arises in Rate Distortion Theory [1, 2], Deterministic Annealing [3], Information Distortion [4, 5, 6] and the Information Bottleneck Method [7, 8]. The following basic algorithm, various forms of which have appeared in [3, 4, 6, 7, 8], can be used to solve (1) for β = B. Algorithm 1 Let q_0 be the maximizer of max_{q∈∆} G(q) (2) and let β_0 = 0. For k ≥ 0, let (q_k, β_k) be a solution to (1). Iterate the following steps until β_κ = B for some κ. 1. Perform β-step: Let β_{k+1} = β_k + d_k where d_k > 0. 2. Take q(0)_{k+1} = q_k + η, where η is a small perturbation, as an initial guess for the solution q_{k+1} at β_{k+1}. 3. Optimization: solve max_{q∈∆} G(q) + β_{k+1} D(q) to get the maximizer q_{k+1}, using initial guess q(0)_{k+1}. We introduce methodology to efficiently perform algorithm 1.
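The loop structure of Algorithm 1 can be sketched as follows (a minimal illustration: plain gradient ascent stands in for the constrained optimizer of step 3, and the simplex constraint ∆ is omitted, so only the β-continuation structure is shown; all names are our own):

```python
import numpy as np

def numgrad(f, x, eps=1e-6):
    """Central-difference gradient of a scalar function f at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def anneal(G, D, q0, B, steps=10, iters=500, lr=0.1):
    """Sketch of Algorithm 1: advance beta in fixed steps, perturb the
    previous solution, and re-maximize F = G + beta * D at each step."""
    rng = np.random.default_rng(0)
    q, beta = np.asarray(q0, dtype=float), 0.0
    d = B / steps                          # fixed beta-step (step 1)
    for _ in range(steps):
        beta += d
        q = q + 1e-4 * rng.standard_normal(q.shape)   # step 2: perturb
        F = lambda x, b=beta: G(x) + b * D(x)
        for _ in range(iters):             # step 3: ascend F (stand-in)
            q = q + lr * numgrad(F, q)
    return q, beta
```

On a toy problem with G(q) = -||q||^2 and D(q) = q_1, the maximizer of G + βD is q = (β/2, 0), which the sketch recovers at β = B.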
Specifically, we implement numerical continuation techniques [9, 10] to effect steps 1 and 2. We show how to detect bifurcation and we rely on bifurcation theory with symmetries [11, 12, 13] to search for the desired solution branch. This paper concludes with the improved algorithm 6 which solves (1). 2 The cost functions The four problems we analyze are from Rate Distortion Theory [1, 2], Deterministic Annealing [3], Information Distortion [4, 5, 6] and the Information Bottleneck Method [7, 8]. We discuss the explicit form of the cost function (i.e. G(q) and D(q)) for each of these scenarios in this section. 2.1 The distortion function D(q) Rate distortion theory is the information theoretic approach to the study of optimal source coding systems, including systems for quantization and data compression [2]. To define how well a source, the random variable Y, is represented by a particular representation using N symbols, which we call Y_N, one introduces a distortion function between Y and Y_N: D(q(y_N|y)) = D(Y, Y_N) = E_{y,y_N} d(y, y_N) = Σ_y Σ_{y_N} q(y_N|y) p(y) d(y, y_N), where d(y, y_N) is the pointwise distortion function on the individual elements of y ∈ Y and y_N ∈ Y_N. q(y_N|y) is a stochastic map or quantization of Y into a representation Y_N [1, 2]. The constraint space ∆ := {q(y_N|y) | Σ_{y_N} q(y_N|y) = 1 and q(y_N|y) ≥ 0 ∀y ∈ Y} (3) (compare with (1)) is the space of valid quantizers in ℜ^n. A representation Y_N is optimal if there is a quantizer q∗(y_N|y) such that D(q∗) = min_{q∈∆} D(q). In engineering and imaging applications, the distortion function is usually chosen as the mean squared error [1, 3, 14], D̂(Y, Y_N) = E_{y,y_N} d̂(y, y_N), where the pointwise distortion function d̂(y, y_N) is the Euclidean squared distance. In this case, D̂(Y, Y_N) is a linear function of the quantizer. In [4, 5, 6], the information distortion measure D_I(Y, Y_N) := Σ_{y,y_N} p(y, y_N) KL(p(x|y_N) || p(x|y)) = I(X; Y) − I(X; Y_N) is used, where the Kullback-Leibler divergence KL is the pointwise distortion function.
Unlike the pointwise distortion functions usually investigated in information theory [1, 3], this one is nonlinear, it explicitly considers a third space, X, of inputs, and it depends on the quantizer q(y_N|y) through p(x|y_N) = Σ_y p(x|y) q(y_N|y) p(y) / p(y_N). The only term in D_I which depends on the quantizer is I(X; Y_N), so we can replace D_I with the effective distortion D_eff(q) := I(X; Y_N). D_eff(q) is the function D(q) from (1) which has been considered in [4, 5, 6, 7, 8]. 2.2 Rate Distortion There are two related methods used to analyze communication systems at a distortion D(q) ≤ D_0 for some given D_0 ≥ 0 [1, 2, 3]. In rate distortion theory [1, 2], the problem of finding a minimum rate at a given distortion is posed as a minimal information rate distortion problem: R(D_0) = min_{q(y_N|y)∈∆, D(Y;Y_N)≤D_0} I(Y; Y_N). (4) This formulation is justified by the Rate Distortion Theorem [1]. A similar exposition using the Deterministic Annealing approach [3] is a maximal entropy problem: max_{q(y_N|y)∈∆, D(Y;Y_N)≤D_0} H(Y_N|Y). (5) The justification for using (5) is Jaynes' maximum entropy principle [15]. These formulations are related since I(Y; Y_N) = H(Y_N) − H(Y_N|Y). Let I_0 > 0 be some given information rate. In [4, 6], the neural coding problem is formulated as an entropy problem as in (5): max_{q(y_N|y)∈∆, D_eff(q)≥I_0} H(Y_N|Y), (6) which uses the nonlinear effective information distortion measure D_eff. Tishby et al. [7, 8] use the information distortion measure to pose an information rate distortion problem as in (4): min_{q(y_N|y)∈∆, D_eff(q)≥I_0} I(Y; Y_N). (7) Using the method of Lagrange multipliers, the rate distortion problems (4), (5), (6), (7) can be reformulated as finding the maxima of max_{q∈∆} F(q, β) = max_{q∈∆} [G(q) + βD(q)] (8) as in (1) where β = B. For the maximal entropy problem (6), F(q, β) = H(Y_N|Y) + βD_eff(q) (9) and so G(q) from (1) is the conditional entropy H(Y_N|Y).
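The effective distortion D_eff(q) = I(X; Y_N) can be computed from a quantizer and the joint distribution p(x, y) via p(x, y_N) = Σ_y p(x, y) q(y_N|y); a sketch (the array-shape conventions are our own):

```python
import numpy as np

def effective_distortion(q, pxy):
    """D_eff(q) = I(X; Y_N), in bits, for a quantizer q of shape
    (|Y_N|, |Y|) whose columns are the conditional distributions
    q(y_N | y), and a joint distribution pxy of shape (|X|, |Y|).
    """
    pxyn = pxy @ q.T                          # joint p(x, y_N)
    px = pxyn.sum(axis=1, keepdims=True)      # marginal p(x)
    pyn = pxyn.sum(axis=0, keepdims=True)     # marginal p(y_N)
    mask = pxyn > 0                           # skip zero-probability cells
    return float((pxyn[mask] * np.log2(pxyn[mask] / (px @ pyn)[mask])).sum())
```

As a sanity check, the uniform quantizer q_{1/N} makes Y_N independent of X, giving D_eff = 0, while the identity quantizer recovers the full mutual information I(X; Y).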
For the minimal information rate distortion problem (7), F(q, β) = −I(Y; Y_N) + βD_eff(q) (10) and so G(q) = −I(Y; Y_N). In [3, 4, 6], one explicitly considers B = ∞. For (9), this involves taking lim_{β→∞} max_{q∈∆} F(q, β) = max_{q∈∆} D_eff(q), which in turn gives min_{q(y_N|y)∈∆} D_I. In Rate Distortion Theory and the Information Bottleneck Method, one could be interested in solutions to (8) for finite B, which takes into account a tradeoff between I(Y; Y_N) and D_eff. For lack of space, here we consider (9) and (10). Our analysis extends easily to similar formulations which use a norm based distortion such as D̂(q), as in [3]. 3 Improving the algorithm We now turn our attention back to algorithm 1 and indicate how numerical continuation [9, 10] and bifurcation theory with symmetries [11, 12, 13] can improve upon the choice of the algorithm's parameters. We begin by rewriting (8), now incorporating the Lagrange multipliers for the equality constraint Σ_{y_N} q(y_N|y_k) = 1 from (3), which must be satisfied for each y_k ∈ Y. This gives the Lagrangian L(q, λ, β) = F(q, β) + Σ_{k=1}^{K} λ_k ( Σ_{y_N} q(y_N|y_k) − 1 ). (11) There are optimization schemes, such as the Fixed Point [4, 6] and projected Augmented Lagrangian [6, 16] methods, which exploit the structure of (11) to find local solutions to (8) for step 3 of algorithm 1. 3.1 Bifurcation structure of solutions It has been observed that the solutions {q_k} undergo bifurcations or phase transitions [3, 4, 6, 7, 8]. We wish to pose (8) as a dynamical system in order to study the bifurcation structure of local solutions for β ∈ [0, B]. To this end, consider the equilibria of the flow (q̇, λ̇)^T = ∇_{q,λ} L(q, λ, β) (12) for β ∈ [0, B]. These are points (q∗, λ∗) where ∇_{q,λ} L(q∗, λ∗, β) = 0 for some β. The Jacobian of this system is the Hessian ∆_{q,λ} L(q, λ, β). Equilibria (q∗, λ∗) of (12) for which ∆_q F(q∗, β) is negative definite are local solutions of (8) [16, 17]. Let |Y| = K, |Y_N| = N, and n = NK. Thus, q ∈ ∆ ⊂ ℜ^n and λ ∈ ℜ^K.
The (n + K) × (n + K) Hessian of (11) is ∆_{q,λ} L(q, λ, β) = [ ∆_q F(q, β)  J^T ; J  0 ], where the 0 block is K × K [17]. ∆_q F is the n × n block diagonal matrix of N K × K matrices {B_i}_{i=1}^N [4]. J is the K × n Jacobian of the vector of K constraints from (11), J = ( I_K I_K ... I_K ) (N blocks). (13) The kernel of ∆_{q,λ} L plays a pivotal role in determining the bifurcation structure of solutions to (8). This is due to the fact that bifurcation of an equilibrium (q∗, λ∗) of (12) at β = β∗ happens when ker ∆_{q,λ} L(q∗, λ∗, β∗) is nontrivial. Furthermore, the bifurcating branches are tangent to certain linear subspaces of ker ∆_{q,λ} L(q∗, λ∗, β∗) [12]. 3.2 Bifurcations with symmetry Any solution q∗(y_N|y) to (8) gives another equivalent solution simply by permuting the labels of the classes of Y_N. For example, if P_1 and P_2 are two n × 1 vectors such that for a solution q∗(y_N|y), q∗(y_N = 1|y) = P_1 and q∗(y_N = 2|y) = P_2, then the quantizer q̂ where q̂(y_N = 1|y) = P_2, q̂(y_N = 2|y) = P_1 and q̂(y_N|y) = q∗(y_N|y) for all other classes y_N is a maximizer of (8) with F(q̂, β) = F(q∗, β). Let S_N be the algebraic group of all permutations on N symbols [18, 19]. We say that F(q, β) is S_N-invariant if F(q, β) = F(σ(q), β), where σ(q) denotes the action on q by permutation of the classes of Y_N as defined by any σ ∈ S_N [17]. Now suppose that a solution q∗ is fixed by all the elements of S_M for M ≤ N. Bifurcations at β = β∗ in this scenario are called symmetry breaking if the bifurcating solutions are fixed (and only fixed) by subgroups of S_M. To determine where a bifurcation of a solution (q∗, λ∗, β) occurs, one determines the β for which ∆_q F(q∗, β) has a nontrivial kernel. This approach is justified by the fact that ∆_{q,λ} L(q∗, λ∗, β) is singular if and only if ∆_q F(q∗, β) is singular [17]. At a bifurcation (q∗, λ∗, β∗) where q∗ is fixed by S_M for M ≤ N, ∆_q F(q∗, β∗) has M identical blocks. The bifurcation is generic if each of the identical blocks has a single 0-eigenvector, v, and the other blocks are nonsingular.
(14) Thus, a generic bifurcation can be detected by looking for singularity of one of the K × K identical blocks of ∆_q F(q∗, β). We call the classes of Y_N which correspond to identical blocks unresolved classes. The classes of Y_N that are not unresolved are called resolved classes. The Equivariant Branching Lemma and the Smoller-Wasserman Theorem [12, 13] ascertain the existence of explicit bifurcating solutions in subspaces of ker ∆_{q,λ} L(q∗, λ∗, β∗) which are fixed by special subgroups of S_M [12, 13]. Of particular interest are the bifurcating solutions in subspaces of ker ∆_{q,λ} L(q∗, λ∗, β∗) of dimension 1 guaranteed by the following theorem. Theorem 2 [17] Let (q∗, λ∗, β∗) be a generic bifurcation of (12) which is fixed (and only fixed) by S_M, for 1 < M ≤ N. Then, for small t, with β(t = 0) = β∗, there exist M bifurcating solutions (q∗ + t u_m, λ∗, β(t)), where 1 ≤ m ≤ M, (15) with [u_m]_ν = (M − 1)v if ν is the m-th unresolved class of Y_N; [u_m]_ν = −v if ν is some other unresolved class of Y_N; [u_m]_ν = 0 otherwise, (16) and v is defined as in (14). Furthermore, each of these solutions is fixed by the symmetry group S_{M−1}. For a bifurcation from the uniform quantizer, q_{1/N}, which is identically 1/N for all y and all y_N, all of the classes of Y_N are unresolved. In this case, u_m = (−v^T, ..., −v^T, (N − 1)v^T, −v^T, ..., −v^T, 0^T)^T, where (N − 1)v is in the m-th component of u_m. Relevant to the computationalist is that instead of looking for a bifurcation via singularity of the n × n Hessian ∆_q F(q∗, β), one may look for singularity of one of the K × K identical blocks, where K = n/N. After bifurcation of a local solution to (8) has been detected at β = β∗, knowledge of the bifurcating directions makes finding solutions of interest for β > β∗ much easier (see section 3.4.1). 3.3 The subcritical bifurcation In all problems under consideration, the solution for β = 0 is known. For (9), (10) this solution is q_0 = q_{1/N}. For (4) and (5), q_0 is the mean of Y.
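As an aside, the bifurcating directions u_m of (16) are straightforward to assemble; a sketch (a hypothetical helper with our own argument conventions, not code from the paper):

```python
import numpy as np

def bifurcating_direction(v, m, unresolved, N):
    """Build u_m from (16): the K-vector v (the 0-eigenvector of one of
    the identical blocks) appears with weight (M - 1) in the m-th
    unresolved class, with weight -1 in the other unresolved classes,
    and the resolved classes are zero.  Returns the length N*K vector.
    """
    K, M = len(v), len(unresolved)
    u = np.zeros(N * K)
    for cls in unresolved:
        w = (M - 1) if cls == m else -1
        u[cls * K:(cls + 1) * K] = w * np.asarray(v)
    return u
```

For the uniform quantizer q_{1/N} every class is unresolved (M = N), recovering the explicit form (−v^T, ..., (N−1)v^T, ..., −v^T, 0^T)^T quoted above.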
Rose [3] was able to compute explicitly the critical value β∗ where q_0 loses stability for the Euclidean pointwise distortion function. We have the following related result. Theorem 3 [20] Consider problems (9), (10). The solution q_0 = 1/N loses stability at β = β∗, where 1/β∗ is the second largest eigenvalue of a discrete Markov chain on vertices y ∈ Y, with transition probabilities p(y_l → y_k) := Σ_i p(y_k|x_i) p(x_i|y_l). Corollary 4 Bifurcation of the solution (q_{1/N}, β) in (9), (10) occurs at β ≥ 1. The discriminant of the bifurcating branch (15) is defined as [17] ζ(q∗, β∗, u_m) = ⟨u_m, ∂³_{q,λ}L(q∗, λ∗, β∗)[u_m, E L⁻ E ∂³_{q,λ}L(q∗, λ∗, β∗)[u_m, u_m]]⟩ − 3⟨u_m, ∂⁴_{q,λ}L(q∗, λ∗, β∗)[u_m, u_m, u_m]⟩, where ⟨·, ·⟩ is the Euclidean inner product, ∂ⁿ_{q,λ}L[·, ..., ·] is the multilinear form of the n-th derivative of L, E is the projection matrix onto range(∆_{q,λ}L(q∗, λ∗, β∗)), and L⁻ is the Moore-Penrose generalized inverse of the Hessian ∆_{q,λ}L(q∗, λ∗, β∗). Theorem 5 [17] If ζ(q∗, β∗, u_m) < 0, then the bifurcating branch (15) is subcritical (i.e. a first order phase transition). If ζ(q∗, β∗, u_m) > 0, then (15) is supercritical. For a data set with a joint probability distribution modelled by a mixture of four Gaussians as in [4], Theorem 5 predicts a subcritical bifurcation from (q_{1/N}, β∗ ≈ 1.038706) for (9) when N ≥ 3. The existence of a subcritical bifurcation (a first order phase transition) is intriguing. [Figure 1 plot: ||q∗ − q_{1/N}|| versus β, showing the subcritical bifurcating branch for F = H(Y_N|Y) + βI(X;Y_N) from the uniform solution q_{1/N} for N = 4; legend: local maximum, stationary solution.] Figure 1: A joint probability space on the random variables (X, Y) was constructed from a mixture of four Gaussians as in [4]. Using this probability space, the equilibria of (12) for F as defined in (9) were found using Newton's method. Depicted is the subcritical bifurcation from (q_{1/4}, β∗ ≈ 1.038706).
In analogy to the rate distortion curve [2, 1], we can define an H-I curve for the problem (6): H(I_0) := max_{q∈∆, D_eff≥I_0} H(Y_N|Y). Let I_max = max_{q∈∆} D_eff. Then for each I_0 ∈ (0, I_max) the value H(I_0) is well defined and achieved at a point where D_eff = I_0. At such a point there is a Lagrange multiplier β such that ∇_{q,λ}L = 0 (compare with (11) and (12)), and this β solves problem (9). Therefore, for each I ∈ (0, I_max), there is a corresponding β which solves problem (9). The existence of a subcritical bifurcation in β implies that this correspondence is not monotone for small values of I. 3.4 Numerical Continuation Numerical continuation methods efficiently analyze the solution behavior of dynamical systems such as (12) [9, 10]. Continuation methods can speed up the search for the solution q_{k+1} at β_{k+1} in step 3 of algorithm 1 by improving upon the perturbed choice q(0)_{k+1} = q_k + η. First, the vector (∂_β q_k^T, ∂_β λ_k^T)^T, which is tangent to the curve ∇_{q,λ}L(q, λ, β) = 0 at (q_k, λ_k, β_k), is computed by solving the matrix system ∆_{q,λ}L(q_k, λ_k, β_k) (∂_β q_k, ∂_β λ_k)^T = −∂_β ∇_{q,λ}L(q_k, λ_k, β_k). (17) Now the initial guess in step 2 becomes q(0)_{k+1} = q_k + d_k ∂_β q_k, where d_k = ∆s / √(||∂_β q_k||² + ||∂_β λ_k||² + 1) for ∆s > 0. Furthermore, β_{k+1} in step 1 is found by using this same d_k. This choice of d_k assures that a fixed step along (∂_β q_k^T, ∂_β λ_k^T)^T is taken for each k. We use three different continuation methods which implement variations of this scheme: Parameter, Tangent and Pseudo Arc-Length [9, 17]. These methods can greatly decrease the optimization iterations needed to find q_{k+1} from q(0)_{k+1} in step 3. The cost savings can be significant, especially when continuation is used in conjunction with a Newton type optimization scheme which explicitly uses the Hessian ∆_q F(q_k, β_k). Otherwise, the CPU time incurred from solving (17) may outweigh this benefit. 3.4.1 Branch switching Suppose that a bifurcation of a solution q∗ of (8) has been detected at β∗.
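The tangent-predictor step of the continuation scheme, solving (17) and advancing a fixed arclength ∆s along the tangent as in steps 1 and 2 above, can be sketched as follows (function and argument names are our own):

```python
import numpy as np

def tangent_step(hessian, dgradL_dbeta, q, lam, beta, ds):
    """Tangent predictor for (17): solve
        Hess * (dq/dbeta, dlam/dbeta)^T = -d(grad L)/dbeta,
    then advance (q, lam, beta) a fixed arclength ds along the tangent.
    `hessian` is the (n+K)x(n+K) matrix Delta_{q,lam} L at the current
    point and `dgradL_dbeta` the vector on the right-hand side of (17).
    """
    n = q.size
    tangent = np.linalg.solve(hessian, -dgradL_dbeta)
    dq, dlam = tangent[:n], tangent[n:]
    dk = ds / np.sqrt(dq @ dq + dlam @ dlam + 1.0)   # fixed-arclength step
    return q + dk * dq, lam + dk * dlam, beta + dk
```

The corrector is then the optimization of step 3, started from the predicted point; the same d_k is used for the β-step, so the step along the tangent has length ∆s.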
To proceed, one uses the explicit form of the bifurcating directions, {u_m}_{m=1}^M from (16), to search for the bifurcating solution of interest, say q_{k+1}, whose existence is guaranteed by Theorem 2. To do this, let u = u_m for some m ≤ M, then implement a branch switch [9]: q⁽⁰⁾_{k+1} = q∗ + d_k · u.

4 A numerical algorithm

We conclude with a numerical algorithm to solve (1). The section numbers in parentheses indicate the location in the text supporting each step.

Algorithm 6 Let q₀ be the maximizer of max_{q∈∆} G, β₀ = 1 (3.3), and ∆s > 0. For k ≥ 0, let (q_k, β_k) be a solution to (1). Iterate the following steps until β_κ = B for some κ.
1. (3.4) Perform β-step: solve (17) for (∂_β q_k^T, ∂_β λ_k^T)^T and select β_{k+1} = β_k + d_k where d_k = ∆s / √(||∂_β q_k||² + ||∂_β λ_k||² + 1).
2. (3.4) The initial guess for q_{k+1} at β_{k+1} is q⁽⁰⁾_{k+1} = q_k + d_k · ∂_β q_k.
3. Optimization: solve max_{q∈∆} G(q) + β_{k+1} D(q) to get the maximizer q_{k+1}, using initial guess q⁽⁰⁾_{k+1}.
4. (3.2) Check for bifurcation: compare the sign of the determinant of an identical block of each of ∆_q[G(q_k) + β_k D(q_k)] and ∆_q[G(q_{k+1}) + β_{k+1} D(q_{k+1})]. If a bifurcation is detected, then set q⁽⁰⁾_{k+1} = q_k + d_k · u, where u is defined as in (16) for some m ≤ M, and repeat step 3.

Acknowledgments

Many thanks to Dr. John P. Miller at the Center for Computational Biology at Montana State University-Bozeman. This research is partially supported by NSF grants DGE 9972824, MRI 9871191, and EIA-0129895; and NIH Grant R01 MH57179.

References

[1] Thomas Cover and Joy Thomas. Elements of Information Theory. Wiley Series in Communication, New York, 1991.
[2] Robert M. Gray. Entropy and Information Theory. Springer-Verlag, 1990.
[3] Kenneth Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proc. IEEE, 86(11):2210–2239, 1998.
[4] Alexander G. Dimitrov and John P. Miller. Neural coding and decoding: communication channels and quantization.
Network: Computation in Neural Systems, 12(4):441–472, 2001.
[5] Alexander G. Dimitrov and John P. Miller. Analyzing sensory systems with the information distortion function. In Russ B. Altman, editor, Pacific Symposium on Biocomputing 2001. World Scientific Publishing Co., 2000.
[6] Tomas Gedeon, Albert E. Parker, and Alexander G. Dimitrov. Information distortion and neural coding. Canadian Applied Mathematics Quarterly, 2002.
[7] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. The 37th annual Allerton Conference on Communication, Control, and Computing, 1999.
[8] Noam Slonim and Naftali Tishby. Agglomerative information bottleneck. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, pages 617–623. MIT Press, 2000.
[9] Wolf-Jürgen Beyn, Alan Champneys, Eusebius Doedel, Willy Govaerts, Yuri A. Kuznetsov, and Björn Sandstede. Numerical continuation and computation of normal forms. In Handbook of Dynamical Systems III. World Scientific, 1999.
[10] Eusebius Doedel, Herbert B. Keller, and Jean P. Kernevez. Numerical analysis and control of bifurcation problems in finite dimensions. International Journal of Bifurcation and Chaos, 1:493–520, 1991.
[11] M. Golubitsky and D. G. Schaeffer. Singularities and Groups in Bifurcation Theory I. Springer Verlag, New York, 1985.
[12] M. Golubitsky, I. Stewart, and D. G. Schaeffer. Singularities and Groups in Bifurcation Theory II. Springer Verlag, New York, 1988.
[13] J. Smoller and A. G. Wasserman. Bifurcation and symmetry breaking. Inventiones mathematicae, 100:63–95, 1990.
[14] Allen Gersho and Robert M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, 1992.
[15] E. T. Jaynes. On the rationale of maximum-entropy methods. Proc. IEEE, 70:939–952, 1982.
[16] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, New York, 2000.
[17] Albert E. Parker III.
Solving the rate distortion problem. PhD thesis, Montana State University, 2003. [18] H. Boerner. Representations of Groups. Elsevier, New York, 1970. [19] D. S. Dummit and R. M. Foote. Abstract Algebra. Prentice Hall, NJ, 1991. [20] Tomas Gedeon and Bryan Roosien. Phase transitions in information distortion. In preparation, 2003.
2002
Self Supervised Boosting

Max Welling, Richard S. Zemel, and Geoffrey E. Hinton
Department of Computer Science, University of Toronto
10 King's College Road, Toronto, M5S 3G5 Canada

Abstract

Boosting algorithms and successful applications thereof abound for classification and regression learning problems, but not for unsupervised learning. We propose a sequential approach to adding features to a random field model by training them to improve classification performance between the data and an equal-sized sample of "negative examples" generated from the model's current estimate of the data density. Training in each boosting round proceeds in three stages: first we sample negative examples from the model's current Boltzmann distribution. Next, a feature is trained to improve classification performance between data and negative examples. Finally, a coefficient is learned which determines the importance of this feature relative to ones already in the pool. Negative examples only need to be generated once to learn each new feature. The validity of the approach is demonstrated on binary digits and continuous synthetic data.

1 Introduction

While researchers have developed and successfully applied a myriad of boosting algorithms for classification and regression problems, boosting for density estimation has received relatively scant attention. Yet incremental, stage-wise fitting is an attractive model for density estimation. One can imagine that the initial features, or weak learners, could model the rough outlines of the data density, and more detailed carving of the density landscape could occur on each successive round. Ideally, the algorithm would achieve automatic model selection, determining the requisite number of weak learners on its own.
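The three training stages described in the abstract can be written as a generic loop; the arguments `sample_model`, `fit_feature` and `fit_coefficient` are hypothetical placeholders for the components developed later in the paper:

```python
def boost_density_model(data, n_rounds, sample_model, fit_feature, fit_coefficient):
    """Sketch of one self-supervised boosting run.

    sample_model(features, coeffs, n)         -> n negative examples drawn from
        the model's current Boltzmann distribution
    fit_feature(data, negatives)              -> a new feature (weak learner)
    fit_coefficient(data, negatives, feature) -> its relative weight
    """
    features, coeffs = [], []
    for _ in range(n_rounds):
        negatives = sample_model(features, coeffs, len(data))
        feature = fit_feature(data, negatives)   # discriminate data vs. negatives
        alpha = fit_coefficient(data, negatives, feature)
        features.append(feature)
        coeffs.append(alpha)
    return features, coeffs

# Degenerate stubs, just to show the calling convention.
feats, alphas = boost_density_model(
    data=[0, 1, 1, 0], n_rounds=3,
    sample_model=lambda f, c, n: [0] * n,
    fit_feature=lambda d, neg: (lambda x: 0.0),
    fit_coefficient=lambda d, neg, f: 1.0)
```

Note that each round draws its negative sample only once, before the new feature is fit.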
It has proven difficult to formulate an objective for such a system, under which the weights on examples, and the objective for training a weak learner at each round have a natural gradient-descent interpretation as in standard boosting algorithms [10] [7]. In this paper we propose an algorithm that provides some progress towards this goal. A key idea in our algorithm is that unsupervised learning can be converted into supervised learning by using the model’s imperfect current estimate of the data to generate negative examples. A form of this idea was previously exploited in the contrastive divergence algorithm [4]. We take the idea a step further here by training a weak learner to discriminate between the positive examples from the original data and the negative examples generated by sampling from the current density estimate. This new weak learner minimizes a simple additive logistic loss function [2]. Our algorithm obtains an important advantage over sampling-based, unsupervised methods that learn features in parallel. Parallel-update methods require a new sample after each iteration of parameter changes, in order to reflect the current model’s estimate of the data density. We improve on this by using one sample per boosting round, to fit one weak learner. The justification for this approach comes from the proposal that, for stagewise additive models, boosting can be considered as gradient-descent in function space, so the new learner can simply optimize its inner product with the gradient of the objective in function space [3]. Unlike other attempts at “unsupervised boosting” [9], where at each round a new component distribution is added to a mixture model, our approach will add features in the log-domain and as such learns a product model. Our algorithm incrementally constructs random fields from examples. As such, it bears some relation to maximum entropy models, which are popular in natural language processing [8]. 
In these applications, the features are typically not learned; instead the algorithms greedily select at each round the most informative feature from a large set of pre-enumerated features.

2 The Model

Let the input, or state, x be a vector of random variables taking values in some finite domain. The probability of x is defined by assigning it an energy, E(x), which is converted into a probability using the Boltzmann distribution,

p(x) = e^{−E(x)} / Σ_{x′} e^{−E(x′)}   (1)

We furthermore assume that the energy is additive. More explicitly, it will be modelled as a weighted sum of features,

E(x) = Σ_i α_i f_i(x; θ_i)   (2)

where {α_i} are the weights, {f_i(·)} the features, and each feature may depend on its own set of parameters θ_i. The model described above is very similar to an "additive random field", otherwise known as "maximum entropy model". The key difference is that we allow each feature to be flexible through its dependence on the parameters θ_i. Learning in random fields may proceed by performing gradient ascent on the log-likelihood:

∂L/∂θ = −(1/N) Σ_{n=1}^N ∂E(x_n)/∂θ + Σ_x p(x) ∂E(x)/∂θ   (3)

where x_n is a data-vector and θ is some arbitrary parameter that we want to learn. This equation makes explicit the main philosophy behind learning in random fields: the energy of states "occupied" by data is lowered (weighted by 1/N) while the energy of all states is raised (weighted by p(x)). Since there are usually an exponential number of states in the system, the second term is often approximated by a sample from p(x). To reduce sampling noise a relatively large sample is necessary and moreover, it must be drawn each time we compute gradients. These considerations make learning in random fields generally very inefficient. Iterative scaling methods have been developed for models that do not include adaptive feature parameters {θ_i} but instead train only the coefficients {α_i} [8]. These methods make more efficient use of the samples than gradient ascent, but they only minimize a loose bound on the cost function and their terminal convergence can be slow.

3 An Algorithm for Self Supervised Boosting

Boosting algorithms typically implement three phases: a feature (or weak learner) is trained, the relative weight of this feature with respect to the other features already in the pool is determined, and finally the data vectors are reweighted. In the following we will discuss a similar strategy in an unsupervised setting.

3.1 Finding New Features

In [7], boosting is reinterpreted as functional gradient descent on a loss function. Using the log-likelihood as a negative loss function this idea can be used to find features for additive random field models. Consider a change in the energy by adding an infinitesimal multiple of a feature, E(x) → E(x) + ε f(x). The optimal feature is then the one that provides the maximal increase in log-likelihood, i.e. the feature that maximizes the second term of

L(E + ε f) ≈ L(E) + ε ∂L/∂ε   (4)

Using Eqn. 3 with ∂E/∂ε = f we rewrite the second term as,

∂L/∂ε = −(1/N) Σ_{n=1}^N f(x_n) + Σ_x p(x) f(x)   (5)

where p(x) is our current estimate of the data distribution. In order to maximize this derivative, the feature should therefore be small at the data and large at all other states. It is however important to realize that the norm of the feature must be bounded, since otherwise the derivative can be made arbitrarily large by simply increasing the length of f. Because the total number of possible states of a model is often exponentially large, the second term of Eqn. 5 must be approximated using samples {x_m⁻} from p(x),

∂L/∂ε ≈ −(1/N) Σ_{n=1}^N f(x_n) + (1/M) Σ_{m=1}^M f(x_m⁻)   (6)

These samples, or "negative examples", inform us about the states that are likely under the current model. Intuitively, because the model is imperfect, we would like to move its density estimate away from these samples and towards the actual data. By labelling the data with y = +1 and the negative examples with y = −1, we can map this to a supervised problem where a new feature is a classifier. Since a good classifier is negative at the data and positive at the negative examples (so we can use its sign to discriminate them), adding its output to the total energy will lower the energy at states where there are data and raise it at states where there are negative examples. The main difference with supervised boosting is that the negative examples change at every round.

3.2 Weighting the Data

It has been observed [6] that boosting algorithms can outperform classification algorithms that maximize log-likelihood. This has motivated us to use the logistic loss function from the boosting literature for training new features,

Loss = Σ_j log(1 + e^{y_j E(x_j)})   (7)

where j runs over data (y = +1) and negative examples (y = −1). Perturbing the energy of the negative loss function by adding an infinitesimal multiple of a new feature, E → E + ε f, and computing the derivative w.r.t. ε we derive the following cost function for adding a new feature,

−(1/N) Σ_{n=1}^N w_n f(x_n) + (1/M) Σ_{m=1}^M w_m f(x_m⁻)   (8)

with w_n = σ(E(x_n)) for data and w_m = σ(−E(x_m⁻)) for negative examples, where σ(z) = 1/(1 + e^{−z}). The main difference with Eqn. 6 is the weights on data and negative examples, which give poorly "classified" examples (data with very high energy and negative examples with very low energy) a stronger vote in changes to the energy surface. The extra weights (which are bounded between [0,1]) will incur a certain bias w.r.t. the maximum likelihood solution. However, it is expected that the extra effort on "hard cases" will cause the algorithm to converge faster to good density models. It is important to realize that the loss function Eqn. 7 is a valid cost function only when the negative examples are fixed.
The reason is that after a change of the energy surface, the negative examples are no longer a representative sample from the Boltzmann distribution in Eqn. 1. However, as long as we re-sample the negative examples after every change in the energy we may use Eqn. 8 as an objective to decide what feature to add to the energy, i.e. we may consider it as the derivative ∂L_w/∂ε of some (possibly unknown) weighted log-likelihood L_w. By analogy, we can interpret σ(−E(x)) as the probability that a certain state is occupied by a data-vector and consequently −E(x) as the "margin". Note that the introduction of the weights has given meaning to the "height" of the energy surface, in contrast to the Boltzmann distribution for which only relative energy differences count. In fact, as we will further explain in the next section, the height of the energy will be chosen such that the total weight on data is equal to the total weight on the negative examples.

3.3 Adding the New Feature to the Pool

According to the functional gradient interpretation, the new feature computed as described above represents the infinitesimal change in energy that maximally increases the (weighted) log-likelihood. Consistent with that interpretation we will determine α_i via a line search in the direction of this "gradient". In fact, we will propose a slightly more general change in energy given by,

E_new(x) = E(x) + α_i f_i(x) + c_i   (9)

As mentioned in the previous section, the constant c_i will have no effect on the Boltzmann distribution in Eqn. 1. However, it does influence the relative total weight on data versus negative examples. Using the interpretation of Eqn. 8 as ∂L_w/∂ε, it is not hard to see that the derivatives of L_w w.r.t. α_i and c_i are given by,

∂L_w/∂α_i = −(1/N) Σ_{n=1}^N w_n f_i(x_n) + (1/M) Σ_{m=1}^M w_m f_i(x_m⁻)   (10)

∂L_w/∂c_i = −(1/N) Σ_{n=1}^N w_n + (1/M) Σ_{m=1}^M w_m   (11)

Therefore, at a stationary point of L_w w.r.t. c_i the total weight on data and negative examples precisely balances out.¹

¹Since c_i is independent of x, it is easy to compute the second derivative ∂²L_w/∂c_i² and we can do Newton updates to compute the stationary point.

Figure 1: (a – left). Training error (lower curves) and test error (higher curves) for the weighted boosting algorithm (solid curves) and the un-weighted algorithm (dashed curves); horizontal axis: boosting round, vertical axis: % classification error. (b – right). Features found by the learning algorithm.

When iteratively updating α_i we not only change the weights but also the Boltzmann distribution, which makes the negative examples no longer representative of the current estimated data distribution. To correct for this we include importance weights on the negative examples, initialized uniformly. It is very easy to update these weights from iteration to iteration, by reweighting according to the change in the Boltzmann distribution and renormalizing. It is well known that in high dimensions the effective sample size of the weighted sample can rapidly become too small to be useful. We therefore monitor the effective sample size, given by 1/Σ_m u_m² for normalized importance weights u_m, where the sum runs over the negative examples only. If it drops below a threshold we have two choices. We can obtain a new set of negative examples from the updated Boltzmann distribution, reset the importance weights to uniform and resume fitting α_i. Alternatively, we simply accept the current value of α_i and proceed to the next round of boosting.
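The importance-weight bookkeeping just described can be sketched as follows (a minimal sketch; `d_alpha` stands for the change in α_i and `f_vals` for the new feature evaluated at the stored negative examples, both hypothetical names):

```python
import numpy as np

def update_importance_weights(u, d_alpha, f_vals):
    """Reweight stored negative examples after alpha_i changes by d_alpha.
    The new Boltzmann distribution differs by a factor exp(-d_alpha * f(x)),
    so each importance weight is multiplied accordingly and renormalized."""
    u = u * np.exp(-d_alpha * f_vals)
    return u / u.sum()

def effective_sample_size(u):
    """ESS = 1 / sum_m u_m^2 for normalized weights u.
    If ESS falls below a threshold: either resample the negatives and reset
    u to uniform, or accept the current alpha_i and start the next round."""
    return 1.0 / np.sum(u ** 2)

u = np.full(4, 0.25)                      # uniform weights over 4 negatives
assert effective_sample_size(u) == 4.0    # uniform weights give ESS = M
u = update_importance_weights(u, d_alpha=0.5, f_vals=np.array([1.0, 2.0, 0.0, 1.0]))
```

Any reweighting away from uniform strictly decreases the ESS, which is what triggers the two choices discussed above.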
Because we initialize α_i at zero in the fitting procedure, the latter approach underestimates the importance of this particular feature, which is not a problem since a similar feature can be added in the next round.

4 A Binary Example: The Generalized RBM

We propose a simple extension of the "restricted Boltzmann machine" (RBM) with (−1,+1)-units [1] as a model for binary data. Each feature is parametrized by weights w_i and a bias b_i:

α_i f_i(x) = −α_i log[2 cosh(w_i·x + b_i)]   (12)

where the RBM is obtained by setting all α_i = 1. One can sample from the summed energy model using straightforward Gibbs sampling, where every visible unit is sampled given all the others. Alternatively, one can design a much faster mixing Markov chain by introducing hidden variables and sampling all hidden units independently given the visible units and vice versa. Unfortunately, by including the coefficients α_i this trick is no longer valid. But an approximate Markov chain can be used,

α_i log[2 cosh(w_i·x + b_i)] ≈ log[2 cosh(α_i w_i·x + α_i b_i)]   (13)

This approximate Gibbs sampling thus involves sampling from an RBM with scaled weights and biases,

w̃_i = α_i w_i,   b̃_i = α_i b_i   (14)

When using the above Markov chain, we will not wait until it has reached equilibrium but initialize it at the data-vectors and use it for a fixed number of steps, as is done in contrastive divergence learning [4]. When we fit a new feature we need to make sure its norm is controlled. The appropriate value depends on the number of dimensions in the problem; in the experiment described below we bounded the norm of the vector (w_i, b_i). The updates are thus given by w_i ← w_i + δw_i and b_i ← b_i + δb_i with,

δw_i ∝ (1/N) Σ_n w_n tanh(w_i·x_n + b_i) x_n − (1/M) Σ_m w_m tanh(w_i·x_m⁻ + b_i) x_m⁻,
δb_i ∝ (1/N) Σ_n w_n tanh(w_i·x_n + b_i) − (1/M) Σ_m w_m tanh(w_i·x_m⁻ + b_i)   (15)

where the example weights w_n, w_m are the logistic weights of Eqn. 8. The coefficients α_i are determined using the procedure of Section 3.3.

To test whether we can learn good models of (fairly) high-dimensional, real-world data, we used the real-valued digits from the "br" set on the CEDAR cdrom. We learned completely separate models on binarized "2"s and "3"s. An initial block of data cases of each class was used for training while the remaining digits of each class were used for testing. A minimum effective sample size was imposed when fitting the coefficients α_i, and separate sets of negative examples were used to fit f_i and α_i. After a new feature was added, the total energies of all "2"s and "3"s were computed under both models. The energies of the training data (under both models) were used as two-dimensional features to compute a separation boundary using logistic regression, which was subsequently applied to the test data to compute the total misclassification. In Figure 1a we show the total error on both training data and test data as a function of the number of features in the model. For comparison we also plot the training and test error for the un-weighted version of the algorithm (all example weights equal). The classification error for the weighted algorithm drops quickly and then increases only very gradually over many further rounds of boosting. This is good as compared to logistic regression, k-nearest neighbors with the best choice of k, and a parallel-trained RBM with the same number of hidden units. The un-weighted learning algorithm converges much more slowly to a good solution, both on training and test data. In Figure 1b we show a subset of the learned features w_i for both digits.

5 A Continuous Example: The Dimples Model

For continuous data we propose a different form of feature, which we term a dimple because of its shape in the energy domain. A dimple is a mixture of a narrow Gaussian and a broad Gaussian, with a common mean:
f_i(x) = −log[ ½ N(x; μ_i, σ₁²) + ½ N(x; μ_i, σ₀²) ]   (16)

where the mixing proportion is constant and equal (½ each), and the broad variance σ₀² is fixed and large. Each round of the algorithm fits μ_i and σ₁² for a new learner. A nice property of dimples is that they can reduce the entropy of an existing distribution by placing the dimple in a region that already has low energy, but they can also raise the entropy by putting the dimple in a high energy region [5]. Sampling is again simple if all α_i = 1, since in that case we can use a Gibbs chain which first picks a narrow or broad Gaussian for every feature given the visible variables and then samples the visible variables from the resulting multivariate Gaussian. For general α_i the situation is less tractable, but using a similar approximation as for the generalized RBM,

[ ½ N(x; μ_i, σ₁²) + ½ N(x; μ_i, σ₀²) ]^{α_i} ≈ [½ N(x; μ_i, σ₁²)]^{α_i} + [½ N(x; μ_i, σ₀²)]^{α_i}   (17)

This approximation will be accurate when one Gaussian is dominating the other, i.e., when the responsibilities are close to zero and one. This is expected to be the case in high-dimensional applications. In the low-dimensional example discussed below we implemented a simple MCMC chain with isotropic, normal proposal density which was initiated at the data-points and run for a fixed number of steps.

Figure 2: (a). Plot of iso-energy contours after a number of rounds of boosting. The crosses represent the data and the dots the negative examples generated from the model. (b). Three dimensional plot of the negative energy surface. (c). Contour plot for a mixture of Gaussians learned using EM. (d). Negative energy surface for the mixture of Gaussians.
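A dimple feature of the form (16) can be evaluated as below; this is a sketch for the isotropic one-dimensional case, and the default broad variance is an assumed value for illustration:

```python
import math

def dimple(x, mu, var_narrow, var_broad=100.0):
    """f(x) = -log( 0.5*N(x; mu, var_narrow) + 0.5*N(x; mu, var_broad) )."""
    def normal(x, mu, var):
        return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)
    return -math.log(0.5 * normal(x, mu, var_narrow) + 0.5 * normal(x, mu, var_broad))

# The energy is pulled down sharply near the mean, while far from the mean
# only the broad component contributes and the surface is nearly flat.
assert dimple(0.0, 0.0, 0.01) < dimple(5.0, 0.0, 0.01)
```

This shape is what makes a dimple able to either deepen an existing low-energy region or soften a high-energy one, depending on where its mean is placed.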
The type of dimple we used in the experiment below can adapt a common mean (μ) and the inverse-variance of the narrow Gaussian (β₁ = 1/σ₁²) in each dimension separately. The update rules are given by μ ← μ + δμ and β₁ ← β₁ + δβ₁, with

δμ ∝ (1/N) Σ_n w_n [r₁ β₁ + r₀ β₀](x_n − μ) − (1/M) Σ_m w_m [r₁ β₁ + r₀ β₀](x_m⁻ − μ)   (18)

δβ₁ ∝ (1/N) Σ_n w_n r₁ [1/(2β₁) − (x_n − μ)²/2] − (1/M) Σ_m w_m r₁ [1/(2β₁) − (x_m⁻ − μ)²/2]   (19)

where r₁ and r₀ = 1 − r₁ are the responsibilities for the narrow and broad Gaussian respectively (evaluated at the corresponding example), β₀ is the fixed inverse-variance of the broad Gaussian, and the weights w are those of Eqn. 8. Finally, the combination coefficients α_i are computed as described in Section 3.3.

To illustrate the proposed algorithm we fit the dimples model to the two-dimensional data (crosses) shown in Figure 2a-c. The data were synthetically generated by defining uniformly distributed angles θ and radii r = θ + n with n standard normal, which were converted to Euclidean coordinates and mirrored and translated to produce the spirals. The first feature is an isotropic Gaussian with the mean and the variance of the data, while later features were dimples trained in the way described above. Figure 2a also shows the contours of equal energy after a number of rounds of boosting together with examples (dots) from the model. A 3-dimensional plot of the negative energy surface is shown in Figure 2b. For comparison, similar plots for a mixture of Gaussians, trained in parallel with EM, are depicted in Figures 2c and 2d. The main qualitative difference between the fits in Figures 2a-b (product of dimples) and 2c-d (mixture of Gaussians) is that the first seems to produce smoother energy surfaces, only creating structure where there is structure in the data. This can be understood by recalling that the role of the negative examples is precisely to remove "dips" in the energy surface where there is no data. The philosophy of avoiding structure in the model that is not dictated by the data is consistent with the ideas behind maximum entropy modelling [11] and is thought to improve generalization.

6 Discussion

This paper discusses a boosting approach to density estimation, which we formulate as a sequential approach to training additive random field models. The philosophy is to view unsupervised learning as a sequence of classification problems where the aim is to discriminate between data-vectors and negative examples generated from the current model. The sampling step is usually the most time consuming operation, but it is also unavoidable since it informs the algorithm of the states whose energy is too low.
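For the binary model of Section 4, this sampling step amounts to block Gibbs sampling in an RBM with the scaled parameters of Eqn. 14. A minimal sketch with ±1 units, initialized at the data as in contrastive divergence (the array shapes and the zero visible bias are assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_negatives(v, W, b, alpha, n_steps=5):
    """Approximate Gibbs sampling for the generalized RBM.
    v: (n_cases, n_vis) in {-1,+1}; W: (n_hid, n_vis); b: (n_hid,);
    alpha: (n_hid,) feature coefficients. Uses the scaled parameters
    W~ = alpha*W and b~ = alpha*b from Eqn. 14."""
    Ws = alpha[:, None] * W
    bs = alpha * b
    for _ in range(n_steps):
        # For +/-1 units, P(h = +1 | v) = sigmoid(2 (W~ v + b~)).
        ph = 1.0 / (1.0 + np.exp(-2.0 * (v @ Ws.T + bs)))
        h = np.where(rng.random(ph.shape) < ph, 1.0, -1.0)
        # Symmetrically, P(v = +1 | h) = sigmoid(2 (h W~)).
        pv = 1.0 / (1.0 + np.exp(-2.0 * (h @ Ws)))
        v = np.where(rng.random(pv.shape) < pv, 1.0, -1.0)
    return v
```

Running the chain for only a few steps from the data, rather than to equilibrium, is the contrastive-divergence shortcut mentioned in Section 4.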
The proposed algorithm uses just one sample of negative examples to fit a new feature, which is very economical as compared to most non-sequential algorithms which must generate an entire new sample for every gradient update. There are many interesting issues and variations that we have not addressed in this paper. What is the effect of using approximate, e.g. variational, distributions for p(x)? Can we improve the accuracy of the model by fitting the feature parameters θ_i and the coefficients α_i together? Does re-sampling the negative examples more frequently during learning improve the final model? What is the effect of using different functions to weight the data and how do the weighting schemes interact with the dimensionality of the problem?

References

[1] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using 2-layer networks. In Advances in Neural Information Processing Systems, volume 4, pages 912–919, 1992.
[2] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. Technical report, Dept. of Statistics, Stanford University, 1998.
[3] J.H. Friedman. Greedy function approximation: A gradient boosting machine. Technical report, Dept. of Statistics, Stanford University, 1999.
[4] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771–1800, 2002.
[5] G.E. Hinton and A. Brown. Spiking Boltzmann machines. In Advances in Neural Information Processing Systems, volume 12, 2000.
[6] G. Lebanon and J. Lafferty. Boosting and maximum likelihood for exponential models. In Advances in Neural Information Processing Systems, volume 14, 2002.
[7] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In Advances in Neural Information Processing Systems, volume 12, 2000.
[8] S. Della Pietra, V.J. Della Pietra, and J.D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997.
[9] S. Rosset and E. Segal. Boosting density estimation. In Advances in Neural Information Processing Systems, volume 15 (this volume), 2002.
[10] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. In Computational Learning Theory, pages 80–91, 1998.
[11] S.C. Zhu, Z.N.
Wu, and D. Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627–1660, 1997.
2002
Using Tarjan's Red Rule for Fast Dependency Tree Construction

Dan Pelleg and Andrew Moore
School of Computer Science, Carnegie-Mellon University
Pittsburgh, PA 15213 USA
dpelleg@cs.cmu.edu, awm@cs.cmu.edu

Abstract

We focus on the problem of efficient learning of dependency trees. It is well-known that given the pairwise mutual information coefficients, a minimum-weight spanning tree algorithm solves this problem exactly and in polynomial time. However, for large data-sets it is the construction of the correlation matrix that dominates the running time. We have developed a new spanning-tree algorithm which is capable of exploiting partial knowledge about edge weights. The partial knowledge we maintain is a probabilistic confidence interval on the coefficients, which we derive by examining just a small sample of the data. The algorithm is able to flag the need to shrink an interval, which translates to inspection of more data for the particular attribute pair. Experimental results show running time that is near-constant in the number of records, without significant loss in accuracy of the generated trees. Interestingly, our spanning-tree algorithm is based solely on Tarjan's red-edge rule, which is generally considered a guaranteed recipe for bad performance.

1 Introduction

Bayes' nets are widely used for data modeling. However, the problem of constructing Bayes' nets from data remains a hard one, requiring search in a super-exponential space of possible graph structures. Despite recent advances [1], learning network structure from big data sets demands huge computational resources. We therefore turn to a simpler model, which is easier to compute while still being expressive enough to be useful. Namely, we look at dependency trees, which are belief networks that satisfy the additional constraint that each node has at most one parent.
In this simple case it has been shown [2] that finding the tree that maximizes the data likelihood is equivalent to finding a minimum-weight spanning tree in the attribute graph, where edge weights are derived from the mutual information of the corresponding attribute pairs. Dependency trees are interesting in their own right, but also as initializers for Bayes' Net search, as mixture components [3], or as components in classifiers [4]. It is our intent to eventually apply the technology introduced in this paper to the full problem of Bayes Net structure search. Once the weight matrix is constructed, executing a minimum spanning tree (MST) algorithm is fast. The time-consuming part is the population of the weight matrix, which takes time O(Rn²) for R records and n attributes. This becomes expensive when considering datasets with hundreds of thousands of records and hundreds of attributes. To overcome this problem, we propose a new way of interleaving the spanning tree construction with the operations needed to compute the mutual information coefficients. We develop a new spanning-tree algorithm, based solely on Tarjan's [5] red-edge rule. This algorithm is capable of using partial knowledge about edge weights and of signaling the need for more accurate information regarding a particular edge. The partial information we maintain is in the form of probabilistic confidence intervals on the edge weights; an interval is derived by looking at a sub-sample of the data for a particular attribute pair. Whenever the algorithm signals that a currently-known interval is too wide, we inspect more data records in order to shrink it. Once the interval is small enough, we may be able to prove that the corresponding edge is not a part of the tree. Whenever such an edge can be eliminated without looking at the full data-set, the work associated with the remainder of the data is saved. This is where performance is potentially gained.
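The baseline this paper improves on, the Chow-Liu reduction [2], can be sketched directly: fill the pairwise mutual-information matrix from the full data, then take a maximum-weight spanning tree (Prim's algorithm here; an illustrative sketch, not the paper's implementation):

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(x)
    joint, px, py = {}, {}, {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum(c / n * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def chow_liu_tree(data):
    """data: (R, n) array of discrete values; returns the n-1 tree edges."""
    _, n = data.shape
    w = np.zeros((n, n))
    for i, j in combinations(range(n), 2):   # the O(R n^2) bottleneck
        w[i, j] = w[j, i] = mutual_information(data[:, i], data[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:                  # Prim's algorithm, maximizing
        _, i, j = max((w[i, j], i, j) for i in in_tree
                      for j in range(n) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges
```

The double loop over attribute pairs, each touching all R records, is exactly the cost the interval-based algorithm below tries to avoid paying in full.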
We have implemented the algorithm for numeric and categorical data and tested it on real and synthetic data-sets containing hundreds of attributes and millions of records. We show experimental results of up to 5,000-fold speed improvements over the traditional algorithm. The resulting trees are, in most cases, of near-identical quality to the ones grown by the naive algorithm. Use of probabilistic bounds to direct structure-search appears in [6] for classification and in [7] for model selection. In a sequence of papers, Domingos et al. have demonstrated the usefulness of this technique for decision trees [8], K-means clustering [9], and mixtures-of-Gaussians EM [10]. In the context of dependency trees, Meila [11] discusses the discrete case that frequently comes up in text-mining applications, where the attributes are sparse in the sense that only a small fraction of them is true for any record. In this case it is possible to exploit the sparseness and accelerate the Chow-Liu algorithm. Throughout the paper we use the following notation. The number of data records is R, the number of attributes n. When x is an attribute, x_i is the value it takes for the i-th record. We denote by ρ_xy the correlation coefficient between attributes x and y, and omit the subscript when it is clear from the context.

2 A slow minimum-spanning tree algorithm

We begin by describing our MST algorithm¹. Although in its given form it can be applied to any graph, it is asymptotically slower than established algorithms (as predicted in [5] for all algorithms in its class). We then proceed to describe its use in the case where some edge weights are known not exactly, but rather only to lie within a given interval. In Section 4 we will show how this property of the algorithm interacts with the data-scanning step to produce an efficient dependency-tree algorithm.
In the following discussion we assume we are given a complete graph with n nodes, and the task is to find a tree connecting all of its nodes such that the total tree weight (defined to be the sum of the weights of its edges) is minimized. This problem has been extremely well studied and numerous efficient algorithms for it exist. We start with a rule to eliminate edges from consideration for the output tree. Following [5], we state the so-called "red-edge" rule:

Theorem 1: The heaviest edge in any cycle in the graph is not part of the minimum spanning tree.

¹ To be precise, we will use it as a maximum spanning tree algorithm. The two are interchangeable, requiring just a reversal of the edge weight comparison operator.

Traditionally, MST algorithms use this rule in conjunction with a greedy "blue-edge" rule, which chooses edges for inclusion in the tree. In contrast, we will repeatedly use the red-edge rule until all but n − 1 edges have been eliminated. The proof that this results in a minimum spanning tree follows from [5]. Let E be the original set of edges. Denote by L the set of edges that have already been eliminated, and let L̄ = E \ L. As a way to guide our search for edges to eliminate we maintain the following invariant:

Invariant 2: At any point there is a spanning tree T, which is composed of edges in L̄.

1. T ← an arbitrary spanning set of n − 1 edges. L ← empty set.
2. While |L̄| > n − 1 do:
     Pick an arbitrary edge e ∈ L̄ \ T.
     Let e′ be the heaviest edge on the path in T between the endpoints of e.
     If e is heavier than e′: L ← L ∪ {e}
     otherwise: T ← T ∪ {e} \ {e′}; L ← L ∪ {e′}
3. Output T.

Figure 1: The MIST algorithm. At each step of the iteration, T contains the current "draft" tree. L contains the set of edges that have been proven to not be in the MST, and so L̄ contains the set of edges that still have some chance of being in the MST. T never contains an edge in L.
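The elimination loop built on this rule and invariant (listed in Figure 1) can be sketched in plain Python for the exact-weights case (an illustration, not the authors' code; `weight` is a hypothetical dict keyed by normalized edge tuples):

```python
def mist(n, weight):
    """Minimum spanning tree of the complete graph on nodes 0..n-1 via
    repeated application of the red-edge rule, as in Figure 1.
    weight maps each edge (i, j) with i < j to its exact weight."""
    def norm(a, b):
        return (min(a, b), max(a, b))

    def w(e):
        return weight[e]

    def path_edges(tree, src, dst):
        # DFS for the unique tree path from src to dst; returns its edges.
        adj = {v: [] for v in range(n)}
        for (a, b) in tree:
            adj[a].append(b)
            adj[b].append(a)
        parent, stack = {src: None}, [src]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in parent:
                    parent[u] = v
                    stack.append(u)
        edges, v = [], dst
        while parent[v] is not None:
            edges.append(norm(parent[v], v))
            v = parent[v]
        return edges

    all_edges = {norm(i, j) for i in range(n) for j in range(i + 1, n)}
    T = {(i, i + 1) for i in range(n - 1)}   # arbitrary initial spanning tree
    L = set()                                # edges proven NOT in the MST
    while len(all_edges - L) > n - 1:
        e = next(iter(all_edges - L - T))
        ep = max(path_edges(T, e[0], e[1]), key=w)  # heaviest tree-path edge
        if w(e) > w(ep):
            L.add(e)                 # e is heaviest in the cycle: red-edge rule
        else:
            T = (T | {e}) - {ep}     # swap to preserve Invariant 2
            L.add(ep)                # ...and eliminate the old tree edge
    return T
```

With exact weights every comparison is conclusive; the point of the paper is that the same loop still works when some comparisons come back inconclusive.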
In each step, we arbitrarily choose some edge e in ¯L \ T and try to eliminate it using the red-edge rule. Let P be the path in T between e’s endpoints. The cycle we will apply the red-edge rule to will be composed of e and P. It is clear we only need to compare e with the heaviest edge in P. If e is heavier, we can eliminate it by the red-edge rule. However, if it is lighter, then we can eliminate the tree edge by the same rule. We do so and add e to the tree to preserve Invariant 2. The algorithm, which we call Minimum Incremental Spanning Tree (MIST), is listed in Figure 1. The MIST algorithm can be applied directly to a graph where the edge weights are known exactly. And like many other MST algorithms, it can also be used in the case where just the relative order of the edge weights is given. Now imagine a different setup, where edge weights are not given, and instead an oracle exists, who knows the exact values of the edge weights. When asked about the relative order of two edges, it may either respond with the correct answer, or it may give an inconclusive answer. Furthermore, a constant fee is charged for each query. In this setup, MIST is still suited for finding a spanning tree while minimizing the number of queries issued. In step 2, we go to the oracle to determine the order. If the answer is conclusive, the algorithm proceeds as described. Otherwise, it just ignores the “if” clause altogether and iterates (possibly with a different edge e). For the moment, this setup may seem contrived, but in Section 4, we go back to the MIST algorithm and put it in a context very similar to the one described here. 3 Probabilistic bounds on mutual information We now concentrate once again on the specific problem of determining the mutual information between a pair of attributes. We show how to compute it given the complete data, and how to derive probabilistic confidence intervals for it, given just a sample of the data. 
As shown in [12], the mutual information for two jointly Gaussian numeric attributes X and Y is

I(X; Y) = −(1/2) ln(1 − ρ²)

where the correlation coefficient is ρ = ρ_XY = Σ_{i=1}^{R} (x_i − x̄)(y_i − ȳ) / √(σ̂²_X σ̂²_Y), with x̄, ȳ, σ̂²_X and σ̂²_Y being the sample means and variances for attributes X and Y. Since the log function is monotonic, I(X; Y) must be monotonic in |ρ|. This is a sufficient condition for the use of |ρ| as the edge weight in a MST algorithm. Consequently, the sample correlation can be used in a straightforward manner when the complete data is available. Now consider the case where just a sample of the data has been observed. Let x and y be two data attributes. We are trying to estimate Σ_{i=1}^{R} x_i · y_i given the partial sum Σ_{i=1}^{r} x_i · y_i for some r < R. To derive a confidence interval, we use the Central Limit Theorem.² It states that given samples of the random variable Z (where for our purposes Z_i = x_i · y_i), the sum Σ_i Z_i can be approximated by a Normal distribution with mean and variance closely related to the distribution mean and variance. Furthermore, for large samples, the sample mean and variance can be substituted for the unknown distribution parameters. Note in particular that the Central Limit Theorem does not require us to make any assumption about the Gaussianity of Z. We can thus derive a two-sided confidence interval for Σ_i Z_i = Σ_i x_i · y_i with probability 1 − δ for some user-specified δ, typically 1%. Given this interval, computing an interval for ρ is straightforward. Categorical data can be treated similarly; for lack of space we refer the reader to [13] for the details.

4 The full algorithm

As we argued, the MIST algorithm is capable of using partial information about edge weights. We have also shown how to derive confidence intervals on edge weights. We now combine the two and give an efficient dependency-tree algorithm. We largely follow the MIST algorithm as listed in Figure 1.
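The confidence-interval machinery from Section 3 that gets combined with MIST can be sketched as follows (an illustrative plain-Python function, not the authors' code; z = 2.58 approximates the two-sided normal quantile for δ = 1%, and the finite-population correction is ignored for simplicity):

```python
import math

def sum_interval(sample, R, z=2.58):
    """Two-sided confidence interval for the full-data sum S = sum_i x_i*y_i,
    extrapolated from a sub-sample via the Central Limit Theorem.

    sample: the r >= 2 products x_i * y_i seen so far; R: total record count.
    The estimator R * mean has variance R^2 * var / r, hence the half-width.
    """
    r = len(sample)
    mean = sum(sample) / r
    var = sum((z_i - mean) ** 2 for z_i in sample) / (r - 1)
    half = z * R * math.sqrt(var / r)
    return R * mean - half, R * mean + half
```

Doubling the sample size (a "promotion" in the algorithm below) shrinks the interval by roughly a factor of √2.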
We initialize the tree T in the following heuristic way: first we take a small sub-sample of the data, and derive point estimates for the edge weights from it. Then we feed the point estimates to a MST algorithm and obtain a tree T. When we come to compare edge weights, we generally need to deal with two intervals. If they do not intersect, then the points in one of them are all smaller in value than any point in the other, in which case we can determine which represents a heavier edge. We apply this logic to all comparisons, where the goal is to determine the heaviest path edge e′ and to compare it to the candidate e. If we are lucky enough that all of these comparisons are conclusive, the amount of work we save is related to how much data was used in computing the confidence intervals — the rest of the data for the attribute-pair that is represented by the eliminated edge can be ignored. However, there is no guarantee that the intervals are separated and allow us to draw meaningful conclusions. If they do not, then we have a situation similar to the inconclusive oracle answers in Section 2. The price we need to pay here is looking at more data to shrink the confidence intervals. We do this by choosing one edge — either a tree-path edge or the candidate edge — for "promotion", and doubling the sample size used to compute the sufficient statistics for it. After doing so we try to eliminate again (since we can do this at no additional cost). If we fail to eliminate we iterate, possibly choosing a different candidate edge (and the corresponding tree path) this time. The choice of which edge to promote is heuristic, and depends on the expected success of resolution once the interval has shrunk. The details of these heuristics are omitted due to space constraints.

² One can use the weaker Hoeffding bound instead, and our implementation supports it as well, although it is generally much less powerful.

Another heuristic we employ goes as follows.
Consider the comparison of the path-heaviest edge to an estimate of a candidate edge. The candidate edge's confidence interval may be very small, and yet still intersect the interval that is the heavy edge's weight (this would happen if, for example, both attribute-pairs have the same distribution). We may be able to reduce the amount of work by pretending the interval is narrower than it really is. We therefore trim the interval by a constant, parameterized by the user as ϵ, before performing the comparison. This use of δ and ϵ is analogous to their use in "Probably Approximately Correct" analysis: on each decision, with high probability (1 − δ) we will make at worst a small mistake (ϵ).

5 Experimental results

In the following description of experiments, we vary different parameters for the data and the algorithm. Unless otherwise specified, these are the default values for the parameters. We set δ to 1% and ϵ to 0.05 (on either side of the interval, totaling 0.1). The initial sample size is fifty records. There are 100,000 records and 100 attributes. The data is numeric. The data-generation process first generates a random tree, then draws points for each node from a normal distribution with the node's parent's value as the mean. In addition, any data value is set to random noise with probability 0.15. To construct the correlation matrix from the full data, each of the R records needs to be considered for each of the n(n−1)/2 attribute pairs. We evaluate the performance of our algorithm by adding up the number of records that were actually scanned over all attribute-pairs, and dividing the total by R · n(n−1)/2. We call this number the "data usage" of our algorithm. The closer it is to zero, the more efficient our sampling is, while a value of one means the same amount of work as for the full-data algorithm. We first demonstrate the speed of our algorithm as compared with the full O(Rn²) scan.
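The "data usage" metric defined above amounts to the following (a trivial sketch; `records_scanned`, a per-pair counter, is a hypothetical name, not the authors' code):

```python
def data_usage(records_scanned, R, n):
    """'Data usage' of a run: total records actually scanned, summed over
    all attribute pairs, divided by R * n(n-1)/2 -- the cost of a full scan.
    records_scanned: dict mapping an attribute pair to records read for it."""
    pairs = n * (n - 1) // 2
    return sum(records_scanned.values()) / (R * pairs)
```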
Figure 2 shows that the amount of data the algorithm examines is a constant that does not depend on the size of the data-set. This translates to relative run-times of 0.7% (for the 37,500-record set) to 0.02% (for the 1,200,000-record set) as compared with the full-data algorithm. The latter number translates to a 5,000-fold speedup. Note that the reported usage is an average over the number of attributes. However, this does not mean that the same amount of data was inspected for every attribute-pair — the algorithm determines how much effort to invest in each edge separately. We return to this point below. The running time is plotted against the number of data attributes in Figure 3. A linear relation is clearly seen, meaning that (at least for this particular data-generation scheme) the algorithm is successful in doing work that is proportional to the number of tree edges. Clearly, this speed has to be traded off against something. For our algorithm the risk is making the wrong decision about which edges to include in the resulting tree. For many applications this is an acceptable risk. However, there might be a simpler way to grow estimate-based dependency trees, one that does not involve complex red-edge rules. In particular, we can just run the original algorithm on a small sample of the data, and use the generated tree. It would certainly be fast, and the only question is how well it performs.

Figure 2: Data usage (indicative of absolute running time), in attribute-pair units per attribute.

Figure 3: Running time as a function of the number of attributes.

Figure 4: Relative log-likelihood vs. the sample-based algorithm. The log-likelihood difference is divided by the number of records.
Figure 5: Relative log-likelihood vs. the sample-based algorithm, drawn against the fraction of data scanned.

To examine this effect we generated data as above, then ran a 30-fold cross-validation test for the trees our algorithm generated. We also ran a sample-based algorithm on each of the folds. This variant behaves just like the full-data algorithm, but examines just the fraction of the data that adds up to the total amount of data used by our algorithm. Results for multiple data-sets are in Figure 4. We see that our algorithm outperforms the sample-based algorithm, even though they are both using the same total amount of data. The reason is that using the same amount of data for all edges assumes all attribute-pairs have the same variance. This is in contrast to our algorithm, which determines the amount of data for each edge independently. Apparently for some edges this decision is very easy, requiring just a small sample. These "savings" can be used to look at more data for high-variance edges. The sample-based algorithm would not put more effort into those high-variance edges, eventually making the wrong decision. In Figure 5 we show the log-likelihood difference for a particular (randomly generated) set. Here, multiple runs with different δ and ϵ values were performed, and the result is plotted against the fraction of data used. The baseline (0) is the log-likelihood of the tree grown by the original algorithm using the full data. Again we see that MIST is better over a wide range of data-utilization ratios. Keep in mind that the sample-based algorithm has been given an unfair advantage compared with MIST: it knows how much data it needs to look at. This parameter is implicitly passed to it from our algorithm, and represents an important piece of information about the data.
Without it, there would need to be a preliminary stage to determine the sample size. The alternative is to use a fixed amount (specified either as a fraction or as an absolute count), which is likely to be too much or too little. To test our algorithm on real-life data, we used various data-sets from [14, 15], as well as data derived from astronomical observations taken in the Sloan Digital Sky Survey. On each data-set we ran a 30-fold cross-validation test as described above. For each training fold, we ran our algorithm, followed by a sample-based algorithm that uses as much data as our algorithm did. Then the log-likelihoods of both trees were computed for the test fold. Table 1 shows whether the 99% confidence interval for the log-likelihood difference indicates that either of the algorithms outperforms the other. In seven cases the MIST-based algorithm was better, the sample-based version won in four, and there was one tie. Remember that the sample-based algorithm takes advantage of the "data usage" quantity computed by our algorithm. Without it, it would be weaker or slower, depending on how conservative the sample size was.

Table 1: Results, relative to the sample-based algorithm, on real data. "Type" means numerical or categorical data.

NAME             ATTR.  RECORDS   TYPE  DATA USAGE  MIST BETTER?  SAMPLE BETTER?
CENSUS-HOUSE      129     22784    N      1.0%          ×             √
COLORHISTOGRAM     32     68040    N      0.5%          √             ×
COOCTEXTURE        16     68040    N      4.6%          ×             √
ABALONE             8      4177    N     21.0%          ×             ×
COLORMOMENTS       10     68040    N      0.6%          ×             √
CENSUS-INCOME     678     99762    C      0.05%         √             ×
COIL2000          624      5822    C      0.9%          √             ×
IPUMS             439     88443    C      0.06%         √             ×
KDDCUP99          214    303039    C      0.02%         √             ×
LETTER             16     20000    N      1.5%          √             ×
COVTYPE           151    581012    C      0.009%        ×             √
PHOTOZ             23   2381112    N      0.008%        √             ×

6 Conclusion and future work

We have presented an algorithm that applies a "probably approximately correct" approach to dependency-tree construction for numeric and categorical data.
Experiments on data-sets with up to millions of records and hundreds of attributes show it is capable of processing massive data-sets in time that is constant in the number of records, with just a minor loss in output quality. Future work includes embedding our algorithm in a framework for fast Bayes' Net structure search. An additional issue we would like to tackle is disk access. One advantage the full-data algorithm has is that it is easily executed with a single sequential scan of the data file. We will explore the ways in which this behavior can be attained or approximated by our algorithm. While we have derived formulas for both numeric and categorical data, we currently do not allow both types of attributes to be present in a single network.

Acknowledgments

We would like to thank Mihai Budiu, Scott Davies, Danny Sleator and Larry Wasserman for helpful discussions, and Andy Connolly for providing access to data.

References

[1] Nir Friedman, Iftach Nachman, and Dana Pe'er. Learning Bayesian network structure from massive datasets: The "sparse candidate" algorithm. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 206–215, Stockholm, Sweden, 1999.

[2] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.

[3] Marina Meila. Learning with Mixtures of Trees. PhD thesis, Massachusetts Institute of Technology, 1999.

[4] N. Friedman, M. Goldszmidt, and T. J. Lee. Bayesian network classification with continuous attributes: Getting the best of both discretization and parametric fitting. In Jude Shavlik, editor, International Conference on Machine Learning, 1998.

[5] Robert Endre Tarjan. Data Structures and Network Algorithms, volume 44 of CBMS-NSF Reg. Conf. Ser. Appl. Math. SIAM, 1983.

[6] Oded Maron and Andrew W. Moore. Hoeffding races: Accelerating model selection search for classification and function approximation. In Jack D.
Cowan, Gerald Tesauro, and Joshua Alspector, editors, Advances in Neural Information Processing Systems, volume 6, pages 59–66, Denver, Colorado, 1994. Morgan Kaufmann.

[7] Andrew W. Moore and Mary S. Lee. Efficient algorithms for minimizing cross validation error. In Proceedings of the 11th International Conference on Machine Learning (ICML-94), pages 190–198. Morgan Kaufmann, 1994.

[8] Pedro Domingos and Geoff Hulten. Mining high-speed data streams. In Raghu Ramakrishnan, Sal Stolfo, Roberto Bayardo, and Ismail Parsa, editors, Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-00), pages 71–80, N.Y., August 20–23 2000. ACM Press.

[9] Pedro Domingos and Geoff Hulten. A general method for scaling up machine learning algorithms and its application to clustering. In Carla Brodley and Andrea Danyluk, editors, Proceedings of the 17th International Conference on Machine Learning, San Francisco, CA, 2001. Morgan Kaufmann.

[10] Pedro Domingos and Geoff Hulten. Learning from infinite data in finite time. In Proceedings of the 14th Neural Information Processing Systems (NIPS-2001), Vancouver, British Columbia, Canada, 2001.

[11] Marina Meila. An accelerated Chow and Liu algorithm: Fitting tree distributions to high-dimensional sparse data. In Proceedings of the 16th International Conference on Machine Learning (ICML-99), Bled, Slovenia, 1999.

[12] Fazlollah Reza. An Introduction to Information Theory, pages 282–283. Dover Publications, New York, 1994.

[13] Dan Pelleg and Andrew Moore. Using Tarjan's red rule for fast dependency tree construction. Technical Report CMU-CS-02-116, Carnegie Mellon University, 2002.

[14] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.

[15] S. Hettich and S. D. Bay. The UCI KDD archive, 1999. http://kdd.ics.uci.edu.
An Estimation-Theoretic Framework for the Presentation of Multiple Stimuli Christian W. Eurich∗ Institute for Theoretical Neurophysics University of Bremen Otto-Hahn-Allee 1 D-28359 Bremen, Germany eurich@physik.uni-bremen.de Abstract A framework is introduced for assessing the encoding accuracy and the discriminational ability of a population of neurons upon simultaneous presentation of multiple stimuli. Minimal square estimation errors are obtained from a Fisher information analysis in an abstract compound space comprising the features of all stimuli. Even for the simplest case of linear superposition of responses and Gaussian tuning, the symmetries in the compound space are very different from those in the case of a single stimulus. The analysis allows for a quantitative description of attentional effects and can be extended to include neural nonlinearities such as nonclassical receptive fields. 1 Introduction An important issue in the Neurosciences is the investigation of the encoding properties of neural populations from their electrophysiological properties such as tuning curves, background noise, and correlations in the firing. Many theoretical studies have used estimation theory, in particular the measure of Fisher information, to account for the neural encoding accuracy with respect to the presentation of a single stimulus (e. g., [1, 2, 3, 4, 5]). Most modeling studies, however, neglect the fact that in a natural situation, neural activity results from multiple objects or even complex sensory scenes. In particular, attention experiments require the presentation of at least one distractor along with the attended stimulus. Electrophysiological data are now available demonstrating effects of selective attention on neural firing behavior in various cortical areas [6, 7, 8]. Such experiments require the development of theoretical tools which deviate from the usual practice of considering only single stimuli in the analysis. Zemel et al. 
[9] employ an extended encoding scheme for stimulus distributions and use Bayesian decoding to account for the presentation of multiple objects. Similarly, Bayesian estimation has been used in the context of attentional phenomena [10].

* Homepage: http://www-neuro.physik.uni-bremen.de/~eurich

In this paper, a new estimation-theoretic framework for the simultaneous presentation of multiple stimuli is introduced. Fisher information is employed to compute lower bounds for the encoding error and the discriminational ability of neural populations independent of a particular estimator. Here we focus on the simultaneous presentation of two objects in the context of attentional phenomena. Furthermore, we assume linearity in the neural response for reasons of analytical tractability; however, the method can be extended to include neural nonlinearities.

2 Estimation Theory for Multiple Stimuli

2.1 Tuning Curves in Compound Space

The tuning curve f(X) of a neuron is defined to be the average neural response to repeated presentations of stimulus configurations X. In most cases, the response is taken to be the number n(X) of action potentials occurring within some time interval τ after stimulus presentation, or the neural firing rate r(X) = n(X)/τ:

f(X) = ⟨r(X)⟩ = ⟨n(X)⟩/τ .   (1)

Within an estimation-theoretic framework, the variability of the neural response is described by a probability distribution conditioned on the value of X, P(n; X). The average ⟨·⟩ in (1) can be regarded either as an average over multiple presentations of the same stimulus configuration (in an experimental setup), or as an average over n (in a theoretical description). In most electrophysiological experiments, tuning curves are assessed through the presentation of a single stimulus, X = x⃗, such as a bar or a grating characterized by a single orientation, or a dot of light at a specific position in the animal's visual field (e.g., [11, 12]).
Such tuning curves will be denoted by f1(x⃗), where the subscript refers to the single object. The behavior of a neuron upon presentation of multiple objects, however, cannot be inferred from tuning curves f1(x⃗). Instead, neurons may show nonlinearities such as the so-called non-classical receptive fields in visual area V1, which have attracted much attention in the recent past (e.g., [13, 14]). For M simultaneously presented stimuli, X = x⃗1, . . . , x⃗M, the neuronal tuning curve can be written as a function fM(x⃗1, . . . , x⃗M), where the subscript M is not necessarily a parameter of the function but an indicator of the number of stimuli it refers to. The domain of this function will be called the compound space of the stimuli. In the following, we consider a specific example consisting of two simultaneously presented stimuli, each characterized by a single physical property (such as orientation or direction of movement). The resulting tuning function is therefore a function of two scalar variables x1 and x2: f2(x1, x2) = ⟨r(x1, x2)⟩ = ⟨n(x1, x2)⟩/τ. Figure 1 visualizes the concept of the compound space. In order to obtain analytical access to the encoding properties of a neural population, we will furthermore assume that a neuron's response f2(x1, x2) is a linear superposition of the single-stimulus responses f1(x1) and f1(x2), i.e.,

f2(x1, x2) = k f1(x1) + (1 − k) f1(x2) ,   (2)

where 0 < k < 1 is a factor which scales the relative importance of the two stimuli. Such linear behavior has been observed in area 17 of the cat upon presentation of bi-vectorial transparent motion stimuli [15] and in areas MT and MST of the macaque monkey upon simultaneous presentation of two moving objects [16].

Figure 1: The concept of compound space.
A single-stimulus tuning curve f1(x) (left) yields the average response to the presentation of either x′ or x′′; the simultaneous presentation of x′ and x′′, however, can be formalized only through a tuning curve f2(x1, x2) (right).

In general, however, the compound-space method is not restricted to linear neural responses. The consideration of a neural population in the compound space yields tuning properties and symmetries which are very different from those in a D-dimensional single-stimulus space considered in the literature (e.g., [2, 3, 4]). First, the tuning curves have a different appearance. Figure 2a shows a tuning curve f2(x1, x2) given by (2), where f1(x) is a Gaussian,

f1(x) = F exp{−(x − c)² / 2σ²} ;   (3)

F is a gain factor which can be scaled to be the maximal firing rate of the neuron. f2(x1, x2) is not radially symmetric but has cross-shaped level curves. Second, a single-stimulus tuning curve f1(x) whose center is located at x = c yields a linear superposition whose center is given by the vector (c, c) in the compound space. This is due to the fact that both axes describe the same physical stimulus feature. Therefore, all tuning curve centers are restricted to the 1-dimensional subspace x1 = x2. The tuning curve centers are assumed to have a distribution in the compound space which can be written as

η̃(c1, c2) = η(c) if c1 = c2 , and η̃(c1, c2) = 0 if c1 ≠ c2 .   (4)

Figure 2: (a) A tuning curve f2(x1, x2) in a 2-dimensional compound space given by (2) and (3) with k = 0.5, c = 5, σ = 0.3, F = 1. (b) Arrangement of tuning curves: The centers of the tuning curves are restricted to the diagonal x1 = x2. The cross is a schematic cross-section of the tuning curve in (a).
The geometrical features in the compound space suggest that an estimation-theoretic approach will yield encoding properties of neural populations which are different from those obtained from the presentation of a single stimulus.

2.2 Fisher Information

In order to assess the encoding accuracy of a neural population, the stochasticity of the neural response is taken into account. For N neurons, it is formalized as the probability of obtaining n(i) spikes in the i-th neuron (i = 1, . . . , N) as a response to the stimulus configuration X, P(n(1), n(2), . . . , n(N); X) ≡ P(n⃗; X). Here we assume independent spike generation mechanisms in the neurons:

P(n(1), n(2), . . . , n(N); X) = ∏_{i=1}^{N} P(n(i); X) .   (5)

These parameter-dependent distributions are obtained either experimentally or through a noise model; a convenient choice for the latter is a Poisson distribution with a spike count average given by the tuning curve (1) of each neuron. In the 2-dimensional compound space discussed in the previous section, P(n⃗; X) ≡ P(n⃗; x1, x2). The Fisher information is a 2 × 2 matrix J(x1, x2) = (J_ij(x1, x2)) (i, j ∈ {1, 2}), whose entries are given by

J_ij(x1, x2) = ⟨ (∂/∂x_i ln P(n⃗; x1, x2)) (∂/∂x_j ln P(n⃗; x1, x2)) ⟩   (i, j ∈ {1, 2}) .   (6)

The Cramér-Rao inequality states that a lower bound on the expected square estimation error of the i-th feature, ϵ²_{i,min} (i = 1, 2), is given by (J⁻¹)_ii, provided that the estimator is unbiased. In the following, this lower bound is studied in the 2-dimensional compound space.

3 Results

Single-neuron Fisher Information. The single-neuron Fisher information in the compound space can be written down for an arbitrary noise model. Here we choose a Poissonian spike distribution,

P(n; x1, x2) = (τ f2(x1, x2))ⁿ exp{−τ f2(x1, x2)} / n! ,   (7)

whereby the tuning is assumed to be linear according to (2), and the single-stimulus tuning curve f1(x) is a Gaussian given by (3).
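For the Poisson model (7), the Fisher information (6) reduces to the standard form J_ij = τ (∂f2/∂x_i)(∂f2/∂x_j)/f2, so the single-neuron matrix can be evaluated numerically (an illustrative sketch; the parameter values are made up for demonstration, not taken from the paper's figures):

```python
import math

def fisher_2x2(x1, x2, c, k=0.5, sigma=0.3, F=1.0, tau=1.0):
    """Single-neuron Fisher information matrix in the 2-d compound space,
    for Poisson spiking and linear tuning f2 = k*f1(x1) + (1-k)*f1(x2)
    with Gaussian f1 centered at c (a numerical sketch of eq. (8))."""
    def f1(x):
        return F * math.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    def df1(x):
        return -(x - c) / sigma ** 2 * f1(x)
    f = k * f1(x1) + (1 - k) * f1(x2)
    d1 = k * df1(x1)        # partial derivative of f2 w.r.t. x1
    d2 = (1 - k) * df1(x2)  # partial derivative of f2 w.r.t. x2
    # Poisson counts with mean tau*f give J_ij = tau * d_i * d_j / f.
    return [[tau * d1 * d1 / f, tau * d1 * d2 / f],
            [tau * d2 * d1 / f, tau * d2 * d2 / f]]
```

Note that a single neuron's matrix is an outer product and hence rank one; only after summing (integrating over the center density) does the population matrix become invertible for the Cramér-Rao bound.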
A straightforward calculation yields the single-neuron Fisher information matrix J^c(x1, x2) = (J^c_ij(x1, x2)) (i, j ∈ {1, 2}) given by

J^c(x1, x2) = (τF/σ⁴) [ k e^{−(x1−c)²/2σ²} + (1 − k) e^{−(x2−c)²/2σ²} ]⁻¹ ×
  ( k²(x1−c)² e^{−(x1−c)²/σ²}                         k(1−k)(x1−c)(x2−c) e^{−[(x1−c)²+(x2−c)²]/2σ²} )
  ( k(1−k)(x1−c)(x2−c) e^{−[(x1−c)²+(x2−c)²]/2σ²}     (1−k)²(x2−c)² e^{−(x2−c)²/σ²}                 ) ;   (8)

the index c refers to the center (c, c) of the tuning curve.

Population Fisher Information. For independently spiking neurons (5), the population Fisher information is the sum of the single-neuron Fisher information values. Assuming some density η(c) of tuning curve centers on the diagonal x1 = x2, the population Fisher information is therefore obtained by an integration of (8). Here we consider the simple case of a constant density, η(c) ≡ η₀, resulting in elements J_ij(x1, x2) (i, j ∈ {1, 2}) of the Fisher information matrix given by

J_ij(x1, x2) = η₀ ∫_{−∞}^{∞} J^c_ij(x1, x2) dc .   (9)

A symmetry with respect to the diagonal x1 = x2 allows the replacement of the two variables x1, x2 by a single variable ρ, visualized in Fig. 3.

Figure 3: Transformation to the variable ρ, which is proportional to the distance of the point (x1, x2) to the diagonal. ρ therefore quantifies the similarity of the stimuli x1 and x2.

It is straightforward to obtain two additional symmetries, J12(ρ) = J21(ρ) and J11(ρ) = J11(−ρ). The final population Fisher information is given by

J(ρ) = ( J11(ρ)    J12(ρ)                )
       ( J12(ρ)    ((1−k)²/k²) J11(ρ)    ) ,   (10)

whereby

J11(ρ) = (k²τFη₀/σ) ∫_{−∞}^{∞} (ξ + ρ/σ)² exp{−(ξ + ρ/σ)²} / [ k exp{−(ξ + ρ/σ)²/2} + (1−k) exp{−(ξ − ρ/σ)²/2} ] dξ ,

J12(ρ) = (k(1−k)τFη₀/σ) ∫_{−∞}^{∞} (ξ + ρ/σ)(ξ − ρ/σ) exp{−[(ξ + ρ/σ)² + (ξ − ρ/σ)²]/2} / [ k exp{−(ξ + ρ/σ)²/2} + (1−k) exp{−(ξ − ρ/σ)²/2} ] dξ .

In the following, three examples will be discussed.

3.1 Example 1: Symmetrical Tuning

First we study the symmetrical case k = 1/2, the receptive fields of which are given in Fig. 2a. Fig.
4 shows the minimal square estimation error for x1, ϵ²_{1,min}(ρ), as obtained from the first diagonal element of the inverse Fisher information matrix. Due to the symmetry, it is identical to the minimal square error for x2, ϵ²_{2,min}(ρ). The estimation error diverges as ρ → 0. This can be understood as follows: For k = 1/2, the matrix (10) is symmetric and can be diagonalized. The eigenvector directions are

v⃗1 = (1/√2) (1, 1)ᵀ ,   v⃗2 = (1/√2) (−1, 1)ᵀ .   (11)

Correspondingly, the diagonalized Fisher information matrix yields a lower bound for the estimation errors of (x1 + x2)/√2 and (x2 − x1)/√2, respectively. The results are shown in Fig. 5.

Figure 4: Minimal square estimation error for stimulus x1 or x2. Solid line: F = 1; dotted line: F = 1.5. In both cases, k = 0.5, σ = 1, τ = 1, η = 1.

Figure 5: Minimal square estimation error for (a) (x1 + x2)/√2 and (b) (x2 − x1)/√2. Solid lines: F = 1; dotted lines: F = 1.5. Same parameters as in Fig. 4.

The estimation error for (x1 + x2)/√2 takes a finite value for all ρ. However, the estimation error for (x2 − x1)/√2 diverges as ρ → 0. This error corresponds to an estimation of the difference of the two presented stimuli. As expected, a discrimination becomes impossible as the stimuli merge. The Fisher information for (x2 − x1)/√2 can be regarded as a discrimination measure which takes the simultaneous presentation of stimuli into account.

3.2 Example 2: Attention on Both Stimuli

Electrophysiological studies in V1 and V4 [7] and MT [8] of macaque monkeys suggest that the gain but not the width of tuning curves is increased as stimuli in a cell's receptive field are attended. This can easily be incorporated in the current model: the gain corresponds to the factor F in the tuning curve (3).
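For k = 1/2, the Fisher matrix (10) is symmetric with equal diagonal entries, so the change of basis to the eigenvector directions of eq. (11) diagonalizes it. This can be checked numerically (an illustrative sketch; the matrix entries below are made-up placeholders, not actual values of J11 and J12):

```python
import math

def rotate_to_sum_diff(J):
    """Express a symmetric 2x2 Fisher matrix J in the (x1+x2)/sqrt(2) and
    (x2-x1)/sqrt(2) coordinates of eq. (11), i.e. compute V^T J V where the
    columns of V are the eigenvector directions v1 and v2."""
    s = 1 / math.sqrt(2)
    V = [[s, -s], [s, s]]   # column 0 is v1, column 1 is v2
    JV = [[sum(J[i][m] * V[m][j] for m in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(V[m][i] * JV[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]
```

The reciprocals of the resulting diagonal entries are then the Cramér-Rao bounds for the sum and difference coordinates, respectively.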
Figures 4 and 5 compare the results obtained in the previous section (F = 1) with a maximal firing rate F = 1.5. As expected, the minimal square errors are smaller for higher F in all cases (dotted lines); a higher firing rate yields a better stimulus estimation. This suggests that attention increases localization accuracy of x1 and x2 as well as their discrimination if both stimuli are attended. The former is consistent with psychophysical results on attentional enhancement of spatial resolution in human subjects [17].

3.3 Example 3: Attending One Stimulus

The situation changes if only one of the two stimuli is attended. Electrophysiological recordings in monkey area V4 suggest that upon presentation of two stimuli inside a neuron's receptive field, the influence of the attended stimulus increases as compared to the unattended one [6]. In our framework, this situation can be considered by increasing the weight factor of the attended stimulus in the linear superposition (2). Here we study the case k = 0.75, corresponding to attending stimulus x1. The resulting tuning curve shows characteristic distortions as compared to the symmetrical case k = 0.5 (Fig. 6a). The Fisher information analysis reveals that the attended stimulus x1 yields a smaller minimal square estimation error than it does in the non-attention case k = 0.5, whereas the minimal square error for the unattended stimulus x2 is increased (data not shown).

Figure 6: Neural encoding for one attended stimulus. (a) Tuning curve (2), (3) for k = 0.75, i.e., stimulus x1 is attended. All other parameters as in Fig. 1a. (b) Minimal square estimation errors for the direction (x2 − x1)/√2 resulting from a rotated Fisher information matrix. Solid line: k = 0.5 as in Fig. 5b; dotted line: k = 0.75. F = 1, all other parameters as in Fig. 4.
Figure 6b shows the minimal square error for the difference of the stimuli, (x2 − x1)/√2. The minimal estimation error becomes larger as compared to k = 0.5. This result can be interpreted as follows: Attending stimulus x1 yields a better encoding of x1 but a worse encoding of x2. The latter results in the larger estimation error for the difference (x2 − x1)/√2 of the stimulus values. This can be interpreted as a worse discrimination ability: In a psychophysical experiment, subjects attending stimulus x1 will have only a crude representation of the unattended stimulus x2 and will therefore yield a performance which is worse as compared to the situation where both stimuli are processed in the same way. This is a prediction resulting from the presented framework.

4 Summary and Discussion

A method was introduced to account for the encoding of multiple stimuli by populations of neurons. Estimation theory was performed in a compound space whose axes are defined by the features of each stimulus. Here we studied a specific example of linear neurons with Gaussian tuning and Poissonian spike statistics to gain insight into the symmetries in the compound space and the interpretation of the resulting estimation errors. The approach allows for a detailed consideration of attention effects on the neural level [7, 8, 6]. The method can be extended to include nonlinear neural behavior as multiple stimuli are presented; see e.g. [13, 14], where the response of single neurons to two orientation stimuli cannot be easily inferred from the neural behavior in the case of only one stimulus. More experimental and theoretical work has to be done in order to account for the psychophysical performance under the influence of attention as it has been measured, for example, in [17]. For this purpose, the presented approach has to be related to classical measures in discrimination and same-different tasks.
From theoretical considerations in the case of a single stimulus [2, 3, 4, 5] it is well known that the encoding accuracy of a neural population may depend on various properties such as the number of encoded features, the noise model, and the correlations in the neural activity. The influence of such factors within the presented framework is currently under investigation.

Acknowledgments

I wish to thank Shun-ichi Amari, Hiroyuki Nakahara, Anthony Marley and Stefan Wilke for stimulating discussions. Part of this paper was written during my stay at the RIKEN institute. I also acknowledge support from SFB 517, Neurocognition.

References

[1] M. A. Paradiso, A theory for the use of visual orientation information which exploits the columnar structure of striate cortex, Biol. Cybern. 58 (1988) 35–49.
[2] K. Zhang and T. J. Sejnowski, Neuronal tuning: to sharpen or broaden? Neural Comp. 11 (1999) 75–84.
[3] C. W. Eurich and S. D. Wilke, Multidimensional encoding strategy of spiking neurons, Neural Comp. 12 (2000) 1519–1529.
[4] S. D. Wilke and C. W. Eurich, Representational accuracy of stochastic neural populations, Neural Comp. 14 (2001) 155–189.
[5] H. Nakahara, S. Wu and S.-i. Amari, Attention modulation of neural tuning through peak and base rate, Neural Comp. 13 (2001) 2031–2047.
[6] J. Moran and R. Desimone, Selective attention gates visual processing in the extrastriate cortex, Science 229 (1985) 782–784.
[7] C. J. McAdams and J. H. R. Maunsell, Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4, J. Neurosci. 19 (1999) 431–441.
[8] S. Treue and J. C. Martínez Trujillo, Feature-based attention influences motion processing gain in macaque visual cortex, Nature 399 (1999) 575–579.
[9] R. S. Zemel, P. Dayan and A. Pouget, Probabilistic interpretation of population codes, Neural Comp. 10 (1998) 403–430.
[10] P. Dayan and R. S. Zemel, Statistical models and sensory attention, in: D. Willshaw and A.
Murray (eds), Proceedings of the Ninth International Conference on Artificial Neural Networks, ICANN 99, University of Edinburgh (1999) 1017–1022.
[11] D. H. Hubel and T. Wiesel, Receptive fields and functional architecture of monkey striate cortex, J. Physiol. 195 (1968) 215–244.
[12] N. V. Swindale, Orientation tuning curves: empirical description and estimation of parameters, Biol. Cybern. 78 (1998) 45–56.
[13] J. J. Knierim and D. van Essen, Neuronal responses to static texture patterns in area V1 of the alert macaque monkey, J. Neurophysiol. 67 (1992) 961–979.
[14] A. M. Sillito, K. Grieve, H. Jones, J. Cudeiro and J. Davies, Visual cortical mechanisms detecting focal orientation discontinuities, Nature 378 (1995) 492–496.
[15] R. J. A. van Wezel, M. J. M. Lankheet, F. A. J. Verstraten, A. F. M. Marée and W. A. van de Grind, Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion, Vis. Res. 36 (1996) 2805–2813.
[16] G. H. Recanzone, R. H. Wurtz and U. Schwarz, Responses of MT and MST neurons to one and two moving objects in the receptive field, J. Neurophysiol. 78 (1997) 2904–2915.
[17] Y. Yeshurun and M. Carrasco, Attention improves or impairs visual performance by enhancing spatial resolution, Nature 396 (1998) 72–75.
|
2002
|
160
|
2,170
|
Convergent Combinations of Reinforcement Learning with Linear Function Approximation

Ralf Schoknecht
ILKD, University of Karlsruhe, Germany
ralf.schoknecht@ilkd.uni-karlsruhe.de

Artur Merke
Lehrstuhl Informatik 1, University of Dortmund, Germany
arturo.merke@udo.edu

Abstract

Convergence for iterative reinforcement learning algorithms like TD(0) depends on the sampling strategy for the transitions. However, in practical applications it is convenient to take transition data from arbitrary sources without losing convergence. In this paper we investigate the problem of repeated synchronous updates based on a fixed set of transitions. Our main theorem yields sufficient conditions of convergence for combinations of reinforcement learning algorithms and linear function approximation. This allows one to analyse whether a certain reinforcement learning algorithm and a certain function approximator are compatible. For the combination of the residual gradient algorithm with grid-based linear interpolation we show that there exists a universal constant learning rate such that the iteration converges independently of the concrete transition data.

1 Introduction

The strongest convergence guarantees for reinforcement learning (RL) algorithms are available for the tabular case, where temporal difference algorithms for both policy evaluation and the general control problem converge with probability one independently of the concrete sampling strategy, as long as all states are sampled infinitely often and the learning rate is decreased appropriately [2]. In large, possibly continuous, state spaces a tabular representation and adaptation of the value function is not feasible with respect to time and memory considerations. Therefore, linear feature-based function approximation is often used. However, it has been shown that synchronous TD(0), i.e. dynamic programming, diverges for general linear function approximation [1].
Convergence with probability one for TD(λ) with general linear function approximation has been proved in [12]. They establish the crucial condition of sampling states according to the steady-state distribution of the Markov chain in order to ensure convergence. This requirement is reasonable for the pure prediction task but may be disadvantageous for policy improvement, as shown in [6], because it may lead to bad action choices in rarely visited parts of the state space. When transition data is taken from arbitrary sources a certain sampling distribution cannot be assured, which may prevent convergence. An alternative to such iterative TD approaches are least-squares TD (LSTD) methods [4, 3, 6, 8]. They eliminate the learning rate parameter and carry out a matrix inversion in order to compute the fixed point of the iteration directly. In [4] a least-squares approach for TD(0) is presented which is generalised to TD(λ) in [3]. Both approaches still sample the states according to the steady-state distribution. In [6, 8] arbitrary sampling distributions are used, such that the transition data could be taken from any source. This may yield solutions that are not achievable by the corresponding iterative approach because this iteration diverges. All the LSTD approaches have the problem that the matrix to be inverted may be singular. This case can occur if the basis functions are not linearly independent or if the Markov chain is not recurrent. In order to apply the LSTD approach the problem would have to be preprocessed by sorting out the linearly dependent basis functions and the transient states of the Markov chain. In practice one would like to save this additional work. Thus, the least-squares TD algorithm can fail due to matrix singularity, and the iterative TD(0) algorithm can fail if the sampling distribution is different from the steady-state distribution. Hence, there are problems for which neither an iterative nor a least-squares TD solution exists.
The actual reason for the failure of the iterative TD(0) approach lies in an incompatible combination of the RL algorithm and the function approximator. Thus, the idea is that either a change in the RL algorithm or a change in the approximator may yield a convergent iteration. Here, a change in the TD(0) algorithm is not meant to completely alter the character of the algorithm. We require that only modifications of the TD(0) algorithm be considered that are consistent according to the definition in the next section. In this paper we propose a unified framework for the analysis of a whole class of synchronous iterative RL algorithms combined with arbitrary linear function approximation. For the sparse iteration matrices that occur in RL such an iterative approach is superior to a method that uses matrix inversion, as the LSTD approach does [5]. Our main theorem states sufficient conditions under which combinations of RL algorithms and linear function approximation converge. We hope that these conditions and the convergence analysis, which is based on the eigenvalues of the iteration matrix, bring new insight into the interplay of RL and function approximation. For an arbitrary linear function approximator and for arbitrary fixed transition data the theorem allows one to predict the existence of a constant learning rate such that the synchronous residual gradient algorithm [1] converges. Moreover, in combination with interpolating grid-based function approximators we are able to specify a formula for a constant learning rate such that the synchronous residual gradient algorithm converges independently of the transition data. This is very useful because otherwise the learning rate would have to be decreased, which slows down convergence.

2 A Framework for Synchronous Iterative RL Algorithms

For a Markov decision process (MDP) with N states S = {s_1, . . .
, s_N}, action space A, state transition probabilities p : (S, S, A) → [0, 1] and stochastic reward function r : (S, A) → ℝ, policy evaluation is concerned with solving the Bellman equation

V^π = γ P^π V^π + R^π (1)

for a fixed policy π : S → A. V_i^π denotes the value of state s_i, P^π_{ij} = p(s_i, s_j, π(s_i)), R_i = E{r(s_i, π(s_i))} and γ is the discount factor. As the policy π is fixed we will omit it in the following to make notation easier. If the state space S gets too large, the exact solution of equation (1) becomes very costly with respect to both memory and computation time. Therefore, linear feature-based function approximation is often applied. The value function V is represented as a linear combination of basis functions {φ_1, ..., φ_F}, which can be written as V = Φw, where w ∈ ℝ^F is the parameter vector describing the linear combination and Φ = (φ_1 | ... | φ_F) ∈ ℝ^{N×F} is the matrix with the basis functions as columns. The rows of Φ are the feature vectors φ(s_i) ∈ ℝ^F for the states s_i. A popular algorithm for updating the parameter vector w after a single transition x_i → z_i with reward r_i is the TD(0) algorithm [11]:

w^{n+1} = w^n + α φ(x_i)[r_i + γ φ(z_i)^T w^n − φ(x_i)^T w^n] = (I_F + α A_i) w^n + α b_i, (2)

where α is the learning rate, A_i = φ(x_i)[γ φ(z_i) − φ(x_i)]^T, b_i = φ(x_i) r_i and I_F is the identity matrix in ℝ^{F×F}. In the following we investigate the synchronous update for a fixed set of m transitions T = {(x_i, z_i, r_i) | i = 1, ..., m}. The start states x_i are sampled with respect to the probability distribution p, the next states z_i are sampled according to P(x_i, ·) and the rewards r_i are sampled from r(x_i). The synchronous update for the transition set T can then be written in matrix notation as

w^{n+1} = (I_F + α A_TD) w^n + α b_TD (3)

with A_TD = A_1 + ... + A_m and b_TD = b_1 + ... + b_m. Let X ∈ ℝ^{m×N} with X_{i,j} = 1 if x_i = s_j and 0 otherwise. Then Φ^X = XΦ ∈ ℝ^{m×F} is the matrix with feature vector φ(x_i) as its i-th row. Define Z and Φ^Z accordingly for the states z_i.
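As an illustration of the synchronous update (2)-(3), consider the following minimal sketch; the two-state chain, the tabular features and the learning rate are our own toy choices, not taken from the paper.

```python
import numpy as np

# Illustrative two-state chain (not from the paper): s1 -> s2 with reward 1,
# s2 -> s2 with reward 0, discount gamma = 0.5, tabular features Phi = I.
gamma, alpha = 0.5, 0.5
Phi = np.eye(2)
transitions = [(0, 1, 1.0), (1, 1, 0.0)]  # (x_i, z_i, r_i)

# A_TD = sum_i phi(x_i)(gamma*phi(z_i) - phi(x_i))^T,  b_TD = sum_i phi(x_i)*r_i
F = Phi.shape[1]
A_td = np.zeros((F, F))
b_td = np.zeros(F)
for x, z, r in transitions:
    A_td += np.outer(Phi[x], gamma * Phi[z] - Phi[x])
    b_td += Phi[x] * r

# Repeated synchronous update (3): w_{n+1} = (I_F + alpha*A_TD) w_n + alpha*b_TD
w = np.zeros(F)
for _ in range(200):
    w = (np.eye(F) + alpha * A_td) @ w + alpha * b_td
# For this chain the exact values are V(s1) = 1, V(s2) = 0.
```

With tabular features the iteration matrix has spectral radius below one here, so the repeated synchronous update converges to the exact value function.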
With the vector of obtained rewards r = (r_1, ..., r_m)^T we have A_TD = (Φ^X)^T (γΦ^Z − Φ^X) and b_TD = (Φ^X)^T r. The synchronous TD(0) algorithm is an instance of a much broader class of RL algorithms. The residual gradient algorithm [1], for example, minimises the Bellman error by gradient descent. In the following, let Θ = γΦ^Z − Φ^X. The matrix (1/m)D = (1/m)X^T X ∈ ℝ^{N×N} is diagonal and denotes the relative frequency of state s_i as start state in the transition data T. Let D̄ be the diagonal matrix with the inverse entries of D; for D_{i,i} = 0 set D̄_{i,i} = 0. The matrix of the relative frequencies for the state transitions from s_i to s_j is given by P̂ = D̄ X^T Z, and the vector of the average reward in the different states s_i is given by R̂ = D̄ X^T r. It can be shown that the weighted Bellman error for the synchronous update

E_B(w) = ½ [(γP̂ − I_N)Φw + R̂]^T (1/m)D [(γP̂ − I_N)Φw + R̂],

with the estimated entities P̂, R̂ and D instead of the unknown expected values P, R and D, is equivalent to the expression E_B(w) = (1/2m) [Θw + r]^T X D̄ X^T [Θw + r]. Thus, for the residual gradient algorithm the update rule (3) becomes w^{n+1} = (I_F + α A_RG) w^n + α b_RG with A_RG = −Θ^T X D̄ X^T Θ and b_RG = −Θ^T X D̄ X^T r. The synchronous TD(0) and the residual gradient algorithm can be analysed in a unified framework with A = Ψ^T Θ and b = Ψ^T r. By setting Ψ_TD = Φ^X and Ψ_RG = −X D̄ X^T Θ, for example, one obtains the TD(0) algorithm and the residual gradient algorithm respectively. Moreover, varying Ψ yields a whole class of algorithms. We denote such algorithms as consistent RL algorithms if two conditions are fulfilled. First, for a tabular representation the algorithm converges to an optimal solution w* with Bellman error zero. And second, if the algorithm converges with a linear function approximator, it achieves the same Bellman error independently of the initial value w^0. This class of RL algorithms includes the Kaczmarz rule [9], which is similar to the NTD(0) rule [4], or the uniform update rule described in [7].
In general, these algorithms yield different solutions when function approximation is used. For the TD(0) and the residual gradient algorithm this is shown in [10]. However, a general assessment of the solution quality of the different algorithms is still missing.

3 Convergence Results

The convergence properties of RL algorithms for synchronous updates in the general framework presented in the last section are described in the following main theorem of our paper. It generalises the case of repeated single-transition updates [7] to repeated multi-transition updates. For the following let [M] be the span of the columns of a matrix M and [M]⊥ the orthogonal complement of [M].

Theorem 1 Let w^{n+1} = (I_F + αA)w^n + αb be the synchronous update rule for the transition data T. Let A ∈ ℝ^{F×F} be representable as A = C^T D with some C, D ∈ ℝ^{k×F}, and let b ∈ ℝ^F be representable as b = C^T v with some v ∈ ℝ^k. Let K = DC^T ∈ ℝ^{k×k} and let p(x) = (−1)^k (x − λ_1)^{β_1} · · · (x − λ_l)^{β_l} be the characteristic polynomial of K over ℂ with |λ_1| > ... > |λ_l|. Also, let E_K^{λ_i} be the eigenspace corresponding to eigenvalue λ_i and H = max_i { |λ_i|² / |Re(λ_i)| }. If the following assumptions hold

(a) ∀i: (Re(λ_i) < 0) ∨ λ_i = 0
(b) dim(E_K^{λ_i}) = β_i for λ_i = 0
(c) [C^T] ∩ [D^T]⊥ = {0}

then the limit w* = lim_{n→∞} w^n exists for all learning rates 0 < α < α_L, where the limit learning rate satisfies α_L = 2/H. The limit w* may depend on the initial value w^0. Note that if the λ_i leading to the maximum in H is real, then H = |λ_i|.

A proof of this theorem can be found in the appendix. General convergence conditions of iterations have been examined in numerical mathematics. A standard result states that if the absolute value of the largest eigenvalue of the iteration matrix I_F + αA, i.e. the spectral radius, is smaller than one, then the iteration converges to the unique fixed point w* = −A^{−1}b [5] (Theorem 2.1.1). In our case, however, the matrix A may not be invertible.
This happens, for example, if the features φ_i in the feature matrix Φ are linearly dependent. If A is not invertible it has eigenvalue zero and, thus, I_F + αA has eigenvalue one. Conditions (b) and (c) in the above theorem are needed in order to compensate for the singularity of A and to assure convergence. If the iteration converges for singular A, the fixed point depends on the initial value w^0 and is no longer unique. Therefore, for consistent RL algorithms we require that the Bellman error of all fixed points be the same. Thus, the quality of the obtained solution to the policy evaluation problem is independent of the initial value. However, the suitability of different w* for a policy improvement step can vary, but this question is not addressed here. An important implication of Theorem 1 concerns the choice of the learning rate. If sampling were involved in the update rule, the learning rate would have to be decreased in the standard manner (Σ_t α_t = ∞, Σ_t α_t² < ∞) in order to fulfil the condition for stochastic approximation algorithms. However, for a fixed set of updates and certain synchronous RL algorithms with linear feature-based function approximation, Theorem 1 predicts the existence of a constant learning rate. In general the computation of this learning rate would require knowledge of the eigenvalues of K, which may not be directly available. As the following proposition shows, for certain combinations of RL algorithms and linear function approximation a universal constant learning rate exists such that the iteration in Theorem 1 converges. The proof can be found in the appendix.

Proposition 1 For an appropriate constant choice of the learning rate α the residual gradient algorithm will converge independently of the linear function approximation scheme when applied to the problem of repeated synchronous multi-transition updates. The residual gradient algorithm is a consistent RL algorithm.
If the residual gradient algorithm is combined with grid-based linear interpolation over an arbitrary triangulation of the state space and the transition set contains m transitions, then the iteration converges for all α < 2/(m(1 + γ²)). A choice of the learning rate α < 2/H according to Theorem 1 yields a convergent iteration. However, this might not be the best choice with respect to asymptotic convergence rate. The asymptotic convergence rate is better for matrices with lower spectral radius [5], which yields a criterion for the choice of an optimal learning rate α*. If K has only real eigenvalues then we can deduce a particularly simple formula for α*. Assume that all nonzero eigenvalues of K satisfy λ_i ∈ [λ_max, λ_min], where λ_min is the largest eigenvalue smaller than zero and λ_max is the eigenvalue with largest absolute value. It can be shown that the asymptotic convergence rate is determined by the eigenvalues of I_m + αK that are unequal to one. The eigenvalues λ_i of K are related to the eigenvalues λ̃_i of I_m + αK by λ̃_i = 1 + αλ_i. Hence, the interval [λ_max, λ_min] is mapped to [λ̃_max, λ̃_min] = [1 + αλ_max, 1 + αλ_min]. In order to obtain a low spectral radius of I_m + αK this interval should lie symmetrically around zero, which is equivalent to λ̃_min = −λ̃_max. This yields α* = 2/(|λ_min| + |λ_max|) < 2/H with H = |λ_max|. Thus, α* leads to convergence according to Theorem 1. Note also that a larger learning rate does not necessarily lead to a faster asymptotic convergence of the iteration.

4 Counterexample of Baird - Revisited

In this section we analyse the counterexample given by Baird in [1], and show how Theorem 1 and Proposition 1 can be applied to obtain explicit bounds for the learning rate α and the discount factor γ for which the residual gradient and TD(0) algorithms converge.
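The eigenvalue analysis of this section is easy to check numerically. The sketch below (our own; the helper names are illustrative) builds the matrices of Baird's star problem, which are written out below, and evaluates the Theorem 1 learning-rate limit 2/H.

```python
import numpy as np

# Baird's star counterexample: feature matrix Phi, start states X = I_7, and
# every transition ending in the center state (Z has a single nonzero column).
Phi = np.array([[1, 2, 0, 0, 0, 0, 0, 0],
                [1, 0, 2, 0, 0, 0, 0, 0],
                [1, 0, 0, 2, 0, 0, 0, 0],
                [1, 0, 0, 0, 2, 0, 0, 0],
                [1, 0, 0, 0, 0, 2, 0, 0],
                [1, 0, 0, 0, 0, 0, 2, 0],
                [2, 0, 0, 0, 0, 0, 0, 1]], dtype=float)
X = np.eye(7)
Z = np.zeros((7, 7))
Z[:, 6] = 1.0

def K_td(gamma):
    """K for synchronous TD(0): (gamma*Z - X) Phi (X Phi)^T."""
    return (gamma * Z - X) @ Phi @ (X @ Phi).T

def K_rg(gamma):
    """K for the residual gradient algorithm: -Theta Theta^T."""
    Theta = (gamma * Z - X) @ Phi
    return -Theta @ Theta.T

def alpha_limit(K):
    """Limit learning rate 2/H with H = max_i |lambda_i|^2 / |Re(lambda_i)|."""
    lam = np.linalg.eigvals(K)
    lam = lam[np.abs(lam) > 1e-12]
    return 2.0 / np.max(np.abs(lam) ** 2 / np.abs(lam.real))
```

For γ = 0.4 the TD(0) eigenvalues come out as {−3, −4, −5.2} and alpha_limit returns 2/5.2 ≈ 0.385, matching the bounds quoted below; for γ = 0.9 the residual gradient eigenvalues reproduce {−0.0204, −4, −12.7296}.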
The matrices Φ, X and Z are given by

Φ =
1 2 0 0 0 0 0 0
1 0 2 0 0 0 0 0
1 0 0 2 0 0 0 0
1 0 0 0 2 0 0 0
1 0 0 0 0 2 0 0
1 0 0 0 0 0 2 0
2 0 0 0 0 0 0 1

X = I_7 (the 7×7 identity matrix), and Z with every row equal to (0 0 0 0 0 0 1),

which corresponds to the synchronous update of every state transition. In the residual gradient case we have K_RG = −(γZ − X)Φ((γZ − X)Φ)^T, which has only negative eigenvalues

σ_RG = { −4, ½[ −15 + 34γ − 35γ² ± √(1225γ⁴ − 2380γ³ + 2102γ² − 812γ + 121) ] }.

Using Theorem 1 and Proposition 1 we can find a constant learning rate α such that the iteration converges for every γ ∈ [0, 1). For example, for γ = 0.9 the eigenvalues of K_RG are σ_RG = {−0.0204, −4, −12.7296} and Theorem 1 yields α < 0.1571, which is also almost equal to the optimal learning rate α* ≈ 0.1569. In the TD(0) case we have to analyse the matrix K_TD = (γZ − X)Φ(XΦ)^T, which has the eigenvalues

σ_TD = { −4, ½[ −15 + 17γ ± √(289γ² − 406γ + 121) ] }.

There are eigenvalues of K_TD with positive real part for γ ≳ 0.89. In such cases we have divergence for every α > 0, as described in [1] for γ = 0.9. However, contradicting the argument in [1], the TD(0) algorithm converges for all γ ≤ 0.88 if the learning rate is chosen appropriately. For example, for γ = 0.4 all eigenvalues are negative (σ_TD = {−3.0, −4, −5.2}), so conditions (a) and (b) of Theorem 1 are trivially fulfilled. Condition (c) can also be shown by simple computation, and therefore using Theorem 1 we obtain convergence for α < 0.384 and optimal asymptotic convergence for α* ≈ 0.244, which is much smaller.

5 Conclusions

For the problem of repeated synchronous updates based on a fixed set of transitions we have proved sufficient conditions of convergence for arbitrary combinations of reinforcement learning algorithms and linear function approximation. Our main theorem yields a rule for determining a problem-dependent learning rate such that the algorithm converges.
For a combination of the residual gradient algorithm with grid-based linear interpolation we have deduced a constant learning rate such that the algorithm converges independently of the concrete transition data. Moreover, we have derived a general formula for an optimal learning rate with respect to asymptotic convergence. Finally, we have applied our main theorem to fully analyse the example Baird gives for the divergence of TD(0) [1].

Appendix

Lemma 1 Let D be a real m × F matrix and C^T a real F × m matrix, where m > F. Then K = DC^T has the same eigenvalues as A = C^T D and additionally the eigenvalue zero with multiplicity (m − F). Let H_K^λ be the generalised eigenspace of K corresponding to the eigenvalue λ and H_A^λ the generalised eigenspace of A corresponding to the eigenvalue λ. Then C^T H_K^λ ⊆ H_A^λ and D H_A^λ ⊆ H_K^λ. For λ ≠ 0 it even holds that C^T H_K^λ = H_A^λ and D H_A^λ = H_K^λ.

Proof: The generalised eigenspace H_K^λ has index s_K^λ if s_K^λ is the smallest number for which ker(K − λI_m)^{s_K^λ} = ker(K − λI_m)^{s_K^λ + 1} holds, where I_k denotes the identity in ℝ^{k×k}. Let x ∈ H_K^λ, i.e. (K − λI_m)^{s_K^λ} x = 0. Expanding the power and using C^T K^i = A^i C^T we have

C^T (K − λI_m)^{s_K^λ} x = (A − λI_F)^{s_K^λ} C^T x. (4)

Thus C^T x ∈ H_A^λ. And with the same argument we obtain Dx ∈ H_K^λ from x ∈ H_A^λ. Therefore C^T H_K^λ ⊆ H_A^λ and D H_A^λ ⊆ H_K^λ. Let λ ≠ 0 and B_K^λ a basis in H_K^λ. As the Jordan block of K corresponding to H_K^λ is invertible, the vectors C^T B_K^λ are linearly independent and therefore form a basis of the span [C^T B_K^λ]. With the above consideration we have [C^T B_K^λ] ⊆ H_A^λ. If this is a real subset, C^T B_K^λ can be completed to form a basis B_A^λ of H_A^λ with |B_K^λ| < |B_A^λ|. Then we have that D B_A^λ is linearly independent and [D B_A^λ] ⊆ H_K^λ. Moreover, we have dim(H_K^λ) = |B_K^λ| < |B_A^λ| = dim([D B_A^λ]) ≤ dim(H_K^λ), which is a contradiction. Therefore C^T H_K^λ = [C^T B_K^λ] = H_A^λ. Similarly, we obtain D H_A^λ = H_K^λ. Thus, the multiplicities of the eigenvalues λ ≠ 0 of A and K are the same.
The multiplicity of the eigenvalue zero of matrix K is by (m − F) larger than that of matrix A. □

Proof of Theorem 1: Due to assumption (a) and Lemma 1, every eigenvalue of A is either zero or has a real part less than zero. If the real part of every eigenvalue of A is less than zero, A is invertible. For invertible matrices, Theorem 2.1.1 from [5] states that the iteration converges if and only if the spectral radius ρ(I_F + αA), i.e. the largest eigenvalue, is less than 1. For every eigenvalue λ_i of A, obviously 1 + αλ_i is an eigenvalue of I_F + αA. With H = max_i { |λ_i|² / |Re(λ_i)| } we obtain for α > 0

ρ(I_F + αA) < 1 ⟺ ∀i: |1 + αλ_i| < 1 ⟺ α < 2/H. (5)

This completes the proof if all eigenvalues of A have a negative real part. In the following let A have the eigenvalue λ_1 = 0, with generalised eigenspace H_A^0. The vector space ℝ^F can be represented as the direct sum of the generalised eigenspaces, ℝ^F = H_A^0 ⊕ H_A^{λ_2} ⊕ · · · ⊕ H_A^{λ_l}. In the following we write H̃_A^0 = H_A^{λ_2} ⊕ · · · ⊕ H_A^{λ_l}, because this is a complementary space of H_A^0. As the generalised eigenspaces of A are invariant under A, i.e. ∀x ∈ H_A^{λ_i}: Ax ∈ H_A^{λ_i}, the iteration w^{n+1} = (I_F + αA)w^n + αb can be decomposed in two parts, one in the generalised eigenspace H_A^0 and the other in the complementary space H̃_A^0. Let w^n = w̄^n + w̃^n and b = b̄ + b̃, where w̄^n, b̄ ∈ H_A^0 and w̃^n, b̃ ∈ H̃_A^0. Then we have

w^{n+1} = w^n + α(Aw^n + b) = [w̄^n + α(Aw̄^n + b̄)] + [w̃^n + α(Aw̃^n + b̃)]. (6)

Thus, the convergence analysis can be carried out separately for the two iterations. The matrix A in the iteration w̃^{n+1} = w̃^n + α(Aw̃^n + b̃) is not invertible. However, the iteration takes place in the subspace H̃_A^0. In this subspace the mapping associated with A is invertible. Therefore, A can be replaced by an invertible matrix Ã that does not alter the iteration in H̃_A^0. The matrix Ã can be constructed such that ρ(I_F + αÃ) equals the spectral radius of I_F + αA restricted to H̃_A^0. Therefore, according to the considerations above, this iteration converges for 0 < α < 2/H.
In the following we show that the iteration in H_A^0 is the identity and therefore trivially converges. According to assumption (b), H_K^0 = E_K^0. All v ∈ ℝ^m can be represented as v = v̄ + ṽ with v̄ ∈ E_K^0 and ṽ ∈ H̃_K^0 = H_K^{λ_2} ⊕ · · · ⊕ H_K^{λ_l}. According to Lemma 1, C^T H̃_K^0 = H̃_A^0 and C^T H_K^0 ⊆ H_A^0 hold. Therefore, for b̄ + b̃ = b = C^T v we have b̄ = C^T v̄ and b̃ = C^T ṽ. Let E_K^0 ≠ {0}. Then, for all v̄ ∈ E_K^0,

0 = K v̄ = D C^T v̄ ⟹ C^T v̄ ∈ [C^T] ∩ [D^T]⊥,

which by (c) implies C^T v̄ = 0. For E_K^0 = {0} we also obtain C^T v̄ = 0, because v̄ = 0. Therefore we have C^T E_K^0 = {0} and, as a consequence, b̄ = C^T v̄ = 0. The last thing that remains to show is that Aw̄ = 0 for all w̄ ∈ H_A^0. According to Lemma 1 we know that Dw̄ ∈ H_K^0. Assumption (b) says that H_K^0 = E_K^0, and from the above considerations we know that C^T E_K^0 = {0}. Therefore, Aw̄ = C^T(Dw̄) = 0. Thus, the iteration in H_A^0 is the identity. As both parts of the iteration converge, the overall iteration also converges, which completes that part of the proof. The limit w̃* of w̃^{n+1} = w̃^n + α(Aw̃^n + b̃) is unique and we have w̃* = −Ã^{−1}b̃. The limit of w̄^{n+1} = w̄^n + α(Aw̄^n + b̄) is not unique, but depends on the initial value w̄^0. It holds that w̄* = w̄^0. Therefore, the limit w* = w̄* + w̃* depends on the initial value w^0. □

Proof of Proposition 1: For the residual gradient algorithm we have A_RG = −Θ^T X D̄ X^T Θ and b_RG = −Θ^T X D̄ X^T r. In order to apply Theorem 1 this is decomposed as A_RG = C^T D and b_RG = C^T v with C = −D = √D̄ X^T Θ and v = −√D̄ X^T r. As the diagonal entries of D̄ are nonnegative, we can write √D̄ for the diagonal matrix whose entries are the square roots of those of D̄. Thus [C^T] = [D^T], which yields condition (c) of Theorem 1. Moreover, the matrix K = DC^T = −CC^T is symmetric and therefore diagonalisable. Hence, condition (b) is fulfilled and all eigenvalues are real. Let now λ ≠ 0 be an eigenvalue of K and let x be a corresponding eigenvector. Then 0 ≥ −(C^T x)^T (C^T x) = x^T K x = λ x^T x, which yields λ < 0.
Thus, all requirements are fulfilled and for an appropriate choice of α the residual gradient algorithm converges independently of the concrete form of the function approximation scheme. The consistency of the residual gradient algorithm can be shown formally, but due to space limitations we only give the following informal proof. The algorithm minimises the Bellman error, which is a quadratic objective function. Hence, there are no local optima, and if the global optimum is not unique, the values of all global optima are identical. Due to its gradient descent property the residual gradient algorithm converges to such a global optimum independently of the initial value. In case of a tabular representation a global minimum has Bellman error zero and corresponds to an optimal solution. Thus, the residual gradient algorithm is consistent. A detailed description of how grid-based linear interpolation works in combination with RL can be found in [7]. Important for us is that in a d-dimensional grid each feature vector φ(x) satisfies 0 ≤ φ_i(x) ≤ 1 and Σ_{i=1}^F φ_i(x) = 1. With ⟨·,·⟩ denoting the standard scalar product and ‖·‖₂ denoting the corresponding Euclidean norm, we have |K_{i,j}| = |⟨(C^T)_i, (C^T)_j⟩|. According to the definition, C_{l,j} = (√D̄)_{l,l} Σ_{k=1}^m X_{k,l} (γφ_j(z_k) − φ_j(x_k)). Moreover, from D = X^T X it follows that D_{l,l} = Σ_{k=1}^m X_{k,l}² = Σ_{k=1}^m X_{k,l}, because X_{k,l} is either zero or one; and besides that, we have D̄_{l,l} D_{l,l} = 1 for visited states. A short calculation using these properties bounds the entries of K in terms of γ² + 1. It is well known that the spectral radius ρ of the matrix K satisfies ρ(K) ≤ ‖K‖ for every norm ‖·‖. For the maximum norm of K we then obtain ‖K‖∞ = max_{1≤i≤m} Σ_{j=1}^m |K_{i,j}| ≤ m(1 + γ²). With H = m(1 + γ²) this yields ρ(K) ≤ ‖K‖∞ ≤ H. Thus we have a bound for the absolute value of the largest eigenvalue of K. According to Theorem 1 the iteration converges for α < 2/H. □

References

[1] L. C.
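The bound ρ(K) ≤ m(1 + γ²) can be spot-checked numerically. In the sketch below (our own construction, not from the paper), random interpolation-style features, with nonnegative rows summing to one, stand in for a concrete grid triangulation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, F, gamma = 12, 6, 4, 0.9

# One-hot start/next-state indicator matrices for m random transitions.
X = np.eye(N)[rng.integers(0, N, size=m)]
Z = np.eye(N)[rng.integers(0, N, size=m)]

# Interpolation-style features: nonnegative rows summing to one, as in a grid.
Phi = rng.random((N, F))
Phi /= Phi.sum(axis=1, keepdims=True)

Theta = gamma * Z @ Phi - X @ Phi
counts = np.diag(X.T @ X)                        # state visitation counts D
inv_sqrt = np.where(counts > 0, 1.0 / np.sqrt(np.maximum(counts, 1.0)), 0.0)
C = np.diag(inv_sqrt) @ X.T @ Theta              # C = sqrt(Dbar) X^T Theta
K = -C @ C.T                                     # K = D C^T with D = -C

lam = np.linalg.eigvalsh(K)                      # symmetric: real eigenvalues
# Eigenvalues are non-positive, and the spectral radius obeys m*(1+gamma^2).
```

Rerunning with different seeds and sizes gives the same picture: all eigenvalues non-positive, spectral radius within the stated bound.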
Baird. Residual algorithms: Reinforcement learning with function approximation. In Proc. of the Twelfth International Conference on Machine Learning, 1995. [2] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, Massachusetts, 1996. [3] J. A. Boyan. Least-squares temporal difference learning. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 49–56, 1999. [4] S. J. Bradtke and A. G. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33–57, 1996. [5] A. Greenbaum. Iterative Methods for Solving Linear Systems. SIAM, 1997. [6] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. of the Sixteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 326–334, 2000. [7] A. Merke and R. Schoknecht. A necessary condition of convergence for reinforcement learning with function approximation. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 411–418, Sydney, Australia, 2002. [8] M. G. Lagoudakis and R. Parr. Model-free least-squares policy iteration. In Advances in Neural Information Processing Systems, volume 14, 2002. [9] S. Pareigis. Adaptive choice of grid and time in reinforcement learning. In Advances in Neural Information Processing Systems, 1998. [10] R. Schoknecht. Optimality of reinforcement learning algorithms with linear function approximation. In Advances in Neural Information Processing Systems, volume 15, 2003. [11] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988. [12] J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997.
2002
Informed Projections David Cohn Carnegie Mellon University Pittsburgh, PA 15213 cohn+@cs.cmu.edu Abstract Low rank approximation techniques are widespread in pattern recognition research — they include Latent Semantic Analysis (LSA), Probabilistic LSA, Principal Components Analysis (PCA), the Generative Aspect Model, and many forms of bibliometric analysis. All make use of a low-dimensional manifold onto which data are projected. Such techniques are generally “unsupervised,” which allows them to model data in the absence of labels or categories. With many practical problems, however, some prior knowledge is available in the form of context. In this paper, I describe a principled approach to incorporating such information, and demonstrate its application to PCA-based approximations of several data sets. 1 Introduction Many practical problems involve modeling large, high-dimensional data sets to uncover similarities or latent structure. Linear low rank approximation techniques such as PCA [12], LSA [5], PLSA [6] and generative aspect models [1] are powerful tools for approaching these tasks. They identify (relatively) low-dimensional hyperplanes that best approximate the data according to a given noise model. In doing so, they exploit and expose regularities in the data: the hyperplanes represent a latent space whose dimensions are often observed to correspond to distinct latent categories in the data set. For example, an LSA-derived low-rank approximation to a corpus of news stories may have dimensions corresponding to “politics,” “finance,” “sports,” etc. Documents with the same inferred sources (therefore “about” the same topic) generally lie close to each other in the latent space. The broad applicability of these techniques comes from the fact that they are essentially “unsupervised” – a model is learned in the absence of labels indicating class or category memberships. 
There are, however, many situations in which some prior information is available; in these cases, we would like to have some way of using that information to improve our model. Nigam et al. [10] studied the problem of learning to classify data into pre-existing categories in the presence of labeled and unlabeled examples. Their approach augmented a traditional supervised learning algorithm with distribution information made available from the unlabeled data. In contrast, this paper considers a method for augmenting a traditional unsupervised learning problem with the addition of equivalence classes. Equivalence classes are a natural concept for many real-world problems. We frequently have some reason for believing that a set of observations are similar in some sense without wanting to or being able to say why they are similar. Note that the sets are not required to be comprehensive — we may only have known associations between a handful of observations. Further, the sets are not required to be disjoint; we may know that members of a set are similar, but there is no implication that members of two different sets are dissimilar. In any case, the hope is that by indicating which observations are similar, we can bias our model to focus on relevant features and to ignore differences that, while statistically significant, are not correlated with our idea of similarity in the problem at hand. This paper describes an algorithm that implements this approach, and experiments that validate its use. 1.1 Related work There is too large a literature examining the combination of supervised and unsupervised learning to cover here; below I mention in passing some of the most relevant research. In terms of conceptual similarity, multiple discriminant analysis (MDA) and oriented principal components analysis (OPCA) are techniques that attempt to maximize the fidelity of a linear low rank approximation while minimizing the variance of data belonging to designated equivalence classes [2]. 
The difference with the approach discussed here is that MDA and OPCA maximize a ratio of variances rather than a mixture; this is equivalent to making the assumption that the covariance matrices for each set are tied. Another related technique is multidimensional scaling (MDS) which, aside from sharing the ratio-based criterion, makes the added assumption that the precise degree of similarity (or dissimilarity) of two data points is to be enforced. In general, which set of assumptions is best depends on the problem at hand. In terms of implementation, the present algorithm owes a great deal to the “shadow targets” algorithm for Neuroscale [8, 15], whose eponymous data points enforce equivalence classes on sets of (otherwise) unsupervised data. That algorithm trades fidelity of representation against fidelity of equivalence classes much in the same way as Equation 4, although it does so in the context of a Kohonen neural network instead of a linear mapping. Another closely-related technique is CI-LSI [7], which uses latent semantic analysis for cross-language retrieval. The technique involves training on text documents from a parallel corpus for two or more languages (e.g. French and English), such that each document exists as both an English and French version. In CI-LSI, each document is merged with its twin, and the hyperplane is fit to the set of paired documents. The goal of CI-LSI matches the goal of this paper, and the technique can in fact be seen as a special case of the informed projections discussed here. By using the “mean” of a pair of documents as a proxy for the documents themselves, we assert that the two come from a common source; fitting a model to a collection of such means finds a maximum likelihood solution subject to the constraint that both members of a pair come from a common source. 
2 Informed and uninformed projections To introduce informed projections, I will first briefly review principal components analysis (PCA) and an algorithm for efficiently computing the principal components of a data set. 2.1 PCA and EMPCA Given a finite data set $X \subset \mathbb{R}^n$, where each column corresponds to one observation, PCA can be used to find a rank $m$ approximation $\hat{X}$ (where $m < n$) which minimizes the sum squared distortion with respect to $X$. It does this by identifying the $m$ orthogonal directions in which $X$ exhibits the greatest variance, corresponding to the $m$ largest eigenvectors $C = [C_1, \ldots, C_m]$. $X$ can then be projected onto the hyperplane defined by $C$ as

$\hat{X} = C(C^T C)^{-1} C^T X$.  (1)

Figure 1: PCA maximizes the variance of the observations (on left), while an informed projection minimizes variance of projections from observations belonging to the same set.

Although not strictly a generative model, PCA offers a probabilistic interpretation: $C$ represents a maximum likelihood model of the data under the assumption that $X$ consists of (Gaussian) noise-corrupted observations taken from linear combinations of $m$ sources in an $n$-dimensional space. The values for $\hat{X}$ then represent maximum likelihood estimates of the mixtures responsible for the corresponding values in $X$. Roweis [13] described an efficient iterative technique for identifying $C$ using an EM procedure. Beginning with an arbitrary guess for $C$, the latent representation of $X$ is computed

$Y = (C^T C)^{-1} C^T X$  (2)

after which $C$ is updated to maximize the estimated likelihoods

$C = X Y^T (Y Y^T)^{-1}$.  (3)

Equations 2 and 3 are iterated until convergence (typically less than 10 iterations), at which time the sum squared error of $\hat{X}$’s approximation to $X$ will have been minimized. 
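The EM iteration of Equations 2 and 3 is short enough to sketch directly. The following is a minimal NumPy version; the variable names and the synthetic sanity check are mine, not the paper’s:

```python
import numpy as np

def empca(X, m, n_iter=300):
    """EM for PCA (Roweis): columns of X are observations; returns an
    n x m matrix C whose columns span the principal subspace."""
    n = X.shape[0]
    C = np.random.default_rng(0).standard_normal((n, m))  # arbitrary start
    for _ in range(n_iter):
        Y = np.linalg.solve(C.T @ C, C.T @ X)   # E-step, Eq. (2)
        C = X @ Y.T @ np.linalg.inv(Y @ Y.T)    # M-step, Eq. (3)
    return C

# Sanity check on synthetic data: the reconstruction error of the
# EM-derived subspace matches that of SVD-based PCA.
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 200))
X -= X.mean(axis=1, keepdims=True)              # PCA assumes centred data
C = empca(X, 2)
Xhat = C @ np.linalg.solve(C.T @ C, C.T @ X)    # projection, Eq. (1)
U = np.linalg.svd(X, full_matrices=False)[0][:, :2]
err_em = ((X - Xhat) ** 2).sum()
err_svd = ((X - U @ U.T @ X) ** 2).sum()
assert abs(err_em - err_svd) < 1e-3 * err_svd
```

Note that $C$ converges to a basis of the principal subspace, not necessarily to the eigenvectors themselves; for the projection $\hat{X}$ only the subspace matters.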
2.2 Informed projections PCA only penalizes according to the squared distance of an observation $x_i$ from its projection $\hat{x}_i$. Given a Gaussian noise model, $\hat{x}_i$ is the maximum likelihood estimate of $x_i$’s “source,” which is the only constraint with which PCA is concerned. If we believe that a set of observations $S_i = \{x_1, x_2, \ldots, x_n\}$ have a common cause, then they should share a common source. For a hyperplane defined by eigenvectors $C$, the maximum likelihood source is the mean of $S_i$’s projections onto $C$, denoted $\bar{S}_i$. As such, the likelihood should be penalized not only on the basis of the variance of observations around their projections, $\sum_j \|x_j - \hat{x}_j\|^2$, but also the variance of the projections around their set means, $\sum_i \sum_{x_j \in S_i} \|\hat{x}_j - \bar{S}_i\|^2$. These two penalty terms may be at odds with each other, so we must introduce a hyperparameter $\beta$ representing how much weight to place on accurately reproducing the original observations and how much to place on preserving the integrity of the known sets:

$E_\beta = (1 - \beta) \sum_j \|x_j - \hat{x}_j\|^2 + \beta \sum_i \sum_{x_j \in S_i} \|\hat{x}_j - \bar{S}_i\|^2$.  (4)

When $\beta = 0.5$, Equation 4 is equivalent to minimizing $\sum_i \sum_{x_j \in S_i} \|x_j - \bar{S}_i\|^2$ under the assumption that all otherwise unaffiliated $x_i$ are members of their own singleton sets. This is just the squared distance from each observation to its projected cluster mean, which appears to be the criterion CI-LSI minimizes by averaging documents. 2.3 Finding an informed projection The error criterion in Equation 4 may be efficiently optimized with an expectation-maximization (EM) procedure based on Roweis’ EMPCA [13], alternately computing estimated sources $\hat{x}$ and maximizing the likelihoods of the observed data given those sources. The likelihood of a set is maximized by minimizing the variance of projections from members of a set around their mean. This is at odds with the efforts of PCA to maximize likelihood by maximizing the variance of projections from the data set at large. 
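Equation 4 transcribes directly into code. A minimal sketch follows; the column-observation convention and the representation of the sets as lists of column indices are my assumptions:

```python
import numpy as np

def informed_error(X, Xhat, sets, beta):
    """Eq. (4): (1 - beta) * reconstruction error plus
    beta * variance of projections around their set means.
    Columns of X are observations, Xhat their projections;
    sets is a list of column-index lists (the equivalence sets)."""
    recon = ((X - Xhat) ** 2).sum()
    set_var = 0.0
    for idx in sets:
        mean = Xhat[:, idx].mean(axis=1, keepdims=True)  # set mean
        set_var += ((Xhat[:, idx] - mean) ** 2).sum()
    return (1.0 - beta) * recon + beta * set_var
```

With beta = 0 this reduces to the ordinary PCA distortion; with beta = 1 only the integrity of the sets matters.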
We can make these forces work together by adding a “complement set” $\tilde{S}_i$ for each set $S_i$ such that the variance of $S_i$’s projections is minimized by maximizing the variance of $\tilde{S}_i$’s projections. The complement set may be determined analytically, but can also be computed efficiently as an extra step between the “E” and “M” steps of the EM iteration. Given an observation $x_j \in S_i$, the complement for $x_j$ may be computed in terms of its projection $\hat{x}_j$ onto the hyperplane and $\bar{S}_i$, the mean of the set. (Figure 2: Location of a point’s complement $\tilde{x}_j$ with respect to its mean set projection $\bar{S}_i$ and the current hyperplane.) In order to “pull” the current hyperplane in the direction that will minimize $x_j$’s distance from the set mean, $\tilde{x}_j$ must be positioned at a distance of $\|x_j - \hat{x}_j\|$ from the hyperplane such that its projection lies along the line from $\bar{S}_i$ to $\hat{x}_j$ at a distance from $\bar{S}_i$ equal to $\|x_j - \hat{x}_j\|$. With some geometric manipulation (Figure 2), it can be shown that

$\tilde{x}_j = \bar{S}_i + (\hat{x}_j - \bar{S}_i) \frac{\|\hat{x}_j - x_j\|}{\|\hat{x}_j - \bar{S}_i\|} + (\hat{x}_j - x_j) \frac{\|\hat{x}_j - \bar{S}_i\|}{\|\hat{x}_j - x_j\|}$.

For efficiency, it is worth noting that by subtracting each set’s mean from its constituent observations, all sets may be combined into a single zero-mean “superset” $\tilde{S}$ from which complements are computed. Once the complement set has been computed, it can be appended to the original observations to create a joint data set, denoted $X^+ = [X \mid \tilde{S}]$, and the “M” step of the EM procedure is continued as before (note that, since $\tilde{S}_i$ depends on the projections, and therefore the position of the hyperplane, it must be recomputed with each iteration):

$Y = (C^T C)^{-1} C^T X^+$, $C = X^+ Y^T (Y Y^T)^{-1}$.  (5)

Applying $\beta$ to the optimization is straightforward – if we preprocess the data by subtracting the mean of the observations (as is standard for PCA), the effect of each observation is to apply a “torque” to the current hyperplane around the origin. By multiplying all coordinates of an observation by the same scalar, we scale the torque applied by the same amount. 
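The complement-point construction above can be transcribed as follows; this is a direct reading of the displayed formula, under the assumption that the symbol x in the original denotes the observation $x_j$ (the helper name is mine):

```python
import numpy as np

def complement(x, xhat, s_mean):
    """Complement point for observation x with projection xhat and
    set-mean projection s_mean, per the displayed formula."""
    d_proj = np.linalg.norm(xhat - x)       # ||x_j - xhat_j||
    d_mean = np.linalg.norm(xhat - s_mean)  # ||xhat_j - set mean||
    return (s_mean
            + (xhat - s_mean) * (d_proj / d_mean)
            + (xhat - x) * (d_mean / d_proj))

# With the hyperplane z = 0, the in-plane offset of the complement from
# the set mean has length ||x - xhat||, as the construction requires.
s = np.zeros(3)
xhat = np.array([2.0, 1.0, 0.0])
x = np.array([2.0, 1.0, 3.0])
t = complement(x, xhat, s)
assert np.isclose(np.linalg.norm(t[:2]), np.linalg.norm(x - xhat))
```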
As such, we can trade off the weight attached to enforcing the sets against the weight attached to reconstructing the original data by multiplying $\tilde{S}$ and $X$ by $\beta$ and $1 - \beta$ respectively: $X^+_\beta = [(1 - \beta) X \mid \beta \tilde{S}]$. 3 Experiments I examined the effect of “informing” projections on three data sets from two domains. The first two were text data sets taken from the WebKB project and the “20 newsgroups” data set. The third data set consisted of acoustic features from recorded music. Finally, I examine the effect of adding set information to the joint probabilistic model described by Cohn and Hofmann [3]. 3.1 WebKB data The first set of experiments began with a subset of the WebKB data set [4]. Using Rainbow [9], I tokenized 1000 randomly-selected documents, stripping out HTML and digits, and kept the 1000 terms with highest class-dependent information gain (the reduced vocabulary greatly decreased processing times). The result was 1000 documents with 1000 features, where feature $f_{i,j}$ represented the frequency with which term $j$ occurred in document $x_i$. Sets were constructed from the categories provided with each document. The experiments varied both the fraction of the training data for which set associations were provided (0–1) and the weight given to preserving those sets (also 0–1). For each combination, I ran 40 trials, each using a randomized split of 200 training documents and 100 test documents. 
Accuracy was evaluated based on leave-one-out nearest neighbor classification over the test set. (Obviously, simple nearest neighbor is far from the most effective classification technique for this domain. But the point of the experiment is to evaluate to what degree informing a projection preserves or improves topic locality, which nearest neighbor classifiers are well-suited to measure.)

Figure 3: Nearest neighbor classification of WebKB data, where a 5D PCA of document terms has been informed by web page category-determined sets (40 independent train/test splits). The fraction of observations that have been given set assignments is varied from 0 to 1 (left plot), as is β, the weight attached to preserving set associations (right plot).

Figure 3 summarizes the results of these experiments. As expected, the more documents that had set associations, the greater the improvement in classification accuracy, but this improvement was only evident for 0.3 ≤ β ≤ 0.7; below 0.3, the sets were not given enough weight to make a difference, while above 0.7 there is a rapid deterioration in accuracy. 3.2 20 Newsgroups

Figure 4: Five categories from the 20 newsgroups data set, where a 5D PCA of document terms has been informed by source category (30 train/test splits, for 0 < β < 1).

The second set of experiments also used a standard text classification corpus, but with an unrestricted vocabulary. 
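The leave-one-out nearest-neighbor evaluation used throughout these experiments is simple enough to state in code. A minimal sketch (row-observation convention is my choice; the paper does not give implementation details):

```python
import numpy as np

def loo_nn_accuracy(Z, labels):
    """Leave-one-out 1-NN accuracy: classify each point by the label of
    its nearest neighbor (excluding itself) in the projected space Z."""
    Z = np.asarray(Z, dtype=float)
    labels = np.asarray(labels)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)   # a point may not match itself
    return (labels[d2.argmin(axis=1)] == labels).mean()
```

For example, two well-separated one-dimensional clusters with consistent labels yield an accuracy of 1.0.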
Beginning with the documents of the 20 newsgroups data set, I again preprocessed the documents as above with Rainbow, but this time kept the entire vocabulary (27214 unique terms), instead of preselecting maximally informative terms. Because of the additional running time required to handle the complete vocabularies, the experiments used all set labels and only varied the weighting. Thirty independent training and test sets of 100 documents each were run for 0 ≤ β ≤ 1, and as before, accuracy was evaluated in terms of leave-one-out classification error on the test set. Figure 4 summarizes the results of these experiments. The characteristic learning curve is very similar to that for the WebKB data — an intermediate set weighting yields significantly better performance than the purely supervised or unsupervised cases. There is, however, one notable distinction: in these experiments, there is much less variation in accuracy for large values of β — it almost appears that there are three stable regions of performance. 3.3 Album recognition from acoustic features The third test used a proprietary data set of acoustic properties of recorded music. The data set contained 11252 recorded music tracks from 939 albums. Each observation consisted of 85 highly-processed acoustic features extracted automatically via digital signal processing. The goal of this experiment was to determine whether informing a projected model could improve the accuracy with which it could identify tracks from the same album. Recalling Platt’s playlist selection problem [11], this can serve as a proxy for estimating how well the model can predict whether two tracks “belong together” by the subjective measure of the artist who created the album. For these experiments, I selected the first 8439 tracks (3/4 of the data) for training, assigning each track to be a member of the set defined by the album it came from. Many tracks appeared on multiple albums (“Best of...” and soundtrack collections). 
The remaining 2813 tracks were used as test data. The 85-dimensional features were projected down into a 10-dimensional space, informing the projection with sets defined by tracks from the same album. The relatively low dimension of the problem permitted also running OPCA on the data set for comparison. As above, I measured the frequency with which each test track had another track from the same album as its nearest neighbor when projected down into this same space. While the improvements in performance are not as striking as those from the previous experiments, they are nonetheless significant (Table 1).

Table 1: Album recognition results using 2813 test tracks from 316 albums. For each weighting β, “accuracy” is the fraction of times the closest track to a test track came from the same album; “ratio” indicates the average ratio of intra-album distances to inter-album distances in the test set. In all cases, informing the projection with a weight of β = 0.5 increases the accuracy and decreases the ratio of the model.

  weight     β = 0.0   β = 0.5   β = 1.0   OPCA
  accuracy   0.1070    0.1241    0.0551    0.1340
  ratio      0.3859    0.3223    0.3414    0.3144

One reason for the meager improvement may be that the features from which the projections were computed had already been manually optimized for classification accuracy. Interestingly, OPCA slightly outperforms the informed projection for both criteria on this problem. 3.4 Content, context and connections Prior work [3] discussed building joint probabilistic models of a document base, using both the content of the documents and the connections (citations or hyperlinks) between them. A document base frequently contains context as well, in the form of documents from the same source or by the same author. Informed projection provides a way for us to inject this third form of information and further improve our models. 
Figure 5 summarizes the results of using set information to “inform” the joint content+link models discussed in the previous paper. That work used a multinomial model for its approximation, so we can not use the equations defined in Section 2.3. Instead, we can make use of the observation of Section 1.1 to approximate the informing process by merging documents from the same set. Figure 5 illustrates that this process complements the earlier content+connections approach, providing a joint model of document content, context and connections.

Classification accuracy (std err), uninformed vs. informed:

             uninformed      informed
  content    0.19 (0.017)    0.33 (0.039)
  links      0.11 (0.013)    0.23 (0.098)
  both       0.21 (0.023)    0.33 (0.057)

Figure 5: (left) Classification accuracy of informed vs. uninformed models of separate and joint models of document content and connections, using the WebKB dataset. (right) Effect of adding more document context in the form of set membership information on the Cora data set. See Cohn and Hofmann [3] for details.

4 Discussion and future work The experiments so far indicate that adding set information to a low rank approximation does improve the quality of a model, but only to the extent that the information is used in conjunction with the unsupervised information already present in the data set. The improvement in performance is evident for content models (such as LSA), connection models, and joint models of content and connections. 4.1 Future work Beyond experiments to clarify the effect of β on model fitness, there are many obvious directions for future work. The first is further exploration of the relationship between informed PCA and the variants of MDA discussed in Section 1.1. While the differences are mathematically straightforward, the effect of sum-vs.-ratio criteria bears further study. 
A second broad area for future work is the application of the techniques described here to richer low rank approximation models. While this paper considered the effect of informing PCA, it would be fruitful to examine both the process and effect of informing multinomial-based models [3, 6], fully-generative models [1] and locally linear embeddings [14]. A third area for exploration is the study of potential applications for this approach, which include improved relevance modeling, directed web crawling, and personalized search and recommendation across a wide variety of media. References [1] D. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. In Advances in Neural Information Processing Systems 14, 2002. [2] C. J. C. Burges, J. C. Platt, and S. Jana. Extracting noise-robust features from audio data. In Proceedings of ICASSP, 2002. [3] D. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In T. Leen et al., editor, Advances in Neural Information Processing Systems 13, 2001. [4] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery. Learning to extract symbolic knowledge from the world wide web. In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98), 1998. [5] S. Dumais, G. Furnas, T. Landauer, S. Deerwester, and R. Harshman. Using latent semantic analysis to improve access to textual information. In Proceedings of the Conference on Human Factors in Computing Systems CHI’88, 1988. [6] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI’99, Stockholm, 1999. [7] M. Littman, S. Dumais, and T. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In G. Grefenstette, editor, Cross Language Information Retrieval. Kluwer, 1998. [8] D. Lowe and M. E. Tipping. Feed-forward neural networks and topographic mappings for exploratory data analysis. 
Neural Computing and Applications, 4:83–95, 1996. [9] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996. [10] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text from labeled and unlabeled documents. In Proceedings of AAAI-98, pages 792–799, Madison, US, 1998. AAAI Press, Menlo Park, US. [11] J. Platt, C. Burges, S. Swenson, C. Weare, and A. Zheng. Learning a gaussian process prior for automatically generating music playlists. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002. [12] B. D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, 1996. [13] S. Roweis. EM algorithms for PCA and SPCA. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems, volume 10. MIT Press, 1998. [14] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, Dec 2000. [15] M. E. Tipping and D. Lowe. Shadow targets: A novel algorithm for topographic projections by radial basis functions. Neurocomputing, 19(1):211–222, 1998.
Automatic Acquisition and Efficient Representation of Syntactic Structures Zach Solan, Eytan Ruppin, David Horn Faculty of Exact Sciences Tel Aviv University Tel Aviv, Israel 69978 {rsolan,ruppin,horn}@post.tau.ac.il Shimon Edelman Department of Psychology Cornell University Ithaca, NY 14853, USA se37@cornell.edu Abstract The distributional principle according to which morphemes that occur in identical contexts belong, in some sense, to the same category [1] has been advanced as a means for extracting syntactic structures from corpus data. We extend this principle by applying it recursively, and by using mutual information for estimating category coherence. The resulting model learns, in an unsupervised fashion, highly structured, distributed representations of syntactic knowledge from corpora. It also exhibits promising behavior in tasks usually thought to require representations anchored in a grammar, such as systematicity. 1 Motivation Models dealing with the acquisition of syntactic knowledge are sharply divided into two classes, depending on whether they subscribe to some variant of the classical generative theory of syntax, or operate within the framework of “general-purpose” statistical or distributional learning. An example of the former is the model of [2], which attempts to learn syntactic structures such as Functional Category, as stipulated by the Government and Binding theory. An example of the latter model is Elman’s widely used Simple Recursive Network (SRN) [3]. We believe that polarization between statistical and classical (generative, rule-based) approaches to syntax is counterproductive, because it hampers the integration of the stronger aspects of each method into a common powerful framework. Indeed, on the one hand, the statistical approach is geared to take advantage of the considerable progress made to date in the areas of distributed representation, probabilistic learning, and “connectionist” modeling. 
Yet, generic connectionist architectures are ill-suited to the abstraction and processing of symbolic information. On the other hand, classical rule-based systems excel in just those tasks, yet are brittle and difficult to train. We present a scheme that acquires “raw” syntactic information construed in a distributional sense, yet also supports the distillation of rule-like regularities out of the accrued statistical knowledge. Our research is motivated by linguistic theories that postulate syntactic structures (and transformations) rooted in distributional data, as exemplified by the work of Zellig Harris [1]. 2 The ADIOS model The ADIOS (Automatic DIstillation Of Structure) model constructs syntactic representations of a sample of language from unlabeled corpus data. The model consists of two elements: (1) a Representational Data Structure (RDS) graph, and (2) a Pattern Acquisition (PA) algorithm that learns the RDS in an unsupervised fashion. The PA algorithm aims to detect patterns — repetitive sequences of “significant” strings of primitives occurring in the corpus (Figure 1). In that, it is related to prior work on alignment-based learning [4] and regular expression (“local grammar”) extraction [5] from corpora. We stress, however, that our algorithm requires no pre-judging either of the scope of the primitives or of their classification, say, into syntactic categories: all the information needed for its operation is extracted from the corpus in an unsupervised fashion. In the initial phase of the PA algorithm the text is segmented down to the smallest possible morphological constituents (e.g., ed is split off both walked and bed; the algorithm later discovers that bed should be left whole, on statistical grounds).1 This initial set of unique constituents is the vertex set of the newly formed RDS (multi-)graph. 
A directed edge is inserted between two vertices whenever the corresponding transition exists in the corpus (Figure 2(a)); the edge is labeled by the sentence number and by its within-sentence index. Thus, corpus sentences initially correspond to paths in the graph, a path being a sequence of edges that share the same sentence number.

Figure 1: (a) Two sequences $m_i, m_j, m_l$ and $m_i, m_k, m_l$ form a pattern $c_{i\{j,k\}l} \doteq m_i, \{m_j, m_k\}, m_l$, which allows $m_j$ and $m_k$ to be attributed to the same equivalence class, following the principle of complementary distributions [1]. Both the length of the shared context and the cohesiveness of the equivalence class need to be taken into account in estimating the goodness of the candidate pattern (see eq. 1). (b) Patterns can serve as constituents in their own right; recursively abstracting patterns from a corpus allows us to capture the syntactic regularities concisely, yet expressively. Abstraction also supports generalization: in this schematic illustration, two new paths (dashed lines) emerge from the formation of equivalence classes associated with $c_u$ and $c_v$.

In the second phase, the PA algorithm repeatedly scans the RDS graph for Significant Patterns (SPs; sequences of constituents), which are then used to modify the graph (Algorithm 1). For each path $p_i$, the algorithm constructs a list of candidate constituents, $c_{i1}, \ldots, c_{ik}$. Each of these consists of a “prefix” (sequence of graph edges), an equivalence class of vertices, and a “suffix” (another sequence of edges; cf. Figure 2(b)). The criterion $I'$ for judging pattern significance combines a syntagmatic consideration (the pattern must be long enough) with a paradigmatic one (its constituents $c_1, \ldots, c_k$ must have high mutual information):

$I'(c_1, c_2, \ldots, c_k) = e^{-(L/k)^2} \, P(c_1, c_2, \ldots, c_k) \, \log \frac{P(c_1, c_2, \ldots, c_k)}{\prod_{j=1}^{k} P(c_j)}$  (1)

where $L$ is the typical context length and $k$ is the length of the candidate pattern; the probabilities associated with a $c_j$ are estimated from frequencies that are immediately available in the graph (e.g., the out-degree of a node is related to the marginal probability of the corresponding $c_j$). (Footnote 1: We remark that the algorithm can work in any language, with any set of tokens, including individual characters, or phonemes, if applied to speech.)

Algorithm 1 PA (pattern acquisition), phase 2
1: while patterns exist do
2:   for all path ∈ graph do {path = sentence; graph = corpus}
3:     for all source node ∈ path do
4:       for all sink node ∈ path do {source and sink can be equivalence classes}
5:         degree of separation = path index(sink) − path index(source);
6:         pattern table ⇐ detect patterns(source, sink, degree of separation, equivalence table);
7:       end for
8:     end for
9:     winner ⇐ get most significant pattern(pattern table);
10:    equivalence table ⇐ detect equivalences(graph, winner);
11:    graph ⇐ rewire graph(graph, winner);
12:  end for
13: end while

Equation 1 balances two opposing “forces” in pattern formation: (1) the length of the pattern, and (2) the number and the cohesiveness of the set of examples that support it. On the one hand, shorter patterns are likely to be supported by more examples; on the other hand, they are also more likely to lead to over-generalization, because shorter patterns mean less context. A pattern tagged as significant is added as a new vertex to the RDS graph, replacing the constituents and edges it subsumes (Figure 2). Note that only those edges of the multigraph that belong to the detected pattern are rewired; edges that belong to sequences not subsumed by the pattern are untouched. This highly context-sensitive approach to pattern abstraction, which is unique to our model, allows ADIOS to achieve a high degree of representational parsimony without sacrificing generalization power. 
During the pass over the corpus the list of equivalence sets is updated continuously; the identification of new significant patterns is done using the current equivalence sets (Figure 3(d)). Thus, as the algorithm processes more and more text, it "bootstraps" itself and enriches the RDS graph structure with new SPs and their accompanying equivalence sets. The recursive nature of this process enables the algorithm to form more and more complex patterns, in a hierarchical manner. The relationships among these can be visualized recursively in a tree format, with tree depth corresponding to the level of recursion (e.g., Figure 3(c)). The PA algorithm halts if it processes a given amount of text without finding a new SP or equivalence set (in real-life language acquisition this process may never stop).

Generalization. A collection of patterns distilled from a corpus can be seen as an empirical grammar of sorts; cf. [6], p.63: "the grammar of a language is simply an inventory of linguistic units." The patterns can eventually become highly abstract, thus endowing the model with an ability to generalize to unseen inputs. Generalization is possible, for example, when two equivalence classes are placed next to each other in a pattern, creating new paths among the members of the equivalence classes (dashed lines in Figure 1(b)). Generalization can also ensue from partial activation of existing patterns by novel inputs. This function is supported by the input module, designed to process a novel sentence by forming its distributed representation in terms of activities of existing patterns (Figure 6). These are computed by propagating activation from bottom (the terminals) to top (the patterns) of the RDS. The initial activities wj of the terminals cj are calculated given the novel input s1, . . . , sk as follows:

w_j = \max_{m=1..k} \{ I(s_m, c_j) \}   (2)

where I(s_m, c_j) is the mutual information between s_m and c_j. For an equivalence class, the value propagated upwards is the strongest non-zero activation of its members; for a pattern, it is the average weight of the children nodes, on the condition that all the children were activated by adjacent inputs. Activity propagation continues until it reaches the top nodes of the pattern lattice. When the algorithm encounters a novel word, all the members of the terminal equivalence class contribute a value of ϵ, which is then propagated upwards as usual. This enables the model to make an educated guess as to the meaning of the unfamiliar word, by considering the patterns that become active (Figure 6(b)).

Figure 2: (a) A small portion of the RDS graph for a simple corpus, with sentence #101 (the cat is eat -ing) indicated by solid arcs. (b) This sentence joins a pattern the cat is {eat, play, stay} -ing, in which two others (#109, #121) already participate. (c) The abstracted pattern, and the equivalence class associated with it (edges that belong to sequences not subsumed by this pattern, e.g., #131, are untouched). (d) The identification of new significant patterns is done using the acquired equivalence classes (e.g., #230). In this manner, the system "bootstraps" itself, recursively distilling more and more complex patterns.
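The propagation rules just described can be sketched as a small recursion. The tree encoding, the value of ε, and the simplified handling of the adjacency condition are illustrative assumptions, not the paper's implementation:

```python
EPS = 0.01  # assumed activation contributed when the input word is novel

def activate(node, word_mi):
    """Bottom-up activation: terminals carry mutual information with
    the input; an equivalence class passes up its strongest non-zero
    member (or EPS if the word is novel to the whole class); a pattern
    averages its children, but only if all of them are active."""
    if 'word' in node:                      # terminal
        return word_mi.get(node['word'], 0.0)
    if 'members' in node:                   # equivalence class
        acts = [activate(m, word_mi) for m in node['members']]
        nonzero = [a for a in acts if a > 0.0]
        return max(nonzero) if nonzero else EPS
    acts = [activate(c, word_mi) for c in node['children']]   # pattern
    return sum(acts) / len(acts) if all(a > 0.0 for a in acts) else 0.0

eq_class = {'members': [{'word': 'eat'}, {'word': 'play'}, {'word': 'stay'}]}
pattern = {'children': [{'word': 'the'}, {'word': 'cat'}, eq_class]}
level = activate(pattern, {'the': 1.0, 'cat': 1.0, 'stay': 0.8})
```

Here the pattern's activation is the mean of 1.0, 1.0, and 0.8, matching the averaging rule; a wholly unfamiliar word would instead inject EPS through its equivalence class.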
3 Results

We now briefly describe the results of several studies designed to evaluate the viability of the ADIOS model, in which it was exposed to corpora of varying size and complexity.

Figure 3: (a) A part of a simple grammar. (b) Some sentences generated by this grammar. (c) The structure of a sample sentence (pattern #144), presented in the form of a tree that captures the hierarchical relationships among constituents. Three equivalence classes are shown explicitly (highlighted).

Emergence of syntactic structures. Figure 3 shows an example of a sentence from a corpus produced by a simple artificial grammar and its ADIOS analysis (the use of a simple grammar, constructed with Rmutt, http://www.schneertz.com/rmutt, in these initial experiments allowed us to examine various properties of the model on tightly controlled data). The abstract representation of the sample sentence in Figure 3(c) looks very much like a parse tree, indicating that our method successfully identified the grammatical structure used to generate its data.
To illustrate the gradual emergence of our model's ability for such concise representation of syntactic structures, we show in Figure 4, top, four trees built for the same sentence after exposing the model to progressively more data from the same corpus. Note that both the number of distinct patterns and the average number of patterns per sentence asymptote for this corpus after exposure to about 500 sentences (Figure 4, bottom).

Novel inputs; systematicity. An important characteristic of a cognitive representation scheme is its systematicity, measured by the ability to deal properly with structurally related items (see [7] for a definition and discussion). We have assessed the systematicity of the ADIOS model by splitting the corpus generated by the grammar of Figure 3 into training and test sets. After training the model on the former, we examined the representations of unseen sentences from the test set. A typical result appears in Figure 5; the general finding was of Level 3 systematicity according to the nomenclature of [7]. This example can also be understood using the concept of generating novel sentences from patterns, explained in detail below; the novel sentence (Beth is playing on Sunday) can be produced by the same pattern (#173) that accounts for the familiar sentence (the horse is playing on Thursday) that is part of the training corpus. The ADIOS system's input module allows it to process a novel sentence by forming its distributed representation in terms of activities of existing patterns. Figure 6 shows the activation of two patterns (#141 and #120) by a phrase that contains a word in a novel context (stay), as well as another word never before encountered in any context (5pm).
Figure 4: Top: the build-up of structured information with progressive exposure to a corpus generated by the simple grammar of Figure 3. (a) Prior to exposure. (b) 100 sentences. (c) 200 sentences. (d) 400 sentences. Bottom: the total number of detected patterns (△) and the average number of patterns in a sentence, plotted vs. corpus size.

Figure 5: (a) Structured representation of an "unseen" sentence (Beth is playing on Sunday) that had been excluded from the corpus used to learn the patterns; note that the detected structure is identical to that of (b), a "seen" sentence (the horse is playing on Thursday).
The identity between the structures detected in (a) and (b) is a manifestation of Level-3 systematicity of the ADIOS model ("Novel Constituent: the test set contains at least one atomic constituent that did not appear anywhere in the training set"; see [7], pp.3-4).

Figure 6: The input module in action (the two most relevant – highly active – patterns responding to the input Joe and Beth are staying until 5pm). Leaf activation is proportional to the mutual information between inputs and various members of the equivalence classes (e.g., on the left W15 = 0.8 is the mutual information between stay and liv, which is a member of equivalence class #112). It is then propagated upwards by taking the average at each junction.

Working with real data: the CHILDES corpus. To illustrate the scalability of our method, we describe here briefly the outcome of applying the PA algorithm to a subset of the CHILDES collection [8], which consists of transcribed speech produced by, or directed at, children. The corpus we selected contained 9665 sentences (74500 words) produced by parents. The results, one of which is shown in Figure 7, were encouraging: the algorithm found intuitively significant SPs and produced semantically adequate corresponding equivalence sets. Altogether, 1062 patterns and 775 equivalence classes were established.
Representing the corpus in terms of these constituents resulted in a significant compression: the average number of constituents per sentence dropped from 6.70 in the raw data to 2.18 after training, and the entropy per letter was reduced from 2.6 to 1.5.

Figure 7: Left: a typical pattern extracted from a subset of the CHILDES corpora collection [8]. Hundreds of such patterns and equivalence classes (underscored in this figure) together constitute a concise representation of the raw data. Some of the phrases that can be described/generated by pattern 1960 are: where's the big room?; where's the yellow one?; where's Becky?; where's that?. Right: some of the phrases generated by ADIOS (lower line in each pair) using sentences from CHILDES (upper line) as examples:

CHILDES_2764: they don't want ta go for a ride? / you don't want ta look for another ride?
CHILDES_2642: can we make a little house? / should we make another little dance?
CHILDES_2504: should we put the bed s in the house? / should we take some doggie s on that house?
CHILDES_1038: where'd the what go? / where are the what's he gon ta do go?
CHILDES_2304: want Mommy to show you? / like her to help they?

The generation module works by traversing the top-level pattern tree, stringing together lower-level patterns and selecting randomly one member from each equivalence class.
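The generation procedure in the caption (traverse the top-level pattern tree, choosing one member of each equivalence class at random) can be sketched as follows; the tuple encoding of patterns is an illustrative assumption:

```python
import random

def generate(node):
    """Expand a pattern tree into a phrase: a 'seq' node concatenates
    its parts in order, an 'eq' node contributes one randomly chosen
    member, and a bare string is emitted as-is."""
    if isinstance(node, str):
        return [node]
    kind, *parts = node
    if kind == 'seq':
        return [w for part in parts for w in generate(part)]
    if kind == 'eq':
        return generate(random.choice(parts))
    raise ValueError(f'unknown node type: {kind}')

# Toy pattern in the spirit of pattern 1960 in Figure 7.
pattern = ('seq', "where's", 'the',
           ('eq', 'big', 'yellow', 'little'),
           ('eq', 'room', 'one', 'way'), '?')
phrase = ' '.join(generate(pattern))
```

Because generation only recombines members within learned equivalence classes, its output stays close to attested structure, which accounts for the conservative generative behavior noted below.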
Extensive testing (currently under way) is needed to determine whether the grammaticality of the newly generated phrases (which is at present less than ideal, as can be seen here) improves with more training data.

4 Concluding remarks

We have described a linguistic pattern acquisition algorithm that aims to achieve a streamlined representation by compactly representing recursively structured constituent patterns as single constituents, and by placing strings that have an identical backbone and similar context structure into the same equivalence class. Although our pattern-based representations may look like collections of finite automata, the information they contain is much richer, because of the recursive invocation of one pattern by another, and because of the context sensitivity implied by relationships among patterns. The sensitivity to context of pattern abstraction (during learning) and use (during generation) contributes greatly both to the conciseness of the ADIOS representation and to the conservative nature of its generative behavior. This context sensitivity — in particular, the manner whereby ADIOS balances syntagmatic and paradigmatic cues provided by the data — is mainly what distinguishes it from other current work on unsupervised probabilistic learning of syntax, such as [9, 10, 4]. In summary, finding a good set of structured units leads to the emergence of a convergent representation of language, which eventually changes less and less with progressive exposure to more data. The power of the constituent graph representation stems from the interacting ensembles of patterns and equivalence classes that comprise it. Together, the local patterns create global complexity and impose long-range order on the linguistic structures they encode.
Some of the challenges implicit in this approach that we leave for future work are (1) interpreting the syntactic structures found by ADIOS in the context of contemporary theories of syntax, and (2) relating those structures to semantics.

Acknowledgments. We thank Regina Barzilay, Morten Christiansen, Dan Klein, Lillian Lee and Bo Pang for useful discussion and suggestions, and the US-Israel Binational Science Foundation, the Dan David Prize Foundation, the Adams Super Center for Brain Studies at TAU, and the Horowitz Center for Complexity Science for financial support.

References
[1] Z. S. Harris. Distributional structure. Word, 10:140–162, 1954.
[2] R. Kazman. Simulating the child's acquisition of the lexicon and syntax - experiences with Babel. Machine Learning, 16:87–120, 1994.
[3] J. L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
[4] M. van Zaanen and P. Adriaans. Comparing two unsupervised grammar induction systems: Alignment-based learning vs. EMILE. Report 05, School of Computing, Leeds University, 2001.
[5] M. Gross. The construction of local grammars. In E. Roche and Y. Schabès, eds., Finite-State Language Processing, 329–354. MIT Press, Cambridge, MA, 1997.
[6] R. W. Langacker. Foundations of cognitive grammar, volume I: theoretical prerequisites. Stanford University Press, Stanford, CA, 1987.
[7] T. J. van Gelder and L. Niklasson. On being systematically connectionist. Mind and Language, 9:288–302, 1994.
[8] B. MacWhinney and C. Snow. The child language exchange system. Journal of Computational Linguistics, 12:271–296, 1985.
[9] D. Klein and C. D. Manning. Natural language grammar induction using a constituent-context model. In T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Information Processing Systems 14. MIT Press, Cambridge, MA, 2002.
[10] A. Clark. Unsupervised Language Acquisition: Theory and Practice. PhD thesis, COGS, University of Sussex, 2001.
Application of Variational Bayesian Approach to Speech Recognition

Shinji Watanabe, Yasuhiro Minami, Atsushi Nakamura and Naonori Ueda
NTT Communication Science Laboratories, NTT Corporation
2-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan
{watanabe,minami,ats,ueda}@cslab.kecl.ntt.co.jp

Abstract

In this paper, we propose a Bayesian framework, which constructs shared-state triphone HMMs based on a variational Bayesian approach, and recognizes speech based on Bayesian prediction classification: variational Bayesian estimation and clustering for speech recognition (VBEC). An appropriate model structure with high recognition performance can be found within the VBEC framework. Unlike conventional methods, including the BIC or MDL criterion based on the maximum likelihood approach, the proposed model selection is valid in principle even when there are insufficient amounts of data, because it does not use an asymptotic assumption. In isolated word recognition experiments, we show the advantage of VBEC over conventional methods, especially when dealing with small amounts of data.

1 Introduction

Statistical modeling of the spectral features of speech (acoustic modeling) is one of the most crucial parts of speech recognition. In acoustic modeling, a triphone-based hidden Markov model (triphone HMM) has been widely employed. The triphone is a context-dependent phoneme unit that considers both the preceding and following phonemes. Although the triphone enables precise modeling of spectral features, the total number of triphones is too large to prepare sufficient amounts of training data for each triphone. To deal with this data insufficiency, an HMM state is usually shared among multiple triphone HMMs, which increases the amount of training data per state. Such shared-state triphone HMMs (SST-HMMs) can be constructed by successively clustering states based on the phonetic decision tree method [4] [7].
The important practical problem that must be solved when constructing SST-HMMs is how to optimize the total number of shared states adaptively to the amount of available training data. Namely, maintaining the balance between model complexity and training data size is quite important for high generalization performance. The maximum likelihood (ML) is inappropriate as a model selection criterion, since ML increases monotonically as the number of states increases; some heuristic thresholding is therefore necessary to terminate the partitioning. To solve this problem, the Bayesian information criterion (BIC) and minimum description length (MDL) criterion have been employed to determine the tree structure of SST-HMMs [2] [5].¹ However, since the BIC/MDL is based on an asymptotic assumption, it is invalid in principle when the amount of training data is small, because the assumption fails. In this paper, we present a practical method within the Bayesian framework for estimating posterior distributions over parameters and selecting an appropriate model structure of SST-HMMs (clustering triphone HMM states) based on a variational Bayesian (VB) approach, and recognizing speech based on Bayesian prediction classification: variational Bayesian estimation and clustering for speech recognition (VBEC). Unlike the BIC/MDL, VB does not assume asymptotic normality, and it is therefore applicable in principle even when there are insufficient data. The VB approach has been successfully applied to model selection problems, but mainly for relatively simple mixture models [1] [3] [6] [8]. Here, we apply VB to SST-HMMs, which have a more complex model structure than mixture models, and evaluate its effectiveness through a large-scale real speech recognition experiment.

2 Variational Bayesian framework

First, we briefly review the VB framework. Let O be a given data set.
In the Bayesian approach we are interested in posterior distributions over model parameters, p(Θ|O, m), and the model structure, p(m|O). Here, Θ is a set of model parameters and m is an index of the model structure. Let us consider a general probabilistic model with latent variables. Let Z be a set of latent variables. Then the model with a fixed model structure m can be defined by the joint distribution p(O, Z|Θ, m). In VB, variational posteriors q(Θ|O, m), q(Z|O, m), and q(m|O) are introduced to approximate the true corresponding posteriors. The optimal variational posteriors over Θ and Z, and the appropriate model structure that maximizes the optimal q(m|O), can be obtained by maximizing the following objective function:

F_m[q] = \left\langle \log \frac{p(O, Z|\Theta, m)\, p(\Theta|m)}{q(Z|O, m)\, q(\Theta|O, m)} \right\rangle_{q(Z|O,m)\, q(\Theta|O,m)},   (1)

w.r.t. q(Θ|O, m), q(Z|O, m), and m. Here \langle f(x) \rangle_{p(x)} denotes the expectation of f(x) w.r.t. p(x), and p(Θ|m) is a prior distribution. This optimization can be performed effectively by an EM-like iterative algorithm (see [1] for the details).

3 Applying a VB approach to acoustic models

3.1 Output distributions and prior distributions

We apply the VB approach to a left-to-right HMM, which has been widely used to represent a phoneme unit in acoustic models for speech recognition, as shown in Figure 1. Let O = {O_t ∈ R^D : t = 1, ..., T} be a sequential data set for a phoneme unit. The output distribution of an HMM is given by

p(O, S, V|\Theta, m) = \prod_{t=1}^{T} a_{s_{t-1} s_t}\, c_{s_t v_t}\, b_{s_t v_t}(O_t),   (2)

where S is a set of sequences of hidden states, V is a set of sequences of Gaussian mixture components, and s_t and v_t denote the state and mixture component at time t. S and V are sets of discrete latent variables that correspond to Z mentioned above. a_ij denotes the state transition probability from state i to state j, and c_jk is the k-th weight factor of the Gaussian mixture for state j. b_jk (= N(O_t|µ_jk, Σ_jk)) denotes the Gaussian distribution with mean vector µ_jk and covariance Σ_jk. Θ = {a_ij, c_jk, µ_jk, Σ^{-1}_jk | i, j = 1, ..., J, k = 1, ..., L} is a set of model parameters, where J denotes the number of states in an HMM and L denotes the number of Gaussian components in a state. In this paper, we restrict the covariance matrices of the Gaussian distributions to diagonal ones.

¹ These criteria have been proposed independently, but they are practically the same. Therefore, we refer to them hereafter as BIC/MDL.

Figure 1: Hidden Markov model for each phoneme unit. A state is represented by the Gaussian mixture distribution below the state. There are three states and three Gaussian components in this figure.

The conjugate prior distributions are assumed to be as follows:

p(\Theta|m) = \prod_{i,j,k} \mathcal{D}(\{a_{ij'}\}_{j'=1}^{J} | \phi^0)\, \mathcal{D}(\{c_{jk'}\}_{k'=1}^{L} | \varphi^0) \times \mathcal{N}(\mu_{jk} | \nu^0_{jk}, (\xi^0)^{-1} \Sigma_{jk}) \prod_{d=1}^{D} \mathcal{G}(\Sigma^{-1}_{jk,d} | \eta^0, R^0_{jk,d}).   (3)

Φ^0 = {φ^0, ϕ^0, ν^0_jk, ξ^0, η^0, R^0_jk} is a set of hyperparameters, which we assume to be constants. In Eq. (3), D denotes a Dirichlet distribution and G denotes a gamma distribution.

3.2 Optimal variational posterior distribution q̃(Θ|O, m)

From the output distributions and prior distributions in section 3.1, the optimal variational posterior distribution q̃(Θ|O, m) can be obtained as:

\tilde{q}(\{a_{ij}\}_{j=1}^{J} | O, m) = \mathcal{D}(\{a_{ij}\}_{j=1}^{J} | \{\tilde{\phi}_{ij}\}_{j=1}^{J})
\tilde{q}(\{c_{jk}\}_{k=1}^{L} | O, m) = \mathcal{D}(\{c_{jk}\}_{k=1}^{L} | \{\tilde{\varphi}_{jk}\}_{k=1}^{L})
\tilde{q}(b_{jk} | O, m) = \mathcal{N}(\mu_{jk} | \tilde{\nu}_{jk}, \tilde{\xi}_{jk}^{-1} \Sigma_{jk}) \prod_{d=1}^{D} \mathcal{G}(\Sigma^{-1}_{jk,d} | \tilde{\eta}_{jk}, \tilde{R}_{jk,d}),   (4)

where Φ̃ ≡ {φ̃, ϕ̃, ν̃_jk, ξ̃, η̃, R̃_jk} is a set of posterior distribution parameters defined as:

\tilde{\phi}_{ij} = \phi^0 + \tilde{\gamma}_{ij}, \quad \tilde{\varphi}_{jk} = \varphi^0 + \tilde{\zeta}_{jk}, \quad \tilde{\xi}_{jk} = \xi^0 + \tilde{\zeta}_{jk},
\tilde{\nu}_{jk} = \Big( \xi^0 \nu^0_{jk} + \sum_{t=1}^{T} \tilde{\zeta}^t_{jk} O_t \Big) \big/ \tilde{\xi}_{jk},
\tilde{\eta}_{jk} = \eta^0 + \tilde{\zeta}_{jk}, \quad \tilde{R}_{jk,d} = R^0_{jk,d} + \xi^0 (\nu^0_{jk,d} - \tilde{\nu}_{jk,d})^2 + \sum_{t=1}^{T} \tilde{\zeta}^t_{jk} (O^d_t - \tilde{\nu}_{jk,d})^2.   (5)

Φ̃ is composed of γ̃^t_ij ≡ q̃(s_t = i, s_{t+1} = j | O, m), γ̃_ij ≡ Σ_{t=1}^{T} γ̃^t_ij, ζ̃^t_jk ≡ q̃(s_t = j, v_t = k | O, m), and ζ̃_jk ≡ Σ_{t=1}^{T} ζ̃^t_jk.
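The updates of eq. 5 are simple accumulations of occupancy-weighted statistics; a one-dimensional sketch (function and variable names, and the default hyperparameter values, are mine):

```python
def update_gaussian_posterior(obs, zeta, nu0=0.0, xi0=0.01, eta0=0.01, R0=1.0):
    """Posterior hyperparameters of one Gaussian component (eq. 5,
    one dimension): each frame's observation enters with its soft
    occupancy zeta^t, and the prior acts like xi0 pseudo-observations
    at nu0."""
    N = sum(zeta)                                   # soft occupancy count
    xi = xi0 + N
    nu = (xi0 * nu0 + sum(z * o for z, o in zip(zeta, obs))) / xi
    eta = eta0 + N
    R = R0 + xi0 * (nu0 - nu) ** 2 + sum(z * (o - nu) ** 2
                                         for z, o in zip(zeta, obs))
    return nu, xi, eta, R

nu, xi, eta, R = update_gaussian_posterior([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

With a weak prior (small ξ0), ν̃ is pulled almost entirely toward the occupancy-weighted sample mean, while R̃ accumulates both the within-data scatter and the prior-to-posterior mean shift.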
γ̃^t_ij denotes the transition probability from state i to state j at time t, and ζ̃^t_jk denotes the occupation probability of mixture component k in state j at time t.

3.3 Optimal variational posterior distribution q̃(S, V|O, m)

From the output distributions and prior distributions in section 3.1, the optimal variational posterior distribution over the latent variables, q̃(S, V|O, m), can be obtained as:

\tilde{q}(S, V|O, m) \propto \prod_{t=1}^{T} \tilde{a}_{s_{t-1} s_t}\, \tilde{c}_{s_t v_t}\, \tilde{b}_{s_t v_t}(O_t),   (6)

where

\tilde{a}_{s_{t-1} s_t} = \exp\Big[ \Psi(\tilde{\phi}_{s_{t-1} s_t}) - \Psi\Big( \sum_{s'_t=1}^{J} \tilde{\phi}_{s_{t-1} s'_t} \Big) \Big],
\tilde{c}_{s_t v_t} = \exp\Big[ \Psi(\tilde{\varphi}_{s_t v_t}) - \Psi\Big( \sum_{v'_t=1}^{L} \tilde{\varphi}_{s_t v'_t} \Big) \Big],
\tilde{b}_{s_t v_t}(O_t) = \exp\Big[ -\frac{D}{2} \Big( \log 2\pi + 1/\tilde{\xi}_{s_t v_t} - \Psi(\tilde{\eta}_{s_t v_t}/2) \Big) - \frac{1}{2} \sum_{d=1}^{D} \Big( \log(\tilde{R}_{s_t v_t,d}/2) + (O^d_t - \tilde{\nu}_{s_t v_t,d})^2\, \tilde{\eta}_{s_t v_t} / \tilde{R}_{s_t v_t,d} \Big) \Big].   (7)

Ψ(y) is the digamma function. From these results, the transition and occupation probabilities γ̃^t_ij and ζ̃^t_jk can be obtained by using either a deterministic assignment via the Viterbi algorithm or a probabilistic assignment via the Forward-Backward algorithm. Thus, q̃(Θ|O, m) and q̃(S, V|O, m) can be calculated iteratively, which results in maximizing F_m.

4 VB training algorithm for acoustic models

Based on the discussion in section 3, a VB training algorithm for an acoustic model based on an HMM and Gaussian mixture model with a fixed model structure m is as follows:

Step 1) Initialize γ̃^t_ij[τ = 0], ζ̃^t_jk[τ = 0] and set Φ^0.
Step 2) Compute q(S, V|O, m)[τ + 1] using γ̃^t_ij[τ], ζ̃^t_jk[τ] and Φ^0.
Step 3) Update γ̃^t_ij[τ + 1] and ζ̃^t_jk[τ + 1] using q(S, V|O, m)[τ + 1] via the Viterbi algorithm or Forward-Backward algorithm.
Step 4) Compute Φ̃[τ + 1] using γ̃^t_ij[τ + 1], ζ̃^t_jk[τ + 1] and Φ^0.
Step 5) Compute q(Θ|O, m)[τ + 1] using Φ̃[τ + 1] and calculate F_m[τ] based on q(Θ|O, m)[τ + 1] and q(S, V|O, m)[τ + 1].
Step 6) If |(F_m[τ + 1] − F_m[τ]) / F_m[τ + 1]| ≤ ε, then stop; otherwise set τ ← τ + 1 and go to Step 2.

Here τ denotes an iteration count. In our experiments, we employed the Viterbi algorithm in Step 3.
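The quantities ã and c̃ in eq. 7 are exponentiated expected log probabilities under the posterior Dirichlet. The sketch below computes one row of ã; since the Python standard library has no digamma, it uses the standard recurrence-plus-asymptotic-series approximation (my implementation, adequate for positive arguments):

```python
import math

def digamma(x):
    """Psi(x) via the recurrence psi(x) = psi(x + 1) - 1/x, then the
    asymptotic series once the argument is large enough (x >= 6)."""
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1 / 12 - inv2 * (1 / 120 - inv2 / 252))

def expected_transitions(phi_row):
    """exp(Psi(phi_ij) - Psi(sum_j' phi_ij')) for one state's posterior
    Dirichlet counts, as in eq. 7."""
    total = digamma(sum(phi_row))
    return [math.exp(digamma(p) - total) for p in phi_row]

a_tilde = expected_transitions([10.5, 5.5, 1.0])
```

Because the geometric mean undershoots the arithmetic mean, the ã values sum to less than one; they act as sub-normalized pseudo-probabilities inside the Viterbi or Forward-Backward recursions of Step 3.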
5 Variational Bayesian estimation and clustering for speech recognition

In the previous section, we described a VB training algorithm for HMMs. Here, we explain VBEC, which constructs an acoustic model based on SST-HMMs and recognizes speech based on Bayesian prediction classification. VBEC consists of three phases: model structure selection, retraining and recognition. The model structure is determined by triphone-state clustering using the phonetic decision tree method [4] [7]. The phonetic decision tree is a kind of binary tree that has a phonetic "Yes/No" question attached at each node, as shown in Figure 2. Let Ω(n) denote the set of states held by a tree node n. We start with only a root node (n = 0), which holds the set of all the triphone HMM states Ω(0) for an identical center phoneme. The set of triphone states is then split into two sets, Ω(nY) and Ω(nN), which are held by two new nodes, nY and nN, respectively, as shown in Figure 3. The partition is determined by the answer to a phonetic question such as "is the preceding phoneme a vowel?" or "is the following phoneme a nasal?" For each node, we choose the particular question that maximizes the gain in F_m when the node is split into two nodes, and if all the questions decrease F_m after splitting, we stop splitting. We continue this splitting successively for every new set of states to obtain a binary tree, each leaf node of which holds a clustered set of triphone states. The states belonging to the same cluster are merged into a single state.

Figure 2: A set of all triphone HMM states */a(i)/* is clustered based on the phonetic decision tree method.

Figure 3: Splitting a set of triphone HMM states Ω(n) into two sets Ω(nY) and Ω(nN) by answering phonetic questions according to an objective function.
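The greedy tree-growing procedure can be sketched as follows; here `score` stands in for the objective F_m evaluated on a set of states, and the toy states, question, and cohesion score are illustrative assumptions:

```python
def grow_tree(states, questions, score, min_gain=0.0):
    """Phonetic decision-tree clustering: greedily split a node with
    whichever question most increases the objective; if no question
    yields a gain, keep the node as a leaf (one shared state)."""
    best = None
    best_gain = min_gain
    for q in questions:
        yes = [s for s in states if q(s)]
        no = [s for s in states if not q(s)]
        if not yes or not no:
            continue                       # question does not split the set
        gain = score(yes) + score(no) - score(states)
        if gain > best_gain:
            best, best_gain = (q, yes, no), gain
    if best is None:
        return states                      # leaf: a cluster of merged states
    q, yes, no = best
    return (q, grow_tree(yes, questions, score, min_gain),
            grow_tree(no, questions, score, min_gain))

def is_vowel_context(s):                   # toy "Yes/No" phonetic question
    return s[0] in 'aeiou'

def score(states):                         # toy stand-in for F_m of a state set
    return -len({s[0] for s in states}) * len(states)

tree = grow_tree(['ax', 'ey', 'mx', 'ny'], [is_vowel_context], score)
```

On the toy input the vowel-context question separates {ax, ey} from {mx, ny}, after which no further split gains anything and both sets become leaf clusters.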
A set of triphones is thus represented by a set of shared-state triphone HMMs (SST-HMMs). An HMM, which represents a phonemic unit, usually consists of a linear sequence of three or four states. A decision tree is produced specifically for each state in the sequence, and the trees are independent of each other. Note that in the triphone-state clustering mentioned above, we assume the following conditions to reduce computation:

• The state assignments while splitting are fixed.
• A single Gaussian distribution is used for each state.
• Contributions of the transition probabilities to the objective function are ignored.

Under these conditions, the latent variables are removed. As a result, all variational posteriors and F_m can be obtained in closed form without an iterative procedure. Once we have obtained the model structure, we retrain the posterior distributions using the VB algorithm given in section 4.

In recognition, an unknown datum x_t for a frame t is classified as the optimal phoneme class y using the predictive posterior classification probability p(y|x_t, O, m̃) ≡ p(y) p(x_t|y, O, m̃) / p(x_t) for the estimated model structure m̃. Here, p(y) is the class prior obtained from language and lexicon models, and p(x_t|y, O, m̃) is the predictive density. If we approximate the true posterior p(Θ|y, O, m̃) by the estimated variational posterior q̃(Θ|y, O, m̃), then p(x_t|y, O, m̃) can be calculated as p(x_t|y, O, m̃) ≈ ∫ p(x_t|y, Θ, m̃) q̃(Θ|y, O, m̃) dΘ. Therefore, the optimal class y can be obtained by

y = \arg\max_{y'} p(y'|x_t, O, \tilde{m}) \approx \arg\max_{y'} p(y') \int p(x_t|y', \Theta, \tilde{m})\, \tilde{q}(\Theta|y', O, \tilde{m})\, d\Theta.   (8)

In the calculation of Eq. (8), the integral over the Gaussian means and covariances for a frame can be solved analytically, yielding Student-t distributions. Therefore, we can compute a Bayesian predictive score for a frame, and then compute a phoneme sequence score by using the Viterbi algorithm.
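For a single feature dimension, integrating the Gaussian likelihood against the Normal-Gamma posterior of eq. 4 gives a Student-t log-density. The sketch below assumes the shape-η̃/2, rate-R̃/2 gamma parameterization implied by eq. 7; the closed form is the standard conjugate result, not copied from the paper:

```python
import math

def log_predictive(x, nu, xi, eta, R):
    """Log Student-t predictive density for one feature dimension:
    degrees of freedom eta, location nu, and squared scale
    R * (xi + 1) / (eta * xi), the standard Normal-Gamma result."""
    df = eta
    s2 = R * (xi + 1.0) / (eta * xi)
    return (math.lgamma((df + 1.0) / 2.0) - math.lgamma(df / 2.0)
            - 0.5 * math.log(df * math.pi * s2)
            - (df + 1.0) / 2.0 * math.log1p((x - nu) ** 2 / (df * s2)))

lp = log_predictive(0.5, nu=0.0, xi=10.0, eta=20.0, R=20.0)
```

Summing such per-dimension, per-frame scores in log-space and running Viterbi over them yields the phoneme sequence score used in eq. 8; the heavier-than-Gaussian tails are what tempers over-confident scoring when the posterior is based on little data.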
Thus, we can construct a VBEC framework for speech recognition by selecting an appropriate model structure and estimating posterior distributions with the VB approach, and then obtaining a recognition result based on Bayesian prediction classification.

Table 1: Acoustic conditions
Sampling rate: 16 kHz
Quantization: 16 bit
Feature vector: 12-order MFCC with ∆MFCC
Window: Hamming
Frame size/shift: 25/10 ms

Table 2: Prepared HMM
# of states: 3 (left to right)
# of phoneme categories: 27
Output distribution: single Gaussian

6 Experiments

We conducted two experiments to evaluate the effectiveness of VBEC. The first experiment compared VBEC with the conventional ML-BIC/MDL method for variable amounts of training data. In the ML-BIC/MDL, retraining and recognition are based on the ML approach and model structure selection is based on the BIC/MDL. The second experiment examined the robustness of the recognition performance with preset hyperparameter values against changes in the amounts of training data.

6.1 VBEC versus ML-BIC/MDL

The experimental conditions are summarized in Tables 1 and 2. As regards the hyperparameters, the mean and variance values of the Gaussian distribution were set at ν^0 and R^0 in each root node, respectively, which removed the heuristics for ν^0 and R^0. The determination of ξ^0 and η^0 was still heuristic; we set ξ^0 = η^0 = 0.01, each of which was determined experimentally. The training and recognition data used in these experiments are shown in Table 3. The total training data consisted of about 3,000 Japanese sentences spoken by 30 males. These sentences were designed so that the phonemic balance was maintained. The total recognition data consisted of 2,500 Japanese city names spoken by 25 males. Several subsets were randomly extracted from the training data set, and each subset was used to construct a set of SST-HMMs. As a result, 40 sets of SST-HMMs were prepared for the various subsets of training data.
Figures 4 and 5 show the recognition rate and the total number of states in a set of SST-HMMs as the amount of training data varies. As shown in Figure 4, when the number of training sentences was less than 40, VBEC greatly outperformed ML-BIC/MDL (A). With ML-BIC/MDL (A), an appropriate model structure was obtained by maximizing, with respect to m, an objective function l_m^{BIC/MDL} based on BIC/MDL, defined as:

l_m^{BIC/MDL} = l(O, m) − (#(Θ_Ω)/2) log T_{Ω(0)},   (9)

where l(O, m) denotes the likelihood of the training data O for a model structure m, #(Θ_Ω) denotes the number of free parameters for a set of states Ω, and T_{Ω(0)} denotes the total number of frames of training data for the set of states Ω(0) in the root node, as shown in Figure 2. The term (#(Θ_Ω)/2) log T_{Ω(0)} in Eq. (9) is regarded as a penalty added to the likelihood, and depends on the number of free parameters #(Θ_Ω) and the total frame count T_{Ω(0)} of the training data. ML-BIC/MDL (A) was based on the original definitions of BIC/MDL and has been widely used in speech recognition [2] [5]. With such small amounts of training data, there was a great difference between the total number of shared states with VBEC and with ML-BIC/MDL (A) (Figure 5).

Table 3: Training and recognition data
  Training: continuous speech sentences (Acoustical Society of Japan)
  Recognition: 100 city names (Japan Electronic Industry Development Association)

Figure 4: Recognition rates according to the amount of training data for VBEC and ML-BIC/MDL (A) and (B). The horizontal axis is scaled logarithmically.

Figure 5: Number of shared states according to the amount of training data for VBEC and ML-BIC/MDL (A) and (B). The horizontal and vertical axes are scaled logarithmically.
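The penalized objective in Eq. (9) is simple to compute directly. The sketch below is our own illustration with hypothetical numbers (not the paper's code): it shows the BIC/MDL penalty trading off likelihood gain against added free parameters when deciding whether to split a tree node.

```python
import math

def bic_mdl_score(log_likelihood, n_free_params, n_frames):
    # Eq. (9): likelihood minus the BIC/MDL penalty; larger is better.
    return log_likelihood - 0.5 * n_free_params * math.log(n_frames)

def accept_split(ll_parent, ll_children, extra_params, n_frames):
    # Split only if the penalized objective improves, i.e. the likelihood
    # gain exceeds 0.5 * extra_params * log(n_frames).
    return (bic_mdl_score(ll_children, extra_params, n_frames)
            > bic_mdl_score(ll_parent, 0, n_frames))
```

With little data the log(n_frames) penalty per extra parameter quickly outweighs modest likelihood gains, which is the regime where the choice of criterion matters most.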
This suggests that VBEC, which does not rely on an asymptotic assumption, determines the model structure more appropriately than ML-BIC/MDL (A) when the training data size is small. Next, we adjusted the penalty term of ML-BIC/MDL in Eq. (9) so that the total numbers of states for small amounts of data were as close as possible to those of VBEC (ML-BIC/MDL (B) in Figure 5). Nevertheless, the recognition rates obtained by VBEC were about 15% better than those of ML-BIC/MDL (B) with fewer than 15 training sentences (Figure 4). With such very small amounts of data, the VBEC and ML-BIC/MDL (B) model structures were almost the same (Figure 5). We attribute this to the posterior estimation and the Bayesian prediction classification (Eq. (8)), which suppressed over-fitting of the models to very small amounts of training data, in contrast to the ML estimation and recognition in ML-BIC/MDL (B). With more than 100 training sentences, the recognition rates obtained by VBEC converged asymptotically to those obtained by the ML-BIC/MDL methods. In summary, VBEC performed as well as or better than the alternatives for every amount of training data. This advantage is due to the superior properties of VBEC, e.g., the appropriate determination of the number of states and the suppression of over-fitting.

6.2 Influence of hyperparameter values on the quality of SST-HMMs
Throughout the construction of the model structure, the estimation of the posterior distribution, and recognition, we used a fixed combination of hyperparameter values, ξ0 = η0 = 0.01. In the small-scale experiments conducted in previous research [1] [3] [6] [8], the selection of such values was not a major concern. However, when the scale of the target application is large, the selection of hyperparameter values might affect the quality of the models; namely, the best values might differ greatly according to the amount of training data.
Moreover, estimating appropriate hyperparameters while training SST-HMMs takes so much time that it is impractical in speech recognition. Therefore, we examined how robustly the SST-HMMs produced by VBEC performed against changes in the hyperparameter values for varying amounts of training data. We varied the hyperparameters ξ0 and η0 from 0.0001 to 1, and examined the speech recognition rates in two typical cases: one in which the amount of data was very small (10 sentences) and one in which it was fairly large (150 sentences).

Table 4: Recognition rates for each pair of prior distribution parameters when using training data of 10 sentences.

ξ0 \ η0 | 10^0   10^-1  10^-2  10^-3  10^-4
10^0    |  1.0   66.3   65.9   66.5   66.1
10^-1   |  2.2   65.9   66.2   66.7   66.1
10^-2   | 31.2   66.1   66.5   66.3   65.5
10^-3   | 60.3   66.2   66.7   66.1   65.5
10^-4   | 66.5   66.6   66.3   65.5   64.6

Table 5: Recognition rates for each pair of prior distribution parameters when using training data of 150 sentences.

ξ0 \ η0 | 10^0   10^-1  10^-2  10^-3  10^-4
10^0    | 22.0   93.5   94.0   93.1   92.3
10^-1   | 49.3   94.3   93.9   93.3   92.5
10^-2   | 83.5   94.4   93.2   92.3   92.3
10^-3   | 92.5   93.8   93.3   92.5   92.4
10^-4   | 94.1   93.2   92.3   92.3   92.2

Tables 4 and 5 show the recognition rates for each combination of hyperparameters. We can see that the hyperparameter values giving acceptable performance are broadly distributed for both very small and fairly large amounts of training data. Moreover, roughly the ten best recognition rates are highlighted in the tables, and the combinations of hyperparameter values that achieved them were similar for the two different amounts of training data. Namely, appropriate combinations of hyperparameter values consistently provide good performance regardless of the amount of training data. In summary, the hyperparameter values do not greatly influence the quality of the SST-HMMs. This suggests that it is not necessary to select the hyperparameter values very carefully.
7 Conclusion
In this paper, we proposed VBEC, which constructs SST-HMMs based on the VB approach and recognizes speech based on Bayesian prediction classification. With VBEC, the model structure of the SST-HMMs is adaptively determined according to the amount of training data available, and therefore a robust speech recognition system can be constructed. The first experimental results, obtained on real speech recognition tasks, showed the effectiveness of VBEC; in particular, when the training data size was small, VBEC significantly outperformed conventional methods. The second experimental results suggested that it is not necessary to select the hyperparameter values very carefully. From these results, we conclude that VBEC provides a completely Bayesian framework for speech recognition that effectively handles the sparse data problem.

References
[1] H. Attias, “A Variational Bayesian Framework for Graphical Models,” NIPS 12, MIT Press, 2000.
[2] W. Chou and W. Reichl, “Decision Tree State Tying Based on Penalized Bayesian Information Criterion,” Proc. ICASSP’99, vol. 1, pp. 345–348, 1999.
[3] Z. Ghahramani and M. J. Beal, “Variational Inference for Bayesian Mixtures of Factor Analyzers,” NIPS 12, MIT Press, 2000.
[4] J. J. Odell, “The Use of Context in Large Vocabulary Speech Recognition,” PhD thesis, Cambridge University, 1995.
[5] K. Shinoda and T. Watanabe, “Acoustic Modeling Based on the MDL Principle for Speech Recognition,” Proc. EuroSpeech’97, vol. 1, pp. 99–102, 1997.
[6] N. Ueda and Z. Ghahramani, “Bayesian Model Search for Mixture Models Based on Optimizing Variational Bounds,” Neural Networks, vol. 15, pp. 1223–1241, 2002.
[7] S. Watanabe et al., “Constructing Shared-State Hidden Markov Models Based on a Bayesian Approach,” Proc. ICSLP’02, vol. 4, pp. 2669–2672, 2002.
[8] S. Waterhouse et al., “Bayesian Methods for Mixture of Experts,” NIPS 8, MIT Press, 1995.
Stable Fixed Points of Loopy Belief Propagation Are Minima of the Bethe Free Energy
Tom Heskes
SNN, University of Nijmegen, Geert Grooteplein 21, 6252 EZ, Nijmegen, The Netherlands

Abstract
We extend recent work on the connection between loopy belief propagation and the Bethe free energy. Constrained minimization of the Bethe free energy can be turned into an unconstrained saddle-point problem. Both converging double-loop algorithms and standard loopy belief propagation can be interpreted as attempts to solve this saddle-point problem. Stability analysis then leads us to conclude that stable fixed points of loopy belief propagation must be (local) minima of the Bethe free energy. Perhaps surprisingly, the converse need not be the case: minima can be unstable fixed points. We illustrate this with an example and discuss the implications.

1 Introduction
Pearl’s belief propagation [1] is a popular algorithm for inference in Bayesian networks. It is exact in special cases, e.g., for tree-structured (singly-connected) networks with just Gaussian or just discrete nodes. But even on networks containing cycles, so-called loopy belief propagation often leads to good performance (approximate marginals close to exact marginals) [2]. The notion that fixed points of loopy belief propagation correspond to extrema of the so-called Bethe free energy [3] has been an important step in the theoretical understanding of this success. Empirically it has further been observed that loopy belief propagation, when it does converge, converges to a minimum. The main goal of this article is to understand why. In Section 2 we introduce loopy belief propagation in terms of a sum-product algorithm on factor graphs [4]. The corresponding Bethe free energy is derived in Section 3 from a variational point of view, indicating that we should be particularly interested in minima.
In Section 4 we show that minimization of the Bethe free energy under the appropriate constraints is equivalent to an unconstrained saddle-point problem. The converging double-loop algorithm described in Section 3, as well as the standard sum-product algorithm, are in fact attempts to solve this saddle-point problem. More specifically, (a damped version of) the sum-product algorithm has the same local stability properties as a gradient descent-ascent procedure. Stability analysis of this gradient descent-ascent procedure then leads to the conclusion in the title. With an example we illustrate that the converse need not be the case. In Section 5 we discuss further implications and relations to other studies.

Figure 1: A Boltzmann machine. (a) Graphical model of P(x1, . . . , xn) ∝ exp[Σ_{ij} wij xi xj + Σ_i θi xi]. (b) Corresponding factor graph, with a factor for each pair of nodes and potentials Ψij(xi, xj) = exp[wij xi xj + (1/(n−1)) θi xi + (1/(n−1)) θj xj].

2 The sum-product algorithm on factor graphs
We start with a description of (loopy) belief propagation as the sum-product algorithm on factor graphs [4]. We assume that the probability distribution over (disjoint subsets of) variables xβ factorizes over “factors” Ψα(Xα):

P(x1, . . . , xβ, . . . , xN) = (1/Z) ∏_α Ψα(Xα),   (1)

with Z a proper normalization constant. We will use notation similar to [4]: uppercase Xα for the factors (“local function nodes”) and lowercase xβ for the variables. β ⊂ α means that xβ is a neighbor of Xα in the factor graph, i.e., is included in the potential Ψα(Xα).
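For tiny binary models, the marginals of the factorization (1) can be computed exactly by enumeration, which is useful as a reference when checking an approximate scheme. The following sketch is our own illustration, not part of the paper:

```python
import itertools

def exact_marginals(n_vars, factors):
    # Brute-force marginals P(x_beta) from the factorization of Eq. (1).
    # factors: {scope tuple: {assignment tuple: potential value}};
    # all variables are binary. Cost is 2**n_vars, which is exactly why
    # belief propagation is interesting on larger models.
    Z = 0.0
    marg = [[0.0, 0.0] for _ in range(n_vars)]
    for x in itertools.product((0, 1), repeat=n_vars):
        w = 1.0
        for scope, table in factors.items():
            w *= table[tuple(x[i] for i in scope)]
        Z += w
        for i in range(n_vars):
            marg[i][x[i]] += w
    return [[p / Z for p in pair] for pair in marg]
```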
An example of the transformation of a Markov network into a factor graph is shown in Figure 1. In a similar manner one can transform Bayesian networks into factor graphs, where each factor contains a child and its parents [4]. On singly-connected structures, Pearl’s belief propagation algorithm [1] can be applied to compute the exact marginals (“beliefs”)

P(Xα) = Σ_{X\α} P(X) and P(xβ) = Σ_{X\β} P(X).

If the structure contains cycles, one can still apply (loopy) belief propagation in an attempt to obtain accurate approximations Pα(Xα) and Pβ(xβ). Pseudo-code for the sum-product algorithm is given in Algorithm 1. In the factor-graph representation we distinguish messages from factor α to variable β, µα→β(xβ), and vice versa, µβ→α(xβ). The beliefs follow by multiplying the potential, a mere 1 for the variables and Ψα(Xα) for the factors, by the incoming messages; see (1.3) and (1.2) in Algorithm 1. The update for an outgoing message is the variable belief, either calculated with the definition (1.2) or through the marginalization (1.6), divided by the incoming message; see (1.4) and (1.5). We interpret the update of the factor-to-variable message µα→β in line 8 of Algorithm 1 as the only actual update: beliefs and variable-to-factor messages follow directly from the definitions in lines 11 to 15. For later reference we introduce the damped update

log µ^new_{α→β}(xβ) = log µα→β(xβ) + ϵ [log µ^full_{α→β}(xβ) − log µα→β(xβ)],   (2)

where µ^full refers to the result of the full update (1.5) and µ to the previous message.
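The damped log-domain update (2) is easy to sketch for the special case of binary pairwise models, where each factor touches two variables and factor-to-variable messages reduce to node-to-node messages. The code below is our own simplified illustration (not the paper's implementation); on a singly-connected graph it reproduces the exact marginals at the fixed point.

```python
import math

def loopy_bp(n, psi, phi, eps=0.5, iters=500):
    # Damped sum-product for a binary pairwise model.
    # psi: {(i, j): 2x2 pair potential}; phi: node potentials
    # (length-2 lists); eps is the damping step of Eq. (2).
    nbrs = {i: [] for i in range(n)}
    for i, j in psi:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pair(i, j, xi, xj):
        return psi[(i, j)][xi][xj] if (i, j) in psi else psi[(j, i)][xj][xi]

    m = {(i, j): [0.5, 0.5] for i in nbrs for j in nbrs[i]}
    for _ in range(iters):
        new = {}
        for i, j in m:
            # full (undamped) message from i to j, then normalize
            full = [sum(phi[i][xi] * pair(i, j, xi, xj)
                        * math.prod(m[(k, i)][xi] for k in nbrs[i] if k != j)
                        for xi in (0, 1))
                    for xj in (0, 1)]
            z = full[0] + full[1]
            # damped update in the log domain, Eq. (2)
            new[(i, j)] = [math.exp((1 - eps) * math.log(m[(i, j)][x])
                                    + eps * math.log(full[x] / z))
                           for x in (0, 1)]
        m = new
    beliefs = []
    for i in range(n):
        b = [phi[i][x] * math.prod(m[(k, i)][x] for k in nbrs[i])
             for x in (0, 1)]
        z = b[0] + b[1]
        beliefs.append([b[0] / z, b[1] / z])
    return beliefs
```

With eps = 1 this is the undamped algorithm; smaller eps corresponds to the damping analyzed later for stability.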
These and other seemingly arbitrary choices, among which the particular ordering of updates, follow naturally from the analysis below. Besides, for the results on local stability we will consider the limit of small step sizes ϵ, where any effects of the ordering disappear. Last but not least, the description in Algorithm 1 is mainly pedagogical and can be made more efficient in several ways.

Algorithm 1: The sum-product algorithm on factor graphs.
 1: repeat
 2:   for all variables β do
 3:     for all factors α ⊃ β do
 4:       if initial then
 5:         initialize message (1.1)
 6:       else
 7:         marginalize (1.6)
 8:         update message (1.5)
 9:       end if
10:     end for
11:     compute variable belief (1.2)
12:     for all factors α ⊃ β do
13:       compute message (1.4)
14:       compute factor belief (1.3)
15:     end for
16:   end for
17: until convergence

Initial messages: µα→β(xβ) = 1   (1.1)
Beliefs: Pβ(xβ) = (1/Zβ) ∏_{α⊃β} µα→β(xβ)   (1.2)
         Pα(Xα) = (1/Zα) Ψα(Xα) ∏_{β⊂α} µβ→α(xβ)   (1.3)
Messages: µβ→α(xβ) = Pβ(xβ) / µα→β(xβ)   (1.4)
          µα→β(xβ) = Pα(xβ) / µβ→α(xβ)   (1.5)
with Pα(xβ) ≡ Σ_{Xα\β} Pα(Xα)   (1.6)

3 The Bethe free energy
The exact distribution (1) can be written as the result of the variational problem

P(X) = argmin_{P̂} Σ_X P̂(X) log[ P̂(X) / ∏_α Ψα(Xα) ],   (3)

where here and in the following, normalization and positivity constraints on probabilities are implicitly assumed. Next we confine our search to “tree-like” probability distributions of the form

P̂(X) ∝ ∏_α Pα(Xα) / ∏_β Pβ(xβ)^{nβ−1}   with nβ ≡ Σ_{α⊃β} 1,   (4)

the number of neighboring factors of variable β. Here Pα(Xα) and Pβ(xβ) are interpreted as (approximate) local marginals that should normalize to 1, but should also be consistent, i.e., obey

∀β ∀α⊃β  Pα(xβ) = Pβ(xβ),   (5)

with Pα(xβ) as in (1.6). The denominator in (4) prevents double-counting. For singly-connected structures, it can be shown that the exact solution P(X) is of this form, with proportionality constant equal to 1 and with Pα(Xα) = P(Xα) and Pβ(xβ) = P(xβ).
For structures containing cycles, this need not be the case, but we can still assume it to be true approximately. Plugging (4) into the objective (3) and implementing the above assumptions, we obtain the Bethe free energy

F(P) = Σ_α Σ_{Xα} Pα(Xα) log[ Pα(Xα) / Ψα(Xα) ] − Σ_β (nβ−1) Σ_{xβ} Pβ(xβ) log Pβ(xβ).   (6)

Algorithm 2: Double-loop algorithm for minimizing the Bethe free energy. The inner loop is Algorithm 1 with redefinitions of the factor and variable beliefs.
 1: for all α and β ⊂ α do
 2:   initialize (2.1)
 3: end for
 4: repeat
 5:   for all factors α do
 6:     update potential (2.4)
 7:     update factor belief (2.3)
 8:   end for
 9:   inner loop with (2.2) and (2.3)
10: until convergence

Initial messages and beliefs: µβ→α(xβ) = 1 and Pα(xβ) = 1   (2.1)
Beliefs: Pβ(xβ) = (1/Zβ) ∏_{α⊃β} µα→β(xβ)^{1/nβ}   (2.2)
         Pα(Xα) = (1/Zα) Ψ̂α(Xα) ∏_{β⊂α} µβ→α(xβ)   (2.3)
Potential update: log Ψ̂α(Xα) = log Ψα(Xα) + Σ_{β⊂α} ((nβ−1)/nβ) log P^old_α(xβ)   (2.4)

Minus the Bethe free energy is an approximation of, but not a bound on, the log-likelihood log Z. A key observation in [3] is that the fixed points of the sum-product algorithm, described in the previous section, correspond to extrema of the Bethe free energy under the constraints (5). The above derivation suggests that we should be specifically interested in minima of the Bethe free energy, not “just” stationary points. The resulting constrained minimization problem is well-defined (the Bethe free energy is bounded from below), but not necessarily convex, mainly because of the negative Pβ log Pβ terms. The crucial trick, implicit or explicit in recently suggested procedures, is to bound [5] or clamp [6] the possibly concave part (outer loop: recompute the bound) and solve the remaining convex problem (inner loop: maximization with respect to Lagrange multipliers; see below). Here we propose to use the linear bound

−Σ_{xβ} Pβ(xβ) log Pβ(xβ) ≤ −Σ_{xβ} Pβ(xβ) log P^old_β(xβ),   (7)

with P^old_β(xβ) the result of the previous inner loop.
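The Bethe free energy (6) is straightforward to evaluate given a set of factor and single-node beliefs. The sketch below is our own illustration for binary variables (not the paper's code); a useful sanity check, used in the test, is that on a singly-connected model with exact marginals, F(P) = −log Z.

```python
import math

def bethe_free_energy(factor_pots, factor_beliefs, node_beliefs):
    # Eq. (6): F = sum_a sum_Xa Pa log(Pa / Psi_a)
    #            - sum_b (n_b - 1) sum_xb Pb log Pb,
    # with n_b the number of factors whose scope contains variable b.
    # factor_pots / factor_beliefs: {scope tuple: {assignment: value}};
    # node_beliefs: {variable: [P(x=0), P(x=1)]}.
    n = {b: 0 for b in node_beliefs}
    F = 0.0
    for scope, pot in factor_pots.items():
        for x, p in factor_beliefs[scope].items():
            if p > 0.0:
                F += p * math.log(p / pot[x])
        for b in scope:
            n[b] += 1
    for b, pb in node_beliefs.items():
        F -= (n[b] - 1) * sum(p * math.log(p) for p in pb if p > 0.0)
    return F
```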
The (convex) bound on the Bethe free energy then boils down to

F_bound(P) = Σ_α Σ_{Xα} Pα(Xα) log[ Pα(Xα) / Ψ̂α(Xα) ] ≥ F(P),

if we define Ψ̂α as in (2.4). The outer loop corresponds to a reset of the bound, i.e., at the start of the inner loop we have F_bound(P) = F(P). In the inner loop (see the next section for its derivation), we solve the remaining convex constrained minimization problem with the method of Lagrange multipliers. At the end of the inner loop, we then have F(P^new) ≤ F_bound(P^new) ≤ F_bound(P) = F(P).

4 Saddle-point problem
In this section we translate the (non-convex) minimization of the Bethe free energy under linear constraints into an equivalent (non-convex/concave) saddle-point problem. We replace the bound (7) with an explicit minimization over auxiliary variables γ (see also [7]; an alternative interpretation is a Legendre transform):

−Σ_{xβ} Pβ(xβ) log Pβ(xβ) = min_{γβ} [ −Σ_{xβ} γβ(xβ) Pβ(xβ) + log Σ_{xβ} e^{γβ(xβ)} ].   (8)

Substitution into (6) then yields a constrained minimization problem, where the minimization is with respect to {Pα, Pβ, γβ} under the constraints (5). Using

Pβ(xβ) = (1/nβ) Σ_{α⊃β} Pα(xβ)

(any other convex combination will work as well, but this symmetric one is most convenient), we can get rid of all dependencies on Pβ, both in (8) and in the constraints (5), which simplifies the following analysis and derivations considerably. For fixed γβ, the remaining minimization problem is convex in Pα with linear constraints and can thus be solved with the method of Lagrange multipliers. In terms of these multipliers λ and the auxiliary variables γ, the solution for Pα reads

Pα(Xα) = (1/Zα(λ, γ)) Ψα(Xα) exp[ Σ_{β⊂α} ( λ̄αβ(xβ) + ((nβ−1)/nβ) γβ(xβ) ) ],   (9)

with Zα(λ, γ) the proper normalization and

λ̄αβ(xβ) ≡ λαβ(xβ) − (1/nβ) Σ_{α′⊃β} λα′β(xβ).

Substituting this back into the Lagrangian, we end up with an unconstrained saddle-point problem of the type min_γ max_λ F(λ, γ) with

F(λ, γ) = Σ_α log Zα(λ, γ) − Σ_β (nβ−1) log Σ_{xβ} e^{γβ(xβ)}.
From the fixed-point equations we derive the updates

λ^new_αβ(xβ) = λαβ(xβ) − log Pα(xβ) + (1/nβ) Σ_{α′⊃β} log Pα′(xβ),   (10)
γ^new_β(xβ) = log[ (1/nβ) Σ_{α⊃β} Pα(xβ) ],   (11)

with Pα(xβ) the marginal computed from Pα(Xα) as in (9).

Proof. Introduce a new set of auxiliary variables Ẑα by writing

−log Zα = max_{Ẑα} { −log Ẑα + 1 − (1/Ẑα) Σ_{Xα} Pα(Xα) Zα }.

Next consider maximizing with respect to λαβ(xβ) for a particular variable β and all α ⊃ β, while keeping all others as well as all Ẑα fixed (by convention, we update Ẑα to Zα after each update of the λ’s). Taking derivatives, we find that the new λ̄^new should satisfy

e^{λ̄^new_αβ(xβ)} Pα(xβ) / e^{λ̄αβ(xβ)} = (1/nβ) Σ_{α′⊃β} e^{λ̄^new_α′β(xβ)} Pα′(xβ) / e^{λ̄α′β(xβ)}.

Any update of the form λ^new_αβ(xβ) = −log Pα(xβ) + λαβ(xβ) + νβ(xβ) will do; choosing νβ(xβ) such that λ^new_αβ = λ̄^new_αβ yields (10).

The updates (10) and (11) are properly aligned with the respective gradients and satisfy the saddle-point inequalities

F(λ^new, γ) ≥ F(λ, γ) ≥ F(λ, γ^new).   (12)

This saddle-point problem is concave in λ, but not necessarily convex in γ. One way to guarantee convergence to a “correct” saddle point is then to solve the maximization with respect to λ (unique up to irrelevant linear translations) in an inner loop, followed by an update of γ in the outer loop. This is precisely the double-loop algorithm sketched in the previous section. We obtain the description given in Algorithm 2 if we substitute (up to irrelevant constants)

γβ(xβ) = log P^old_β(xβ), λ̄αβ(xβ) = log µβ→α(xβ), and λαβ(xβ) = −log µα→β(xβ).

Note that in the inner loop of the double-loop algorithm the scheduling does matter. The ordering described in Algorithm 1 (run over the variables β and update all corresponding messages from and to neighboring factors before moving on to the next variable) satisfies (12) without damping. An alternative approach is to apply (damped versions of) the updates (10) and (11) in parallel. This can be loosely interpreted as doing gradient descent-ascent.
Gradient descent-ascent is a standard procedure for solving saddle-point problems and is guaranteed to converge to the correct solution if the saddle-point problem is indeed convex/concave (see e.g. [8]). Similarly, it is easy to show that gradient descent-ascent applied to a non-convex/concave problem is locally stable at a particular saddle point {λ∗, γ∗} if and only if the objective is locally convex/concave. The statement in the title now follows from two observations.

1. The damped version (2) of the sum-product algorithm has the same local stability properties as a gradient descent-ascent procedure derived from (10) and (11).

Proof. We replace (11) with

γ^new_β(xβ) = (1/nβ) Σ_{α⊃β} log Pα(xβ).   (13)

At a saddle point, Pα(xβ) = Pβ(xβ) for all α ⊃ β, and thus the difference between the logarithmic average (13) and the linear average (11), as well as its derivatives, vanishes. Consequently, (13) has the same local stability properties as (11). Now consider parallel application of a damped version of (10), with step size ϵ, and of (13), with step size nβϵ. We obtain the damped version (2) of the standard sum-product algorithm, in combination with the other definitions in Algorithm 1, when we apply the definitions

log µβ→α(xβ) = λ̄αβ(xβ) + ((nβ−1)/nβ) γβ(xβ) and log µα→β(xβ) = (1/nβ) γβ(xβ) − λαβ(xβ).

2. Local stability of the gradient descent-ascent procedure at {λ∗, γ∗} implies that the corresponding Pα is at a minimum of the Bethe free energy and that all constraints are satisfied. The converse need not be the case.

Proof.
Local stability of the gradient descent-ascent procedure, and thus of the sum-product algorithm, depends on the local curvature of F(λ, γ), defined through the Hessian matrices

H_γγ ≡ ∂²F(λ, γ)/∂γ∂γᵀ |_{λ∗,γ∗}

and H_λλ.

Figure 2: Loopy belief propagation on a Boltzmann machine with 4 nodes, weights (upper diagonal) (3, 2, 2; 1, 3; −3), and thresholds (0, 0, 1, 1). Plotted is the Kullback-Leibler divergence between the exact and the approximate single-node marginals as a function of the number of iterations. (a) No damping leads to somewhat erratic cyclic behavior. (b) Damping with step size 0.1 yields a smoother cycle, but no convergence. (c) The double-loop algorithm does converge to a stable solution. (d) This solution is unstable under standard loopy belief propagation (here again with step size 0.1).

Gradient descent-ascent is locally stable iff H_γγ is positive and H_λλ negative (semi-)definite. The latter is true by construction. The “total” curvature, defined through

H∗_γγ ≡ ∂²F∗(γ)/∂γ∂γᵀ |_{γ∗} with F∗(γ) ≡ max_λ F(λ, γ),

can be shown to obey

H∗_γγ = H_γγ − H_γλ H_λλ^{−1} H_λγ.

With H_λλ negative definite, we then conclude that if H_γγ is positive definite (gradient descent-ascent locally stable), then so is H∗_γγ (local minimum). The converse, however, need not be the case: H∗_γγ can be positive definite (minimum) where H_γγ has one or more negative eigenvalues (gradient descent-ascent unstable). An example of this phenomenon is F(λ, γ) = −λ² − γ² + 4λγ. Non-convergence of loopy belief propagation on a Boltzmann machine is shown in Figure 2. Typically, standard loopy belief propagation converges to a stable solution without damping. In rare cases, damping is required to obtain convergence, and in very rare cases even considerable damping does not help, as in Figure 2.
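The toy example F(λ, γ) = −λ² − γ² + 4λγ can be checked directly: F∗(γ) = max_λ F(λ, γ) = 3γ² has its minimum at γ = 0 (so H∗_γγ = 6 > 0), yet H_γγ = −2 < 0, and discrete gradient descent-ascent with any finite step size spirals away from the saddle point. A short numerical sketch of our own:

```python
def grad_descent_ascent(eps=0.05, iters=400, start=0.1):
    # F(l, g) = -l**2 - g**2 + 4*l*g: ascent in l, descent in g.
    # H_ll = -2 (negative, as required), but H_gg = -2 < 0, so the
    # procedure is locally unstable even though F*(g) = 3*g**2 has a
    # minimum at g = 0 (H*_gg = 6 > 0).
    l = g = start
    for _ in range(iters):
        dl = -2.0 * l + 4.0 * g   # dF/dl
        dg = -2.0 * g + 4.0 * l   # dF/dg
        l, g = l + eps * dl, g - eps * dg
    return (l * l + g * g) ** 0.5  # distance from the saddle point (0, 0)
```

The linearized update matrix has purely imaginary continuous-time eigenvalues, so every discrete step multiplies the distance from the origin by sqrt(1 + 12·eps²) > 1.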
The double-loop algorithm does converge, and the solution obtained is indeed unstable under standard belief propagation, even with damping. The larger the weights, the more often these instabilities seem to occur. This is consistent with the empirical observation that the max-product algorithm (“belief revision”) is typically less stable than the sum-product algorithm: max-product on a Boltzmann machine corresponds to (a properly scaled version of) the sum-product algorithm in the limit of infinite weights. The example in Figure 2 is about the smallest that we have found: we have observed these instabilities in many other (larger) instances of Markov networks, as well as directed Bayesian networks, yet not in structures with just a single loop. The latter seems consistent with the notion that not only for trees, but also for networks with a single loop, the Bethe free energy is still convex.

5 Discussion
The above gradient descent-ascent interpretation shows that loopy belief propagation is more than just fixed-point iteration: the updates tend to move in the right uphill-downhill directions, which might explain its success in practical applications. Still, loopy belief propagation can fail to converge, and apparently for two different reasons. The first, rather innocent, one is a too large step size, similar to taking a too large “learning parameter” in gradient-descent learning. Straightforwardly damping the updates, as in (2), is then sufficient to converge to a stable fixed point. Note that this damping is in the logarithmic domain and thus slightly different from the damping linear in the messages as described in [2]. The damping proposed in [7] is restricted to the Lagrange multipliers λ and may therefore not share the nice properties of the damping discussed here.
Local stability in the limit of small step sizes is independent of the scheduling of the messages, but in practice particular schedules can still be preferable to others and, for example, be stable with larger step sizes or converge more rapidly. For example, in [9] the message updates follow the structure of a spanning tree, which empirically seems to help a lot. The other, more serious, reason for non-convergence is inherent instability of the fixed point, even in the limit of infinitely small step sizes. In that case, loopy belief propagation just does not work, and one can resort to a more tedious double-loop algorithm to guarantee convergence to a local minimum. The double-loop algorithm described here is similar to the CCCP algorithm of [5]. The latter implicitly uses a less strict bound, which makes it (slightly) less efficient and arguably a little more complicated. Whether double-loop algorithms are worth the effort is an open question: in several simulation studies a negative correlation between the quality of the approximation and the convergence of standard belief propagation has been found [6, 7, 10], but still without a convincing theoretical explanation.

Acknowledgments
I would like to thank Wim Wiegerink and Onno Zoeter for many helpful suggestions and interesting discussions, and the Dutch Technology Foundation STW for support.

References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[2] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: An empirical study. In UAI’99, pages 467–475, 1999.
[3] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS 13, pages 689–695, 2001.
[4] F. Kschischang, B. Frey, and H. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[5] A. Yuille.
CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation. Neural Computation, 14:1691–1722, 2002.
[6] Y. Teh and M. Welling. The unified propagation and scaling algorithm. In NIPS 14, 2002.
[7] T. Minka. The EP energy function and minimization schemes. Technical report, MIT Media Lab, 2001.
[8] S. Seung, T. Richardson, J. Lagarias, and J. Hopfield. Minimax and Hamiltonian dynamics of excitatory-inhibitory networks. In NIPS 10, 1998.
[9] M. Wainwright, T. Jaakkola, and A. Willsky. Tree-based reparameterization for approximate estimation on loopy graphs. In NIPS 14, 2002.
[10] T. Heskes and O. Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In UAI-2002, pages 216–223, 2002.
Identity Uncertainty and Citation Matching Hanna Pasula, Bhaskara Marthi, Brian Milch, Stuart Russell, Ilya Shpitser Computer Science Division, University Of California 387 Soda Hall, Berkeley, CA 94720-1776 pasula, marthi, milch, russell, ilyas@cs.berkeley.edu Abstract Identity uncertainty is a pervasive problem in real-world data analysis. It arises whenever objects are not labeled with unique identifiers or when those identifiers may not be perceived perfectly. In such cases, two observations may or may not correspond to the same object. In this paper, we consider the problem in the context of citation matching—the problem of deciding which citations correspond to the same publication. Our approach is based on the use of a relational probability model to define a generative model for the domain, including models of author and title corruption and a probabilistic citation grammar. Identity uncertainty is handled by extending standard models to incorporate probabilities over the possible mappings between terms in the language and objects in the domain. Inference is based on Markov chain Monte Carlo, augmented with specific methods for generating efficient proposals when the domain contains many objects. Results on several citation data sets show that the method outperforms current algorithms for citation matching. The declarative, relational nature of the model also means that our algorithm can determine object characteristics such as author names by combining multiple citations of multiple papers. 1 INTRODUCTION Citation matching is the problem currently handled by systems such as Citeseer [1].1 Such systems process a large number of scientific publications to extract their citation lists. By grouping together all co-referring citations (and, if possible, linking to the actual cited paper), the system constructs a database of “paper” entities linked by the “cites(p1, p2)” relation. 
This is an example of the general problem of determining the existence of a set of objects, and their properties and relations, given a collection of “raw” perceptual data; this problem is faced by intelligence analysts and intelligent agents as well as by citation systems. A key aspect of this problem is determining when two observations describe the same object; only then can evidence be combined to develop a more complete description of the object. Objects seldom carry unique identifiers around with them, so identity uncertainty is ubiquitous. For example, Figure 1 shows two citations that probably refer to the same paper, despite many superficial differences. Citations appear in many formats and are rife with errors of all kinds. As a result, Citeseer, which is specifically designed to overcome such problems, currently lists more than 100 distinct AI textbooks published by Russell and Norvig on or around 1995, from roughly 1000 citations. (See citeseer.nj.nec.com; Citeseer is now known as ResearchIndex.)

Figure 1: Two citations that probably refer to the same paper.
  [Lashkari et al 94] Collaborative Interface Agents, Yezdi Lashkari, Max Metral, and Pattie Maes, Proceedings of the Twelfth National Conference on Articial Intelligence, MIT Press, Cambridge, MA, 1994.
  Metral M. Lashkari, Y. and P. Maes. Collaborative interface agents. In Conference of the American Association for Artificial Intelligence, Seattle, WA, August 1994.

Identity uncertainty has been studied independently in several fields. Record linkage [2] is a method for matching up the records in two files, as might be required when merging two databases. For each pair of records, a comparison vector is computed that encodes the ways in which the records do and do not match up. EM is used to learn a naive-Bayes distribution over this vector for both matched and unmatched record pairs, so that the pairwise match probability can then be calculated using Bayes’ rule.
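The naive-Bayes step of record linkage just described can be sketched in a few lines. This is our own toy illustration of that baseline (the per-field parameters would normally be learned by EM), not the relational model proposed in this paper:

```python
def match_probability(comparison, p_match, p_nonmatch, prior_match=0.5):
    # comparison: binary vector saying which fields of a record pair agree.
    # p_match[i] / p_nonmatch[i]: probability that field i agrees for a
    # matched / unmatched pair (naive-Bayes class-conditionals).
    pm, pn = prior_match, 1.0 - prior_match
    for b, p1, p0 in zip(comparison, p_match, p_nonmatch):
        pm *= p1 if b else 1.0 - p1
        pn *= p0 if b else 1.0 - p0
    return pm / (pm + pn)  # Bayes' rule
```

Thresholding this probability pair by pair is exactly the greedy, order-dependent decision procedure whose limitations motivate the generative approach taken here.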
Linkage decisions are typically made in a greedy fashion based on closest match and/or a probability threshold, so the overall process is order-dependent and may be inconsistent. The model does not provide for a principled way to combine matched records. A richer probability model is developed by Cohen et al [3], who model the database as a combination of some “original” records that are correct and some number of erroneous versions. They give an efficient greedy algorithm for finding a single locally optimal assignment of records into groups. Data association [4] is the problem of assigning new observations to existing trajectories when multiple objects are being tracked; it also arises in robot mapping when deciding if an observed landmark is the same as one previously mapped. While early data association systems used greedy methods similar to record linkage, recent systems have tried to find high-probability global solutions [5] or to approximate the true posterior over assignments [6]. The latter method has also been applied to the problem of stereo correspondence, in which a computer vision system must determine how to match up features observed in two or more cameras [7]. Data association systems usually have simple observation models (e.g., Gaussian noise) and assume that observations at each time step are all distinct. More general patterns of identity occur in natural language text, where the problem of anaphora resolution involves determining whether phrases (especially pronouns) co-refer; some recent work [8] has used an early form of relational probability model, although with a somewhat counterintuitive semantics. Citeseer is the best-known example of work on citation matching [1]. The system groups citations using a form of greedy agglomerative clustering based on a text similarity metric (see Section 6). 
McCallum et al [9] use a similar technique, but also develop clustering algorithms designed to work well with large numbers of small clusters (see Section 5). With the exception of [8], all of the preceding systems have used domain-specific algorithms and data structures; the probabilistic approaches are based on a fixed probability model. In previous work [10], we have suggested a declarative approach to identity uncertainty using a formal language—an extension of relational probability models [11]. Here, we describe the first substantial application of the approach. Section 2 explains how to specify a generative probability model of the domain. The key technical point (Section 3) is that the possible worlds include not only objects and relations but also mappings from terms in the language to objects in the domain, and the probability model must include a prior over such mappings. Once the extended model has been defined, Section 4 details the probability distributions used. A general-purpose inference method is applied to the model. We have found Markov chain Monte Carlo (MCMC) to be effective for this and other applications (see Section 5); here, we include a method for generating effective proposals based on ideas from [9]. The system also incorporates an EM algorithm for learning the local probability models, such as the model of how author names are abbreviated, reordered, and misspelt in citations. Section 6 evaluates performance on four datasets originally used to test the Citeseer algorithms [1]. As well as providing significantly better performance, our system is able to reason simultaneously about papers, authors, titles, and publication types, and does a good job of extracting this information from the grouped citations. For example, an author's name can be identified more accurately by combining information from multiple citations of several different papers.
The errors made by our system point to some interesting unmodeled aspects of the citation process.

2 RPMs

Reasoning about identity requires reasoning about objects, which requires at least some of the expressive power of a first-order logical language. Our approach builds on relational probability models (RPMs) [11], which let us specify probability models over possible worlds defined by objects, properties, classes, and relations.

2.1 Basic RPMs

At its most basic, an RPM, as defined by Koller et al [12], consists of
• A set C of classes denoting sets of objects, related by subclass/superclass relations.
• A set I of named instances denoting objects, each an instance of one class.
• A set A of complex attributes denoting functional relations. Each complex attribute A has a domain type Dom[A] ∈ C and a range type Range[A] ∈ C.
• A set B of simple attributes denoting functions. Each simple attribute B has a domain type Dom[B] ∈ C and a range Val[B].
• A set of conditional probability models P(B|Pa[B]) for the simple attributes. Pa[B] is the set of B's parents, each of which is a nonempty chain of (appropriately typed) attributes σ = A1.···.An.B′, where B′ is a simple attribute. Probability models may be attached to instances or inherited from classes. The parent links should be such that no cyclic dependencies are formed.
• A set of instance statements, which set the value of a complex attribute to an instance of the appropriate class.

We also use a slight variant of an additional concept from [11]: number uncertainty, which allows for multi-valued complex attributes of uncertain cardinality. We define each such attribute A as a relation rather than a function, and we associate with it a simple attribute #[A] (i.e., the number of values of A) with a domain type Dom[A] and a range {0, 1, ..., max #[A]}.

2.2 RPMs for citations

Figure 2 outlines an RPM for the example citations of Figure 1.
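To make the vocabulary of Section 2.1 concrete, here is a heavily simplified sketch of classes and instances as plain Python data structures. The names (RPMClass, Instance) and the attribute encoding are our own illustrative choices; a real RPM engine would also attach conditional probability models and instance statements.

```python
# A hypothetical, heavily simplified rendering of the RPM vocabulary:
# classes with complex attributes (ranging over other classes) and simple
# attributes (ranging over value domains), plus named instances.
from dataclasses import dataclass, field

@dataclass
class RPMClass:
    name: str
    complex_attrs: dict = field(default_factory=dict)  # attr -> range class name
    simple_attrs: dict = field(default_factory=dict)   # attr -> value domain

@dataclass
class Instance:
    name: str
    cls: RPMClass
    values: dict = field(default_factory=dict)         # simple-attr assignments

Paper = RPMClass("Paper",
                 complex_attrs={"authors": "Author"},
                 simple_attrs={"title": str, "#(authors)": range(0, 4)})
p1 = Instance("P1", Paper, values={"title": "Collaborative Interface Agents"})
print(p1.cls.name, "->", p1.values["title"])
```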
There are four classes, the self-explanatory Author, Paper, and Citation, as well as AuthorAsCited, which represents not actual authors, but author names as they appear when cited. Each citation we wish to match leads to the creation of a Citation instance; instances of the remaining three classes are then added as needed to fill all the complex attributes. E.g., for the first citation of Figure 1, we would create a Citation instance C1, set its text attribute to the string "Metral M. ...August 1994.", and set its paper attribute to a newly created Paper instance, which we will call P1. We would then introduce max(#[author]) (here only 3, for simplicity) AuthorAsCited instances (D11, D12, and D13) to fill the P1.obsAuthors (i.e., observed authors) attribute, and an equal number of Author instances (A11, A12, and A13) to fill both the P1.authors[i] and the D1i.author attributes. (The complex attributes would be set using instance statements, which would then also constrain the cited authors to be equal to the authors of the actual paper. [footnote 2])

Footnote 2: Thus, uncertainty over whether the authors are ordered correctly can be modeled using probabilistic instance statements.

Figure 2: An RPM for our Citeseer example. The large rectangles represent classes: the dark arrows indicate the ranges of their complex attributes, and the light arrows lay out all the probabilistic dependencies of their basic attributes. The small rectangles represent instances, linked to their classes with thick grey arrows. We omit the instance statements which set many of the complex attributes.

Assuming (for now) that the value of C1.parse is observed, we can set the values of all the basic attributes of the Citation and AuthorAsCited instances.
(E.g., given the correct parse, D11.surname would be set to Lashkari, and D12.fnames would be set to (Max)). The remaining basic attributes — those of the Paper and Author instances — represent the "true" attributes of those objects, and their values are unobserved. The standard semantics of RPMs includes the unique names assumption, which precludes identity uncertainty. Under this assumption, any two papers are assumed to be different unless we know for a fact that they are the same. In other words, although there are many ways in which the terms of the language can map to the objects in a possible world, only one of these identity mappings is legal: the one with the fewest co-referring terms. It is then possible to express the RPM as an equivalent Bayesian network: each of the basic attributes of each of the objects becomes a node, with the appropriate parents and probability model. RPM inference usually involves the construction of such a network. The Bayesian network equivalent to our RPM is shown in Figure 3.

3 IDENTITY UNCERTAINTY

In our application, any two citations may or may not refer to the same paper. Thus, for citations C1 and C2, there is uncertainty as to whether the corresponding papers P1 and P2 are in fact the same object. If they are the same, they will share one set of basic attributes; if they are distinct, there will be two sets.

Figure 3: The Bayesian network equivalent to our RPM, assuming C1 ≠ C2.

Thus, the possible worlds of our probability model may differ in the number of random variables, and there will be no single equivalent Bayesian network. The approach we have taken to this problem [10] is to extend the representation of a possible world so that it includes not only the basic attributes of a set of objects, but also the number of objects n and an identity clustering ι, that is, a mapping from terms in the language (such as P1) to objects in the world. We are interested only in whether terms co-refer or not, so ι can be represented by a set of equivalence classes of terms. For example, if P1 and P2 are the only terms, and they co-refer, then ι is {{P1, P2}}; if they do not co-refer, then ι is {{P1}, {P2}}. We define a probability model for the space of extended possible worlds by specifying the prior P(n) and the conditional distribution P(ι|n). As in standard RPMs, we assume that the class of every instance is known. Hence, we can simplify these distributions further by factoring them by class, so that, e.g., P(ι) = ∏_{C∈C} P(ι_C). We then distinguish two cases:

• For some classes (such as the citations themselves), the unique names assumption remains appropriate. Thus, we define P(ι_Citation) to assign a probability of 1.0 to the one assignment where each citation object is unique.

• For classes such as Paper and Author, whose elements are subject to identity uncertainty, we specify P(n) using a high-variance log-normal distribution [footnote 3]. Then we make appropriate uniformity assumptions to construct P(ι_C). Specifically, we assume that each paper is a priori equally likely to be cited, and that each author is a priori equally likely to write a paper. Here, "a priori" means prior to obtaining any information about the object in question, so the uniformity assumption is entirely reasonable.
With these assumptions, the probability of an assignment ι_{C,k,m} that maps k named instances to m distinct objects, when C contains n objects, is given by

P(ι_{C,k,m}) = [n! / (n − m)!] · (1 / n^k)

When n > m, the world contains objects unreferenced by any of the terms. However, these filler objects are obviously irrelevant (if they affected the attributes of some named term, they would have been named as functions of that term). Therefore, we never have to create them, or worry about their attribute values. Our model assumes that the cardinalities and identity clusterings of the classes are independent of each other, as well as of the attribute values. We could remove these assumptions. For one, it would be straightforward to specify a class-wise dependency model for n or ι using standard Bayesian network semantics, where the network nodes correspond to the cardinality attributes of the classes. E.g., it would be reasonable to let the total number of papers depend on the total number of authors. Similarly, we could allow ι to depend on the attribute values—e.g., the frequency of citations to a given paper might depend on the fame of the authors—provided we did not introduce cyclic dependencies.

4 THE PROBABILITY MODEL

We will now fill in the details of the conditional probability models. Our priors over the "true" attributes are constructed off-line, using the following resources: the 1990 Census data on US names, a large A.I. BibTeX bibliography, and a hand-parsed collection of 500 citations. We learn several bigram models (actually, linear combinations of a bigram model and a unigram model): letter-based models of first names, surnames, and title words, as well as higher-level models of various parts of the citation string.
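A minimal sketch of this assignment prior, with the worked case of two terms that either co-refer or not:

```python
# Identity-clustering prior from the text: the probability that k named
# instances map to m distinct objects, when the class contains n objects,
# is n!/(n-m)! * (1/n**k).
from math import factorial

def identity_prior(n, k, m):
    """P(iota_{C,k,m}) = n!/(n-m)! * n**(-k)."""
    assert m <= min(n, k)
    return factorial(n) // factorial(n - m) / n**k

# Two terms (k=2) in a class of n=5 objects: either they co-refer (m=1)
# or they do not (m=2); the two cases sum to one.
print(identity_prior(5, 2, 1))  # 5/25  = 0.2
print(identity_prior(5, 2, 2))  # 20/25 = 0.8
```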
More specifically, the values of Author.fnames and Author.surname are modeled as having a 0.9 chance of being drawn from the relevant US census file, and a 0.1 chance of being generated using a bigram model learned from that file.

Footnote 3: Other models are possible; for example, in situations where objects appear and disappear, P(ι) can be modeled implicitly by specifying the arrival, transition, and departure rates [6].

The prior over Paper.titles is defined using a two-tier bigram model constructed using the bibliography, while the distributions over Author.#(fnames), Paper.#(authors), and Paper.pubType [footnote 4] are derived from our hand-parsed file. The conditional distributions of the "observed" variables given their true values (i.e., the corruption models of Citation.obsTitle, AuthorAsCited.surname, and AuthorAsCited.fnames) are modeled as noisy channels in which each letter, or word, has a small probability of being deleted or changed, and there is also a small probability of insertion. AuthorAsCited.fnames may also be abbreviated as an initial. The parameters of the corruption models are learnt online, using stochastic EM. Let us now return to Citation.parse, which cannot be an observed variable, since citation parsing, or even citation subfield extraction, is an unsolved problem. It is therefore fortunate that our approach lets us handle uncertainty over parses so naturally. The state space of Citation.parse has two components. First, it keeps track of the citation style, defined as the ordering of the author and title subfields, as well as the format in which the author names are written. The prior over styles is learned using our hand-segmented file. Second, it keeps track of the segmentation of Citation.text, which is divided into an author segment, a title segment, and three filler segments (one before, one after, and one in between). We assume a uniform distribution over segmentations.
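The letter-level noisy channel described above can be sketched as a sampler; the corruption rates here are illustrative placeholders, not the parameters learned by stochastic EM.

```python
# Toy sketch of the letter-level noisy channel: each character may be
# deleted, substituted, or followed by an insertion, each with a small
# probability. Rates and alphabet are illustrative assumptions.
import random

def corrupt(s, p_del=0.02, p_sub=0.02, p_ins=0.02,
            alphabet="abcdefghijklmnopqrstuvwxyz"):
    rng = random.Random(0)  # fixed seed so the example is reproducible
    out = []
    for ch in s:
        r = rng.random()
        if r < p_del:
            continue                          # deletion: drop the character
        elif r < p_del + p_sub:
            out.append(rng.choice(alphabet))  # substitution
        else:
            out.append(ch)                    # pass through unchanged
        if rng.random() < p_ins:
            out.append(rng.choice(alphabet))  # insertion after this position
    return "".join(out)

print(corrupt("smith"))
```

In the paper this channel is used in the other direction as a likelihood, scoring an observed string given a hypothesized true one; the sampler above just shows the generative story.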
Citation.parse greatly constrains Citation.text: the title segment of Citation.text must match the value of Citation.obsTitle, while its author segment must match the combined values of the simple attributes of Citation.obsAuthors. The distributions over the remaining three segments of Citation.text are defined using bigram models, with the model used for the final segment chosen depending on the publication type. These models were, once more, learned using our pre-segmented file.

5 INFERENCE

With the introduction of identity uncertainty, our model grows from a single Bayesian network to a collection of networks, one for each possible value of ι. This collection can be rather large, since the number of ways in which a set can be partitioned grows very quickly with the size of the set [footnote 5]. Exact inference is, therefore, impractical. We use an approximate method based on Markov chain Monte Carlo.

5.1 MARKOV CHAIN MONTE CARLO

MCMC [13] is a well-known method for approximating an expectation over some distribution π(x), commonly used when the state space of x is too large to sum over. The weighted sum over the values of x is replaced by a sum over samples from π(x), which are generated using a Markov chain constructed to have π(x) as a stationary distribution. There are several ways of building up an appropriate Markov chain. In the Metropolis–Hastings method (M-H), transitions in the chain are constructed in two steps. First, a candidate next state x′ is generated from the current state x, using the (more or less arbitrary) proposal distribution q(x′|x). The probability that the move to x′ is actually made is the acceptance probability, defined as

α(x′|x) = min( 1, [π(x′) q(x|x′)] / [π(x) q(x′|x)] ).

Such a Markov chain will have the right stationary distribution π(x) as long as q is defined in such a way that the chain is ergodic. It is even possible to factor q into separate proposals for various subsets of variables.
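A minimal Metropolis–Hastings sketch for the acceptance rule above, using a symmetric random-walk proposal so that the q terms cancel; the Gaussian target and step size are illustrative choices, not part of the paper's citation model.

```python
# Random-walk Metropolis-Hastings with a symmetric proposal, so
# alpha = min(1, pi(x')/pi(x)); we work with log densities for stability.
import math, random

def mh_samples(log_pi, x0, n_steps, step=0.5, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_prime = x + rng.gauss(0.0, step)       # symmetric proposal
        log_alpha = log_pi(x_prime) - log_pi(x)  # q terms cancel
        if math.log(rng.random()) < min(0.0, log_alpha):
            x = x_prime                          # accept the move
        samples.append(x)
    return samples

# Target: standard Gaussian, pi(x) proportional to exp(-x^2/2).
samples = mh_samples(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
print(round(mean, 2))  # close to 0 for a long enough chain
```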
In those situations, the variables that are not changed by the transition cancel in the ratio π(x′)/π(x), so the required calculation can be quite simple.

Footnote 4: Publication types range over {article, conference paper, book, thesis, tech report}.

Footnote 5: This sequence is described by the Bell numbers, whose asymptotic behaviour is more than exponential.

5.2 THE CITATION-MATCHING ALGORITHM

The state space of our MCMC algorithm is the space of all the possible worlds, where each possible world contains an identity clustering ι, a set of class cardinalities n, and the values of all the basic attributes of all the objects. Since ι is given in each world, the distribution over the attributes can be represented using a Bayesian network as described in Section 3. Therefore, the probability of a state is simply the product of P(n), P(ι), and the probability of the hidden attributes of the network. Our algorithm uses a factored q function. One of our proposals attempts to change n using a simple random walk. The other suggests, first, a change to ι, and then, values for all the hidden attributes of all the objects (or clusters in ι) affected by that change. The algorithm for proposing a change in ι_C works as follows:

  Select two clusters a1, a2 ∈ ι_C  [footnote 6]
  Create two empty clusters b1 and b2
  Place each instance i ∈ a1 ∪ a2 uniformly at random into b1 or b2
  Propose ι′_C = ι_C − {a1, a2} ∪ {b1, b2}

Given a proposed ι′_C, suggesting values for the hidden attributes boils down to recovering their true values from (possibly) corrupt observations, e.g., guessing the true surname of the author currently known both as "Simth" and "Smith". Since our title and name noise models are symmetric, our basic strategy is to apply these noise models to one of the observed values. In the case of surnames, we have the additional resource of a dictionary of common names, so, some of the time, we instead pick one of the set of dictionary entries that are within a few corruptions of our observed names.
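The cluster-splitting proposal can be sketched as follows; the function name and the representation of ι_C as a list of disjoint sets are our own choices for the example.

```python
# Sketch of the split/merge proposal on an identity clustering: pick two
# clusters (possibly the same one), pool their members, and reassign each
# member uniformly at random to one of two new clusters.
import random

def propose_split_merge(clusters, rng):
    """clusters: list of disjoint sets of instance ids; returns a proposal."""
    a1, a2 = rng.choice(clusters), rng.choice(clusters)
    pooled = a1 | a2
    b1, b2 = set(), set()
    for inst in sorted(pooled):              # sort only for reproducibility
        (b1 if rng.random() < 0.5 else b2).add(inst)
    rest = [c for c in clusters if c is not a1 and c is not a2]
    return rest + [b for b in (b1, b2) if b]  # drop any empty cluster

rng = random.Random(1)
iota = [{"C1", "C2"}, {"C3"}]
print(propose_split_merge(iota, rng))
```

Note that picking the same cluster twice simply splits it, matching the footnote in the text; picking two distinct clusters can merge them when all members land in the same new cluster.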
(One must, of course, be careful to account for this hybrid approach in our acceptance probability calculations.) Parses are handled differently: we preprocess each citation, organizing its plausible segmentations into a list ordered in terms of descending probability. At runtime, we simply sample from these discrete distributions. Since we assume that boundaries occur only at punctuation marks, and discard segmentations of probability < 10^−6, the lists are usually quite short [footnote 7]. The publication type variables, meanwhile, are not sampled at all. Since their range is so small, we sum them out.

5.3 SCALING UP

One of the acknowledged flaws of the MCMC algorithm is that it often fails to scale. In this application, as the number of papers increases, the simplest approach — one where the two clusters a1 and a2 are picked uniformly at random — is likely to lead to many rejected proposals, as most pairs of clusters will have little in common. The resulting Markov chain will mix slowly. Clearly, we would prefer to focus our proposals on those pairs of clusters which are actually likely to exchange their instances. We have implemented an approach based on the efficient clustering algorithm of McCallum et al [9], where a cheap distance metric is used to preprocess a large dataset and fragment it into many canopies, or smaller, overlapping sets of elements that have a non-zero probability of matching. We do the same, using word-matching as our metric, and setting the thresholds to 0.5 and 0.2. Then, at runtime, our q(x′|x) function proposes first a canopy c, and then a pair of clusters uniformly at random from c. (q(x|x′) is calculated by summing over all the canopies which contain any of the elements of the two clusters.)

6 EXPERIMENTAL RESULTS

We have applied the MCMC-based algorithm to the hand-matched datasets used in [1]. (Each of these datasets contains several hundred citations of machine learning papers, about half of them in clusters ranging in size from two to twenty-one citations.)
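A sketch of canopy construction in the style of [9], with a word-overlap similarity standing in for the cheap metric. The 0.5/0.2 thresholds follow the text; the similarity definition and everything else are illustrative assumptions.

```python
# Canopy construction: anything within the loose threshold of a center joins
# its canopy; anything within the tight threshold stops being a future center.
def word_sim(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def canopies(items, tight=0.5, loose=0.2):
    remaining = list(items)
    result = []
    while remaining:
        center = remaining[0]
        canopy = [x for x in items if word_sim(center, x) >= loose]
        result.append(canopy)
        # items within the tight threshold are removed from the center pool
        remaining = [x for x in remaining if word_sim(center, x) < tight]
    return result

cites = ["collaborative interface agents maes",
         "collaborative interface agents lashkari maes",
         "a theory for record linkage"]
print(canopies(cites))
```

Because canopies overlap, proposals restricted to a canopy can still move instances between any pair of clusters that have some chance of matching.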
We have also implemented their phrase matching algorithm, a greedy agglomerative clustering method based on a metric that measures the degrees to which the words and phrases of any two citations overlap. (They obtain their "phrases" by segmenting each citation at all punctuation marks, and then taking all the bigrams of all the segments longer than two words.) The results of our comparison are displayed in Table 1, in terms of the Citeseer error metric. Clearly, the algorithm we have developed easily beats our implementation of phrase matching.

Footnote 6: Note that if the same cluster is picked twice, it will probably be split.

Footnote 7: It would also be possible to sample directly from a model such as a hierarchical HMM.

Table 1: Results on four Citeseer data sets, for the text matching and MCMC algorithms. The metric used is the percentage of actual citation clusters recovered perfectly; for the MCMC-based algorithm, this is an average over all the MCMC-generated samples.

  Dataset        Size                         Phrase matching   RPM + MCMC
  Reinforcement  406 citations, 148 papers    94%               97%
  Reasoning      295 citations, 199 papers    79%               94%
  Face           349 citations, 242 papers    93%               89%
  Constraint     514 citations, 296 papers    86%               96%

We have also applied our algorithm to a large set of citations referring to the textbook Artificial Intelligence: A Modern Approach. It clusters most of them correctly, but there are a couple of notable exceptions. Whenever several citations share the same set of unlikely errors, they are placed together in a separate cluster. This occurs because we do not currently model the fact that erroneous citations are often copied from reference list to reference list, which could be handled by extending the model to include a copiedFrom attribute. Another possible extension would be the addition of a topic attribute to both papers and authors: tracking the authors' research topics might enable the system to distinguish between similarly-named authors working in different fields.
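The baseline's phrase extraction can be sketched directly from the description above; the punctuation set and whitespace tokenization are our own simplifications.

```python
# "Phrases" for the baseline matcher: split a citation at punctuation marks,
# keep segments longer than two words, and take all word bigrams of each.
import re

def phrases(citation):
    segments = re.split(r"[.,;:()\[\]]", citation)
    out = []
    for seg in segments:
        words = seg.split()
        if len(words) > 2:
            out.extend(zip(words, words[1:]))  # all adjacent word bigrams
    return out

print(phrases("Lashkari, Y. and P. Maes. Collaborative interface agents."))
```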
Generally speaking, we expect that relational probabilistic languages with identity uncertainty will be a useful tool for creating knowledge from raw data.

References
[1] S. Lawrence, K. Bollacker, and C. Lee Giles. Autonomous citation matching. In Agents, 1999.
[2] I. Fellegi and A. Sunter. A theory for record linkage. JASA, 1969.
[3] W. Cohen, H. Kautz, and D. McAllester. Hardening soft information sources. In KDD, 2000.
[4] Y. Bar-Shalom and T. E. Fortmann. Tracking and Data Association. Academic Press, 1988.
[5] I. J. Cox and S. Hingorani. An efficient implementation and evaluation of Reid's multiple hypothesis tracking algorithm for visual tracking. In IAPR-94, 1994.
[6] H. Pasula, S. Russell, M. Ostland, and Y. Ritov. Tracking many objects with many sensors. In IJCAI-99, 1999.
[7] F. Dellaert, S. Seitz, C. Thorpe, and S. Thrun. Feature correspondence: A Markov chain Monte Carlo approach. In NIPS-00, 2000.
[8] E. Charniak and R. P. Goldman. A Bayesian model of plan recognition. In AAAI, 1993.
[9] A. McCallum, K. Nigam, and L. H. Ungar. Efficient clustering of high-dimensional data sets with application to reference matching. In KDD-00, 2000.
[10] H. Pasula and S. Russell. Approximate inference for first-order probabilistic languages. In IJCAI-01, 2001.
[11] A. Pfeffer. Probabilistic Reasoning for Complex Systems. PhD thesis, Stanford, 2000.
[12] A. Pfeffer and D. Koller. Semantics and inference for recursive probability models. In AAAI/IAAI, 2000.
[13] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman and Hall, London, 1996.
Fast Sparse Gaussian Process Methods: The Informative Vector Machine

Neil Lawrence, University of Sheffield, 211 Portobello Street, Sheffield, S1 4DP, neil@dcs.shef.ac.uk
Matthias Seeger, University of Edinburgh, 5 Forrest Hill, Edinburgh, EH1 2QL, seeger@dai.ed.ac.uk
Ralf Herbrich, Microsoft Research Ltd, 7 J J Thomson Avenue, Cambridge, CB3 0FB, rherb@microsoft.com

Abstract

We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretic principles, previously suggested for active learning. Our goal is not only to learn d-sparse predictors (which can be evaluated in O(d) rather than O(n), where d ≪ n and n is the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d^2), and in large real-world classification experiments we show that it can match the prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities ('error bars'), allows for Bayesian model selection and is less complex in implementation.

1 Introduction

Gaussian process (GP) models are powerful non-parametric tools for approximate Bayesian inference and learning. In comparison with other popular nonlinear architectures, such as multi-layer perceptrons, their behavior is conceptually simpler to understand and model fitting can be achieved without resorting to non-convex optimization routines. However, their training time scaling of O(n^3) and memory scaling of O(n^2), where n is the number of training points, has hindered their more widespread use. The related, yet non-probabilistic, support vector machine (SVM) classifier often renders results that are comparable to GP classifiers w.r.t. prediction error at a fraction of the training cost.
This is possible because many tasks can be solved satisfactorily using sparse representations of the data set. The SVM is geared towards finding such representations through the use of a particular loss function that encourages some degree of sparsity, i.e. the final predictor depends only on a fraction of training points crucial for good discrimination on the task. (Footnote 1: An SVM classifier is trained by minimizing a regularized loss functional, a process which cannot be interpreted as an approximation to Bayesian inference.) Here, we call these utilized points the active set of the sparse predictor. In the case of SVM classification, the active set contains the support vectors: the points closest to the decision boundary and the misclassified ones. If the active set size d is much smaller than n, an SVM classifier can be trained in average-case running time between O(n · d^2) and O(n^2 · d), with memory requirements significantly less than n^2. Note, however, that without any restrictions on the data distribution, d can rise to n. In an effort to overcome scaling problems, a range of sparse GP approximations have been proposed [1, 8, 9, 10, 11]. However, none of these has fully achieved the goals of being a nontrivial approximation to a non-sparse GP model and matching the SVM w.r.t. both prediction performance and run time. The algorithm proposed here accomplishes these objectives and, as our experiments show, can even be significantly faster in training than the SVM. Furthermore, time and memory requirements may be restricted a priori. The potential benefits of retaining the probabilistic characteristics of the method are numerous, since hard problems, e.g. feature and model selection, can be dealt with using standard techniques from Bayesian learning.
Our approach builds on earlier work of Lawrence and Herbrich [2], which we extend here by considering randomized greedy selections and focusing on an alternative representation of the GP model which facilitates generalizations to settings such as regression and multi-class classification. In the next section we introduce the GP classification model and a method for approximate inference. Section 3 then contains the derivation of our fast greedy approximation and a description of the associated algorithm. In Section 4, we present large-scale experiments on the MNIST database, comparing our method directly against the SVM. Finally we close with a discussion in Section 5. We denote vectors g = (g_i)_i and matrices G = (g_{i,j})_{i,j} in bold-face [footnote 2]. If I, J are sets of row and column indices respectively, we denote the corresponding submatrix of G ∈ R^{p,q} by G_{I,J}; furthermore we abbreviate G_{I,1...q} to G_{I,·}, G_{I,{j}} to G_{I,j}, G_{I,I} to G_I, etc. The density of the Gaussian distribution with mean µ and covariance matrix Σ is denoted by N(·|µ, Σ). Finally, we use diag(·) to represent an 'overloaded' operator which extracts the diagonal elements of a matrix as a vector, or produces a square matrix with diagonal elements from a given vector and all other elements 0.

2 Gaussian Process Classification

Assume we are given a sample S := ((x_1, y_1), ..., (x_n, y_n)), x_i ∈ X, y_i ∈ {−1, +1}, drawn independently and identically distributed (i.i.d.) from an unknown data distribution P(x, y) [footnote 3]. Our goal is to estimate P(y|x) for typical x or, less ambitiously, to learn a predictor x → y with small error on future data. To model this situation, we introduce a latent variable u ∈ R separating x and y, and some classification noise model P(y|u) := Φ(y·(u+b)), where Φ is the cumulative distribution function of the standard Gaussian N(0, 1), and b ∈ R is a bias parameter.
From the Bayesian viewpoint, the relationship x → u is a random process u(·), which, in a Gaussian process (GP) model, is given a GP prior with mean function 0 and covariance kernel k(·, ·). This prior encodes the belief that (before observing any data) for any finite set X = {x̃_1, ..., x̃_p} ⊂ X, the corresponding latent outputs (u(x̃_1), ..., u(x̃_p))^T are jointly Gaussian with mean 0 ∈ R^p and covariance matrix (k(x̃_i, x̃_j))_{i,j} ∈ R^{p,p}. GP models are non-parametric, that is, there is in general no finite-dimensional parametric representation for u(·).

Footnote 2: Whenever we use a bold symbol g or G for a vector or matrix, we denote its components by the corresponding normal symbols g_i and g_{i,j}.

Footnote 3: We focus on binary classification, but our framework can be applied straightforwardly to regression estimation and multi-class classification.

It is possible to write u(·) as a linear function in some feature space F associated with k, i.e. u(x) = w^T φ(x), w ∈ F, in the sense that a Gaussian prior on w induces a GP distribution on the linear function u(·). Here, φ is a feature map from X into F, and the covariance function can be written k(x, x′) = φ(x)^T φ(x′). This linear function view, under which predictors become separating hyper-planes in F, is frequently used in the SVM community. However, F is, in general, infinite-dimensional and not uniquely determined by the kernel function k. We denote the sequence of latent outputs at the training points by u := (u(x_1), ..., u(x_n))^T ∈ R^n and the covariance or kernel matrix by K := (k(x_i, x_j))_{i,j} ∈ R^{n,n}. The Bayesian posterior process for u(·) can be computed in principle using Bayes' formula. However, if the noise model P(y|u) is non-Gaussian (as is the case for binary classification), it cannot be handled tractably and is usually approximated by another Gaussian process, which should ideally preserve the mean and covariance function of the former.
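As a small illustration of the GP prior just described, the following builds the kernel matrix K = (k(x_i, x_j))_{i,j} for an RBF kernel; the kernel choice and length scale are assumptions for the example, not prescribed by the text.

```python
# Build the kernel (covariance) matrix of a GP prior for a small set of
# inputs, using an RBF kernel; K is symmetric with unit diagonal.
import math

def rbf(x, z, length_scale=1.0):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2 * length_scale ** 2))

def kernel_matrix(points, k=rbf):
    return [[k(xi, xj) for xj in points] for xi in points]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
K = kernel_matrix(X)
print(K[0][0], round(K[0][1], 4))
```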
It is easy to show that this is equivalent to fitting the moments between the finite-dimensional (marginal) posterior P(u|S) over the training points and a Gaussian approximation Q(u), because the conditional posterior P(u(x∗)|u, S) for some non-training point x∗ is identical to the conditional prior P(u(x∗)|u). In general, computing Q is also infeasible, but several authors have proposed to approximate the global moment matching by iterative schemes which locally focus on one training pattern at a time [1, 4]. These schemes (at least in their simplest forms) result in a parametric form for the approximating Gaussian

Q(u) ∝ P(u) ∏_{i=1}^n exp(−(p_i/2)(u_i − m_i)²).   (1)

This may be compared with the form of the true posterior P(u|S) ∝ P(u) ∏_{i=1}^n P(y_i|u_i) and shows that Q(u) is obtained from P(u|S) by a likelihood approximation. Borrowing from graphical models vocabulary, the factors in (1) are called sites. Initially, all p_i, m_i are 0, thus Q(u) = P(u). In order to update the parameters for a site i, we replace it in Q(u) by the corresponding true likelihood factor P(y_i|u_i), resulting in a non-Gaussian distribution whose mean and covariance matrix can still be computed. This allows us to approximate it by a Gaussian Q^new(u) using moment matching. The site update is called the inclusion of i into the active set I. The factorized form of the likelihood implies that the new and old Q differ only in the parameters p_i, m_i of site i. This is a useful locality property of the scheme, which is referred to as assumed density filtering (ADF) (e.g. [4]). The special case of ADF⁴ for GP models has been proposed in [5].

3 Sparse Gaussian Process Classification

The simplest way to obtain a sparse Gaussian process classification (GPC) approximation from the ADF scheme is to leave most of the site parameters at 0, i.e. p_i = 0, m_i = 0 for all i ∉ I, where I ⊂ {1, . . . , n} is the active set, |I| =: d < n.
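Because the sites in (1) are Gaussian factors, Q(u) is itself Gaussian, with precision K^{-1} + Π and mean (K^{-1} + Π)^{-1} Π m; a small sketch under our naming (the direct inverses are for illustration only, not how the paper computes these quantities):

```python
import numpy as np

def site_gaussian(K, p, m):
    """Mean and covariance of Q(u) ~ N(u|0,K) * prod_i exp(-p_i/2 (u_i - m_i)^2)."""
    Pi = np.diag(p)
    A = np.linalg.inv(np.linalg.inv(K) + Pi)  # covariance of Q
    h = A @ Pi @ m                            # mean of Q
    return h, A

K = np.array([[1.0, 0.5], [0.5, 1.0]])
# With all site parameters at zero, Q(u) is exactly the prior.
h0, A0 = site_gaussian(K, np.zeros(2), np.zeros(2))
# Including a site (p_1, m_1) shrinks the marginal variance at that point.
h1, A1 = site_gaussian(K, np.array([2.0, 0.0]), np.array([1.0, 0.0]))
```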
For this to succeed, it is important to choose I so that the decision boundary between classes is represented essentially as accurately as if we used the whole training set. An exhaustive search over all possible subsets I is, of course, intractable. Here, we follow a greedy approach suggested in [2], including new patterns one at a time into I. The selection of a pattern to include is made by computing a score function for all points in J = {1, . . . , n} \ I (or a subset thereof) and then picking the winner. The heuristic we implement has also been considered in the context of active learning (see chapter 5 of [3]): score an example (x_i, y_i) by the decrease in entropy of Q(·) upon its inclusion. As a result of the locality property of ADF and the fact that Q is Gaussian, it is easy to see that the entropy difference H[Q^new] − H[Q] is proportional to the log ratio between the variances of the marginals Q^new(u_i) and Q(u_i). Thus, our heuristic (referred to as the differential entropy score) favors points whose inclusion leads to a large reduction in predictive (posterior) variance at the corresponding site. Whilst other selection heuristics can be argued for and utilized, it turns out that the differential entropy score together with the simple likelihood approximation in (1) leads to an extremely efficient and competitive algorithm.

⁴A generalization of ADF, expectation propagation (EP) [4], allows for several iterations over the data. In the context of sparse approximations, it allows us to remove points from I or exchange them against points outside I, although we do not consider such moves here.

Algorithm 1: Informative vector machine algorithm
  Require: A desired sparsity d ≪ n.
  I = ∅, m = 0, Π = diag(0), diag(A) = diag(K), h = 0, J = {1, . . . , n}.
  repeat
    for j ∈ J do
      Compute Δ_j according to (4).
    end for
    i = argmax_{j∈J} Δ_j
    Do updates for p_i and m_i according to (2).
    Update the matrices L, M, diag(A) and h according to (3).
    I ← I ∪ {i}, J ← J \ {i}.
  until |I| = d
In the remainder of this section, we describe our method and give a schematic algorithm. A detailed derivation and discussions of some extensions can be found in [7]. From (1) we have Q(·) = N(·|h, A), with A := (K^{−1} + Π)^{−1}, h := A Π m and Π := diag(p). If I is the current active set, then all components of p and m not in I are zero, and some algebra using the Woodbury formula gives

A = K − M^T M,   M = L^{−1} Π_I^{1/2} K_{I,·} ∈ R^{d,n},

where L is the lower-triangular Cholesky factor of B = I + Π_I^{1/2} K_I Π_I^{1/2} ∈ R^{d,d}. In order to compute the differential entropy score for a point j ∉ I, we have to know a_{j,j} and h_j. Thus, when including i into the active set I, we need to update diag(A) and h accordingly, which in turn requires the matrices L and M to be kept up-to-date. The update equations for p_i, m_i are

p_i = ν_i / (1 − a_{i,i} ν_i),   m_i = h_i + α_i/ν_i,   where
z_i = y_i (h_i + b) / √(1 + a_{i,i}),   α_i = y_i N(z_i|0, 1) / (Φ(z_i) √(1 + a_{i,i})),   ν_i = α_i (α_i + (h_i + b)/(1 + a_{i,i})).   (2)

We then update L → L^new by appending the row (l^T, l) and M → M^new by appending the row µ^T, where

l = √p_i M_{·,i},   l = √(1 + p_i K_{i,i} − l^T l),   µ = l^{−1} (√p_i K_{·,i} − M^T l).   (3)

Finally, diag(A^new) ← diag(A) − (µ_j²)_j and h^new ← h + α_i l p_i^{−1/2} µ. The differential entropy score for j ∉ I can be computed based on the variables in (2) (with i → j) as

Δ_j = (1/2) log(1 − a_{j,j} ν_j),   (4)

which can be computed in O(1), given h_j and a_{j,j}. In Algorithm 1 we give an algorithmic version of this scheme. Each inclusion costs O(n · d), dominated by the computation of µ, apart from the computation of the kernel matrix column K_{·,i}. Thus the total time complexity is O(n · d²). The storage requirement is O(n · d), dominated by the buffer for M. Given diag(A) and h, the error or the expected log likelihood of the current predictor on the remaining points J can be computed in O(n). These scores can be used in order to decide how many points to include into the final I.
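A minimal NumPy sketch of the greedy inclusion loop, following the updates (2)-(4) (variable names and the toy data are ours; a production implementation would also cache kernel columns, as discussed in [7]):

```python
import numpy as np
from scipy.stats import norm

def ivm_select(K, y, d, b=0.0):
    """Greedy IVM selection; returns the active set I and site parameters (p, m)."""
    n = len(y)
    diagA = np.diag(K).copy()      # marginal variances a_{j,j}
    h = np.zeros(n)                # marginal means
    M = np.zeros((0, n))           # M = L^{-1} Pi_I^{1/2} K_{I,.}
    p_site, m_site = np.zeros(n), np.zeros(n)
    I, J = [], list(range(n))
    for _ in range(d):
        Ja = np.array(J)
        s = np.sqrt(1.0 + diagA[Ja])
        z = y[Ja] * (h[Ja] + b) / s
        alpha = y[Ja] * norm.pdf(z) / (norm.cdf(z) * s)
        nu = alpha * (alpha + (h[Ja] + b) / (1.0 + diagA[Ja]))
        # Greedy choice: largest reduction in marginal variance.
        k = int(np.argmax(diagA[Ja] * nu))
        i, alpha_i, nu_i = J[k], alpha[k], nu[k]
        # Site updates, Eq. (2).
        p_i = nu_i / (1.0 - diagA[i] * nu_i)
        p_site[i], m_site[i] = p_i, h[i] + alpha_i / nu_i
        # Representation updates, Eq. (3).
        l_vec = np.sqrt(p_i) * M[:, i]
        l = np.sqrt(1.0 + p_i * K[i, i] - l_vec @ l_vec)
        mu = (np.sqrt(p_i) * K[:, i] - M.T @ l_vec) / l
        M = np.vstack([M, mu])
        diagA -= mu ** 2
        h += alpha_i * l / np.sqrt(p_i) * mu
        I.append(i)
        J.remove(i)
    return I, p_site, m_site

# Toy problem: two well-separated clusters with +/-1 labels.
x = np.array([0.0, 0.1, 0.2, 3.0, 3.1, 3.2])
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
I, p_site, m_site = ivm_select(K, y, d=3)
```

The greedy choice above maximizes the per-site variance reduction a_{j,j} ν_j, a monotone equivalent of ranking candidates by the magnitude of the differential entropy score (4).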
For kernel functions with constant diagonal, our selection heuristic is constant over patterns if I = ∅, so the first (or the first few) inclusion candidates are chosen at random. After training is complete, we can predict on test points x∗ by evaluating the approximate predictive distribution Q(u∗|x∗, S) = ∫ P(u∗|u) Q(u) du = N(u∗|µ(x∗), σ²(x∗)), where

µ(x∗) = β^T k(x∗),   σ²(x∗) = k(x∗, x∗) − k(x∗)^T Π_I^{1/2} B^{−1} Π_I^{1/2} k(x∗),   (5)

with β := Π_I^{1/2} B^{−1} Π_I^{1/2} m_I and k(x∗) := (k(x_i, x∗))_{i∈I}. We may compute σ²(x∗) using one back-substitution with the factor L. The approximate predictive distribution over y∗ can be obtained by averaging the noise model over the Gaussian. The optimal predictor for the approximation is sgn(µ(x∗) + b), which is independent of the variance σ²(x∗).

The simple scheme above employs full greedy selection over all remaining points to find the inclusion candidate. This is sensible during early inclusions, but computationally wasteful during later ones, and an important extension of the basic scheme of [2] allows for randomized greedy selections. To this end, we maintain a selection index J ⊂ {1, . . . , n} with J ∩ I = ∅ at all times. Having included i into I, we modify the selection index J. This means that only the components J of diag(A) and h have to be updated, which requires only the columns M_{·,J}. Hence, if J exhibits some inertia while moving over {1, . . . , n} \ I, many of the columns of M will not have to be kept up-to-date. In our implementation, we employ a simple delayed updating scheme for the columns of M which avoids double computations (see [7] for details). After a number of initial inclusions are done using full greedy selection, we use a J of fixed size m together with the following modification rule: for a fraction τ ∈ (0, 1), retain the τ · m best-scoring points in J, then fill it up to size m by drawing at random from {1, . . . , n} \ (I ∪ J).
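The predictive equations (5) translate directly into code; a sketch under our naming, reusing B's Cholesky factor for the back-substitution and averaging the probit noise model over the Gaussian for the class probability:

```python
import numpy as np
from scipy.stats import norm

def ivm_predict(K_I, k_star, k_ss, p_I, m_I, b=0.0):
    """Predictive mean and variance, Eq. (5), plus the averaged probit probability."""
    sp = np.sqrt(p_I)
    B = np.eye(len(p_I)) + sp[:, None] * K_I * sp[None, :]
    L = np.linalg.cholesky(B)
    beta = sp * np.linalg.solve(B, sp * m_I)       # beta = Pi^{1/2} B^{-1} Pi^{1/2} m_I
    mu = beta @ k_star
    v = np.linalg.solve(L, sp * k_star)            # one back-substitution with L
    var = k_ss - v @ v
    prob = norm.cdf((mu + b) / np.sqrt(1.0 + var)) # noise model averaged over N(mu, var)
    return mu, var, prob

# Active set of two points with hand-picked site parameters (illustrative only).
K_I = np.array([[1.0, 0.5], [0.5, 1.0]])
mu, var, prob = ivm_predict(K_I, k_star=np.array([1.0, 0.5]), k_ss=1.0,
                            p_I=np.array([1.0, 1.0]), m_I=np.array([1.0, -1.0]))
```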
4 Experiments

We now present results of experiments on the MNIST handwritten digits database⁵, comparing our method against the SVM algorithm. We considered binary tasks of the form 'c-against-rest', c ∈ {0, . . . , 9}: c is mapped to +1, all others to −1. We down-sampled the bitmaps to size 13 × 13 and split the MNIST training set into a (new) training set of size n = 59000 and a validation set of size 1000; the test set size is 10000. A run consisted of model selection, training and testing, and all results are averaged over 10 runs. We employed the RBF kernel k(x, x′) = C exp(−(γ/(2 · 169)) ||x − x′||²), x ∈ R^169, with hyper-parameters C > 0 (process variance) and γ > 0 (inverse squared length-scale). Model selection was done by minimizing validation set error, training on random training set subsets of size 5000.⁶

⁵Available online at http://www.research.att.com/~yann/exdb/mnist/index.html.
⁶The model selection training set for a run i is the same across tested methods. The list of kernel parameters considered for selection has the same size across methods.

        SVM                      IVM
  c     d      gen    time       d      gen    time
  0     1247   0.22   1281       1130   0.18    627
  1      798   0.20    864        820   0.26    427
  2     2240   0.40   2977       2150   0.40   1690
  3     2610   0.41   3687       2500   0.39   2191
  4     1826   0.40   2442       1740   0.33   1210
  5     2306   0.29   2771       2200   0.32   1758
  6     1331   0.28   1520       1270   0.29    765
  7     1759   0.54   2251       1660   0.51   1110
  8     2636   0.50   3909       2470   0.53   2024
  9     2731   0.58   3469       2740   0.55   2444

Table 1: Test error rates (gen, %) and training times (time, s) on binary MNIST tasks. SVM: Support vector machine (SMO); d: average number of SVs. IVM: Sparse GPC, randomized greedy selections; d: final active set size. Figures are means over 10 runs.

Our goal was to compare the methods not only w.r.t. performance, but also running time. For the SVM, we chose the SMO algorithm [6] together with a fast elaborate kernel matrix cache (see [7] for details).
For the IVM, we employed randomized greedy selections with fairly conservative settings.7 Since each binary digit classification task is very unbalanced, the bias parameter b in the GPC model was chosen to be non-zero. We simply fixed b = Φ−1(r), where r is the ratio between +1 and −1 patterns in the training set, and added a constant vb = 1/10 to the kernel k to account for the variance of the bias hyper-parameter. Ideally, both b and vb should be chosen by model selection, but initial experiments with different values for (b, vb) exhibited no significant fluctuations in validation errors. To ensure a fair comparison, we did initial SVM runs and initialized the active set size d with the average number (over 10 runs) of SVs found, independently for each c. We then re-ran the SVM experiments, allowing for O(d n) cache space. Table 1 shows the results. Note that IVM shows comparable performance to the SVM, while achieving significantly lower training times. For less conservative settings of the randomized selection parameters, further speed-ups might be realizable. We also registered (not shown here) significant fluctuations in training time for the SVM runs, while this figure is stable and a-priori predictable for the IVM. Within the IVM, we can obtain estimates of predictive probabilities for test points, quantifying prediction uncertainties. In Figure 1, which was produced for the hardest task c = 9, we reject fractions of test set examples based on the size of |P(y∗= +1)−1/2|. For the SVM, the size of the discriminant output is often used to quantify predictive uncertainty heuristically. For c = 9, the latter is clearly inferior (although the difference is less pronounced for the simpler binary tasks). In the SVM community it is common to combine the ‘c-against-rest’ classifiers to obtain a multi-class discriminant8 as follows: for a test point x∗, decide for the class whose associated classifier has the highest real-valued output. 
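The bias heuristic is a one-liner; here we read r as the fraction of +1 patterns in the training set (our interpretation of "the ratio between +1 and −1 patterns", since Φ^{-1} needs an argument in (0, 1)):

```python
import numpy as np
from scipy.stats import norm

def bias_from_labels(y):
    """b = Phi^{-1}(r), with r the fraction of +1 patterns in the training set."""
    r = np.mean(np.asarray(y) == 1)
    return norm.ppf(r)

y = np.array([1] * 100 + [-1] * 900)    # a 10%-positive toy task
b = bias_from_labels(y)                  # roughly -1.28
```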
For the IVM, the equivalent would be to compare the estimates log P(y∗ = +1) from each c-predictor and pick the maximizing c. This is suboptimal, because the different predictors have not been trained jointly.⁹ However, the estimates of log P(y∗ = +1) do depend on predictive variances, i.e. a measure of uncertainty about the predictive mean, which cannot be properly obtained within the SVM framework. This combination scheme results in test errors of 1.54% (±0.0417%) for the IVM and 1.62% (±0.0316%) for the SVM. When comparing these results to others in the literature, recall that our experiments were based on images sub-sampled to size 13 × 13 rather than the usual 28 × 28.

⁷First 2 selections at random, then 198 using full greedy selection; after that, a selection index of size 500 and a retained fraction τ = 1/2.
⁸Although much recent work has looked into more powerful combination schemes, e.g. based on error-correcting codes.

Figure 1: Plot of test error rate against increasing rejection rate for the SVM (dashed) and IVM (solid), for the task c = 9 against the rest. For the SVM, we reject based on "distance" from the separating plane; for the IVM, based on estimates of predictive probabilities. The IVM line runs below the SVM line, exhibiting lower classification errors for identical rejection rates.

5 Discussion

We have demonstrated that sparse Gaussian process classifiers can be constructed efficiently using greedy selection with a simple fast selection criterion. Although we focused on the change in differential entropy in our experiments here, the simple likelihood approximation at the basis of our method allows for other equally efficient criteria such as information gain [3]. Our method retains many of the benefits of probabilistic GP models (error bars, model combination, interpretability, etc.) while being much faster and more memory-efficient both in training and prediction.
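The 'c-against-rest' combination amounts to an argmax over per-class predictive probabilities; a sketch with made-up numbers:

```python
import numpy as np

def combine_one_vs_rest(log_probs):
    """Pick, per test point, the class whose predictor gives the largest log P(y*=+1)."""
    return np.argmax(np.asarray(log_probs), axis=0)

# log P(y* = +1) from three hypothetical per-class IVM predictors,
# for four test points (rows: classes, columns: test points).
log_probs = np.log(np.array([
    [0.90, 0.2, 0.1, 0.4],
    [0.05, 0.7, 0.2, 0.5],
    [0.05, 0.1, 0.7, 0.1],
]))
labels = combine_one_vs_rest(log_probs)   # -> [0, 1, 2, 1]
```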
In comparison with non-probabilistic SVM classification, our method enjoys the further advantages of being simpler to implement and having strictly predictable time requirements. Our method can also be significantly faster¹⁰ than the SVM with the SMO algorithm. This is due to the fact that SMO's active set typically fluctuates heavily across the training set, so that a large fraction of the full kernel matrix must be evaluated. In contrast, the IVM requires only the fraction d/n of K.

⁹It is straightforward to obtain the IVM for a joint GP classification model; however, the training costs rise by a factor of c². Whether this factor can be reduced to c using further sensible approximations is an open question.
¹⁰We would expect SVMs to catch up with IVMs on tasks which require fairly large active sets, and for which very simple and fast covariance functions are appropriate (e.g. sparse input patterns).

Among the many proposed sparse GP approximations [1, 8, 9, 10, 11], our method is most closely related to [1]. The latter is a sparse Bayesian online scheme which does not employ greedy selections and uses a more accurate likelihood approximation than we do, at the expense of slightly worse training time scaling, especially when compared with our randomized version. It also requires the specification of a rejection threshold and is dependent on the ordering in which the training points are presented. It incorporates steps to remove points from I, which can also be done straightforwardly in our scheme; however, such moves are likely to create numerical stability problems. Smola and Bartlett [8] use a likelihood approximation different from both the IVM and the scheme of [1] for GP regression, together with greedy selections, but in contrast to our work they use a very expensive selection heuristic (O(n · d) per score computation) and are forced to use randomized greedy selection over small selection indexes.
The differential entropy score has previously been suggested in the context of active learning (e.g. [3]), but applies more directly to our problem. In active learning, the label y_i is not known at the time x_i has to be scored, and expected rather than actual entropy changes have to be considered. Furthermore, MacKay [3] applies the selection to multi-layer perceptron (MLP) models, for which Gaussian posterior approximations over the weights can be very poor.

Acknowledgments

We thank Chris Williams, David MacKay, Manfred Opper and Lehel Csató for helpful discussions. MS gratefully acknowledges support through a research studentship from Microsoft Research Ltd.

References

[1] Lehel Csató and Manfred Opper. Sparse online Gaussian processes. Neural Computation, 14:641–668, 2002.
[2] Neil D. Lawrence and Ralf Herbrich. A sparse Bayesian compression scheme - the informative vector machine. Presented at the NIPS 2001 Workshop on Kernel Methods, 2001.
[3] David MacKay. Bayesian Methods for Adaptive Models. PhD thesis, California Institute of Technology, 1991.
[4] Thomas Minka. A Family of Algorithms for Approximate Bayesian Inference. PhD thesis, MIT, January 2001.
[5] Manfred Opper and Ole Winther. Gaussian processes for classification: Mean field algorithms. Neural Computation, 12(11):2655–2684, 2000.
[6] John C. Platt. Fast training of support vector machines using sequential minimal optimization. In Schölkopf et al., editors, Advances in Kernel Methods, pages 185–208. 1998.
[7] Matthias Seeger, Neil D. Lawrence, and Ralf Herbrich. Sparse Bayesian learning: The informative vector machine. Technical report, Department of Computer Science, Sheffield, UK, 2002. See www.dcs.shef.ac.uk/~neil/papers/.
[8] Alex Smola and Peter Bartlett. Sparse greedy Gaussian process regression. In Advances in NIPS 13, pages 619–625, 2001.
[9] Michael Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[10] Volker Tresp. A Bayesian committee machine. Neural Computation, 12(11):2719–2741, 2000.
[11] Christopher K. I. Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in NIPS 13, pages 682–688, 2001.
Recovering Articulated Model Topology from Observed Rigid Motion

Leonid Taycher, John W. Fisher III, and Trevor Darrell
Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, 02139
{lodrion, fisher, trevor}@ai.mit.edu

Abstract

Accurate representation of articulated motion is a challenging problem for machine perception. Several successful tracking algorithms have been developed that model the human body as an articulated tree. We propose a learning-based method for creating such articulated models from observations of multiple rigid motions. This paper is concerned with recovering the topology of the articulated model when the rigid motion of the constituent segments is known. Our approach is based on finding the Maximum Likelihood tree-shaped factorization of the joint probability density function (PDF) of rigid segment motions. The topology of the graphical model formed from this factorization corresponds to the topology of the underlying articulated body. We demonstrate the performance of our algorithm on both synthetic and real motion capture data.

1 Introduction

Tracking human motion is an integral part of many proposed human-computer interfaces, surveillance and identification systems, as well as animation and virtual reality systems. A common approach to this task is to model the body as a kinematic tree, and to reformulate the problem as articulated body tracking [6]. Most state-of-the-art systems rely on predefined kinematic models [16]. Some methods require manual initialization, while others use heuristics [12] or predefined protocols [10] to adapt the model to observations. We are interested in a principled way to recover articulated models from observations. The recovered models may then be used for further tracking and/or recognition. We would like to approach model estimation as a multistage problem.
In the first stage, the rigidly moving segments are tracked independently; in the second stage, the topology of the body (the connectivity between the segments) is recovered. After the topology is determined, the joint parameters may be determined. In this paper we concentrate on the second stage of this task: estimating the underlying topology of the observed articulated body when the motion of the constituent rigid bodies is known. We approach this as a learning problem, in the spirit of [17]. If we assume that the body may be modeled as a kinematic tree, and the motion of a particular rigid segment is known, then the motions of the rigid segments that are connected through that segment are independent of each other. That is, we can model the probability distribution of the full body pose as a tree-structured graphical model, where each node corresponds to the pose of a rigid segment. This observation allows us to formulate the problem of recovering the topology of an articulated body as that of finding the tree-shaped graphical model that best (in the Maximum Likelihood sense) describes the observations.

2 Prior Work

While state-of-the-art tracking algorithms [16] do not address either model creation or model initialization, the necessity of automating these two steps has long been recognized. The approach in [10] required a subject to follow a set of predefined movements, and recovered the descriptions of body parts and the body topology from deformations of apparent contours. Various heuristics were used in [12] to adapt an articulated model of known topology to 3D observations. Analysis of magnetic motion capture data was used by [14] to recover limb lengths and joint locations for a known topology; it also suggested a similar analysis for topology extraction. A learning-based approach for decomposing a set of observed marker positions and velocities into sets corresponding to various body parts was described in [17].
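The tree-structured factorization described above (each node's distribution conditioned on its parent's) can be checked numerically on a toy linear-Gaussian chain (all numbers are illustrative, not from the paper): the product of conditionals equals the joint density.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Chain M1 -> M2 -> M3: u1 ~ N(0,1), u2|u1 ~ N(u1,1), u3|u2 ~ N(u2,1).
u = np.array([0.4, -0.2, 1.1])
log_tree = (norm.logpdf(u[0], 0.0, 1.0)
            + norm.logpdf(u[1], u[0], 1.0)
            + norm.logpdf(u[2], u[1], 1.0))

# The implied joint is Gaussian with this covariance; both evaluations agree.
cov = np.array([[1.0, 1.0, 1.0],
                [1.0, 2.0, 2.0],
                [1.0, 2.0, 3.0]])
log_joint = multivariate_normal(mean=np.zeros(3), cov=cov).logpdf(u)
```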
Our work builds on the latter two approaches in estimating the topology of the articulated tree model underlying the observed motion. Several methods have been used to recover multiple rigid motions from video, such as factorization [3, 18], RANSAC [7], and learning-based methods [9]. In this work we assume that the 3-D rigid motions have been recovered and are represented using a 2-D Scaled Prismatic Model (SPM).

3 Representing Pose and Motion

A 2-D Scaled Prismatic Model (SPM) was proposed by [15] and is useful for representing the image motion of projections of elongated 3-D objects. It is obtained by orthographically "projecting" the major axis of the object to the image plane. The SPM has four degrees of freedom: in-plane translation, rotation, and uniform scale. 3-D rigid motion of an object may be simulated by SPM transformations, using in-plane translation for rigid translation, and rotation and uniform scaling for plane-parallel and out-of-plane rotations, respectively. SPM motion (or pose) may be expressed as a linear transformation in projective space as

M = [ a  −b  e ]
    [ b   a  f ]
    [ 0   0  1 ]   (1)

Following [13], we have chosen to use exponential coordinates, derived from constant velocity equations, to parameterize motion. An SPM transformation may be represented as an exponential map M = exp(ξ̂), with

ξ̂ = θ [ c  −ω  v_x ]
       [ ω   c  v_y ]   and   ξ = θ (v_x, v_y, ω, c)^T.   (2)
       [ 0   0   0  ]

In this representation v_x is a horizontal velocity, v_y a vertical velocity, ω an angular velocity, and c a rate of scale change; θ is analogous to a time parameter. Note that there is an inherent scale ambiguity, since θ and (v_x, v_y, ω, c)^T may be chosen arbitrarily, as long as exp(ξ̂) = M. It can be shown ([13]) that if the SPM transformation is a combination of scaling and rotation, it may be expressed by the sum of two twists with coincident centers (u_x, u_y)^T of rotation and expansion.
ξ = ω (u_y, −u_x, 1, 0)^T + c (−u_x, −u_y, 0, 1)^T = (−c u_x + ω u_y, −ω u_x − c u_y, ω, c)^T   (3)

While "pure" translation, rotation or scale have intuitive representations with twists, the combination of rotation and scale does not. We propose a scaled twist representation that preserves this intuitiveness for all possible SPM motions. We want to separate the "direction" of motion (the direction of translation, or the relative amounts of rotation and scale) from the amount of motion. If the transformation involves rotation and/or scale, then we choose θ so that ||(ω, c)||_2 = 1, and then use Eq. 3 to compute the center of rotation/expansion. Writing ξ̃ = (ṽ_x, ṽ_y, ω̃, c̃)^T for the raw twist, the computation may be expressed as a linear transformation τ = A (1, ξ̃^T)^T with components

τ = (θ, u_x, u_y, ω, c)^T,   θ = √(ω̃² + c̃²),
u_x = (−c̃ ṽ_x − ω̃ ṽ_y)/(ω̃² + c̃²),   u_y = (ω̃ ṽ_x − c̃ ṽ_y)/(ω̃² + c̃²),
ω = ω̃/√(ω̃² + c̃²),   c = c̃/√(ω̃² + c̃²).   (4)

Pure translational motion (ω = c = 0) may be regarded as an infinitely small rotation about a point at infinity; e.g. the translation by l in the direction (u_x, u_y) may be represented as τ = lim_{ω→0} (l|ω|, −u_y/ω, u_x/ω, ω, 0)^T. We instead choose the direct representation

τ = (θ, u_x, u_y, 0, 0)^T,   θ = √(ṽ_x² + ṽ_y²),   u_x = ṽ_x/θ,   u_y = ṽ_y/θ.   (5)

In both cases τ = A (1, ξ̃^T)^T, and

det(A) = θ^{−3} if ω ≠ 0 ∨ c ≠ 0 (rotation/scaling),   det(A) = θ^{−1} if ω = 0 ∧ c = 0 (pure translation).   (6)

Note that τ_I = (0, u_x, u_y, ω, c)^T represents the identity transformation for any u_x, u_y, ω, and c. It is always reported as τ_I = 0.

4 Learning Articulated Topology

We wish to infer the underlying topology of an articulated body from noisy observations of a set of rigid body motions. Towards that end, we adopt a statistical framework for fitting a joint probability density. As a practical matter, one must make choices regarding density models; we discuss one such choice, although other choices are also suitable. We denote the set of observed motions of N rigid bodies at time t, 1 ≤ t ≤ F, as a set {M^t_s | 1 ≤ s ≤ N}.
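Equations (4) and (5) amount to a small normalization routine; a sketch under our naming, with xi = (v_x, v_y, omega, c) the raw twist:

```python
import numpy as np

def scaled_twist(xi, eps=1e-12):
    """Convert a raw SPM twist to the scaled form tau = (theta, ux, uy, omega, c)."""
    vx, vy, w, c = xi
    r2 = w * w + c * c
    if r2 > eps:                      # rotation and/or scale: recover the center, Eq. (4)
        theta = np.sqrt(r2)
        ux = (-c * vx - w * vy) / r2
        uy = (w * vx - c * vy) / r2
        return np.array([theta, ux, uy, w / theta, c / theta])
    theta = np.hypot(vx, vy)          # pure translation: unit direction, Eq. (5)
    if theta < eps:
        return np.zeros(5)            # the identity is always reported as tau = 0
    return np.array([theta, vx / theta, vy / theta, 0.0, 0.0])

# A pure rotation by 0.3 rad about (2, 1): xi built from Eq. (3) with c = 0.
w, ux, uy = 0.3, 2.0, 1.0
tau = scaled_twist(np.array([w * uy, -w * ux, w, 0.0]))
# tau ~ (0.3, 2.0, 1.0, 1.0, 0.0): angle, center, unit (omega, c) direction
```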
Graphical models provide a useful methodology for expressing the dependency structure of a set of random variables (cf. [8]). Variables M_i with observations {M^t_i | 1 ≤ t ≤ F} are assigned to the vertices of a graph, while edges between nodes indicate dependency. We denote the presence or absence of an edge between two variables M_i and M_j by an index variable E_{ij}, equal to one if an edge is present and zero otherwise. Furthermore, if the corresponding graphical model is a spanning tree, it can be expressed as a product of conditional densities (e.g. see [11])

P_M(M_1, . . . , M_N) = ∏_{M_s} P_{M_s|pa(M_s)}(M_s | pa(M_s)),   (7)

where pa(M_s) is the parent of M_s. While multiple nodes may have the same parent, each individual node has only one parent node. Furthermore, in any decomposition one node (the root node) has no parent; any node (variable) in the model can serve as the root node [8]. Consequently, a tree model constrains E. Of the possible tree models (choices of E), we wish to choose the maximum likelihood tree, which is equivalent to the minimum entropy tree [4]. The entropy of a tree model can be written

H(M) = Σ_s H(M_s) − Σ_{E_{ij}=1} I(M_i; M_j),   (8)

where H(M_s) is the marginal entropy of each variable and I(M_i; M_j) is the mutual information between nodes M_i and M_j, which quantifies their statistical dependence. Consequently, the minimum entropy tree corresponds to the choice of E which maximizes the sum of the pairwise mutual informations [1]. The tree denoted by E can be found via the maximum spanning tree algorithm [2], using I(M_i; M_j) for all i, j as the edge weights. Our conjecture is that if our data are sampled from a variety of motions, the topology of the estimated density model is likely to be the same as the topology of the articulated body model. This follows from the intuition that, when considering only pairwise relationships, the relative motions of physically connected bodies will be most strongly related.
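The minimum entropy tree is thus a maximum spanning tree on pairwise mutual informations; a self-contained Prim-style sketch (the MI values below are synthetic, chosen so that the true structure is a chain):

```python
import numpy as np

def max_spanning_tree(mi):
    """Edges of the maximum spanning tree of a symmetric MI matrix (Prim's algorithm)."""
    n = mi.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or mi[i, j] > best[2]):
                    best = (i, j, mi[i, j])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges

# Synthetic MI for a chain 0-1-2-3: adjacent pairs share the most information.
mi = np.array([[0.0, 2.0, 0.3, 0.1],
               [2.0, 0.0, 1.8, 0.2],
               [0.3, 1.8, 0.0, 1.5],
               [0.1, 0.2, 1.5, 0.0]])
tree = max_spanning_tree(mi)   # recovers edges (0,1), (1,2), (2,3)
```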
4.1 Estimation of Mutual Information

Computing the minimum entropy spanning tree requires estimating the pairwise mutual informations between rigid motions M_i and M_j for all i, j pairs. In order to do so we must make a choice regarding the parameterization of motion and a probability density over that parameterization; to estimate articulated topology it is sufficient to use the Scaled Prismatic Model with the twist parameterization described in Section 3.

4.2 Estimating Motion Entropy

We parameterize rigid motion M^t_i by the vector of quantities ξ^t_i (cf. Eq. 2). In general,

H(M_i) ≠ H(ξ_i),   (9)

but since there is a one-to-one correspondence between the M_i's and ξ_i's [4], we can estimate I(M_i; M_j) by first computing ξ^t_i, ξ^t_j from M^t_i, M^t_j:

I(M_i; M_j) = I(ξ_i; ξ_j) = H(ξ_j) − H(ξ_j|ξ_i).   (10)

Furthermore, if the relative motion M_{j|i} between segments s_i and s_j (M^t_j = M^t_i M^t_{j|i}) is assumed to be independent of M_i, it can be shown that

H(ξ_j|ξ_i) = H(log M_i M_{j|i} | log M_i) = H(log M_{j|i}) = H(ξ_{j|i}).   (11)

We wish to use scaled twists (Section 3) to compute the entropies involved. Since the involved quantities are in the linear relationship τ = A(1, ξ̃^T)^T (Eqs. 4 and 5), the entropies are related by

H(ξ) = H(τ) − E[log det(A)],   (12)

where E[log det(A)] may be estimated using Equation 6.

4.3 Estimating the Motion Kernel

In order to estimate the entropy of motion, we need to estimate the probability density based on the available samples. Since the functional form of the underlying density is not known, we have chosen to use a kernel-based density estimator,

p̂(τ) = α Σ_i K(τ; τ_i).   (13)

Since our task is to determine the articulated topology, we wish to concentrate on "spatial" features of the transformation: the center of rotation for rotational motion, and the direction of translation for translational motion. These correspond to two common kinds of joints, spherical and prismatic.
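With any density estimate p̂, the entropies in (10)-(12) can be approximated by resubstitution, Ĥ ≈ −(1/F) Σ_t log p̂(τ_t); a sketch using SciPy's Gaussian KDE as a stand-in for the motion-specific kernel estimator (13):

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_kde(samples):
    """Resubstitution entropy estimate from samples of shape (dim, n)."""
    kde = gaussian_kde(samples)
    return -np.mean(np.log(kde(samples)))

rng = np.random.default_rng(1)
narrow = rng.normal(0.0, 1.0, size=(1, 500))
wide = rng.normal(0.0, 5.0, size=(1, 500))
# A wider distribution has higher differential entropy (difference ~ log 5).
h_narrow = entropy_kde(narrow)
h_wide = entropy_kde(wide)
```

The mutual information estimate then follows, per (10)-(11), as the difference between the entropy of a segment's scaled twists and the entropy of its twists relative to the candidate parent.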
Thus we need to define a kernel function K(τ_1; τ_2) that captures the following notion of "distance" between motions:

1. If τ_1 and τ_2 do not represent pure translational motions, then they should be considered close if their centers of rotation are close.
2. If τ_1 and τ_2 are pure translations, then they should be considered close if their directions are close.
3. If τ_1 and τ_2 represent different types of motion (i.e. rotation/scale vs. translation), then they are arbitrarily far apart.
4. The identity transformation (θ = 0) is equidistant from all possible transformations (since any (u_x, u_y, ω, c)^T combined with θ = 0 produces the identity).

One kernel that satisfies these requirements is the following:

K(τ_1; τ_2) =
  K_R((u_x1, u_y1); (u_x2, u_y2))  if (ω_1 ≠ 0 ∨ c_1 ≠ 0) ∧ (ω_2 ≠ 0 ∨ c_2 ≠ 0)   [condition 1]
  K_T((u_x1, u_y1); (u_x2, u_y2))  if ω_1 = 0 ∧ c_1 = 0 ∧ ω_2 = 0 ∧ c_2 = 0   [condition 2]
  0                                if (ω_1 ≠ 0 ∨ c_1 ≠ 0) ∧ (ω_2 = 0 ∧ c_2 = 0)   [condition 3]
  0                                if (ω_1 = 0 ∧ c_1 = 0) ∧ (ω_2 ≠ 0 ∨ c_2 ≠ 0)   [condition 3]
  δ(0)                             if θ_1 = 0 ∨ θ_2 = 0   [condition 4]   (14)

where K_R and K_T are Gaussian kernels with covariances estimated using methods from [5].

5 Implementation

The input to our algorithm is a set of SPM poses (Section 3) {P^t_s | 1 ≤ s ≤ S, 1 ≤ t ≤ F}, where S is the number of tracked rigid segments and F is the number of frames. In order to compute the mutual information between the motions of segments s_1 and s_2, we first compute the motions of segment s_1 in frames 1 < t ≤ F relative to its position in frame t_1 = 1,

M^{t_1 t}_{s_1} = P^t_{s_1} (P^{t_1}_{s_1})^{−1},   (15)

and the transformation of s_2 relative to s_1 (with the relative pose P_{s_2|s_1} = (P_{s_1})^{−1} P_{s_2}),

M^{t_1 t}_{s_2|s_1} = ((P^t_{s_1})^{−1} P^t_{s_2}) ((P^{t_1}_{s_1})^{−1} P^{t_1}_{s_2})^{−1}.   (16)

The parameter vectors τ^{t_1 t}_{s_2} and τ^{t_1 t}_{s_2|s_1} are then extracted from the transformation matrices M_{s_2} and M_{s_2|s_1} (cf. Section 3), and the mutual information is estimated as described in Section 4.2.

6 Results

We have tested our algorithm on both synthetic and motion capture data. Two synthetic sequences were generated with the following steps.
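The piecewise kernel (14) above transcribes directly; in this sketch K_R and K_T are isotropic Gaussians with bandwidths of our choosing, and a large finite constant stands in for δ(0) ([5] would instead estimate the covariances from data):

```python
import numpy as np

BIG = 1e6  # finite stand-in for delta(0): the identity is 'equidistant' from everything

def gauss(a, b, sigma):
    d = np.asarray(a) - np.asarray(b)
    return np.exp(-0.5 * (d @ d) / sigma ** 2)

def motion_kernel(t1, t2, sig_rot=1.0, sig_trans=0.5):
    """K(tau1; tau2) from (14); tau = (theta, ux, uy, omega, c)."""
    th1, u1, w1 = t1[0], t1[1:3], t1[3:5]
    th2, u2, w2 = t2[0], t2[1:3], t2[3:5]
    if th1 == 0 or th2 == 0:                   # condition 4: identity transformation
        return BIG
    rot1, rot2 = np.any(w1 != 0), np.any(w2 != 0)
    if rot1 and rot2:                          # condition 1: compare rotation centers
        return gauss(u1, u2, sig_rot)
    if not rot1 and not rot2:                  # condition 2: compare translation directions
        return gauss(u1, u2, sig_trans)
    return 0.0                                 # condition 3: different motion types

rot_a = np.array([0.3, 2.0, 1.0, 1.0, 0.0])    # rotation about (2, 1)
rot_b = np.array([0.5, 2.0, 1.0, 0.6, 0.8])    # rotation+scale about the same center
trans = np.array([1.0, 0.6, 0.8, 0.0, 0.0])    # pure translation
ident = np.zeros(5)                             # identity (theta = 0)
```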
First, the rigid segments were positioned by randomly perturbing parameters of the corresponding kinematic tree structure. A set of feature points was then selected for each segment. At each time step point positions were computed based on the corresponding segment pose, and perturbed with Gaussian noise with zero mean and standard deviation of 1 pixel. The inputs to the algorithm were the segment poses re-estimated from the feature point coordinates. In the motion capture-based experiment, the segment poses were estimated from the marker positions. The results of the experiments are shown in the Figures 6.1, 6.2 and 6.3. The first experiment involved a simple kinematic chain with 3 segments in order to demonstrate the operation of the algorithm. The system has a rotational joint between S1 and S2 and prismatic joint between S2 and S3. The sample configurations of the articulated body are shown in the first row of the Figures 6.1. The graph computed using method from Section 4.2 and the corresponding maximum spanning tree are in Figures 6.1(d, e). The second experiment involved a humanoid torso-like synthetic model containing 5 rigid segments. It was processed in a way similar to the first experiment. The results are shown in Figure 6.2. For the human motion experiment, we have used motion capture data of a dance sequence (Figure 6.3(a-c)). The rigid segment motion was extracted from the positions of the markers tracked across 220 frames (the marker correspondence to the body locations was known). The algorithm was able to correctly recover the articulated body topology (Compare Figures 6.3(e) and 6.3(a)), when provided only with the extracted segment poses. The dance is a highly structured activity, so not all degrees of freedom were explored in this sequence, and mutual information between some unconnected segments (e.g. thighs S3 and S7) was determined to be relatively large, although this did not impact the final result. 
7 Conclusions

We have presented a novel general technique for recovering the underlying articulated structure from information about rigid segment motion. Our method relies on only a very weak assumption: that this structure may be represented by a tree with unknown topology. While the results presented in this paper were obtained using the Scaled Prismatic Model and a non-parametric density estimator, our methodology does not rely on either modeling assumption.

References

[1] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, IT-14(3):462–467, May 1968.
[2] Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, MA, 1990.
[3] Joao Paulo Costeira and Takeo Kanade. A multibody factorization method for independently moving objects. International Journal of Computer Vision, 29(3):159–179, 1998.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., New York, 1991.
[5] Luc Devroye. A Course in Density Estimation, volume 14 of Progress in Probability and Statistics. Birkhauser, Boston, 1987.

Figure 6.1: Simple kinematic chain topology recovery. The first row shows 3 sample frames from a 100 frame synthetic sequence. The adjacency matrix of the mutual information graph is shown in (d), with intensities corresponding to edge weights. The vertices in the graph correspond to the rigid segments labeled in (a). (e) is the recovered articulated topology.

Figure 6.2: Humanoid torso synthetic test. The sample frames from a randomly generated 150 frame sequence are shown in (a), (b), and (c). The adjacency matrix of the mutual information graph is shown in (d), with intensities corresponding to edge weights. The vertices in the graph correspond to the rigid segments labeled in (a).
(e) is the recovered articulated topology.

Figure 6.3: Motion capture based test. (a), (b), and (c) are sample frames from a 220 frame sequence. The adjacency matrix of the mutual information graph is shown in (d), with intensities corresponding to edge weights. The vertices in the graph correspond to the rigid segments labeled in (a). (e) is the recovered articulated topology.

[6] David C. Hogg. Model-based vision: A program to see a walking person. Image and Vision Computing, 1(1):5–20, 1983.
[7] Yi-Ping Hung, Cheng-Yuan Tang, Sheng-Wen Shih, Zen Chen, and Wei-Song Lin. A 3d feature-based tracker for tracking multiple moving objects with a controlled binocular head. Technical report, Academia Sinica Institute of Information Science, 1995.
[8] Finn Jensen. An Introduction to Bayesian Networks. Springer, 1996.
[9] N. Jojic and B. J. Frey. Learning flexible sprites in video layers. In Computer Vision and Pattern Recognition, pages I:199–206, 2001.
[10] Ioannis A. Kakadiaris and Dimitri Metaxas. 3d human body acquisition from multiple views. In Proc. Fifth International Conference on Computer Vision, pages 618–623, 1995.
[11] Marina Meila. Learning Mixtures of Trees. PhD thesis, MIT, 1998.
[12] Ivana Mikic, Mohan Trivedi, Edward Hunter, and Pamela Cosman. Articulated body posture estimation from multi-camera voxel data. In Computer Vision and Pattern Recognition, 2001.
[13] Richard M. Murray, Zexiang Li, and S. Shankar Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[14] J. O'Brien, R. E. Bodenheimer, G. Brostow, and J. K. Hodgins. Automatic joint parameter estimation from magnetic motion capture data. In Graphics Interface 2000, pages 53–60, 2000.
[15] James M. Rehg and Daniel D. Morris. Singularities in articulated object tracking with 2-d and 3-d models. Technical report, Digital Equipment Corporation, 1997.
[16] Hedvig Sidenbladh, Michael J. Black, and David J. Fleet. Stochastic tracking of 3d human figures using 2d image motion. In Proc. European Conference on Computer Vision, 2000. [17] Yang Song, Luis Goncalves, Enrico Di Bernardo, and Pietro Perona. Monocular perception of biological motion - detection and labeling. In Proc. International Conference on Computer Vision, pages 805–812, 1999. [18] Ying Wu, Zhengyou Zhang, Thomas S. Huang, and John Y. Lin. Multibody grouping via orthogonal subspace decomposition. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 2001.
Learning to Perceive Transparency from the Statistics of Natural Scenes Anat Levin Assaf Zomet Yair Weiss School of Computer Science and Engineering The Hebrew University of Jerusalem 91904 Jerusalem, Israel {alevin,zomet,yweiss}@cs.huji.ac.il

Abstract

Certain simple images are known to trigger a percept of transparency: the input image I is perceived as the sum of two images, I(x, y) = I1(x, y) + I2(x, y). This percept is puzzling. First, why do we choose the "more complicated" description with two images rather than the "simpler" explanation I(x, y) = I1(x, y) + 0? Second, given the infinite number of ways to express I as a sum of two images, how do we compute the "best" decomposition? Here we suggest that transparency is the rational percept of a system that is adapted to the statistics of natural scenes. We present a probabilistic model of images based on the qualitative statistics of derivative filters and "corner detectors" in natural scenes and use this model to find the most probable decomposition of a novel image. The optimization is performed using loopy belief propagation. We show that our model computes perceptually "correct" decompositions on synthetic images and discuss its application to real images.

1 Introduction

Figure 1a shows a simple image that evokes the percept of transparency. The image is typically perceived as a superposition of two layers: either a light square with a dark semitransparent square in front of it, or a dark square with a light semitransparent square in front of it. Mathematically, our visual system is taking a single image I(x, y) and representing it as the sum of two images:

I1(x, y) + I2(x, y) = I(x, y)   (1)

When phrased this way, the decomposition is surprising. There are obviously an infinite number of solutions to equation 1; how does our visual system choose one? Why doesn't our visual system prefer the "simplest" explanation I(x, y) = I1(x, y) + 0?

Figure 1: a. A simple image that evokes the percept of transparency.
b. A simple image that does not evoke the percept of transparency.

Figure 1b shows a similar image that does not evoke the percept of transparency. Here again there are an infinite number of solutions to equation 1, but our visual system prefers the single layer solution. Studies of the conditions for the percept of transparency go back to the very first research on visual perception (see [1] and references within). Research of this type has made great progress in understanding the types of junctions and their effects (e.g. X junctions of a certain type trigger transparency, T junctions do not). However, it is not clear how to apply these rules to an arbitrary image. In this paper we take a simple Bayesian approach. While equation 1 has an infinite number of possible solutions, if we have prior probabilities P(I1(x, y)), P(I2(x, y)) then some of these solutions will be more probable than others. We use the statistics of natural images to define simple priors and finally use loopy belief propagation to find the most probable decomposition. We show that while the model knows nothing about "T junctions" or "X junctions", it can generate perceptually correct decompositions from a single image.

2 Statistics of natural images

A remarkably robust property of natural images that has received much attention lately is the fact that when derivative filters are applied to natural images, the filter outputs tend to be sparse [5, 7]. Figure 2 illustrates this fact: the histogram of the horizontal derivative filter is peaked at zero and falls off much faster than a Gaussian. Similar histograms are observed for vertical derivative filters and for the gradient magnitude |∇I|. There are many ways to describe the non Gaussian nature of this distribution (e.g. high kurtosis, heavy tails). Figure 2b illustrates the observation made by Mallat [4] and Simoncelli [8]: that the distribution is similar to an exponential density with exponent less than 1.
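One way to quantify such an exponent is a profile maximum-likelihood fit of p(x) = (1/Z) e^(-a x^α), of the kind used later in this section. The sketch below is not the authors' exact procedure: it uses the standard facts that for fixed α the optimal scale is a = N / (α Σ x_i^α) and that the normalizer on [0, ∞) is Z = a^(-1/α) Γ(1 + 1/α), then grid-searches over α.

```python
import math, random

def fit_exponent(xs, alphas):
    """Profile maximum likelihood for p(x) = (1/Z) exp(-a x^alpha), x >= 0.
    For each alpha, plug in the closed-form MLE of the scale a and keep
    the alpha with the highest log likelihood."""
    n = len(xs)
    best_ll, best_alpha = None, None
    for alpha in alphas:
        s = sum(x ** alpha for x in xs)
        a = n / (alpha * s)                     # closed-form MLE of a given alpha
        log_z = -math.log(a) / alpha + math.lgamma(1.0 + 1.0 / alpha)
        ll = -n * log_z - a * s
        if best_ll is None or ll > best_ll:
            best_ll, best_alpha = ll, alpha
    return best_alpha

# Sanity check: for this density, x^alpha is Gamma(1/alpha)-distributed,
# so we can draw samples with a known exponent and try to recover it.
random.seed(0)
true_alpha = 0.7
xs = [random.gammavariate(1.0 / true_alpha, 1.0) ** (1.0 / true_alpha)
      for _ in range(5000)]
alpha_hat = fit_exponent(xs, [i / 20.0 for i in range(4, 41)])  # grid 0.2 .. 2.0
```

With 5000 samples the recovered exponent lands near the true value 0.7, well inside the α < 1 (concave log-probability) regime.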
We show the log probability for densities of the form p(x) ∝ e^(-x^α). We assume x ∈ [0, 100] and plot the log probabilities so that they agree on p(0), p(100). There is a qualitative difference between distributions for which α > 1 (when the log probability is convex) and those for which α < 1 (when it becomes concave). As figure 2d shows, the natural statistics for derivative filters have the qualitative nature of a distribution e^(-x^α) with α < 1.

Figure 2: a. A natural image. b. Log probabilities of densities of the form e^(-x^α): Gaussian (-x²), Laplacian (-x), -x^(1/2), -x^(1/4). c. Histogram of derivative filter outputs. e. Histogram of corner detector outputs. d, f. Log histograms.

In [9] the sparsity of derivative filters was used to decompose an image sequence as a sum of two image sequences. Will this prior be sufficient for a single frame? Note that decomposing the image in figure 1a into two layers does not change the output of derivative filters: exactly the same derivatives exist in the single layer solution as in the two layer solution. Thus we cannot appeal to the marginal histogram of derivative filters to explain the percept of transparency in this image. There are two ways to go beyond marginal histograms of derivative filters. We can either look at joint statistics of derivative filters at different locations or orientations [6] or look at marginal statistics of more complicated feature detectors (e.g. [11]). We looked at the marginal statistics of a "corner detector".
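This corner detector, defined formally in equation 2 below as the determinant of a windowed structure tensor, takes only a few lines to compute. A minimal pure-Python sketch, with a uniform window standing in for the small Gaussian w(x, y):

```python
def gradients(img, x, y):
    """Central-difference image derivatives at pixel (x, y)."""
    gx = (img[x + 1][y] - img[x - 1][y]) / 2.0
    gy = (img[x][y + 1] - img[x][y - 1]) / 2.0
    return gx, gy

def corner_response(img, x0, y0, radius=2):
    """Determinant of the windowed structure tensor at (x0, y0)
    (equation 2), with a uniform window instead of a Gaussian."""
    sxx = sxy = syy = 0.0
    for x in range(x0 - radius, x0 + radius + 1):
        for y in range(y0 - radius, y0 + radius + 1):
            gx, gy = gradients(img, x, y)
            sxx += gx * gx
            sxy += gx * gy
            syy += gy * gy
    return sxx * syy - sxy * sxy

# A bright square on a dark background: its corner fires, a flat region does not.
square = [[1.0 if (r >= 10 and c >= 10) else 0.0 for c in range(20)]
          for r in range(20)]
```

At the corner of the square the windowed gradients point in two different directions, so the determinant is strictly positive; away from the square all gradients vanish and the response is exactly zero. Along a straight edge the tensor is rank-1, so the response stays near zero there as well.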
The output of the "corner detector" at a given location x0, y0 is defined as:

c(x_0, y_0) = \det\left( \sum_{x,y} w(x, y) \begin{pmatrix} I_x^2(x, y) & I_x(x, y) I_y(x, y) \\ I_x(x, y) I_y(x, y) & I_y^2(x, y) \end{pmatrix} \right)   (2)

where w(x, y) is a small Gaussian window around x0, y0 and I_x, I_y are the derivatives of the image. Figures 2e,f show the histogram of this corner operator on a typical natural image. Again, note that it has the qualitative statistics of a distribution e^(-x^α) for α < 1. To get a more quantitative description of the statistics we used maximum likelihood to fit a distribution of the form P(x) = (1/Z) e^(-a x^α) to gradient magnitude and corner detector histograms in a number of images. We found that the histograms shown in figure 2 are typical: for both gradients and corner detectors the exponent was less than 1, and the exponent for the corner detector was smaller than that of the gradients. Typical exponents were 0.7 for the derivative filter and 0.25 for the corner detector. The scaling parameter a of the corner detector was typically larger than that of the gradient magnitude.

3 Simple prior predicts transparency

Motivated by the qualitative statistics observed in natural images we now define a probability distribution over images. We define the log probability of an image by means of a probability over its gradients:

\log P(I_x, I_y) = -\log Z - \sum_{x,y} \left( |\nabla I(x, y)|^\alpha + \eta \, c(x, y)^\beta \right)   (3)

with α = 0.7, β = 0.25. The parameter η was determined by the ratio of the scaling parameters in the corner and gradient distributions. Given a candidate decomposition of an image I into I1 and I2 = I - I1, we define the log probability of the decomposition as the sum of the log probabilities of the gradients of I1 and I2. Of course this is only an approximation: we are ignoring dependencies between the gradients across space and orientation. Although this is a weak prior, one can ask: is this enough to predict transparency?
That is, is the most probable interpretation of figure 1a one with two layers, and the most probable decomposition of figure 1b one with a single layer? Answering this question requires finding the global maximum of equation 3. To gain some intuition we calculated the log probability of a one dimensional family of solutions. We defined s(x, y) as the image of a single white square in the same location as the bottom right square in figures 1a,b. We considered decompositions of the form I1 = γ s(x, y), I2 = I - I1 and evaluated the log probability for values of γ between -1 and 2. Figure 3a shows the result for figure 1a. The most probable decomposition is the one that agrees with the percept: γ = 1, one layer for the white square and another for the gray square. Figure 3b shows the result for figure 1b. The most probable decomposition again agrees with the percept: γ = 0, so that one layer is zero and the second contains the full image.

3.1 The importance of being non Gaussian

Equation 3 can be verbally described as preferring decompositions where the total edge and corner detector magnitudes are minimal. Would any cost function that has this preference give the same result? Figure 3c shows the result with α = β = 2 for the transparency figure (figure 1a). This would be the optimal interpretation if the marginal histograms of edge and corner detectors were Gaussian. Now the optimal interpretation indeed contains two layers, but they are not the ones that humans perceive. Thus the non Gaussian nature of the histograms is crucial for getting the transparency percept. Similar "non perceptual" decompositions are obtained with other values of α, β > 1. We can get some intuition for the importance of having exponents smaller than 1 from the following observation, which considers the analog of the transparency problem with scalars. We wish to solve the equation a + b = 1 and we have a prior over positive scalars of the form P(x).
Observation: The MAP solution to the scalar transparency problem is obtained with a = 1, b = 0 or a = 0, b = 1 if and only if log P(x) is concave. The proof follows directly from the definition of concavity.

Figure 3: a-b. Negative log probability (equation 3) for a sequence of decompositions of figures 1a,b respectively. The first layer is always a single square with contrast γ and the second layer is shown in the insets. c. Negative log probability (equation 3) for a sequence of decompositions of figure 1a with α = β = 2.

4 Optimization using loopy BP

Finding the most likely decomposition requires a highly nonlinear optimization. We chose to discretize the problem and use max-product loopy belief propagation to find the optimum. We defined a graphical model in which every node g_i corresponded to a discretization of the gradient of one layer I1 at that location, g_i = (g_ix, g_iy)^T. For every value of g_i we defined f_i, which represents the gradient of the second layer at that location: f_i = (I_x, I_y)^T - g_i. Thus the two gradient fields {g_i}, {f_i} represent a valid decomposition of the input image I. The joint probability is given by:

P(g) = \frac{1}{Z} \prod_i \Psi_i(g_i) \prod_{<ijkl>} \Psi_{ijkl}(g_i, g_j, g_k, g_l)   (4)

where <ijkl> refers to four adjacent pixels that form a 2x2 local square. The local potential Ψ_i(g_i) is based on the histograms of derivative filters:

\Psi_i(g_i) = e^{(-|g_i|^\alpha - |f_i|^\alpha)/T}   (5)

where T is an arbitrary system "temperature". The fourway potential Ψ_ijkl(g_i, g_j, g_k, g_l) is based on the histogram of the corner operator:

\Psi_{ijkl}(g_i, g_j, g_k, g_l) = e^{-\frac{\eta}{T}\left(\det(g_i g_i^T + g_j g_j^T + g_k g_k^T + g_l g_l^T)^\beta + \det(f_i f_i^T + f_j f_j^T + f_k f_k^T + f_l f_l^T)^\beta\right)}   (6)

To enforce integrability of the gradient fields, the fourway potential is set to zero when g_i, g_j, g_k, g_l violate the integrability constraint (cf. [3]).
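The scalar transparency observation above is easy to verify numerically: with a concave log prior (α < 1) the cost |a|^α + |1 - a|^α over a ∈ [0, 1] is minimized at an endpoint, while the Gaussian case α = 2 prefers the even split. This toy check is our own illustration, not from the paper.

```python
def decomposition_cost(a, alpha):
    """Negative log prior of the scalar split 1 = a + (1 - a),
    with prior p(x) proportional to exp(-|x|^alpha) on each part."""
    return abs(a) ** alpha + abs(1.0 - a) ** alpha

grid = [i / 100.0 for i in range(101)]
best_sparse = min(grid, key=lambda a: decomposition_cost(a, 0.25))  # concave log prior
best_gauss = min(grid, key=lambda a: decomposition_cost(a, 2.0))    # Gaussian prior
```

`best_sparse` comes out at an endpoint (0 or 1, i.e. "put everything in one layer"), whereas `best_gauss` is 0.5, the non-perceptual even split.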
The graphical model defined by equation 4 has many loops. Nevertheless, motivated by the recent results on similar graphs [2, 3], we ran the max-product belief propagation algorithm on it. The max-product algorithm finds a gradient field {g_i} that is a local maximum of equation 4 with respect to a large neighbourhood [10]. This gradient field also defines the complementary gradient field {f_i}, and finally we integrate the two gradient fields to find the two layers. Since equation 4 is completely symmetric in {f} and {g}, we break the symmetry by requiring that the gradient in a single location g_i0 belong to layer 1. In order to run BP we need to somehow discretize the space of possible gradients at each pixel. Similar to the approach taken in [2], we use the local potentials to sample a small number of candidate gradients at each pixel. Since the local potential penalizes nonzero gradients, the most probable candidates are g_i = (I_x, I_y) and g_i = (0, 0). We also added two more candidates at each pixel, g_i = (I_x, 0) and g_i = (0, I_y). With this discretization there are still an exponential number of possible decompositions of the image. We have found that the results are unchanged when more candidates are introduced at each pixel.

Figure 4: Output of the algorithm on synthetic images (input I, output I1, output I2). The algorithm effectively searches over an exponentially large number of possible decompositions and chooses decompositions that agree with the percept.

Figure 4 shows the output of the algorithm on the two images in figure 1. An animation that illustrates the dynamics of BP on these images is available at www.cs.huji.ac.il/~yweiss. Note that the algorithm is essentially searching exponentially many decompositions of the input images and knows nothing about "X junctions" or "T junctions" or squares. Yet it finds the decompositions that are consistent with the human percept. Will our simple prior also allow us to decompose a sum of two real images?
We first tried a one dimensional family of solutions as in figure 3. We found that for real images that have very little texture (e.g. figure 5b) the maximal probability solution is indeed obtained at the perceptually correct solution. However, nearly any other image that we tried had some texture, and on such images the model failed (e.g. figure 5a). When there is texture in both layers, the model always prefers a one layer decomposition: the input image plus a zero image. To understand this failure, recall that the model prefers decompositions that have few corners and few edges. According to the simple "edge" and "corner" operators that we have used, real images have edges and corners at nearly every pixel, so the two layer decomposition has twice as many edges and corners as the one layer decomposition. To decompose general real images we need to use more sophisticated features to define our prior.

Figure 5: When we sum two arbitrary images (e.g. in a.) the model usually prefers the one layer solution. This is because of the texture, which results in gradients and corners at every pixel. For real images that are relatively texture free (e.g. in b.) the model does prefer splitting into two layers (c. and d.).

Even for images with little texture, standard belief propagation with synchronous updates did not converge. Significant manual tweaking was required to get BP to converge. First, we manually divided the input image into smaller patches and ran BP separately on each patch. Second, to minimize discretization artifacts, we used a different number of gradient candidates at each pixel and always included the gradients of the original images in the list of candidates at that pixel. Third, to avoid giving too much weight to corners and edges in textured regions, we increased the temperature at pixels where the gradient magnitude was not a local maximum. The results are shown at the bottom of figure 5.
In preliminary experiments we have found that similar results can be obtained with far less tweaking when we use generalized belief propagation to do the optimization.

5 Discussion

The percept of transparency is a paradigmatic example of the ill-posedness of vision: the number of equations is half the number of unknowns. Nevertheless our visual systems reliably and effectively compute a decomposition of a single image into two images. In this paper we have argued that this perceptual decomposition may correspond to the most probable decomposition under a simple prior over images derived from natural scene statistics. We were surprised by the mileage we got out of the very simple prior we used: even though it only looks at two operators (gradients and cornerness), it can generate surprisingly powerful predictions. However, our experiments with real images show that this simple prior is not powerful enough. In future work we would like to add additional features. One way to do this is by defining features that look for "texture edges" and "texture corners" and measuring their statistics in real images. A second way to approach this is to use a full exponential family maximum likelihood algorithm (e.g. [11]) that automatically learns which operators to look at as well as the weights on the histograms.

References

[1] E.H. Adelson. Lightness perception and lightness illusions. In M. Gazzaniga, editor, The New Cognitive Neurosciences, 2000.
[2] W.T. Freeman and E.C. Pasztor. Learning to estimate scenes from images. In M.S. Kearns, S.A. Solla, and D.A. Cohn, editors, Adv. Neural Information Processing Systems 11. MIT Press, 1999.
[3] B.J. Frey, R. Koetter, and N. Petrovic. Very loopy belief propagation for unwrapping phase images. In Adv. Neural Information Processing Systems 14. 2001.
[4] S. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. PAMI, 11:674–693, 1989.
[5] B.A. Olshausen and D. J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[6] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int'l J. Comput. Vision, 40(1):49–71, 2000.
[7] E.P. Simoncelli. Statistical models for images: compression, restoration and synthesis. In Proc. Asilomar Conference on Signals, Systems and Computers, pages 673–678, 1997.
[8] E.P. Simoncelli. Bayesian denoising of visual images in the wavelet domain. In P. Müller and B. Vidakovic, editors, Wavelet Based Models, 1999.
[9] Y. Weiss. Deriving intrinsic images from image sequences. In Proc. Intl. Conf. Computer Vision, pages 68–75, 2001.
[10] Y. Weiss and W.T. Freeman. On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs. IEEE Transactions on Information Theory, 47(2):723–735, 2001.
[11] Song Chun Zhu, Ying Nian Wu, and David Mumford. Minimax entropy principle and its application to texture modeling. Neural Computation, 9(8):1627–1660, 1997.
Approximate Inference and Protein-Folding Chen Yanover and Yair Weiss School of Computer Science and Engineering The Hebrew University of Jerusalem 91904 Jerusalem, Israel {cheny,yweiss}@cs.huji.ac.il

Abstract

Side-chain prediction is an important subtask in the protein-folding problem. We show that finding a minimal energy side-chain configuration is equivalent to performing inference in an undirected graphical model. The graphical model is relatively sparse yet has many cycles. We used this equivalence to assess the performance of approximate inference algorithms in a real-world setting. Specifically, we compared belief propagation (BP), generalized BP (GBP) and naive mean field (MF). In cases where exact inference was possible, max-product BP always found the global minimum of the energy (except in a few cases where it failed to converge), while other approximation algorithms of similar complexity did not. On the full protein data set, max-product BP always found a lower energy configuration than the other algorithms, including a widely used protein-folding software package (SCWRL).

1 Introduction

Inference in graphical models scales exponentially with the number of variables. Since many real-world applications involve hundreds of variables, it has been impossible to utilize the powerful mechanism of probabilistic inference in these applications. Despite the significant progress achieved in approximate inference, some practical questions still remain open: it is not yet known which algorithm to use for a given problem, nor is it understood what the advantages and disadvantages of each technique are. We address these questions in the context of a real-world protein-folding application: the side-chain prediction problem. Predicting side-chain conformation given the backbone structure is a central problem in protein-folding and molecular design.
It arises both in ab-initio protein-folding (which can be divided into two sequential tasks: the generation of native-like backbone folds and the positioning of the side-chains upon these backbones [6]) and in homology modeling schemes (where the backbone and some side-chains are assumed to be conserved among the homologs but the configuration of the rest of the side-chains needs to be found).

Figure 1: Cow actin binding protein (PDB code 1pne, top) and a closer view of its 6 carboxyl-terminal residues (bottom-left). Given the protein backbone (black) and amino acid sequence, the native side-chain conformation (gray) is searched for. The problem representation as a graphical model for those carboxyl-terminal residues is shown in the bottom-right figure (nodes located at Cα atom positions, edges drawn in black).

In this paper, we show the equivalence between side-chain prediction and inference in an undirected graphical model. We compare the performance of BP, generalized BP and naive mean field on this problem, as well as comparing to a widely used protein-folding program called SCWRL.

2 The side-chain prediction problem

Proteins are chains of simpler molecules called amino acids. All amino acids have a common structure: a central carbon atom (Cα) to which a hydrogen atom, an amino group (NH2) and a carboxyl group (COOH) are bonded. In addition, each amino acid has a chemical group called the side-chain, bound to Cα. This group distinguishes one amino acid from another and gives it its distinctive properties. Amino acids are joined end to end during protein synthesis by the formation of peptide bonds. An amino acid unit in a protein is called a residue. The formation of a succession of peptide bonds generates the backbone (consisting of Cα and its adjacent atoms, N and C, of each residue), upon which the side-chains are hung (Figure 1). We seek to predict the configuration of all the side-chains relative to the backbone.
The standard approach to this problem is to define an energy function and use the configuration that achieves the global minimum of the energy as the prediction.

2.1 The energy function

We adopted the van der Waals energy function used by SCWRL [3], which approximates the repulsive portion of the Lennard-Jones 12-6 potential. For a pair of atoms, a and b, the energy of interaction is given by:

E(a, b) = \begin{cases} 0 & d > R_0 \\ -k_2 \frac{d}{R_0} + k_2 & R_0 \ge d \ge k_1 R_0 \\ E_{max} & k_1 R_0 > d \end{cases}   (1)

where E_max = 10, k_1 = 0.8254 and k_2 = E_max / (1 - k_1); d denotes the distance between a and b, and R_0 is the sum of their radii. Constant radii were used for the protein's atoms (Carbon: 1.6 Å, Nitrogen and Oxygen: 1.3 Å, Sulfur: 1.7 Å). For two sets of atoms, the interaction energy is the sum of the pairwise atom interactions. The energy surface of a typical protein in the data set has dozens to thousands of local minima.

2.2 Rotamers

The configuration of a single side-chain is represented by at most 4 dihedral angles (denoted χ1, χ2, χ3 and χ4). Any assignment of χ angles for all the residues defines a protein configuration. Thus the energy minimization problem is a highly nonlinear continuous optimization problem. It turns out, however, that side-chains have a small repertoire of energetically preferred conformations, called rotamers. Statistical analysis of those conformations in well-determined protein structures produces a rotamer library. We used a backbone-dependent rotamer library (by Dunbrack and Karplus, July 2001 version). Given the coordinates of the backbone atoms, the dihedral angles φ (defined, for the ith residue, by C_{i-1} - N_i - Cα_i - C_i) and ψ (defined by N_i - Cα_i - C_i - N_{i+1}) were calculated. The library then gives the typical rotamers for each side-chain and their prior probabilities. By using the library we convert the continuous optimization problem into a discrete one. The number of discrete variables is equal to the number of residues, and the number of possible values each variable can take lies between 2 and 81.
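The clipped repulsive term of equation 1 can be transcribed directly. In this sketch, the slope constant `K2 = EMAX / (1 - K1)` is an assumption chosen so that the linear ramp meets E = 0 at d = R0 and E = EMAX at d = K1·R0, consistent with the piecewise description; the atom representation is likewise our own.

```python
EMAX = 10.0                    # energy cap from the paper
K1 = 0.8254                    # onset of the cap, as a fraction of R0
K2 = EMAX / (1.0 - K1)         # assumed: makes the ramp hit EMAX at d = K1*R0

def vdw_energy(d, r0):
    """Clipped repulsive van der Waals term (equation 1): zero beyond r0,
    a linear ramp on [K1*r0, r0], and capped at EMAX for very close atoms."""
    if d > r0:
        return 0.0
    if d >= K1 * r0:
        return -K2 * d / r0 + K2
    return EMAX

def set_energy(atoms_a, atoms_b):
    """Interaction energy of two atom sets: sum of pairwise terms.
    An atom is ((x, y, z), radius); r0 is the sum of the two radii."""
    total = 0.0
    for pos_a, rad_a in atoms_a:
        for pos_b, rad_b in atoms_b:
            d = sum((u - v) ** 2 for u, v in zip(pos_a, pos_b)) ** 0.5
            total += vdw_energy(d, rad_a + rad_b)
    return total
```

For a carbon-nitrogen pair (radii 1.6 and 1.3, so R0 = 2.9), the energy is zero beyond 2.9 Å and saturates at 10 below about 2.39 Å.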
2.3 Graphical model

Since we have a discrete optimization problem and the energy function is a sum of pairwise interactions, we can transform the problem into a graphical model with pairwise potentials. Each node corresponds to a residue, and the state of each node represents the configuration of the side-chain of that residue. Denoting by {r_i} an assignment of rotamers for all the residues, then:

P(\{r_i\}) = \frac{1}{Z} e^{-\frac{1}{T} E(\{r_i\})} = \frac{1}{Z} e^{-\frac{1}{T}\left(\sum_i E(r_i) + \sum_{ij} E(r_i, r_j)\right)} = \frac{1}{Z} \prod_i \Psi_i(r_i) \prod_{i,j} \Psi_{ij}(r_i, r_j)   (2)

where Z is an explicit normalization factor and T is the system "temperature" (used as a free parameter). The local potential Ψ_i(r_i) takes into account the prior probability of the rotamer P_i(r_i) (taken from the rotamer library) and the energy of the interactions between that rotamer and the backbone:

\Psi_i(r_i) = P_i(r_i) \, e^{-\frac{1}{T} E(r_i, \text{backbone})}   (3)

Equation 2 requires multiplying Ψ_ij for all pairs of residues i, j, but note that equation 1 gives zero energy for atoms that are sufficiently far away. Thus we only need to calculate the pairwise interactions for nearby residues. To define the topology of the undirected graph, we examine all pairs of residues i, j and check whether there exists an assignment r_i, r_j for which the energy is nonzero. If it exists, we connect nodes i and j in the graph and set the potential to be:

\Psi_{ij}(r_i, r_j) = e^{-\frac{1}{T} E(r_i, r_j)}   (4)

Figure 1 shows a subgraph of the undirected graph. The graph is relatively sparse (each node is connected to nodes that are close in 3D space) but contains many small loops. A typical protein in the data set gives rise to a model with hundreds of loops of size 3.

3 Experiments

When the protein was small enough, we used the max-junction tree algorithm [1] to find the most likely configuration of the variables (and hence the global minimum of the energy function). Murphy's implementation of the JT algorithm in his BN toolbox for Matlab was used [10].
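The energy-to-potential conversion of equations 2-4 is mechanical; the sketch below uses made-up rotamer priors and energy tables, not real rotamer-library data.

```python
import math

def build_potentials(prior, local_E, pair_E, T=1.0):
    """Assemble the potentials of equations 2-4 from rotamer energies.
    prior[i][r]          : rotamer prior P_i(r) from the library
    local_E[i][r]        : E(r_i, backbone)
    pair_E[(i, j)][r][s] : pairwise energy E(r_i, r_j), stored only for
                           residue pairs close enough to interact (the edges)."""
    psi_local = [[p * math.exp(-e / T) for p, e in zip(pr, le)]
                 for pr, le in zip(prior, local_E)]               # equation 3
    psi_pair = {edge: [[math.exp(-e / T) for e in row] for row in table]
                for edge, table in pair_E.items()}                # equation 4
    return psi_local, psi_pair

# Two residues, two rotamers each (illustrative numbers only):
prior = [[0.7, 0.3], [0.5, 0.5]]
local_E = [[0.0, 1.0], [0.5, 0.0]]
pair_E = {(0, 1): [[0.0, 10.0], [10.0, 0.0]]}   # clashing rotamer pairs penalized
psi_local, psi_pair = build_potentials(prior, local_E, pair_E)
```

Zero-energy (non-clashing) rotamer pairs get potential 1, and strongly clashing pairs get an exponentially small potential, so the product in equation 2 favors low-energy assignments.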
The approximate inference algorithms we tested were loopy belief propagation (BP), generalized BP (GBP) and naive mean field (MF). BP is an exact and efficient local message passing algorithm for inference in singly connected graphs [15]. Its essential idea is replacing the exponential enumeration (either summation or maximization) over the unobserved nodes with a series of local enumerations (a process called "elimination" or "peeling"). Loopy BP, that is, applying BP to multiply connected graphical models, may not converge due to the circulation of messages through the loops [12]. However, many groups have recently reported excellent results using loopy BP as an approximate inference algorithm [4, 11, 5]. We used an asynchronous update schedule and ran for 50 iterations or until numerical convergence. GBP is a class of approximate inference algorithms that trade complexity for accuracy [15]. A subset of GBP algorithms is equivalent to forming a graph from clusters of nodes and edges in the original graph and then running ordinary BP on the cluster graph. We used two large clusters. Both clusters contained all nodes in the graph, but each cluster contained only a subset of the edges. The first cluster contained all edges between residues for which the difference of their indices is less than a constant k (typically 6). All other edges were included in the second cluster. It can be shown that the cluster graph BP messages can be computed efficiently using the JT algorithm. Thus this approximation tries to capture dependencies between a large number of nodes in the original graph while maintaining computational feasibility. The naive MF approximation tries to approximate the joint distribution in equation 2 as a product of independent marginals q_i(r_i). The marginals q_i(r_i) can be found by iterating:

q_i(r_i) \leftarrow \alpha \, \Psi_i(r_i) \exp\left(\sum_{j \in N_i} \sum_{r_j} q_j(r_j) \log \Psi_{ij}(r_i, r_j)\right)   (5)

where α denotes a normalization constant and N_i denotes all nodes neighboring i.
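The mean field update of equation 5 can be sketched on a toy pairwise model; the two-node model and its potentials below are invented for illustration, and the update is done in log space for numerical safety.

```python
import math, random

def mean_field(psi_local, psi_pair, neighbors, n_iter=50, seed=0):
    """Naive mean-field iteration (equation 5) on a pairwise model.
    psi_local[i][r]        : local potential Psi_i(r)
    psi_pair[(i, j)][r][s] : pairwise potential Psi_ij(r, s), stored for
                             both orderings of each edge
    neighbors[i]           : list of neighbors of node i
    Returns the approximate marginals q[i][r]."""
    rnd = random.Random(seed)
    q = [[p / sum(row) for p in row] for row in psi_local]  # init q_i to Psi_i
    order = list(range(len(psi_local)))
    for _ in range(n_iter):
        rnd.shuffle(order)                                  # random update order
        for i in order:
            logs = []
            for r in range(len(psi_local[i])):
                val = math.log(psi_local[i][r])
                for j in neighbors[i]:
                    val += sum(q[j][s] * math.log(psi_pair[(i, j)][r][s])
                               for s in range(len(q[j])))
                logs.append(val)
            m = max(logs)
            unnorm = [math.exp(v - m) for v in logs]
            z = sum(unnorm)
            q[i] = [u / z for u in unnorm]
    return q

# Toy model: two binary nodes with an attractive coupling; node 1's local
# potential prefers state 0, which should pull node 0 toward state 0 too.
psi_local = [[1.0, 1.0], [4.0, 1.0]]
coupling = [[3.0, 1.0], [1.0, 3.0]]
psi_pair = {(0, 1): coupling, (1, 0): coupling}
q = mean_field(psi_local, psi_pair, neighbors=[[1], [0]])
```

After a few sweeps both marginals concentrate on state 0, as the attractive coupling propagates node 1's local preference to node 0.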
We initialized q_i(r_i) to Ψ_i(r_i) and chose a random update ordering for the nodes. For each protein we repeated this minimization 10 times (each time with a different update order) and chose the local minimum that gave the lowest energy.

In addition to the approximate inference algorithms described above, we also compared the results to two approaches in use in side-chain prediction: the SCWRL and DEE algorithms. The Side-Chain placement With a Rotamer Library (SCWRL) algorithm is considered one of the leading algorithms for predicting side-chain conformations [3]. It uses the energy function described above (equation 1) and a heuristic search strategy to find a minimal energy conformation in a discrete conformational space (defined using the rotamer library). Dead end elimination (DEE) is a search algorithm that tries to reduce the search space until it becomes suitable for an exhaustive search. It is based on a simple condition that identifies rotamers that cannot be members of the global minimum energy conformation [2]. If enough rotamers can be eliminated, the global minimum energy conformation can be found by an exhaustive search of the remaining rotamers.

The various inference algorithms were tested on a set of 325 X-ray crystal structures with resolution better than or equal to 2Å, R factor below 20%, and length up to 300 residues. One representative structure was selected from each cluster of homologous structures (50% homology or more). Protein structures were acquired from the Protein Data Bank site (http://www.rcsb.org/pdb). Many proteins contain Cysteine residues, which tend to form strong disulfide bonds with each other. A standard technique in side-chain prediction (used, e.g., in SCWRL) is to first search for possible disulfide bonds and, if they exist, to freeze these residues in that configuration. This essentially reduces the search space. We repeated our experiments with and without freezing the Cysteine residues.
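The elimination condition of DEE [2] mentioned above can be sketched as a single pass over the rotamers of each residue; repeated passes would shrink the space further. The data layout is illustrative, not from the paper.

```python
import numpy as np

def dee_pass(E_node, E_pair):
    """One pass of the dead-end elimination condition of Desmet et al. [2]:
    rotamer r at residue i can be discarded if some alternative t satisfies
        E(r) + sum_j min_s E(r, s)  >  E(t) + sum_j max_s E(t, s),
    i.e. r's best possible total energy is worse than t's worst possible
    one, so r cannot be part of the global minimum energy conformation.
    Returns, per residue, a boolean mask of surviving rotamers."""
    alive = {}
    for i in E_node:
        lo = E_node[i].astype(float).copy()  # best-case total per rotamer
        hi = E_node[i].astype(float).copy()  # worst-case total per rotamer
        for (a, b), E in E_pair.items():
            if a == i:
                lo += E.min(axis=1)
                hi += E.max(axis=1)
            elif b == i:
                lo += E.min(axis=0)
                hi += E.max(axis=0)
        # keep r only if its best case does not exceed the best worst case
        alive[i] = lo <= hi.min()
    return alive
```

Since lo[r] ≤ hi[r] for every rotamer, the rotamer achieving the minimum worst case can never eliminate itself, so at least one rotamer always survives at each residue.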
Side-chain to backbone clashes appear to be much more severe than side-chain to side-chain clashes, since the backbone is more rigid than the side chains and its structure is assumed to be known. Therefore, the parameter R was introduced into the pairwise potential equation, as follows:

Ψ_ij(r_i, r_j) = (e^(-E(r_i, r_j)/T))^(1/R)    (6)

Using R > 1 assigns an increased weight to side-chain to backbone interactions over side-chain to side-chain interactions. We repeated our experiments both with R = 1 and R > 1. It is worth mentioning that SCWRL implicitly adopts a weighting assumption that assigns an increased weight to side-chain to backbone interactions.

4 Results

In our first set of experiments we wanted to compare approximate inference to exact inference. In order to make exact inference possible we restricted the possible rotamers of each residue. Out of the 81 possible states we chose a subset whose local probability accounted for 90% of the local probability. We constrained the size of the subset to be at least 2. The resulting graphical model retains only a small fraction of the loops occurring in the full graphical model (about 7% of the loops of size 3). However, it still contains many small loops, and in particular, dozens of loops of size 3. On these graphs we found that ordinary max-product BP always found the global minimum of the energy function (except in a few cases where it failed to converge).
Figure 2: Sum-product BP (top-left), naive MF (top-right) and SCWRL (bottom-left) energies are always higher than or equal to the max-product BP energy. Convergence rates for the various algorithms are shown in the bottom-right chart.

Sum-product BP failed to find the sum-JT conformation in only 1% of the graphs. In contrast, the naive MF algorithm found the global minimum conformation for only 38% of the proteins and on only 17% of the runs. The GBP algorithm gave the same result as the ordinary BP but it converged more often (e.g. 99.6% and 98.9% for sum-product GBP and BP, respectively).

In the second set of experiments we used the full graphical models. Since exact inference is impossible we can only compare the relative energies found by the different approximate inference algorithms. Results are shown in Figure 2. Note that, when it converged, max-product BP always found a lower energy configuration compared to the other algorithms. This finding agrees with the observation that the max-product solution is a "neighborhood optimum" and therefore guaranteed to be better than all other assignments in a large region around it [13]. We also tried decreasing T, the system "temperature", for sum-product (in the limit of zero temperature it should approach max-product). In 96% of the cases, using a lower temperature (T = 0.3 instead of T = 1) indeed gave a lower energy configuration. Even at this reduced temperature, however, max-product always found a lower energy configuration. All algorithms converged in more than 90% of the cases. However, sum-product converged more often than max-product (Figure 2, bottom-right). Decreasing the temperature resulted in a lower convergence rate for the sum-product BP algorithm (e.g. 95.7% compared to 98.15% in full-size graphs using disulfide bonds). It should be mentioned that SCWRL failed to converge on a single protein in the data set.
Applying the DEE algorithm to the side-chain prediction graphical models dramatically decreased the size of the conformational search space, though in most cases the resulting space was still infeasible. Moreover, max-product BP was indifferent to that space reduction: it failed to converge for the same models and, when it converged, found the same conformation.

SCWRL buried-residue success rates: χ1: 85.9%, χ2: 62.2%, χ3: 40.3%, χ4: 25.5%.

Figure 3: Inference results - success rate. SCWRL buried-residue success rate subtracted from sum-product BP (light gray), max-product BP (dark gray) and MF (black) rates when equally weighting side-chain to backbone and side-chain to side-chain clashes (left) and when assigning increased weight to side-chain to backbone clashes (right).

4.1 Success rate

In comparing the performance of the algorithms, we have focused on the energy of the found configuration, since this is the quantity the algorithms seek to optimize. A more realistic performance measure is: how well do the algorithms predict the native structure of the protein? The dihedral angle χ_i is deemed correct when it is within 40° of the native (crystal) structure and χ1 to χ_(i-1) are correct. The success rate is defined as the portion of correctly predicted dihedral angles. The success rates of the conformations inferred by both max- and sum-product outperformed SCWRL's (Figure 3). For buried residues (residues with relative accessibility lower than 30% [9]) both algorithms added 1% to SCWRL's χ1 success rate. Increasing the weight of side-chain to backbone interactions over side-chain to side-chain interactions resulted in better success rates (Figure 3, right). Freezing Cysteine residues to allow the formation of disulfide bonds slightly increased the success rate.

5 Discussion

Recent years have shown much progress in approximate inference.
We believe that the comparison of different approximate inference algorithms is best done in the context of a real-world problem. In this paper we have shown that for a real-world problem with many loops, the performance of belief propagation is excellent. In problems where exact inference was possible, max-product BP always found the global minimum of the energy function, and in the full protein data set, max-product BP always found a lower energy configuration compared to the other algorithms tested.

SCWRL is considered one of the leading algorithms for modeling side-chain conformations. However, in the last couple of years several groups have reported better results due to a more accurate energy function [7], a better search algorithm [8], or an extended rotamer library [14]. As shown, by using inference algorithms we achieved low energy conformations compared to existing algorithms. However, this leads only to a modest increase in prediction accuracy. Using an energy function which gives a better approximation to the "true" physical energy (and, particularly, assigns the lowest energy to the native structure) should significantly improve the success rate. A promising direction for future research is to try to learn the energy function from examples. Inference algorithms such as BP may play an important role in the learning procedure.

References

[1] R. Cowell. Introduction to inference in Bayesian networks. In Michael I. Jordan, editor, Learning in Graphical Models. Morgan Kaufmann, 1998.
[2] Johan Desmet, Marc De Maeyer, Bart Hazes, and Ignace Lasters. The dead-end elimination theorem and its use in protein side-chain positioning. Nature, 356:539-542, 1992.
[3] Roland L. Dunbrack, Jr. and Martin Karplus. Backbone-dependent rotamer library for proteins: Application to side-chain prediction. J. Mol. Biol., 230:543-574, 1993. See also http://www.fccc.edu/research/labs/dunbrack/scwrl/.
[4] William T. Freeman and Egon C. Pasztor. Learning to estimate scenes from images. In M.S.
Kearns, S.A. Solla, and D.A. Cohn, editors, Adv. Neural Information Processing Systems 11. MIT Press, 1999.
[5] Brendan J. Frey, Ralf Koetter, and Nemanja Petrovic. Very loopy belief propagation for unwrapping phase images. In Adv. Neural Information Processing Systems 14. MIT Press, 2001.
[6] Enoch S. Huang, Patrice Koehl, Michael Levitt, Rohit V. Pappu, and Jay W. Ponder. Accuracy of side-chain prediction upon near-native protein backbones generated by ab initio folding methods. Proteins, 33(2):204-217, 1998.
[7] Shide Liang and Nick V. Grishin. Side-chain modeling with an optimized scoring function. Protein Sci, 11(2):322-331, 2002.
[8] Loren L. Looger and Homme W. Hellinga. Generalized dead-end elimination algorithms make large-scale protein side-chain structure prediction tractable: implications for protein design and structural genomics. J Mol Biol, 307(1):429-445, 2001.
[9] Joaquim Mendes, Cláudio M. Soares, and Maria Arménia Carrondo. Improvement of side-chain modeling in proteins with the self-consistent mean field theory method based on an analysis of the factors influencing prediction. Biopolymers, 50(2):111-131, 1999.
[10] Kevin Murphy. The Bayes Net Toolbox for Matlab. Computing Science and Statistics, 33, 2001.
[11] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: an empirical study. In Proceedings of Uncertainty in AI, 1999.
[12] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[13] Yair Weiss and William T. Freeman. On the optimality of solutions of the max-product belief propagation algorithm. IEEE Transactions on Information Theory, 47(2):723-735, 2001.
[14] Zhexin Xiang and Barry Honig. Extending the accuracy limits of prediction for side-chain conformations. J Mol Biol, 311(2):421-430, 2001.
[15] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. In G.
Lakemeyer and B. Nebel, editors, Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann, 2002.
Replay, Repair and Consolidation

Szabolcs Káli
Institute of Experimental Medicine
Hungarian Academy of Sciences
Budapest 1450, Hungary
kali@koki.hu

Peter Dayan
Gatsby Computational Neuroscience Unit
University College London
17 Queen Square, London WC1N 3AR, U.K.
dayan@gatsby.ucl.ac.uk

Abstract

A standard view of memory consolidation is that episodes are stored temporarily in the hippocampus, and are transferred to the neocortex through replay. Various recent experimental challenges to the idea of transfer, particularly for human memory, are forcing its re-evaluation. However, although there is independent neurophysiological evidence for replay, short of transfer, there are few theoretical ideas for what it might be doing. We suggest and demonstrate two important computational roles associated with neocortical indices.

1 Introduction

Particularly since the analysis of subject HM,1 the suggestion that human memories would consolidate2 has gripped experimental and theoretical communities. The idea is that storage of some sorts of knowledge (notably declarative information) involves a two-stage process, with memories moving from an initial, temporary home (usually taken to be the hippocampus), which offers fast-acting but short-lived plasticity, into a final, permanent resting place (usually the neocortex), whose learning and forgetting are much slower. Various sources of evidence have been adduced in favor of this proposition.
First, it has been suggested that for patients (or animal subjects) who have suffered insults to the hippocampus, recent memories are more compromised than older ones, suggesting that they have yet to be consolidated to cortex.3,4 Second, the same patients suffer from anterograde amnesia (that is, they cannot lay down new memories), even though many neocortical areas are palpably functioning, and procedural storage (including aversive conditioning and skill learning) works (more) normally.5 Third, starting with the seminal work of Marr,6 who (possibly by a mis-calculation7) suggested that the hippocampus was just large enough a dynamic RAM as to store one day's events, a variety of theoretical treatments has suggested the possible characteristics and advantages of two-stage procedures.8–10 This is widely regarded as reaching its apogee in the work of McClelland et al,11 who performed a careful computational analysis of fast and slow learning in connectionist networks. Fourth, and perhaps most compelling, an obvious substrate for replay to cortex is provided by the neurophysiologically observed12–14 reactivation during slow wave and REM sleep of patterns of (rat) hippocampal neuronal firing observed during times when the subject is awake and behaving, together with evidence of at least some coordination between hippocampal and neocortical states during this reactivation.15 The first and third of these evidentiary foundations are currently under active debate, especially for episodic memories (ie autobiographical memories for happenings).
Solid evidence that hippocampal damage really spares memories for distant events compared with those for recent ones is extremely sparse, and the relevance of infra-human studies is put into question by the orders-of-magnitude differences in the memory time-scales shown between humans and animals.16 The modeling studies are also more ambiguous than they might seem, since their most convincing focus is on the tribulations of catastrophic interference.17 That is, slow learning is necessary in systems with rich distributed or population coding because changes in synaptic efficacies occasioned by incorporating new information can easily overwrite the neural substrate for the storage of old information (the hoary stability-plasticity dilemma18). This catastrophic interference can be avoided by re-storing old patterns (or something equivalent10,19) at the same time as storing new information. Thus, according to these schemes, patterns are stored wholesale in the hippocampus when they first appear, and are continually read back to cortex to cause plasticity along with the new information. However, if the hippocampus is permanently required to prevent a catastrophe, then, first, there is no true consolidation: if neocortical plasticity is not inhibited by hippocampal damage,20 then its integrity is permanently required to prevent degradation; and, second, what is the point of consolidation – couldn’t the hippocampus suffice by itself? This is particularly compelling in the case of episodes, since they are intrinsically isolated events. We came to a realization of this through development of our own model for consolidation,21 whose behavior convinced us of a flaw in our thinking. This second point lies exactly at the heart of the perspective espoused by Nadel and Moscovitch,16 amongst others. They regard the hippocampus as the final point of storage for all episodic memory, and permanently required for its recall. 
Of course, this idea equally well accounts for the second strand of evidence above about anterograde amnesia. If the hippocampus stores patterns permanently, what could the point be of replay? Here, we consider two roles, both associated with concerns about the pattern matching process at the heart of retrieval from the hippocampus. One is a new take on catastrophic interference, arguing that replay is necessary to keep the patterns stored in the hippocampus in register with the evolving cortical representation, so that they can still be recalled (and interpreted) correctly even though the cortical code may have changed since they were stored. The other computational role for replay is a new take on indexing, arguing that the cortical patterns that should lead to retrieval of a hippocampal memory are not only close syntactic relatives of the pattern that was originally stored, ie patterns whose actual neural code is similar, but also patterns that are close semantic relatives, ie patterns that are closely related via the network of semantic relationships that is stored in neocortex. In this scheme, the role of replay is building an index to the memory, effectively a form of recognition model.22 We first discuss briefly our existing model of consolidation,21 and its failings. Section 3 treats the repair of hippocampal indexing in the light of the vicissitudes of semantic change. Section 4 sketches our account of the semantic elaboration of the index. 
2 Semantic and Episodic Memory

Figure 1 shows our existing account of the interaction between the neocortex and the hippocampus in semantic and episodic memory.21 The neocortex is separated into 'lower' areas (A, B, and C, with activities x_A, x_B, x_C) which are connected via bi-directional, variable weights with an entorhinal/parahippocampal (EP) area (with activity y), and collectively act as a restricted Boltzmann machine (RBM), trained in an unsupervised manner, using contrastive divergence.23 It learns a model of the statistical relationships amongst the inputs, so that it can produce samples from conditional probability distributions such as P(x_B, x_C | x_A).
The conventional interpretation for this is as a model of semantic memory – the generic facts of the world, stripped of information about the time and place and other circumstances under which they were learnt. However, the individual patterns on which the semantic learning is based are treated as episodic patterns, which should be recalled wholesale. One main contribution of that work was to put episodic and semantic information into such particular correspondence.
Figure 1: (A) Model architecture. All units in neocortical areas A, B, and C are connected to all units in area E/P through bidirectional, symmetric weights, but connections between units in the input layer are restricted to the same cortical area. Each neocortical area contains 100 binary units. The hippocampus (HC) is not directly implemented, but it can influence and store the patterns in EP. All communication between the HC and the input areas is via area EP. (B) The consolidation of episodic memories. Recall performance on specific (episodic) patterns as a function of time between the initial presentation of the episodic pattern and testing (or, equivalently, time between training and lesion in hippocampals) in the simulations. (C) Extinction of an episode due to semantic training, in the isolated neocortical network trained to asymptotic performance on the episodic pattern (thin line), and directly after the removal of the hippocampus from the full network, for a pattern which has been hippocampally "consolidated" for 250,000 presentations (thick line).

In this previous model, the hippocampus acts as a fast-learning repository for the EP representation of patterns that have been (relatively recently) experienced, and plays two roles: aiding recall and training the neocortex. The hippocampus improves recall by performing pattern completion on the EP representations induced by partial or noisy inputs x, thus finding the nearest matching stored y. In turn, this, through neocortical semantic knowledge, engenders recall of an appropriate x. The hippocampus trains the neocortex in an off-line (sleep) mode, reporting the patterns that it has stored to the neocortex to give the latter's incremental plasticity the opportunity to absorb the new information.
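The hippocampal pattern-completion role just described can be caricatured as a one-shot, nearest-neighbor store over EP patterns. This is a toy stand-in: the paper does not implement the hippocampus explicitly, and the match threshold is an illustrative parameter (a criterion of this kind, a maximum distance of 20, appears later in the Figure 2 caption).

```python
import numpy as np

class HippocampalStore:
    """One-shot store of binary EP patterns with nearest-neighbor
    pattern completion (a toy sketch of the hippocampal module)."""
    def __init__(self, threshold=20):
        self.traces = []          # stored EP patterns
        self.threshold = threshold  # max Hamming distance for a match

    def store(self, y):
        """Store an EP pattern wholesale, in one shot."""
        self.traces.append(np.asarray(y).copy())

    def complete(self, y_cue):
        """Return the nearest stored trace if it lies within threshold,
        otherwise return the cue unchanged (no hippocampal match)."""
        if not self.traces:
            return np.asarray(y_cue)
        d = [int(np.sum(t != y_cue)) for t in self.traces]
        k = int(np.argmin(d))
        return self.traces[k].copy() if d[k] < self.threshold else np.asarray(y_cue)
```

The completion step is what turns a noisy EP code into the nearest stored episode, which the semantic (generative) weights then turn back into a full low-level pattern.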
Given hippocampal damage, patterns that have been repeatedly replayed to cortex by the hippocampus (ie older patterns) have a greater chance of being recalled correctly through neocortical inference than patterns that were learned more recently, and are therefore still dependent for their recall on the integrity of the hippocampus. Figure 1B shows the basic consolidation phenomenon in this model. The upper (thin) curve shows how well on average the full model can recall whole items from a partial cue as a function of time since the item was stored; the lower (thick) curve shows the same in the case that the hippocampal contribution is eliminated immediately before testing. This is the standard inverted U-shaped curve of graded retrograde amnesia, with distant memories spared compared with recent ones. However, figure 1C reveals what is really going on. Both curves show how the neocortical network forgets particular episodic patterns as a function of continued semantic training. Thick/thin lines are with/without prior consolidation using the hippocampus. Consolidation clearly does not help the longevity of the memory – if anything, it actually impedes it. This is essentially because the cortical code changes slowly over presentations. Thus, first, the hippocampus is mandatorily required if memories are to be preserved – the forgetting curve for the normals in figure 1B is actually dominated by hippocampal forgetting. Second, the inverted U-shaped curve in figure 1B arises because testing happens immediately after hippocampal removal. The same curves plotted for successive times after removal would show catastrophic memory failure. Memories might turn out to be stabilized in the face of hippocampal damage in other ways.21 For instance, cortical plasticity might be suppressed, if the hippocampus reports unfamiliarity as a plasticizing signal. 
This is somewhat unlikely, since various forms of continued plasticity remain active.3,20 Alternatively, there might be synaptic stabilizing mechanisms in the cortex such that synapses come never to change. This is certainly possible, but does not explain how recall can survive changes in the cortical code. In sum, the model turns out to illustrate the key problem with the standard theory of memory transfer for episodes. We are thus forced to start from the possibility that the hippocampus might indeed be a permanent repository, and reconsider the issue of replay and consolidation in the resulting light. In this new scheme, there is still a critical role for replay, but one that is focused on the indexing relationship between neocortical and hippocampal representations rather than on writing into cortex the contents of the hippocampus.

3 Maintaining Access to Episodes

Consider the fate of an episode that is stored in the hippocampus. In a hierarchical network where the hippocampus is directly connected only to the topmost areas, successful recall of such an episode depends on the correspondence between low- and high-level cortical areas embodied by the neocortical network. This dependence actually has two related components. First, the high-level neocortical representation of the recall cue needs to be effective in activating the correct hippocampal memory trace; second, the high-level representation activated by hippocampal recall should effect the recall of the appropriate components of the corresponding episode in lower level areas as well. These are both aspects of indexing. The neocortical network is the substrate of neocortical learning, reflecting, for instance, refinement of the existing semantic representation, changes in input statistics, or acquisition of a new semantic domain. Such plasticity may disrupt the recall of stored episodic patterns by changing the correspondence between the input areas and EP.
Thus, if the brain is still to be able to recall hippocampally stored episodes, it either needs to maintain the correspondence between the low-level and EP representations of the episodes by restricting neocortical learning (achieved in the previous model by having the hippocampus replay its old episodic patterns along with the new semantic patterns governing continued neocortical plasticity), or it needs to update the connections between the hippocampus and EP such that the hippocampally stored pattern continues to match the EP representation of the input pattern corresponding to the episode. The first of these possibilities may restrict the learning abilities of the neocortical network. However, replay can be used to allow the connections into and out of the hippocampus to track the changing neocortical representational code. In order to assess the effect of neocortical learning on the recall of previously stored episodes, either in the presence or absence of replay, the following paradigm was employed. We started training the neocortical network by presenting to the input areas random combinations of valid patterns (20 independently generated random binary patterns for each area). After a moderate amount of such general training (10,000 pattern presentations total), the EP representations of particular input patterns were associated with corresponding stored hippocampal traces, forming a set of stored episodes. The quality of recall for these episodes was then monitored while general training continued. Figure 2A shows as a function of the length of general semantic training the percentage of correct recall for the episodes stored after 10,000 presentations. The main plot is an average over all episodes; the smaller plots show some individual episodes. Clearly, neocortical learning comes to erase the route to recall, even though the episode remains perfectly stored in the hippocampus throughout. 
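The recall test behind these measurements (spelled out in the Figure 2 caption: degrade one input area, pattern-complete in EP, then settle by Gibbs sampling with the cue areas clamped) can be sketched as follows. Here encode, decode, complete_fn and gibbs_step stand for the RBM up-pass, down-pass, hippocampal completion and one full round of Gibbs sampling; all four are assumed interfaces, not the paper's code.

```python
import numpy as np

def assess_recall(episode_areas, cue_mask, encode, decode, complete_fn,
                  gibbs_step, n_gibbs=20, match_threshold=5, rng=None):
    """Recall test sketch. episode_areas: list of binary patterns, one
    per input area. cue_mask marks areas given as cue; the remaining
    areas are replaced by random patterns, as in the paper's procedure."""
    rng = np.random.default_rng() if rng is None else rng
    # degrade the episode: randomize the non-cue areas
    probe = [a.copy() if keep else rng.integers(0, 2, size=a.shape)
             for a, keep in zip(episode_areas, cue_mask)]
    # up-pass, hippocampal completion in EP, then down-pass
    state = decode(complete_fn(encode(probe)))
    for _ in range(n_gibbs):  # settle with the cue areas clamped
        sampled = gibbs_step(state)
        state = [p if keep else s
                 for p, s, keep in zip(probe, sampled, cue_mask)]
    distance = sum(int(np.sum(s != a))
                   for s, a in zip(state, episode_areas))
    return distance < match_threshold
```

The thresholds (completion within distance 20, match within distance 5, 20 Gibbs iterations) are the values reported in the Figure 2 caption.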
Figure 2: How semantic training affects episodic recall for patterns stored after the first 10,000 presentations (A) without replay and (D) with the correspondence between hippocampal and neocortical representations updated during off-line replay. The larger graphs are averages over all stored episodes, while the smaller graphs are for individual episodes. Recall was assessed by presenting partial episodic patterns (the original activations replaced by random patterns in one of the input areas), performing hippocampal pattern completion in EP if the distance from a stored EP representation was less than 20, and then performing 20 full iterations of Gibbs sampling in the neocortical network with the cue areas clamped. A resulting distance of less than 5 from the target pattern was considered a match. (B) and (C) analyze the reasons why episodic recall breaks down in (A). (B) shows how the EP representation of stored episodes drifts away from the original stored patterns. (C) shows how well recall works if it starts from the stored EP representation of the episode.

Figure 2B,C indicate the reasons for this behavior. Figure 2B shows that semantic learning after the storage of the episode causes the EP representation of the episode to move away from the version with which the stored hippocampal trace is associated. The magnitude of this change is such that, eventually, even the full original episode may fail to activate the corresponding hippocampal memory trace.
The effect of representational change on hippocampally directed recall in the input areas is milder in our case, as seen in Figure 2C; provided that the correct hippocampal trace does get activated, the full episode can be successfully recalled most of the time. However, this component accounts for the relatively slower initial rise of episodic recall in Figure 2A (compare with Figure 2D), as well as some of the variability between patterns in Figure 2A (data not shown). In the “replay” condition, the general training was interleaved with epochs of hippocampally initiated replay, assumed to take place during sleep. Within these epochs, the memory traces stored in the hippocampus get activated at random, which leads to the reactivation of the associated EP pattern, which in turn reactivates the input areas according to the existing semantic mapping. The resulting pattern may be different from the one that initially gave rise to the stored episode, due to subsequent changes in the neocortical connections. However, assuming that the neocortical semantic representation has not changed fundamentally since the last time that particular episode was replayed (or when it was established), the input representation resulting from replay should be close to the current low level representation of that particular episode. Indeed, maintaining this representational proximity exactly sets the requirement for the frequency of replay of the episodes. As in our previous model, we assume that the local connections within each neocortical area implement a local attractor structure, which, in the absence of feedforward activation, restricts activity patterns within that area to those that correspond to valid input patterns. These local attractors turn feedback activation which is close to a valid pattern (namely, the original episode) into an exact version of that pattern. 
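This replay cycle, together with the re-association of each trace with its up-to-date EP code discussed below, can be sketched as a simple maintenance loop. decode, encode and settle_local stand for the EP-to-input mapping, the input-to-EP mapping and the within-area attractor dynamics; all are assumed interfaces, not the paper's code.

```python
import numpy as np

def repair_index(traces, decode, encode, settle_local):
    """Off-line maintenance of the episodic index, sketched. For each
    stored hippocampal trace: reactivate the input areas through the
    current semantic mapping (decode), let the local attractors snap
    each area onto a valid pattern (settle_local), then map back up
    (encode) and re-register the trace with the up-to-date EP code."""
    for k, y_old in enumerate(traces):
        areas = [settle_local(a) for a in decode(y_old)]  # replay + cleanup
        traces[k] = encode(areas)                         # re-register index
    return traces
```

Run periodically (here, during the sleep epochs interleaved with general training), this keeps the hippocampal keys in register with the drifting cortical code.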
Such an off-line reconstruction of the low-level representation of stored episodes may then support a wide variety of memory processes (including the previous model's focus on gradually incorporating the information carried by that episode into the neocortical knowledge base11,21). Here we focus on its use for maintenance of the episodic index. To this end, starting from the reconstructed episode, the semantic correspondence between the different levels is employed in the feedforward direction in order to determine the up-to-date EP representation of the episode. This EP pattern is then associated with the stored hippocampal episode which initiated the replay, so that the hippocampal and input level representations of the episode are again in register. Figure 2D demonstrates the efficacy of replay: the hippocampally stored episode now remains tied to the (shifting) EP representation of the episode, and episodic recall stays at high levels despite substantial changes in the neocortical network.

4 Index Extension

Another important potential role for replay is extending the semantic aspects of the indexing scheme. It should be possible to retrieve episodic memories on the basis of all input patterns to which they are closely related through the network of cortical semantic knowledge. At present, this can happen only if the cortex produces similar EP codes for all those input patterns that are semantically related. However, requiring that all semantic proximity be coded by syntactic proximity in essentially one single layer is far too stringent a requirement. Rather, we should expect that the bulk of semantic information lives in synapses that are invisible to this layer, ie connections within and between lower layers, and this information must also influence indexing. One way to extend semantic indexing involves on-line sampling.
That is, probabilistic updating in the cortical semantic network, starting from a given input pattern, is the canonical way of exploring the semantic neighborhood of an input. One can imagine doing this in an on-line manner, spurred by an input. Over the course of sampling, the cortical pattern and its EP code change together, providing the opportunity for a match to be made between the EP activity and the contents of episodic memory. These sampling dynamics would allow the recall of semantically relevant episodes, even if their explicit code is rather distant. The role for replay in this process is to allow the semantic index to be extended through off-line rather than on-line sampling, starting from the episodic patterns stored in the hippocampus. It is thus analogous to Sutton’s24 use of replay in his DYNA architecture, in which an internal model of a Markov decision process is used to erase inconsistencies in a learned value function, and also to the wake-sleep algorithm’s22 use of sleep sampling to learn a recognition model. For the latter, off-line sampling ensures that inputs can be mapped, using a feedforward network, into codes associated with a generative model, rather than relying on sluggish statistical or dynamical methods for inverting the generative model, such as Gibbs sampling or its mean-field approximations. The main requirement is for a further plastic layer between EP and CA3 (presumably the perforant path), so that when replay based on an episode leads to a semantically, but not syntactically, related pattern, the EP code for that pattern can induce hippocampal recall of the episode. Figure 3 illustrates this use of replay in a highly simplified case (subject to the limitations of the RBM). Here, there are 3 modules of units, each with a set of possible patterns, and a semantic structure such that
the pattern chosen in each module constrains the pattern chosen in the next module (with wrap-around), independent of the choice in the remaining module. Figure 3A shows the covariance matrix of the activities of the EP units over the possible input patterns (arranged lexicographically). The relatedness of the EP representations of related patterns is clear in the rich structure of this matrix – this shows the extent of the explicit code learnt by the RBM. However, this code does not make indexing perfect. Imagine that two patterns, E1 and E2, have been stored as episodic patterns; that is, their EP representations are stored in the hippocampus and are available for recall and replay. We may expect to retrieve E1 from each of its close semantic relatives. Figure 3B shows the explicit proximity (inverse square distance, see caption) of the EP representations of the input patterns to the EP representation of E1. Although its semantic relatives are close, so are many other patterns that are not nearly so closely semantically related.

Figure 3: Index expansion. Plots relate to the 3-module network. Conventions: E1 and E2 denote the two stored episodic patterns; three-digit labels such as 111 or 221 denote input patterns, one digit per module. In (C), the ‘failures’ entry counts samples that are not within a small Hamming distance of any valid input pattern. For this simulation, for reasons of simulation time, the input patterns were chosen to be orthogonal; the hidden unit representations were nevertheless highly non-orthogonal; several iterations of Gibbs sampling were used during RBM learning, and the weights of the network were not over-trained. A) The covariance matrix of the EP representations of the possible patterns. The banding shows the semantic structure (see text), but, as seen in (B), only weakly. B) The proximities of the EP representations of all the patterns to that for E1 (the entry for E1 itself is blank). Despite the covariance structure in (A), the syntactic representation of semantic closeness is weak, so episodic recall would be imperfect; the ratio of maximum to minimum proximity is 4. C) Three stages of (unclamped) Gibbs sampling (100, 500, and 2000 samples), starting from the hippocampally replayed EP representations of E1 (left column) and E2 (right column). Here, we determine to which (if any – thus the ‘failures’ entry) of the possible input patterns the sampled activities of the visible units are closest, and plot histograms of the resulting frequencies. After only a few iterations, the stored patterns themselves still dominate; after more, their semantically close neighbours dominate. D) Logarithmically scaled proximities following delta-rule learning of the mapping from the EP representations of the patterns in (C) to E1 and E2, respectively. Now, the remapped EP representations of semantically relevant inputs are vastly closer to their associated episodic memories; the ratios of maximum to minimum proximity are 14000 (E1) and 7000 (E2).

Figure 3C shows the course of replay. The two columns show histograms of the patterns retrieved in the visible layer after increasing numbers of rounds of Gibbs sampling, starting from the hippocampal representations of E1 (left) and E2 (right). The network has learnt much about the semantic relationships, although it is far from perfect (over-training seems to make it worse, for reasons we do not understand), and equally likely patterns are not generated exactly equally often.21 The ‘failures’ entries of these histograms show how many sampled visible patterns are not close to one of the valid inputs; this happens only rarely. During replay, the EP representation of these semantically related patterns is then available, so that a model mapping EP activity to an appropriate input to the hippocampal pattern-matching process can be learnt. 
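The replay bookkeeping just described – start block Gibbs sampling from a stored hidden (EP-like) pattern, then classify each sampled visible pattern by its nearest valid input, counting ‘failures’ – can be sketched as follows. The RBM here is a toy with random, untrained weights (the sizes, patterns, and Hamming threshold are invented for illustration), so the histogram only demonstrates the mechanics, not the semantic structure a trained network would show.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_from_hidden(W, b_v, b_h, h0, n_steps):
    # Unclamped block Gibbs sampling in an RBM, started from a stored
    # hidden (EP-like) pattern h0: alternate v ~ P(v|h), h ~ P(h|v).
    h = h0.copy()
    for _ in range(n_steps):
        v = (rng.random(W.shape[0]) < sigmoid(W @ h + b_v)).astype(float)
        h = (rng.random(W.shape[1]) < sigmoid(W.T @ v + b_h)).astype(float)
    return v, h

def nearest_valid(v, valid_patterns, max_hamming=1):
    # Classify a sampled visible pattern by its nearest valid input
    # pattern; return None (a "failure") if nothing is close enough.
    dists = [int(np.sum(v != p)) for p in valid_patterns]
    i = int(np.argmin(dists))
    return i if dists[i] <= max_hamming else None

# Toy RBM: 9 visible units (3 modules of 3), 6 hidden units, random weights.
n_v, n_h = 9, 6
W = rng.normal(scale=0.5, size=(n_v, n_h))
b_v = np.zeros(n_v)
b_h = np.zeros(n_h)

valid = [np.array([1, 0, 0, 1, 0, 0, 1, 0, 0], dtype=float),
         np.array([0, 1, 0, 0, 1, 0, 0, 1, 0], dtype=float)]

h_episode = np.array([1, 0, 1, 0, 1, 0], dtype=float)   # a stored EP-like pattern
counts = {0: 0, 1: 0, None: 0}
for _ in range(200):
    v, _ = gibbs_from_hidden(W, b_v, b_h, h_episode, n_steps=5)
    counts[nearest_valid(v, valid)] += 1
print(counts)
```

With trained weights, the histogram over the non-failure bins would shift from the replayed episode toward its semantic neighbours as the number of Gibbs steps grows.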
Figure 3D shows how this affects the proximities for a model trained using the delta rule. Again, the left and right columns are for E1 and E2; now the semantic associates of these patterns are mapped into inputs to the hippocampal pattern-matching process that are far nearer (note the logarithmic scale) to the stored representations of E1 and E2, and so the episodes can be appropriately retrieved from their semantic cousins. 5 Discussion The important, but narrow, issue of whether episodic memories can ever be recalled without the hippocampus has polarized theoretical ratiocination about memory replay, a phenomenon for which there is increasing neurophysiological evidence. This polarization has hindered the field from studying the wider computational context of replay. In this paper, we have considered two particular aspects of the consolidation of the indexing relationship between semantic memory (in the neocortex) and episodic memory (in the hippocampus). We showed how replay could be used to maintain the index in the face of on-going neocortical plasticity, and to broaden it in the light of neocortical semantic knowledge that is not directly accessible through the explicit code in the upper layers of cortex. Unlike memory consolidation, neither of these involves neocortical plasticity during replay. There may yet be many other computations that can be accomplished through replay. Broadening the index poses an interesting, only incompletely answered, theoretical question about the metrics of memory. The semantic model can be seen as a sort of manifold in the space of all inputs; the episodes as particular points on the manifold; and retrieval as finding the closest episodes to a presented cue, according to a distance function that involves mapping the cue to the manifold, and mapping between points on the manifold. Despite some theoretical suggestions,25 it is not clear how the semantic model specifies these distances. 
Our pragmatic solution was to replay the episodes and rely on the transience of the Markov chain induced by Gibbs sampling to produce semantic cousins with which it should be related. It would be desirable to consider more systematic approaches. Our model involves interaction between a hippocampal store for episodes and a neocortical store for semantics. However, the computational issues about indexing apply with the same force if the episodes are actually stored separately elsewhere, such as in more frontal structures (McClelland, personal communication). There are equal opportunities for these areas to induce replay, and thus improve the index. What now seems unlikely, despite our best earlier efforts, is that the problems of indexing can be circumvented by storing the episodes wholly within the semantic network. By itself, this solves nothing. Acknowledgements We are very grateful to Jay McClelland for helpful discussions. Funding was from the Hungarian Academy of Sciences and the Gatsby Charitable Foundation. References [1] W. Scoville and B. Milner, J Neurol Neurosurg Psychiatry 20, 11 (1957). [2] T. Ribot, Les maladies de la memoire, Appleton-Century-Crofts, New York, 1881. [3] L. R. Squire, Psychol Rev 99, 195 (1992). [4] L. R. Squire, R. E. Clark, and B. J. Knowlton, Hippocampus 11, 50 (2001). [5] A. R. Mayes and J. J. Downes, Memory 5, 3 (1997). [6] D. Marr, Philos Trans R Soc Lond B Biol Sci 262, 23 (1971). [7] D. J. Willshaw and J. T. Buckingham, Philos Trans R Soc Lond B Biol Sci 329, 205 (1990). [8] P. Alvarez and L. R. Squire, Proc Natl Acad Sci U S A 91, 7041 (1994). [9] J. M. Murre, Memory 5, 213 (1997). [10] R. M. French, Connection Science 9, 353 (1997). [11] J. L. McClelland, B. L. McNaughton, and R. C. O’Reilly, Psychol Rev 102, 419 (1995). [12] M. A. Wilson and B. L. McNaughton, Science 265, 676 (1994). [13] W. E. Skaggs and B. L. McNaughton, Science 271, 1870 (1996). [14] K. Louie and M. A. Wilson, Neuron 29, 145 (2001). [15] A. G. 
Siapas and M. A. Wilson, Neuron 21, 1123 (1998). [16] L. Nadel and M. Moscovitch, Curr Opin Neurobiol 7, 217 (1997). [17] M. McCloskey and N. J. Cohen, in The psychology of learning and motivation, vol 24, edited by G. Bower, 109–165, Academic Press, New York, 1989. [18] G. A. Carpenter and S. Grossberg, Trends Neurosci 16, 131 (1993). [19] A. Robins, Connection Science 8, 259 (1996). [20] F. Vargha-Khadem et al., Science 277, 376 (1997). [21] S. Káli and P. Dayan, in NIPS 13, edited by T. K. Leen, T. G. Dietterich, and V. Tresp, 24–30, MIT Press, Cambridge, 2001. [22] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal, Science 268, 1158 (1995). [23] G. E. Hinton, Neural Computation 14 (2002). [24] R. S. Sutton, in Machine Learning: Proceedings of the Seventh International Conference, 216–224, 1990. [25] L. K. Saul, in NIPS 9, edited by M. C. Mozer, M. I. Jordan, and T. Petsche, 267–273, MIT Press, London, UK, 1997.
Inferring a Semantic Representation of Text via Cross-Language Correlation Analysis Alexei Vinokourov John Shawe-Taylor Dept. Computer Science Royal Holloway, University of London Egham, Surrey, UK, TW20 0EX alexei@cs.rhul.ac.uk john@cs.rhul.ac.uk Nello Cristianini Dept. Statistics UC Davis, Berkeley, US nello@support-vector.net Abstract The problem of learning a semantic representation of a text document from data is addressed, in the situation where a corpus of unlabeled paired documents is available, each pair being formed by a short English document and its French translation. This representation can then be used for any retrieval, categorization or clustering task, both in a standard and in a cross-lingual setting. By using kernel functions, in this case simple bag-of-words inner products, each part of the corpus is mapped to a high-dimensional space. The correlations between the two spaces are then learnt by using kernel Canonical Correlation Analysis. A set of directions is found in the first and in the second space that are maximally correlated. Since we assume the two representations are completely independent apart from the semantic content, any correlation between them should reflect some semantic similarity. Certain patterns of English words that relate to a specific meaning should correlate with certain patterns of French words corresponding to the same meaning, across the corpus. Using the semantic representation obtained in this way we first demonstrate that the correlations detected between the two versions of the corpus are significantly higher than random, and hence that a representation based on such features does capture statistical patterns that should reflect semantic information. Then we use such representation both in cross-language and in single-language retrieval tasks, observing performance that is consistently and significantly superior to LSI on the same data. 
1 Introduction Most text retrieval or categorization methods depend on exact matches between words. Such methods will, however, fail to recognize relevant documents that do not share words with a user’s queries. One reason for this is that the standard representation models (e.g. boolean, standard vector, probabilistic) treat words as if they are independent, although it is clear that they are not. A central problem in this field is to automatically model term-term semantic interrelationships, so as to improve retrieval, and possibly to do so in an unsupervised way or with a minimal amount of supervision. For example, latent semantic indexing (LSI) has been used to extract information about the co-occurrence of terms in the same documents, an indicator of semantic relations, and this is achieved by singular value decomposition (SVD) of the term-document matrix. The LSI method has also been adapted to deal with the important problem of cross-language retrieval, where a query in one language is used to retrieve documents in a different language. Using a paired corpus (a set of pairs of documents, each pair being formed by two versions of the same text in two different languages), after merging each pair into a single ’document’, we can interpret frequent co-occurrence of two terms in the same document as an indication of cross-linguistic correlation [5]. In this framework, a common vector space, including words from both languages, is created and then the training set is analysed in this space using SVD. This method, termed CL-LSI, will be briefly discussed in Section 2. More generally, many other statistical and linear algebra methods have been used to obtain an improved semantic representation of text data over LSI [6]. In this study we address the problem of learning a semantic representation of text from a paired bilingual corpus, a problem that is important both for mono-lingual and cross-lingual applications. 
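As a concrete illustration of the CL-LSI scheme just outlined, the following sketch merges toy English and French term-count vectors into one space, takes an SVD, and folds monolingual queries into the shared subspace (all counts and dimensions are invented for illustration):

```python
import numpy as np

# Toy paired corpus: rows index terms (English terms stacked above French
# terms), columns index documents; the counts are invented.
D_en = np.array([[2., 0., 1.],
                 [0., 3., 0.],
                 [1., 0., 2.]])        # 3 English terms x 3 documents
D_fr = np.array([[1., 0., 2.],
                 [0., 2., 0.],
                 [2., 0., 1.]])        # 3 French terms x the same 3 documents

D = np.vstack([D_en, D_fr])           # common dual-language vector space

# SVD of the merged term-document matrix: D = U S Vt.
U, S, Vt = np.linalg.svd(D, full_matrices=False)
k = 2
U_k = U[:, :k]                        # first k left singular vectors

def fold_in(query, n_terms_total, offset):
    # Expand a monolingual query with zeros for the other language,
    # then project it onto the k leading singular directions.
    q = np.zeros(n_terms_total)
    q[offset:offset + len(query)] = query
    return U_k.T @ q

# An English query and a French query about the "same" content
# (the two halves of the first document).
q_en = fold_in(np.array([2., 0., 1.]), 6, offset=0)
q_fr = fold_in(np.array([1., 0., 2.]), 6, offset=3)

# Similarity in the language-independent space (cosine of projections).
cos = q_en @ q_fr / (np.linalg.norm(q_en) * np.linalg.norm(q_fr))
print(round(float(cos), 3))
```

The two monolingual queries, which share no terms at all in the merged space, end up nearly parallel after folding in, which is what makes cross-language retrieval by inner product possible.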
This problem can be regarded either as an unsupervised problem with paired documents, or as a supervised monolingual problem with very complex labels (i.e. the label of an English document could be its French counterpart). Either way, the data can be readily obtained without an explicit labeling effort, and furthermore there is no loss of information due to compressing the meaning of a document into a discrete label. We employ kernel Canonical Correlation Analysis (KCCA) [1] to learn a representation of text that captures aspects of its meaning. Given a paired bilingual corpus, this method defines two embedding spaces for the documents of the corpus, one for each language, and an obvious one-to-one correspondence between points in the two spaces. KCCA then finds projections in the two embedding spaces for which the resulting projected values are highly correlated. In other words, it looks for particular combinations of words that appear to have the same co-occurrence patterns in the two languages. Our hypothesis is that finding such correlations across a paired cross-lingual corpus will locate the underlying semantics, since we assume that the two languages are ’conditionally independent’, that is, that the only thing they have in common is their meaning. The directions found would carry information about the concepts that stood behind the process of generation of the text and that, although expressed differently in the different languages, are nevertheless semantically equivalent. To illustrate this representation, we have printed the most probable (most typical) words in each language for some of the first few kernel canonical correlation components found for the bilingual corpus of the 36th Canadian Parliament (Hansards) (the left column of each pair is the English space and the right column the French space), for components labelled PENSIONS PLAN, AGRICULTURE, CANADIAN LANDS, and FISHING INDUSTRY: 
pension regime wheat bl park parc fisheries pˆeches plan pensions board commission land autochtones atlantic atlantique cpp rpc farmers agriculteurs aboriginal terres operatives pˆecheurs canadians prestations newfoundland producteurs yukon ches fishermen pˆeche benefits canadiens grain canadienne marine vall newfoundland probl retirement retraite party grain government ressources fishery coop fund cotisations amendment parti valley yukon problem ans tax fonds producers conseil water nord operative industrie investment discours canadian commercialisat boards gouvernement fishing poisson income impˆot speaker neuve territories offices industry neuve finance revenu referendum ministre board marin fish terre young jeunes minister administration north eaux years ouest years ans directors modification parks territoires problems stocks rate pension quebec qubec resource parcs wheat ratives superannuation argent speech terre agreements nations coast ministre disability regimes school formistes northwest territoriales oceans sant taxes investissement system partis resources revendications west saumon mounted milliards marketing grains development ministre salmon affaiblies future prestation provinces op treaty cheurs tags facult premiums plan constitution nationale nations ouest minister secteur seniors finances throne lus territoire entente communities programme country pays money bloc work rights program gion rates avenir section nations territory office commission scientifiques jobs invalidit rendum chambre atlantic atlantique motion travailler pay resolution majorit administration programs ententes stocks conduite This representation is then used for retrieval tasks, providing better performance than existing techniques. Such directions are then used to calculate the coordinates of the documents in a ’language independent’ way. Of course, particular statistical care is needed for excluding ’spurious’ correlations. 
We show that the correlations we find are not the effect of chance, and that the resulting representation significantly improves the performance of retrieval systems. The correlation existing between certain sets of words in English and French documents cannot be explained as a random correlation; we therefore need to explain it by means of relations between the generative processes of the two versions of the documents, which we assume to be conditionally independent given the topic or content. Under such assumptions, these correlations detect similarities in content between the two documents, and can be exploited to derive a semantic representation of the text. This representation is then used for retrieval tasks, providing better performance than existing techniques. We first apply the method to cross-lingual information retrieval, comparing performance with a related approach based on latent semantic indexing (LSI) described below [5]. Secondly, we treat the second language as a complex label for the first-language document, and view the projection obtained by CL-KCCA as a semantic map for use in a multilingual classification task, with very encouraging results. From the computational point of view, we detect such correlations by solving an eigenproblem, thus avoiding problems like local minima, and we do so by using kernels. The KCCA machinery is given in Section 3; Section 4 shows how to apply KCCA to cross-lingual retrieval and describes the monolingual applications. Finally, results are presented in Section 5.

2 Previous work

The use of LSI for cross-language retrieval was proposed by [5]. LSI uses a method from linear algebra, singular value decomposition, to discover the important associative relationships. An initial sample of documents is translated, by human or perhaps by machine, to create a set of dual-language training documents. After preprocessing, a common vector space including words from both languages is created, and the training set D is analysed in this space using SVD:

    D = U Σ V^T,   (1)

where the i-th column of D corresponds to document i, with its first set of coordinates giving the first-language features and the second set the second-language features. To translate a new document (query) q to a language-independent representation, one projects (folds in) its expanded vector representation q̃ (filled up with zero components for the other language) onto the space spanned by the k leading eigenvectors, giving q̂ = U_k^T q̃. The similarity between two documents is measured as the inner product between their projections; the documents most similar to the query are considered relevant.

3 Kernel Canonical Correlation Analysis

In this study our aim is to find an appropriate language-independent representation. Suppose, as for cross-lingual LSI (CL-LSI), we are given aligned texts in, for simplicity, two languages: every text x_i in one language is a translation of a text y_i in the other language, or vice versa. Our hypothesis is that, having the corpus X mapped to a high-dimensional feature space as φ(X) and the corpus Y as φ(Y) (with K_x and K_y being respectively the kernels of the two mappings, i.e. the matrices of inner products between the images of all the data points [2]), we can learn (semantic) directions w_x and w_y in those spaces such that the projections ⟨w_x, φ(x_i)⟩ and ⟨w_y, φ(y_i)⟩ of the input data from the two languages are maximally correlated. We have thus intuitively defined the notion of a kernel canonical correlation ρ (−1 ≤ ρ ≤ 1), which is defined as
    ρ = max_{w_x, w_y} Σ_i ⟨w_x, φ(x_i)⟩ ⟨w_y, φ(y_i)⟩ / sqrt( (Σ_i ⟨w_x, φ(x_i)⟩²) (Σ_i ⟨w_y, φ(y_i)⟩²) ).   (2)

We search for w_x and w_y in the spaces spanned by the φ-images of the data points (reproducing kernel Hilbert spaces, RKHS [2]): w_x = Σ_i α_i φ(x_i), w_y = Σ_j β_j φ(y_j). This rewrites the numerator of (2) as

    Σ_i ⟨w_x, φ(x_i)⟩ ⟨w_y, φ(y_i)⟩ = α^T K_x K_y β,   (3)

where α is the vector with components α_i and β the vector with components β_j. The problem (2) can then be reformulated as

    ρ = max_{α, β} α^T K_x K_y β / ( ||K_x α|| ||K_y β|| ).   (4)

Once we have moved to a kernel-defined feature space, the extra flexibility introduced means that there is a danger of overfitting. By this we mean that we can find spurious correlations by using large weight vectors to project the data so that the two projections are completely aligned. For example, if the data are linearly independent in both feature spaces, we can find linear transformations that map the input data to an orthogonal basis in each feature space; it is then possible to find perfect correlations between the two representations. Using kernel functions will frequently result in linear independence of the training set, for example when using Gaussian kernels. It is clear, therefore, that we will need to introduce a control on the flexibility of the projection mappings w_x and w_y. To do that, in the spirit of Partial Least Squares (PLS), we add a multiple of the squared 2-norm:
    τ ||w_x||² = τ Σ_{i,i'} α_i α_{i'} ⟨φ(x_i), φ(x_{i'})⟩ = τ α^T K_x α   (5)

in the denominator. Convexly combining the PLS regularization term (5) and the kCCA term ||K_x α||²,

    (1 − τ) ||K_x α||² + τ ||w_x||² = (1 − τ) α^T K_x² α + τ α^T K_x α = α^T K_x ((1 − τ) K_x + τ I) α,   (6)

we substitute its square root into the denominator of (4) in place of ||K_x α||, and do the same for β:

    ρ = max_{α, β} α^T K_x K_y β / sqrt( ((1 − τ)||K_x α||² + τ||w_x||²) ((1 − τ)||K_y β||² + τ||w_y||²) ).   (7)

Differentiating the expression under the max with respect to α – using d||v||/dv = v/||v|| and d(α^T K_x α)/dα = 2 K_x α – and equating the derivative to zero, we obtain

    K_x K_y β − ρ K_x ((1 − τ) K_x + τ I) α = 0.   (8)

We note that α can be normalised so that (1 − τ)||K_x α||² + τ||w_x||² = 1. Similar operations for β yield analogous equations that, together with (8), can be written in matrix form:
    B ζ = ρ D ζ,   (9)

where ζ = (α^T, β^T)^T, ρ is the average per-point correlation between the projections ⟨w_x, φ(x_i)⟩ and ⟨w_y, φ(y_i)⟩, namely α^T K_x K_y β, and

    B = [ 0, K_x K_y ; K_y K_x, 0 ],   D = [ K_x((1 − τ)K_x + τI), 0 ; 0, K_y((1 − τ)K_y + τI) ].   (10)

Table 1: Statistics for the ’House debates’ of the 36th Canadian Parliament proceedings corpus.

               SENTENCE PAIRS   ENGLISH WORDS   FRENCH WORDS
  TRAINING     948K             14,614K         15,657K
  TESTING 1    62K              995K            1,067K

Equation (9) is known as a generalised eigenvalue problem. The standard approach to its solution in the case of a symmetric B is to perform an incomplete Cholesky decomposition of the matrix D = R^T R and define ζ̂ = R ζ, which allows us, after simple transformations, to rewrite (9) as a standard eigenvalue problem R^{-T} B R^{-1} ζ̂ = ρ ζ̂. We will discuss how to choose τ in Section 5. It is easy to see that if α or β changes sign in (9), ρ also changes sign; thus the spectrum of problem (9) consists of paired positive and negative values between −1 and 1.

4 Applications of KCCA

Cross-linguistic retrieval with KCCA. The kernel CCA procedure identifies a set of projections from both languages into a common semantic space. This provides a natural framework for performing cross-language information retrieval. We first select a number of semantic dimensions
with the largest correlation values ρ. To process an incoming query q, we expand q into the vector representation for its language and project it onto the canonical correlation components: q̂ = A^T D^T q, where A is the matrix whose columns are the first k solutions of (9) for the given language, sorted by eigenvalue in descending order. Here we have assumed that ⟨φ(x_i), φ(q)⟩ is simply x_i^T q, where D is the training corpus in the given language: D = D_E or D = D_F.

Using the semantic space in text categorisation. The semantic vectors in a given language can be exported and used in some other application, for example Support Vector Machine classification. We first find the common features of the training data used to extract the semantics and of the data used to train the SVM classifier, cut the features that are not common, and compute the new kernel, which is the inner product of the projected data:

    K̂(x, z) = x^T (D A)(D A)^T z.   (11)

The term-term relationship matrix (D A)(D A)^T can be computed only once and stored for further use in the SVM learning process and in classification.

5 Experiments

Experimental setup. Following [5], we conducted a series of experiments with the Hansard collection [3] to measure the ability of CL-LSI and CL-KCCA, for any document from a test collection in one language, to find its mate in the other language. The whole collection consists of 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the proceedings of the 36th Canadian Parliament. In our experiments we used only the ’house debates’ part, for which statistics are given in Table 1. As the testing collection we used only ’testing 1’. The raw text was split into sentences with Adwait Ratnaparkhi’s MXTERMINATOR, and the sentences were aligned with I. Dan Melamed’s GSA tool (for details on the collection, and also for the source, see [3]).

Table 2: Average accuracy of top-rank (first retrieved) English→French retrieval, % (left), and average precision of English→French retrieval over a set of fixed recall levels, % (right).

             100  200  300  400  full      100  200  300  400  full
  CL-LSI      84   91   93   95   97        73   78   80   82   82
  CL-KCCA     98   99   99   99   99        91   91   91   91   87

The text chunks were split into ’paragraphs’ based on ’***’ delimiters, and these ’paragraphs’ were treated as separate documents. After removing stop-words in both the French and English parts, as well as rare words (i.e. those appearing less than three times), we obtained
a term-by-document ’English’ matrix and a ’French’ matrix (we also removed a few documents that appeared to be problematic when split into paragraphs). As these matrices were still too large to perform SVD and KCCA on, we split the whole collection into 14 chunks of about 910 documents each and conducted experiments separately with them, measuring the performance of the methods each time on a 917-document test collection; the results were then averaged. We also trained the CL-KCCA method on randomly reassociated French-English document pairs and observed an accuracy on test data far lower than the results on the non-random original data. It is worth noting that CL-KCCA behaves differently from CL-LSI over the full scale of the spectrum. While CL-LSI only improves its performance as more eigenvectors are taken from the lower part of the spectrum (which is, somewhat unexpectedly, quite different from its behaviour in the monolinguistic setting), CL-KCCA’s performance, on the contrary, tends to deteriorate as the dimensionality of the semantic subspace approaches the dimensionality of the input data space. The partial singular value decomposition of the matrices was done using Matlab’s ’svds’ function, and the full SVD was performed using the ’kernel trick’ discussed in the previous section and the ’svd’ function, which took about 2 minutes on a Linux Pentium III 1GHz system for a selection of 1000 documents. The Matlab implementation of KCCA using the same function, ’svd’, which solves the generalised eigenvalue problem through incomplete Cholesky decomposition, took about 8 minutes on the same data.

Mate retrieval. The results are presented in Table 2. Only one document – the mate document in French – was considered relevant to each of the test English documents, which were treated as queries; the relative number of correctly retrieved documents was computed (Table 2), along with the average precision over fixed recall levels. Very similar results (omitted here) were obtained when French documents were treated as queries and English documents as the test collection. As one can see from Table 2, CL-KCCA seems to capture most of the semantics in the first few components, achieving high accuracy with as few as 100 components, whereas CL-LSI needs all components for a similar figure.

Selecting the regularization parameter. The regularization parameter τ in (6) not only makes the problem (9) numerically well-posed, but also provides control over the capacity of the function space in which the solution is sought.
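For linear (bag-of-words) kernels, the effect of this capacity control can be sketched in the primal: ridge-regularize each view’s covariance before whitening, and compare the resulting canonical correlation spectrum on the true pairing against a randomly reshuffled pairing. This is only an illustrative analogue of the dual problem (9) – the toy data, sizes, and parameter values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def reg_canonical_correlations(X, Y, tau):
    # Ridge-regularized linear CCA: whiten each view with
    # (C + tau I)^(-1/2) and take the singular values of the whitened
    # cross-covariance; these play the role of the spectrum lambda of
    # the kernel problem in the linear-kernel case.
    n = X.shape[0]
    Cxx = X.T @ X / n + tau * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + tau * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)

# Toy paired views: more features (100) than samples (40), so an
# unregularized solution could correlate anything with anything.
n, d, shared = 40, 100, 3
Z = 3.0 * rng.normal(size=(n, shared))              # common "semantic" signal
X = np.hstack([Z, rng.normal(size=(n, d - shared))])
Y = np.hstack([Z + 0.1 * rng.normal(size=(n, shared)),
               rng.normal(size=(n, d - shared))])
Y_shuffled = Y[rng.permutation(n)]                  # break the pairing

for tau in (1e-6, 1.0):
    lam_true = reg_canonical_correlations(X, Y, tau)[:5]
    lam_rand = reg_canonical_correlations(X, Y_shuffled, tau)[:5]
    print(tau,
          round(float(np.linalg.norm(1 - lam_true)), 2),
          round(float(np.linalg.norm(1 - lam_rand)), 2))
```

With nearly no regularization, the leading correlations are close to one even for the reshuffled pairing (the overfitting diagnosed by the authors), whereas with stronger regularization the spectrum of the reshuffled data moves away from all-one while the genuinely paired data remains correlated.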
The larger values of 2 are, the less sensitive the method to the input data is, therefore, the more stable (less prone to finding spurious relations) the solution becomes. We should thus observe an increase of ”reliability” of the solution. We measure the ability of the method to catch useful signal by comparing the solutions on original input and ”random” data. The ”random” data is constructed by random reassociations of the data pairs, for example, < ' < ?E? denotes English-French parallel corpus which is obtained from the original English-French aligned collection by reshuffling the French (equivalently, English) documents. Suppose, !
A < 'E ? denotes the (positive part of) spectrum of the KCCA solution on the paired dataset < ' ? . If the method is overfitting the data it will be able to find perfect correlations and hence ./. " 9 A < 'E ?;.0.$# O ' where " is the all-one vec0 1 2 3 4 0 0.5 1 1.5 0 1 2 3 4 0 0.5 1 0 1 2 3 4 0 0.5 1 1.5 Figure 1: Quantities ./. " 9 A < ' < ?9?;.0. (left), ./. " 9 A < ' ?;.0. (middle) and .0. " 9 A < ' < ?E?;.0. (right) as functions of the regularization parameter 2 . (Graphs were obtained for the regularization schema discussed in [1]). tor. We therefore use this as a measure to assess the degree of overfitting. Three graphs in Figure 1 show the quantities .0. " 9 A < ' < ?9?B./. , ./. " 9 A < ' ?B./. , and .0. " 9 A < ' < ?E?;.0. as functions of the regularization parameter 2 . For small values of 2 the spectrum of all the tests is close to the all-one spectrum (the spectrum A < ' ? ). This indicates overfitting since the method is able to find correlations even in randomly associated pairs. As 2 increases the spectrum of the randomly associated data becomes far from all-one, while that of the paired documents remains correlated. This observation can be exploited for choosing the optimal value of 2 . From the middle and right graphs in Figure 1 this value could be derived as lying somewhere between 8 and J . For the experiments reported in this study we used the value of 8
Pseudo-query test. To perform a more realistic test we generated short queries of the kind most likely to occur in search engines, consisting of the 5 most probable words from each test document. The relevant documents were the test documents themselves in the monolingual retrieval test (English query - English document) and their mates in the cross-lingual test (English query - French document). Table 3 shows the relative number of English documents correctly retrieved as top-ranked for English queries (left) and the relative number of correctly retrieved documents in the top ten ranked (right). Table 4 provides analogous results for cross-lingual retrieval.

Table 3: English-English top-ranked retrieval accuracy, % (left) and English-English top-ten retrieval accuracy, % (right)

            100  200  300  400  full    100  200  300  400  full
  cl-lsi     53   60   64   66   70      82   86   88   89   91
  cl-kcca    60   63   70   71   73      90   93   94   95   95

Table 4: English-French top-ranked retrieval accuracy, % (left) and English-French top-ten retrieval accuracy, % (right)

            100  200  300  400  full    100  200  300  400  full
  cl-lsi     30   38   42   45   49      67   75   79   81   84
  cl-kcca    68   75   78   79   81      94   96   97   98   98

Text categorisation using semantics learned on a completely different corpus. The semantics (300 vectors) extracted from the Canadian Parliament corpus (Hansard) was used for Support Vector Machine (SVM) text classification [2] of the Reuters-21578 corpus (Table 5). In this experimental setting, the intersection of the vector spaces of the Hansards (5159 English words from the first training chunk of 1000 French-English documents) and of the Reuters ModApte split (9962 words from the 9602 training and 3299 test documents) contained 1473 words. The extracted
300 KCCA vectors from the English and French parts (row 'KCCA' of Table 5) and 300 eigenvectors from the same data (row 'CL-LSI') were used in SVMlight [4] with the kernel (11) to classify the Reuters-21578 data. The experiments were averaged over 10 runs, each time with a randomly chosen 5% fraction of the training data, since the difference between bag-of-words and semantic methods is more pronounced on smaller samples. Both CL-KCCA and CL-LSI perform remarkably well when one considers that they are based on just 1473 words. In all cases CL-KCCA outperforms the bag-of-words kernel.

Table 5: Performance value, %, averaged over 10 subsequent runs of the SVM classifier, with original Reuters-21578 data ('bag-of-words') and preprocessed using the semantics (300 vectors) extracted from the Canadian Parliament corpus by the various methods.

  CLASS          EARN  ACQ  GRAIN  CRUDE
  BAG-OF-WORDS    81    57    33     13
  CL-KCCA         90    75    43     38
  CL-LSI          77    52    64     40

6 Conclusions

We have presented a novel procedure for extracting semantic information in an unsupervised way from a bilingual corpus, and we have used it in text retrieval applications. Our main findings are that the correlations existing between certain sets of words in English and French documents cannot be explained as random correlations; hence we need to explain them by means of relations between the generative processes of the two versions of the documents. The correlations detect similarities in content between the two documents, and can be exploited to derive a semantic representation of the text. The representation is then used for retrieval tasks, providing better performance than existing techniques.

References
[1] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[2] Nello Cristianini and John Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, 2000.
[3] Ulrich Germann. Aligned Hansards of the 36th Parliament of Canada.
http://www.isi.edu/natural-language/download/hansard/, 2001. Release 2001-1a.
[4] Thorsten Joachims. SVMlight Support Vector Machine. http://svmlight.joachims.org, 2002.
[5] M. L. Littman, S. T. Dumais, and T. K. Landauer. Automatic cross-language information retrieval using latent semantic indexing. In G. Grefenstette, editor, Cross Language Information Retrieval. Kluwer, 1998.
[6] Alexei Vinokourov and Mark Girolami. A probabilistic framework for the hierarchic organisation and classification of document collections. Journal of Intelligent Information Systems, 18(2/3):153-172, 2002. Special Issue on Automated Text Categorization.
|
2002
|
171
|
2,182
|
Cluster Kernels for Semi-Supervised Learning Olivier Chapelle, Jason Weston, Bernhard Schölkopf Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany {first.last}@tuebingen.mpg.de Abstract We propose a framework to incorporate unlabeled data into kernel classifiers, based on the idea that two points in the same cluster are more likely to have the same label. This is achieved by modifying the eigenspectrum of the kernel matrix. Experimental results assess the validity of this approach. 1 Introduction We consider the problem of semi-supervised learning, where one usually has few labeled examples and a lot of unlabeled examples. One of the first semi-supervised algorithms [1] was applied to web page classification. This is a typical example where the number of unlabeled examples can be made as large as possible since there are billions of web pages, but labeling is expensive since it requires human intervention. Since then, there has been a lot of interest in this paradigm in the machine learning community; an extensive review of existing techniques can be found in [10]. It has been shown experimentally that under certain conditions, the decision function can be estimated more accurately, yielding lower generalization error [1, 4, 6]. However, in a discriminative framework, it is not obvious how unlabeled data or even perfect knowledge of the input distribution P(x) can help in the estimation of the decision function. Without any assumption, it turns out that this information is actually useless [10]. Thus, to make use of unlabeled data, one needs to formulate assumptions. One which is made, explicitly or implicitly, by most semi-supervised learning algorithms is the so-called "cluster assumption", saying that two points are likely to have the same class label if there is a path connecting them passing through regions of high density only.
Another way of stating this assumption is to say that the decision boundary should lie in regions of low density. In real-world problems, this makes sense: consider handwritten digit recognition, and suppose one tries to classify digits 0 against 1. The probability of encountering a digit which is in between a 0 and a 1 is very low. In this article, we will show how to design kernels which implement the cluster assumption, i.e. kernels such that the induced distance is small for points in the same cluster and larger for points in different clusters.

Figure 1: Decision function obtained by an SVM with the kernel (1). On this toy problem, this kernel implements the cluster assumption perfectly: the decision function cuts a cluster only when necessary.

2 Kernels implementing the cluster assumption

In this section, we explore different ideas on how to build kernels which take into account the fact that the data is clustered. In section 3, we will propose a framework which unifies the methods proposed in [11] and [5].

2.1 Kernels from mixture models

It is possible to design a kernel directly taking into account the generative model learned from the unlabeled data. Seeger [9] derived such a kernel in a Bayesian setting. He proposes to use the unlabeled data to learn a mixture of models, and he introduces the Mutual Information kernel, which is defined in such a way that two points belonging to different components of the mixture model will have a low dot product. Thus, in the case of a mixture of Gaussians, this kernel is an implementation of the cluster assumption. Note that in the case of a single mixture model, the Fisher kernel [3] is an approximation of this Mutual Information kernel. Independently, another extension of the Fisher kernel has been proposed in [12] which leads, in the case of a mixture of Gaussians (μ_k, Σ_k), to the Marginalized kernel, whose behavior is similar to the Mutual Information kernel:

K(x, y) = Σ_{k=1}^{q} P(k|x) P(k|y) x^T Σ_k^{-1} y    (1)
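Equation (1) can be sketched numerically as follows. Note the hedges: the mixture parameters below are fixed by hand rather than learned with EM as in the paper's toy experiment, and all names and values are illustrative.

```python
import numpy as np

def gauss_posteriors(X, weights, means, covs):
    """P(k|x) for each row of X under a fixed Gaussian mixture."""
    n, d = X.shape
    logp = np.zeros((n, len(weights)))
    for k, (w, mu, S) in enumerate(zip(weights, means, covs)):
        diff = X - mu
        Sinv = np.linalg.inv(S)
        _, logdet = np.linalg.slogdet(S)
        logp[:, k] = (np.log(w) - 0.5 * (d * np.log(2 * np.pi) + logdet
                      + np.einsum('ij,jk,ik->i', diff, Sinv, diff)))
    logp -= logp.max(axis=1, keepdims=True)   # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def marginalized_kernel(X, Y, weights, means, covs):
    """K(x, y) = sum_k P(k|x) P(k|y) x^T inv(Sigma_k) y  -- Eq. (1)."""
    Px = gauss_posteriors(X, weights, means, covs)
    Py = gauss_posteriors(Y, weights, means, covs)
    K = np.zeros((len(X), len(Y)))
    for k, S in enumerate(covs):
        K += np.outer(Px[:, k], Py[:, k]) * (X @ np.linalg.inv(S) @ Y.T)
    return K

# Two well-separated components: points from different clusters get a
# near-zero kernel value, points from the same cluster a large one.
weights = [0.5, 0.5]
means = [np.array([-5.0, 0.0]), np.array([5.0, 0.0])]
covs = [np.eye(2), np.eye(2)]
pts = np.array([[-5.0, 0.1], [-4.9, -0.1], [5.0, 0.0]])
K = marginalized_kernel(pts, pts, weights, means, covs)
```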
To understand the behavior of the Marginalized kernel, we designed a 2D toy problem (figure 1): 200 unlabeled points have been sampled from a mixture of two Gaussians, whose parameters have then been learned with EM applied to these points. An SVM has been trained on 3 labeled points using the Marginalized kernel (1). The behavior of this decision function is intuitively very satisfying: on the one hand, when not enough labeled data is available, it takes into account the cluster assumption and does not cut clusters (right cluster), but on the other hand, the kernel is flexible enough to cope with different labels in the same cluster (left side).

2.2 Random walk kernel

The kernels presented in the previous section have the drawback of depending on a generative model: first, they require an unsupervised learning step, but more importantly, in a lot of real-world problems they cannot model the input distribution with sufficient accuracy. When applying the mixture of Gaussians method (presented above) to real-world problems, one cannot expect the "ideal" result of figure 1. For this reason, in clustering and semi-supervised learning, there has been a lot of interest in finding algorithms which do not depend on a generative model. We will present two of them, find out how they are related, and present a kernel which extends them. The first one is the random walk representation proposed in [11]. The main idea is to compute the RBF kernel matrix (with the labeled and unlabeled points) K_ij = exp(-||x_i - x_j||^2 / (2σ^2)) and to interpret it as a transition matrix of a random walk on a graph with vertices x_i, P(x_i → x_j) = K_ij / Σ_k K_ik. After t steps (where t is a parameter to be determined), the probability of going from a point x_i to a point x_j should be quite high if both points belong to the same cluster, and should stay low if they are in two different clusters. Let D be the diagonal matrix whose elements are D_ii = Σ_j K_ij.
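A minimal numeric sketch of this random-walk construction (one-step transitions P = D^{-1}K, then t steps); σ, t, and the toy data below are arbitrary illustrative choices:

```python
import numpy as np

def rbf_kernel(X, sigma):
    """K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def t_step_transition(X, sigma, t):
    """P^t = (D^{-1} K)^t for the random-walk view of the RBF kernel."""
    K = rbf_kernel(X, sigma)
    P = K / K.sum(axis=1, keepdims=True)   # row-normalize: D^{-1} K
    return np.linalg.matrix_power(P, t)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 2)),   # cluster around (0, 0)
               rng.normal(5, 0.3, (10, 2))])  # cluster around (5, 5)
Pt = t_step_transition(X, sigma=1.0, t=3)
# each row of Pt is a probability distribution; with well-separated
# clusters, nearly all mass stays inside the walker's own cluster
```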
The one-step transition matrix is D^{-1}K, and after t steps it is P^t = (D^{-1}K)^t. In [11], the authors design a classifier which uses these transition probabilities directly. One would be tempted to use P^t as a kernel matrix for an SVM classifier. However, it is not possible to use P^t directly as a kernel matrix, since it is not even symmetric. We will see in section 3 how a modified version of P^t can be used as a kernel.

2.3 Kernel induced by a clustered representation

Another idea to implement the cluster assumption is to change the representation of the input points such that points in the same cluster are grouped together in the new representation. For this purpose, one can use tools of spectral clustering (see [13] for a review). Using the first eigenvectors of a similarity matrix, a representation where the points are naturally well clustered has been recently presented in [5]. We suggest to train a discriminative learning algorithm in this representation. This algorithm, which resembles kernel PCA, is the following:

1. Compute the affinity matrix K, which is an RBF kernel matrix but with diagonal elements equal to 0 instead of 1.
2. Let D be a diagonal matrix with diagonal elements equal to the sum of the rows (or the columns) of K, and construct the matrix L = D^{-1/2} K D^{-1/2}.
3. Find the eigenvectors (v_1, ..., v_k) of L corresponding to the first k eigenvalues.
4. The new representation of the point x_i is (v_{i1}, ..., v_{ik}), normalized to have length one: φ(x_i)_p = v_{ip} / (Σ_{j=1}^{k} v_{ij}^2)^{1/2}.

The reason to consider the first eigenvectors of the affinity matrix is the following. Suppose there are k clusters in the dataset infinitely far apart from each other. One can show that in this case, the first k eigenvalues of the affinity matrix will be 1, and the (k+1)-st eigenvalue will be strictly less than 1 [5]. The value of this gap depends on how well connected each cluster is: the better connected, the larger the gap (the smaller the (k+1)-st eigenvalue).
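The four steps of this representation can be sketched directly in NumPy (an illustrative version, not the authors' code); note that `eigh` returns eigenvalues in ascending order, so the "first k" eigenvectors are the last k columns:

```python
import numpy as np

def spectral_embedding(X, sigma, k):
    """Representation of [5]: rows of the top-k eigenvectors of
    L = D^{-1/2} K D^{-1/2}, normalized to unit length."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(K, 0.0)                   # step 1: zero diagonal
    d = K.sum(axis=1)
    L = K / np.sqrt(np.outer(d, d))            # step 2: D^{-1/2} K D^{-1/2}
    vals, vecs = np.linalg.eigh(L)             # ascending eigenvalues
    V = vecs[:, -k:]                           # step 3: first k eigenvectors
    return V / np.linalg.norm(V, axis=1, keepdims=True)  # step 4

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (8, 2)), rng.normal(5, 0.2, (8, 2))])
Phi = spectral_embedding(X, sigma=1.0, k=2)
# points from the same cluster map to (nearly) the same unit vector,
# points from different clusters to (nearly) orthogonal unit vectors
```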
Also, in the new representation in R^k there will be k vectors z_1, ..., z_k, orthonormal to each other, such that each training point is mapped to one of those k points depending on the cluster it belongs to. This simple example shows that in this new representation points are naturally clustered, and we suggest to train a linear classifier on the mapped points.

3 Extension of the cluster kernel

Based on the ideas of the previous section, we propose the following algorithm:

1. As before, compute the RBF matrix K from both labeled and unlabeled points (this time with 1 on the diagonal and not 0) and D, the diagonal matrix whose elements are the sums of the rows of K.
2. Compute L = D^{-1/2} K D^{-1/2} and its eigendecomposition L = U Λ U^T.
3. Given a transfer function φ, let λ̂_i = φ(λ_i), where the λ_i are the eigenvalues of L, and construct L̂ = U Λ̂ U^T.
4. Let D̂ be a diagonal matrix with D̂_ii = 1/L̂_ii and compute K̂ = D̂^{1/2} L̂ D̂^{1/2}.

The new kernel matrix is K̂. Different transfer functions lead to different kernels:

Linear: φ(λ) = λ. In this case L̂ = L and D̂ = D (since the diagonal elements of K are 1). It turns out that K̂ = K and no transformation is performed.

Step: φ(λ) = 1 if λ ≥ λ_cut and 0 otherwise. If λ_cut is chosen to be equal to the k-th largest eigenvalue of L, then the new kernel matrix K̂ is the dot-product matrix in the representation of [5] described in the previous section.

Linear-step: same as the step function, but with φ(λ) = λ for λ ≥ λ_cut. This is closely related to the approach consisting of building a linear classifier in the space given by the first kernel PCA components [8]: if the normalization matrices D and D̂ were equal to the identity, both approaches would be identical. Indeed, if the eigendecomposition of K is K = U Λ U^T, the coordinates of the training points in the kernel PCA representation are given by the matrix U Λ^{1/2}.

Polynomial: φ(λ) = λ^t. In this case L̂ = L^t and K̂ = D̂^{1/2} D^{1/2} (D^{-1} K)^t D^{-1/2} D̂^{1/2}.
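Steps 1-4 of the algorithm can be sketched as follows; the transfer functions shown mirror the linear, polynomial, and step choices described in the text (a hedged sketch under those definitions, not the authors' code):

```python
import numpy as np

def cluster_kernel(X, sigma, transfer):
    """Modify the eigenspectrum of L = D^{-1/2} K D^{-1/2} and
    re-normalize to obtain the new kernel matrix K_hat."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))          # step 1 (diagonal is 1 here)
    d = K.sum(axis=1)
    L = K / np.sqrt(np.outer(d, d))             # step 2
    lam, U = np.linalg.eigh(L)
    L_hat = (U * transfer(lam)) @ U.T           # step 3: U phi(Lambda) U^T
    D_hat = 1.0 / np.diag(L_hat)                # step 4: D_hat_ii = 1 / L_hat_ii
    return L_hat * np.sqrt(np.outer(D_hat, D_hat))

linear     = lambda lam: lam
polynomial = lambda lam: lam ** 5
step       = lambda lam, cut: (lam >= cut).astype(float)
# e.g. for the step kernel with the 3rd largest eigenvalue as cutoff:
#   cluster_kernel(X, 1.0, lambda lam: step(lam, np.sort(lam)[-3]))

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 2))
K_new = cluster_kernel(X, sigma=1.0, transfer=polynomial)
```

With the linear transfer function the construction is the identity (K̂ = K), which makes a convenient sanity check.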
The matrix D^{-1}K is the transition matrix in the random walk described in section 2.2, and K̂ can be interpreted as a normalized and symmetrized version of the transition matrix corresponding to a t-step random walk. This makes the connection between the idea of the random walk kernel of section 2.2 and a linear classifier trained in a space induced by either the spectral clustering algorithm of [5] or the kernel PCA algorithm.

How to handle test points. If test points are available during training, and if they are also drawn from the same distribution as the training points (an assumption which is commonly made), then they should be considered as unlabeled points and the matrix K̂ described above should be built using training, unlabeled and test points. However, it might happen that test points are not available during training. This is a problem, since our method produces a new kernel matrix, but not an analytic form of the effective new kernel that could readily be evaluated on novel test points. In this case, we propose the following solution: approximate a test point x as a linear combination of the training and unlabeled points, and use this approximation to express the required dot product between the test point and other points in the feature space. More precisely, let

α^0 = argmin_α ||Φ(x) - Σ_{i=1}^{n+u} α_i Φ(x_i)|| = K^{-1} v,

with v_i = K(x, x_i).¹ Here, Φ is the feature map corresponding to K, i.e., K(x, x') = Φ(x) · Φ(x').

Figure 2: Test error on a text classification problem for training set sizes varying from 2 to 128 examples. The different kernels correspond to different kinds of transfer functions (linear, i.e. normal SVM; polynomial; step; poly-step).
The new dot product between the test point x and the other points is expressed as a linear combination of the dot products given by K̂:

K̂(x, x_i) = (K̂ α^0)_i = (K̂ K^{-1} v)_i.

Note that for a linear transfer function, K̂ = K, and the new dot product is the standard one.

4 Experiments

4.1 Influence of the transfer function

We applied the different cluster kernels of section 3 to the text classification task of [11], following the same experimental protocol. There are two categories, mac and windows, with respectively 958 and 961 examples of dimension 7511. The width of the RBF kernel was chosen as in [11], giving σ = 0.55. Out of all examples, 987 were taken away to form the test set. Out of the remaining points, 2 to 128 were randomly selected to be labeled and the other points remained unlabeled. Results are presented in figure 2 and averaged over 100 random selections of the labeled examples. The following transfer functions were compared: linear (i.e. standard SVM), polynomial φ(λ) = λ^5, step keeping only the first n + 10 eigenvalues (where n is the number of labeled points), and poly-step defined in the following way (with 1 ≥ λ_1 ≥ λ_2 ≥ ...):

φ(λ_i) = λ_i^{1/2},  i ≤ n + 10
φ(λ_i) = λ_i^2,      i > n + 10

For large sizes of the (labeled) training set, all approaches give similar results. The interesting cases are small training sets. Here, the step and poly-step functions work very well. The polynomial transfer function does not give good results for very small training sets (but nevertheless outperforms the standard SVM for medium sizes). This might be due to the fact that in this example, the second largest eigenvalue is 0.073 (the largest is by construction 1). Since the polynomial transfer function tends to push the small eigenvalues to 0, it turns out that the new kernel has "rank almost one" and it is more difficult to learn with such a kernel. (¹We consider here an RBF kernel, and for this reason the matrix K is always invertible.)
To avoid this problem, the authors of [11] consider a sparse affinity matrix with non-zero entries only for neighboring examples. In this way the data are by construction more clustered and the eigenvalues are larger. We verified experimentally that the polynomial transfer function gave better results when applied to a sparse affinity matrix. Concerning the step transfer function, the value of the cut-off index corresponds to the number of dimensions in the feature space induced by the kernel, since the latter is linear in the representation given by the eigendecomposition of the affinity matrix. Intuitively, it makes sense to have the number of dimensions increase with the number of training examples; that is the reason why we chose a cut-off index equal to n + 10. The poly-step transfer function is somewhat similar to the step function, but is not as rough: the square root tends to put more importance on dimensions corresponding to large eigenvalues (recall that they are smaller than 1) and the square function tends to discard components with small eigenvalues. This method achieves the best results.

4.2 Automatic selection of the transfer function

The choice of the poly-step transfer function in the previous section corresponds to the intuition that more emphasis should be put on the dimensions corresponding to the largest eigenvalues (they are useful for cluster discrimination) and less on the dimensions with small eigenvalues (corresponding to intra-cluster directions). The general form of this transfer function is

φ(λ_i) = λ_i^{1/p},  i ≤ r
φ(λ_i) = λ_i^q,      i > r,    (2)

where p, q ∈ R and r ∈ N are 3 hyperparameters. As before, it is possible to choose some values for these parameters qualitatively, but ideally one would like a method which automatically chooses good values. It is possible to do so by gradient descent on an estimate of the generalization error [2].
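A small sketch of this general poly-step transfer function, assuming the exponents enter as λ^{1/p} on the leading r eigenvalues and λ^q on the rest (so that p = q = 2 recovers the square-root/square pair used in section 4.1; the example values are illustrative):

```python
import numpy as np

def poly_step(lam, p, q, r):
    """Eq. (2): lam_i^(1/p) for the r largest eigenvalues, lam_i^q for the
    rest.  `lam` is assumed sorted ascending, as np.linalg.eigh returns it."""
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative eigenvalues
    out = np.empty_like(lam)
    out[-r:] = lam[-r:] ** (1.0 / p)   # dimensions useful for cluster discrimination
    out[:-r] = lam[:-r] ** q           # intra-cluster directions, discounted
    return out

lam = np.array([0.01, 0.04, 0.25, 1.0])   # ascending, as from eigh
phi = poly_step(lam, p=2, q=2, r=2)
```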
To assess the possibility of estimating accurately the test error associated with the poly-step kernel, we computed the span estimate [2] in the same setting as in the previous section. We fixed p = q = 2 and the number of training points to 16 (8 per class). The span estimate and the test error are plotted on the left side of figure 3. Another possibility would be to explore methods that take into account the spectrum of the kernel matrix in order to predict the test error [7].

4.3 Comparison with other algorithms

We summarized the test errors (averaged over 100 trials) of different algorithms trained on 16 labeled examples in the following table. The transductive SVM algorithm consists of maximizing the margin on both labeled and unlabeled points. To some extent it also implements the cluster assumption, since it tends to put the decision function in low-density regions. This algorithm has been successfully applied to text categorization [4] and is a state-of-the-art algorithm for performing semi-supervised learning. The result of the random walk kernel is taken directly from [11]. Finally, the cluster kernel performance has been obtained with p = q = 2 and r = 10 in the transfer function (2). The value of r was the one minimizing the span estimate (see left side of figure 3). Future experiments include for instance the Marginalized kernel (1) with the standard generative model used in text classification by the Naive Bayes classifier [6].

Figure 3: The span estimate predicts accurately the minimum of the test error for different values of the cutoff index r in the poly-step kernel (2). Left: text classification task; right: handwritten digit classification.
4.4 Digit recognition

In a second set of experiments, we considered the task of classifying the handwritten digits 0 to 4 against 5 to 9 of the USPS database. The cluster assumption should apply fairly well on this database, since the different digits are likely to be clustered. 2000 training examples were selected and divided into 50 subsets of 40 examples. For a given run, one of the subsets was used as the labeled training set, whereas the other points remained unlabeled. The width of the RBF kernel was set to 5 (the value minimizing the test error in the supervised case). The mean test error for the standard SVM is 17.8% (standard deviation 3.5%), whereas the transductive SVM algorithm of [4] did not yield a significant improvement (17.6% ± 3.2%). As for the cluster kernel (2), the cut-off index r was again selected by minimizing the span estimate (see right side of figure 3). It gave a test error of 14.9% (standard deviation 3.3%). It is interesting to note in figure 3 the local minimum at r = 10, which can be interpreted easily, since it corresponds to the number of different digits in the database. It is somewhat surprising that the transductive SVM algorithm did not improve the test error on this classification problem, whereas it did for text classification. We conjecture the following explanation: the transductive SVM is more sensitive to outliers in the unlabeled set than the cluster kernel methods, since it directly tries to maximize the margin on the unlabeled points. For instance, in the top middle part of figure 1, there is an unlabeled point which would probably have perturbed this algorithm. However, in high-dimensional problems such as text classification, the influence of outlier points is smaller. Another explanation is that this method can get stuck in local minima, but that again, in a higher-dimensional space, it is easier to get out of local minima.
5 Conclusion

In a discriminative setting, a reasonable way to incorporate unlabeled data is through the cluster assumption. Based on the ideas of spectral clustering and random walks, we proposed a framework for constructing kernels which implement the cluster assumption: the induced distance depends on whether the points are in the same cluster or not. This is done by changing the spectrum of the kernel matrix. Since there exist several bounds for SVMs which depend on the shape of this spectrum, the main direction for future research is to perform automatic model selection based on these theoretical results. Finally, note that the cluster assumption might also be useful in a purely supervised learning task.

Acknowledgments The authors would like to thank Martin Szummer for helpful discussions on this topic and for having provided us with his database.

References
[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT: Proceedings of the Workshop on Computational Learning Theory. Morgan Kaufmann Publishers, 1998.
[2] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for support vector machines. Machine Learning, 46(1-3):131-159, 2002.
[3] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems, volume 11, pages 487-493. The MIT Press, 1998.
[4] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200-209. Morgan Kaufmann, San Francisco, CA, 1999.
[5] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, volume 14, 2001.
[6] K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. Learning to classify text from labeled and unlabeled documents.
In Proceedings of AAAI-98, 15th Conference of the American Association for Artificial Intelligence, pages 792-799, Madison, US, 1998. AAAI Press, Menlo Park, US.
[7] B. Schölkopf, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Generalization bounds via eigenvalues of the Gram matrix. Technical Report 99-035, NeuroColt, 1999.
[8] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1310, 1998.
[9] M. Seeger. Covariance kernels from Bayesian generative models. In Advances in Neural Information Processing Systems, volume 14, 2001.
[10] M. Seeger. Learning with labeled and unlabeled data. Technical report, Edinburgh University, 2001.
[11] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems, volume 14, 2001.
[12] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 2002. To appear. Also presented at ISMB 2002.
[13] Y. Weiss. Segmentation using eigenvectors: A unifying view. In International Conference on Computer Vision, pages 975-982, 1999.
|
2002
|
172
|
2,183
|
A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences Eric P. Xing, Michael I. Jordan, Richard M. Karp and Stuart Russell Computer Science Division University of California, Berkeley Berkeley, CA 94720 {epxing,jordan,karp,russell}@cs.berkeley.edu Abstract We propose a dynamic Bayesian model for motifs in biopolymer sequences which captures rich biological prior knowledge and positional dependencies in motif structure in a principled way. Our model posits that the position-specific multinomial parameters for monomer distribution are distributed as a latent Dirichlet-mixture random variable, and the position-specific Dirichlet component is determined by a hidden Markov process. Model parameters can be fit on training motifs using a variational EM algorithm within an empirical Bayesian framework. Variational inference is also used for detecting hidden motifs. Our model improves over previous models that ignore biological priors and positional dependence. It has much higher sensitivity to motifs during detection and a notable ability to distinguish genuine motifs from false recurring patterns. 1 Introduction The identification of motif structures in biopolymer sequences such as proteins and DNA is an important task in computational biology and is essential in advancing our knowledge about biological systems. For example, the gene regulatory motifs in DNA provide key clues about the regulatory network underlying the complex control and coordination of gene expression in response to physiological or environmental changes in living cells [11].
There have been several lines of research on statistical modeling of motifs [7, 10], which have led to algorithms for motif detection such as MEME [1] and BioProspector [9]. Unfortunately, although these algorithms work well for simple motif patterns, often they are incapable of distinguishing what biologists would recognize as a true motif from a random recurring pattern [4], and they provide no mechanism for incorporating biological knowledge of motif structure and sequence composition. Most motif models assume independence of position-specific multinomial distributions of monomers such as nucleotides (nt) and amino acids (aa). Such strategies contradict our intuition that the sites in motifs naturally possess spatial dependencies for functional reasons. Furthermore, the vague Dirichlet prior used in some of these models acts as no more than a smoother, taking little account of the rich prior knowledge in biologically identified motifs. In this paper we describe a new model for monomer distribution in motifs. Our model is based on a finite set of informative Dirichlet distributions and a (first-order) Markov model for transitions between Dirichlets. The distribution of the monomers is a continuous mixture of position-specific multinomials which admit a Dirichlet prior according to the hidden Markov states, introducing both multi-modal prior information and dependencies. We also propose a framework for decomposing the general motif model into a local alignment model for motif pattern and a global model for motif instance distribution, which allows complex models to be developed in a modular way. To simplify our discussion, we use DNA motif modeling as a running example in this paper, though it should be clear that the model is applicable to other sequence modeling problems.
2 Preliminaries DNA motifs are short (about 6-30 bp) stochastic string patterns (Figure 1) in the regulatory sequences of genes that facilitate control functions by interacting with specific transcriptional regulatory proteins. Each motif typically appears once or multiple times in the control regions of a small set of genes. Each gene usually harbors several motifs. We do not know the patterns of most motifs, in which gene they appear and where they appear. The goal of motif detection is to identify instances of possible motifs hidden in sequences and learn a model for each motif for future prediction. A regulatory DNA sequence can be fully specified by a character string
over the alphabet {A, T, C, G}, together with an indicator string x that signals the locations of the motif occurrences. The reason to call a motif a stochastic string pattern rather than a word is the variability in the "spellings" of different instances of the same motif in the genome. Conventionally, biologists display a motif pattern (of length L) by a multi-alignment of all its instances. The stochasticity of motif patterns is reflected in the heterogeneity of the nucleotide species appearing in each column (corresponding to a position or site in the motif) of the multi-alignment. We denote the multi-alignment of all instances of a motif specified by the indicator string x in sequence y by A. Since any A can be characterized by the nucleotide counts for each column, we define a counting matrix h(A) (or simply h), where each column h_l = [h_{l,1}, ..., h_{l,4}]^T is an integer vector with four elements, giving the number of occurrences of each nucleotide at position l of the motif. (Similarly we can define the counting vector h_0 for the whole sequence y.) With these settings, one can model the nt-distribution at a position l of the motif by a position-specific multinomial distribution θ_l = [θ_{l,1}, ..., θ_{l,4}]. Formally, the problem of inferring the indicators {x^(1), ..., x^(N)} and the matrix Θ = [θ_1, ..., θ_L] (often called a position-weight matrix, or PWM), given a sequence set Y = {y^(1), ..., y^(N)},
is motif detection in a nutshell.¹

Figure 1: Yeast motifs (solid line) with 30 bp flanking regions (dashed line); panels: abf1 (21), gal4 (14), gcn4 (24), gcr1 (17), mat-a2 (12), mcb (16), mig1 (11), crp (24). The x axis indexes position and the y axis represents the information content of the multinomial distribution θ_l of nt at position l. Note the two typical patterns: the U-shape and the bell-shape.

Figure 2: (Left) A general motif model is a Bayesian multinet. Conditional on the value of x, y admits different distributions (round-cornered boxes) parameterized by θ. (Right) The HMDM model for motif instances specified by a given x. Boxes are plates representing replicates.

¹Multiple motif detection can be formulated in a similar way, but for simplicity we omit this elaboration. See the full paper for details. Also for simplicity, we omit the superscript (n) (the sequence index) of the variables x and y wherever it is unnecessary.

3 Generative models for regulatory DNA sequences

3.1 General setting and related work

Without loss of generality, assume that the occurrences of motifs in a DNA sequence, as indicated by x, are governed by a global distribution P(x | θ_g, M_g); for each type of motif, the nucleotide sequence pattern shared by all its instances admits a local alignment model P(y_m | x, θ_m, M_m). (Usually, the background non-motif sequences are modeled by a simple conditional model P(y_bg | x, θ_bg)
, where the background nt-distribution parameters $\theta_0$ are assumed to be learned a priori from the entire sequence and supplied as constants in the motif detection process.) The symbols $\theta_G$, $\theta_A$ and $\mathcal{M}_G$, $\mathcal{M}_A$ stand for the parameters and model classes in the respective submodels. Thus, the likelihood of a regulatory sequence $y$ is:

$P(y \mid \theta_G, \theta_A, \theta_0, \mathcal{M}_G, \mathcal{M}_A) = \sum_x P(x \mid \theta_G, \mathcal{M}_G)\, P(y \mid x, \theta_A, \theta_0, \mathcal{M}_A)$   (1)

where the sum ranges over the possible indicator configurations $x$. Note that $\theta_A$ here is not necessarily equivalent to the position-specific multinomial parameters $\theta$ in Eq. 2 below, but is a generic symbol for the parameters of a general model of aligned motif instances. The model $P(x \mid \theta_G, \mathcal{M}_G)$ captures properties such as the frequencies of different motifs and the dependencies between motif occurrences. Although specifying this model is an important aspect of motif detection and remains largely unexplored, we defer this issue to future work. In the current paper, our focus is on capturing the intrinsic properties within motifs that can help to improve sensitivity and specificity to genuine motif patterns. For this, the key lies in the local alignment model $P(y \mid x, \theta_A, \mathcal{M}_A)$, which determines the PWM of the motif. Depending on the value of the latent indicator $x_t$ (a motif or not at position $t$), $y_t$ admits different probabilistic models, such as a motif alignment model or a background model. Thus sequence $y$ is characterized by a Bayesian multinet [6], a mixture model in which each component of the mixture is a specific nt-distribution model corresponding to sequences of a particular nature. Our goal in this paper is to develop an expressive local alignment model $P(y \mid x, \theta_A, \mathcal{M}_A)$ capable of capturing characteristic site-dependencies in motifs.

In the standard product-multinomial (PM) model for local alignment, the columns of a PWM are assumed to be independent [9]. Thus the likelihood of a multi-alignment $A$ is:

$P(A \mid \theta) = \prod_{l=1}^{L} \prod_{j=1}^{4} \theta_{l,j}^{h_{l,j}}$   (2)

Although a popular model for many motif finders, PM nevertheless is sensitive to noise and random or trivial recurrent patterns, and is unable to capture potential site-dependencies inside the motifs. Pattern-driven auxiliary submodels (e.g., the fragmentation model [10]) or heuristics (e.g., split a 'two-block' motif into two coupled sub-motifs [9, 1]) have been developed to handle special patterns such as the U-shaped motifs, but they are inflexible and difficult to generalize. Some of the literature has introduced vague Dirichlet priors for $\theta$ in the PM [2, 10], but they are primarily used for smoothing rather than for explicitly incorporating prior knowledge about motifs. We depart from the PM model and introduce a dynamic hierarchical Bayesian model for motif alignment, which captures site dependencies inside the motif so that we can predict biologically more plausible motifs, and incorporate prior knowledge of nucleotide frequencies of general motif sites. In order to keep the local alignment model our main focus as well as to simplify the presentation, we adopt an idealized global motif distribution model called "one-per-sequence" [8], which, as the name suggests, assumes each sequence harbors one motif instance (at an unknown location). Generalization to more expressive global models is straightforward and is described in the full paper.

3.2 Hidden Markov Dirichlet-Multinomial (HMDM) Model

In the HMDM model, we assume that there are underlying latent nt-distribution prototypes, according to which position-specific multinomial distributions of nt are determined, and that each prototype is represented by a Dirichlet distribution. Furthermore, the choice of prototype at each position in the motif is governed by a first-order Markov process. More precisely, a multi-alignment $A$ containing motif instances is generated by the following process. First we sample a sequence of prototype indicators $(q_1,\dots,q_L)$
from a first-order Markov process with initial distribution $\pi$ and transition matrix $B$. Then we repeat the following for each column $l$: (1) A component from a mixture of Dirichlets $\{\mathrm{Dir}(\alpha_k)\}_{k=1}^{K}$, where each $\alpha_k \in \mathbb{R}_+^4$, is picked according to indicator $q_l$; say we picked $\mathrm{Dir}(\alpha_k)$. (2) A multinomial distribution $\theta_l$ is sampled according to $P(\theta_l \mid \alpha_k)$, the probability defined by Dirichlet component $k$ over all such distributions. (3) All the nucleotides in column $l$ are generated i.i.d. according to $\mathrm{Multi}(\theta_l)$. The complete likelihood of motif alignment $A$, characterized by counting matrix $H = (h_1,\dots,h_L)$, is:

$P(A, \vec{q}, \theta \mid \pi, B, \vec{\alpha}) = P(q_1 \mid \pi) \prod_{l=2}^{L} P(q_l \mid q_{l-1}, B)\; \prod_{l=1}^{L} P(\theta_l \mid \alpha_{q_l})\; \prod_{l=1}^{L} \prod_{j=1}^{4} \theta_{l,j}^{h_{l,j}}$   (3)

The major role of HMDM is to impose dynamic priors for modeling data whose distributions exhibit temporal or spatial dependencies. As Figure 2(b) makes clear, this model is not a simple HMM for discrete sequences. In such a model the transition would be between the emission models (i.e., multinomials) themselves, and the output at each time would be a single data instance in the sequence. In HMDM, the transitions are between different priors of the emission models, and the direct output of the HMM is the parameter vector of a generative model, which will be sampled multiple times at each position to generate random instances. This approach is especially useful when we have either empirical or learned prior knowledge about the dynamics of the data to be modeled. For example, for the case of motifs, biological evidence shows that conserved positions (manifested by a low-entropy multinomial nt-distribution) are likely to concatenate, and maybe so do the less conserved positions. However, it is unlikely that conserved and less conserved positions are interpolated [4]. This is called site clustering, and is one of the main motivations for the HMDM model.

4 Inference and Learning

4.1 Variational Bayesian Learning

In order to do Bayesian estimation of the motif parameters $\theta$, and to predict the locations of motif instances via $x$, we need to be able to compute the posterior distribution $P(x, \theta \mid y)$, which is infeasible in a complex motif model. Thus we turn to variational approximation [5]. We seek to approximate the joint posterior over parameters and hidden states $\{x, \theta\}$ with a simpler distribution $Q(x, \theta) = Q_x(x)\, Q_\theta(\theta)$, where $Q_x$ and $Q_\theta$ can be, for the time being, thought of as free distributions to be optimized. Using Jensen's inequality, we have the following lower bound on the log likelihood:

$\log P(y) \;\ge\; \sum_x \int Q_x(x)\, Q_\theta(\theta)\, \log \frac{P(x, y, \theta)}{Q_x(x)\, Q_\theta(\theta)}\, d\theta \;=\; \log P(y) - \mathrm{KL}\big( Q_x Q_\theta \,\|\, P(x, \theta \mid y) \big)$   (4)

Thus, maximizing the lower bound of the log likelihood (call it $\mathcal{F}(Q_x, Q_\theta)$).
Maximizing $\mathcal{F}$ with respect to the free distributions $Q_x$ and $Q_\theta$ is equivalent to minimizing the KL divergence between the true joint posterior and its variational approximation. Keeping either $Q_x$ or $Q_\theta$ fixed and maximizing $\mathcal{F}$ with respect to the other, we obtain the following coupled updates:

$Q_x(x) \propto \exp\big\{ \langle \log P(x, y, \theta) \rangle_{Q_\theta} \big\}$   (5)
$Q_\theta(\theta) \propto \exp\big\{ \langle \log P(x, y, \theta) \rangle_{Q_x} \big\}$   (6)

In our motif model, the prior and the conditional submodels form a conjugate-exponential pair (Dirichlet-Multinomial). It can be shown that in this case we can essentially recover the same form of the original conditional and prior distributions in their variational approximations, except that the parameterization is augmented with appropriate Bayesian and posterior updates, respectively:

$Q_x(x) = P(x \mid y, \langle \phi \rangle)$   (7)
$Q_\theta(\theta) = P(\theta \mid \langle H \rangle)$   (8)

where $\langle \phi \rangle = \langle \phi(\theta) \rangle_{Q_\theta}$ ($\phi$ is the natural parameter) and $\langle H \rangle = \langle H(x, y) \rangle_{Q_x}$. As Eqs. 7 and 8 make clear, the locality of inference and marginalization on the latent variables is preserved in the variational approximation, which means probabilistic calculations can be performed in the prior and the conditional models separately and iteratively. For motif modeling, this modular property means that the motif alignment model and the motif distribution model can be treated separately, with a simple interface of the posterior mean for the motif parameters and expected sufficient statistics for the motif instances.

4.2 Inference and learning

According to Eq. 8, we replace the counting matrix $H$ in Eq. 3, which is the output of the HMDM model, by the expected counting matrix $\langle H \rangle$ obtained from inference in the global distribution model (we will handle this later, thanks to the locality preservation property of inference in variational approximations), and proceed with the inference as if we had "observations" $\langle H \rangle$. Integrating over $\theta$, we have the marginal distribution:

$P(\langle H \rangle \mid \pi, B, \vec{\alpha}) = \sum_{\vec{q}} P(q_1 \mid \pi) \prod_{l=2}^{L} P(q_l \mid q_{l-1}, B) \prod_{l=1}^{L} P(\langle h_l \rangle \mid q_l, \vec{\alpha})$   (9)

a standard HMM with emission probability:

$P(\langle h_l \rangle \mid q_l = k, \vec{\alpha}) = \frac{\Gamma(|\alpha_k|)}{\Gamma(|\alpha_k| + |\langle h_l \rangle|)} \prod_{j=1}^{4} \frac{\Gamma(\alpha_{k,j} + \langle h_{l,j} \rangle)}{\Gamma(\alpha_{k,j})}$   (10)

where $|\alpha_k| = \sum_j \alpha_{k,j}$ and $|\langle h_l \rangle| = \sum_j \langle h_{l,j} \rangle$. We can compute the posterior probability of the hidden states $\gamma_l(k) = P(q_l = k \mid \langle H \rangle)$ and the matrix of co-occurrence probabilities $\xi_l(k, k') = P(q_l = k, q_{l+1} = k' \mid \langle H \rangle)$ using the standard forward-backward algorithm. We next compute the expectation of the natural parameters (which is $\langle \log \theta_l \rangle$ for multinomial parameters). Given the "observations" $\langle H \rangle$, the posterior mean is computed as follows:

$\langle \log \theta_{l,j} \rangle = \sum_{k=1}^{K} \gamma_l(k) \Big[ \Psi\big(\alpha_{k,j} + \langle h_{l,j} \rangle\big) - \Psi\Big( \sum_{j'} \big(\alpha_{k,j'} + \langle h_{l,j'} \rangle\big) \Big) \Big]$   (11)

where $\gamma_l(k)$ is the posterior probability of the hidden state (an output of the forward-backward algorithm) and $\Psi(w) = \frac{d}{dw} \log \Gamma(w)$ is the digamma function. Following Eq. 7, given the posterior means of the multinomial parameters, computing the expected counting matrix $\langle H \rangle$ under the one-per-sequence global model for sequence set $\{y^{(n)}\}_{n=1}^{N}$ is straightforward based on Eq. 2, and we simply give the final results:

$\langle h_{l,j} \rangle = \sum_{n=1}^{N} \sum_{t} P\big(x_t^{(n)} = 1 \mid y^{(n)}\big)\, \delta\big(y_{t+l-1}^{(n)} = j\big)$   (12)

where

$P\big(x_t^{(n)} = 1 \mid y^{(n)}\big) = \frac{1}{Z_n} \exp\Big\{ \sum_{l=1}^{L} \big[ \langle \log \theta_{l,\, y_{t+l-1}^{(n)}} \rangle - \log \theta_{0,\, y_{t+l-1}^{(n)}} \big] \Big\}$   (13)

and $Z_n$ normalizes over the candidate positions $t$. Bayesian estimates of the multinomial parameters for the position-specific nt-distribution of the motif are obtained via fixed-point iteration under the following EM-like procedure:

- Variational E step: Compute the expected sufficient statistic, the count matrix $\langle H \rangle$, via inference in the global motif model given $\langle \log \theta_l \rangle$.
- Variational M step: Compute the expected natural parameter $\langle \log \theta_l \rangle$ via inference in the local motif alignment model given $\langle H \rangle$.

This basic inference and learning procedure provides a framework that scales readily to more complex models. For example, the motif distribution model $P(x \mid \theta_G, \mathcal{M}_G)$ can be made more sophisticated so as to model complex properties of multiple motifs, such as motif-level dependencies (e.g., co-occurrence, overlaps and concentration within regulatory modules), without complicating the inference in the local alignment model. Similarly, the motif alignment model can also be more expressive (e.g., a mixture of HMDMs) without interfering with inference in the motif distribution model.

5 Experiments

We test the HMDM model on a motif collection from The Promoter Database of Saccharomyces cerevisiae (SCPD). Our dataset contains twenty motifs, each of which has 6 to 32 instances, all identified via biological experiments. We begin with an experiment showing how HMDM can capture intrinsic properties of the motifs. The posterior distribution of the position-specific multinomial parameters $\theta_l$, reflected in the parameters of the Dirichlet mixtures learned from data, can reveal the nt-distribution patterns of the motifs. Examining the transition probabilities between different Dirichlet components further tells us about the dependencies between adjacent positions (which indirectly reveals the "shape" information).
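To make the prototype structure examined in this section concrete, the generative process behind Eq. 3 can be sketched as follows. This is a minimal stdlib-only sketch under assumptions: the two Dirichlet prototypes (one peaked on A, one near-uniform), the initial distribution, and the "sticky" transition matrix mimicking site clustering are made-up illustration values, not parameters from the paper.

```python
import random

random.seed(0)

# Made-up HMDM ingredients (assumptions for illustration):
ALPHAS = [[10.0, 0.5, 0.5, 0.5],  # "homogeneous" prototype, peaked on A
          [2.0, 2.0, 2.0, 2.0]]   # "heterogeneous" prototype, near-uniform
PI = [0.5, 0.5]                   # initial distribution over prototypes
B = [[0.9, 0.1],                  # sticky transitions: prototypes tend to
     [0.1, 0.9]]                  # persist (site clustering)

def categorical(p):
    """Draw an index according to probability vector p."""
    r, acc = random.random(), 0.0
    for i, p_i in enumerate(p):
        acc += p_i
        if r < acc:
            return i
    return len(p) - 1

def dirichlet(alpha):
    """Sample a multinomial parameter vector from Dir(alpha)."""
    draws = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(draws)
    return [d / s for d in draws]

def sample_alignment(L=6, M=10):
    """Sample an L-column alignment of M motif instances from the HMDM
    process: Markov chain over prototypes -> Dirichlet -> multinomial."""
    q = [categorical(PI)]
    for _ in range(1, L):
        q.append(categorical(B[q[-1]]))      # first-order Markov transition
    alignment = []
    for l in range(L):
        theta_l = dirichlet(ALPHAS[q[l]])    # position-specific multinomial
        alignment.append([categorical(theta_l) for _ in range(M)])  # i.i.d. column
    return q, alignment

q, aln = sample_alignment()
```

Columns generated while the chain sits in prototype 0 tend to be conserved (mostly nucleotide 0, i.e. A), while prototype-1 columns look close to uniform, which is exactly the homogeneous/heterogeneous contrast discussed above.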
We set the total number of Dirichlet components to 8 based on an intelligent guess (using biological intuition), and Figure 3(a) shows the Dirichlet parameters fitted from the dataset via empirical Bayes estimation. Among the 8 Dirichlet components, numbers 1-4 favor a pure distribution of single nucleotides A, T, G, C, respectively, suggesting they correspond to "homogeneous" prototypes, whereas numbers 7 and 8 favor a near-uniform distribution of all 4 nt-types, hence "heterogeneous" prototypes. Components 5 and 6 are somewhat in between. Such patterns agree well with the biological definition of motifs. Interestingly, from the learned transition model of the HMM (Figure 3(b)), it can be seen that the transition probability from a homogeneous prototype to a heterogeneous prototype is significantly less than that between two homogeneous or two heterogeneous prototypes, confirming an empirical speculation in biology that motifs have the so-called site clustering property [4].

Figure 3: (a) Posterior Dirichlet hyperparameters. (b) Markov transition matrix. (c) Boxplots of hit and mis-hit rate of HMDM (1) and PM (2) on two motifs (abf1, gal4) used during HMDM training.

Are the motif properties captured in HMDM useful in motif detection? We first examine an HMDM trained on the complete dataset for its ability to detect motifs used in training in the presence of a "decoy": a permuted motif. By randomly permuting the positions in the motif, the shapes of the "U-shaped" motifs (e.g., abf1 and gal4) change dramatically.2 We insert each instance of a motif/decoy pair into a 300-500 bp random background sequence at random positions $t$ and $t'$, respectively.3 We allow a 3 bp offset as a tolerance window, and score a hit when $|\hat{t} - t| \le 3$ (and a mis-hit when $|\hat{t} - t'| \le 3$), where $\hat{t}$ is the position where a motif instance is found. The (mis)hit rate is the proportion of (mis)hits to the total number of motif instances to be found in an experiment. Figure 3(c) shows a boxplot of the hit and mis-hit rates of HMDM on abf1 and gal4 over 50 randomly generated experiments. Note the dramatic contrast between the sensitivity of HMDM to true motifs and that of the PM model (which is essentially the MEME model).

Figure 4: Motif detection on an independent test dataset (the 8 motifs in Figure 1). The four models used are indexed as: 1. HMDM(bell); 2. HMDM(U); 3. HMDM-mixture; 4. PM. The boxplots of hit rates are over 80 randomly generated experiments (the center of the notch is the median). (a) true motif only; (b) true motif + decoy.

How well does HMDM generalize? We split our data into a training set and a testing set, and further divide the training set roughly based on bell-shaped and U-shaped patterns to train two different HMDMs, respectively, as well as a mixture of HMDMs. In the first motif finding task, we are given sequences each of which has only one true motif instance at a random position. The results are given in Figure 4(a). We see that for 4 motifs, using an HMDM or the HMDM-mixture significantly improves performance over the PM model. In three other cases they are comparable, but for motif mcb, all HMDM models lose. Note that mcb is very "conserved," which is in fact "atypical" in the training set. It is also very short, which diminishes the utility of an HMM. Another interesting observation from Figure 4(a) is that even when both HMDMs perform poorly, the HMDM-mixture can still perform well (e.g., mat-a2), presumably because of the extra flexibility provided by the mixture model. The second task is more challenging and biologically more realistic, in that we have both the true motifs and the permuted "decoys." We show only the hit rate over 80 experiments in Figure 4(b). Again, in most cases HMDM or the HMDM-mixture outperforms PM.

2By permutation we mean that each time the same permuted order is applied to all the instances of a motif, so that the multinomial distribution of each position is not changed but their order is.

3We resisted the temptation of using biological background sequences because we would not know if and how many other motifs are in such sequences, which renders them ill-suited for purposes of evaluation.

6 Conclusions

We have presented a generative probabilistic framework for modeling motifs in biopolymer sequences. Naively, categorical random variables with spatial/temporal dependencies can be modeled by a standard HMM with multinomial emission models. However, the limited flexibility of each multinomial distribution and the concomitant need for a potentially large number of states to model complex domains may require a large parameter count and lead to overfitting. The infinite HMM [3] solves this issue by replacing the emission model with a Dirichlet process, which provides potentially infinite flexibility. However, this approach is purely data-driven and provides no mechanism for explicitly capturing multi-modality
HMDM assumes that positional dependencies are induced at a higher level among the finite number of informative Dirichlet priors rather than between the multinomials themselves. Within such a framework, we can explicitly capture the multi-modalities of the multinomial distributions governing the categorical variable (such as motif sequences at different positions) and the dependencies between modalities, by learning the model parameters from training data and using them for future predictions. In motif modeling, such a strategy was used to capture different distribution patterns of nucleotides (homogeneous and heterogeneous) and transition properties between patterns (site clustering). Such a prior proves to be beneficial in searching for unseen motifs in our experiment and helps to distinguish more probable motifs from biologically meaningless random recurrent patterns. Although in the motif detection setting the HMDM model involves a complex missing data problem in which both the output and the internal states of the HMDM are hidden, we show that a variational Bayesian learning procedure allows probabilistic inference in the prior model of motif sequence patterns and in the global distribution model of motif locations to be carried out virtually separately with a Bayesian interface connecting the two processes. This divide and conquer strategy makes it much easier to develop more sophisticated models for various aspects of motif analysis without being overburdened by the somewhat daunting complexity of the full motif problem. References [1] T. L. Bailey and C. Elkan. Unsupervised learning of multiple motifs in biopolymers using EM. Machine Learning, 21:51–80, 1995. [2] T. L. Bailey and C. Elkan. The value of prior knowledge in discovering motifs with MEME. In Proc. of the 3rd International Conf. on Intelligent Systems for Molecular Biology, 1995. [3] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Proc. 
of 14th Conference on Advances in Neural Information Processing Systems, 2001. [4] M. Eisen. Structural properties of transcription factor-DNA interactions and the inference of sequence specificity. manuscript in preparation. [5] Z. Ghahramani and M.J. Beal. Propagation algorithms for variational Bayesian learning. In Proc. of 13th Conference on Advances in Neural Information Processing Systems, 2000. [6] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: the combination of knowledge and statistics data. Machine Learning, 20:197–243, 1995. [7] C. Lawrence and A. Reilly. An expectation maximization (EM) algorithm for the identification and characterization of common sites in unaligned biopolymer sequences. Proteins, 7:41–51, 1990. [8] C.E. Lawrence, S.F. Altschul, M.S. Boguski, J.S. Liu, A.F. Neuwald, and J.C. Wootton. Detecting subtle sequence signals: A Gibbs sampling strategy for multiple alignment. Science, 262:208–214, 1993. [9] J. Liu, X. Liu, and D.L. Brutlag. Bioprospector: Discovering conserved DNA motifs in upstream regulatory regions of co-expressed genes. In Proc. of PSB, 2001. [10] J.S. Liu, A.F. Neuwald, and C.E. Lawrence. Bayesian models for multiple local sequence alignment and Gibbs sampling strategies. J. Amer. Statistical Assoc, 90:1156–1169, 1995. [11] A. M. Michelson. Deciphering genetic regulatory codes: A challenge for functional genomics. Proc. Natl. Acad. Sci. USA, 99:546–548, 2002.
Speeding up the Parti-Game Algorithm Maxim Likhachev School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 maxim+@cs.cmu.edu Sven Koenig College of Computing Georgia Institute of Technology Atlanta, GA 30312-0280 skoenig@cc.gatech.edu Abstract In this paper, we introduce an efficient replanning algorithm for nondeterministic domains, namely what we believe to be the first incremental heuristic minimax search algorithm. We apply it to the dynamic discretization of continuous domains, resulting in an efficient implementation of the parti-game reinforcement-learning algorithm for control in high-dimensional domains. 1 Introduction We recently developed Lifelong Planning A* (LPA*), a search algorithm for deterministic domains that combines incremental and heuristic search to reduce its search time [1]. Incremental search reuses information from previous searches to find solutions to series of similar search tasks faster than is possible by solving each search task from scratch [2], while heuristic search uses distance estimates to focus the search and solve search problems faster than uninformed search. In this paper, we extend LPA* to nondeterministic domains. We believe that the resulting search algorithm, called Minimax LPA*, is the first incremental heuristic minimax search algorithm. We apply it to the dynamic discretization of continuous domains, resulting in an efficient implementation of the popular parti-game algorithm [3]. Our first experiments suggest that this implementation of the parti-game algorithm can be an order of magnitude faster in two-dimensional domains than one with uninformed search from scratch and thus might allow the parti-game algorithm to scale up to larger domains. There also exist other ways of decreasing the amount of search performed by the parti-game algorithm. 
We demonstrate some advantages of Minimax LPA* over Prioritized Sweeping [4] in [5], but it is future work to compare it with the algorithms developed in [6].

2 Parti-Game Algorithm

The objective of the parti-game algorithm is to move an agent from given start coordinates to given goal coordinates in continuous and potentially high-dimensional domains with obstacles of arbitrary shapes. It is popular because it is simple, efficient, and applies to a broad range of control problems. To solve these problems, one can first discretize the domains and then use conventional search algorithms to determine plans that move the agent to the goal coordinates. However, uniform discretizations can prevent one from finding a plan if they are too coarse-grained (for example, because the resolution prevents one from noticing small gaps between obstacles) and result in large state spaces that cannot be searched efficiently if they are too fine-grained. The parti-game algorithm solves this dilemma by starting with a coarse discretization and refining it during execution only when and where it is needed (for example, around obstacles), resulting in a nonuniform discretization.

Figure 1: Example behavior of the parti-game algorithm. (Panels (a)-(f) show successive discretizations of the example domain and the corresponding state spaces, annotated with the h-, g-, rhs-, and gd-values of each state.)

We use a simple two-dimensional robot navigation domain to illustrate the behavior of the parti-game algorithm. Figure 1(a) shows the initial discretization of our example domain into 12 large cells, together with the start coordinates of the agent (A) and the goal region (the cell containing $s_{goal}$). The agent can always attempt to move towards the center of each adjacent cell (that is, each cell that its current cell shares a border line with). The agent can thus initially attempt to move towards the centers of its adjacent cells, as shown in the figure. Figure 1(b) shows the state space that corresponds to the discretized domain under this assumption. Each state corresponds to a cell and each action corresponds to a movement option. The parti-game algorithm initially ignores obstacles and makes the optimistic (and sometimes wrong) assumption that each action deterministically reaches the intended state, for example, that the agent indeed reaches an adjacent cell if it is somewhere in its current cell
and moves towards the center of that adjacent cell. The cost of an action outcome approximates the Euclidean distance from the center of the old cell of the agent to the center of its new cell.1 (The cost of an action outcome is infinity if the old and new cells are identical, since the action then cannot be part of a plan that minimizes the worst-case plan-execution cost from the current state of the agent to $s_{goal}$.)

The parti-game algorithm then determines whether the minimax goal distance of the current state of the agent is finite. If so, the parti-game algorithm repeatedly chooses the action that minimizes the worst-case plan-execution cost, until the agent reaches $s_{goal}$ or observes additional action outcomes. Initially, the minimax goal distance of the current state of the agent is finite, and the agent minimizes the worst-case plan-execution cost by moving towards the center of either of two equally good adjacent cells. Assume that it decides to move towards the center of one of them. The agent always continues to move until it either gets blocked by an obstacle or enters a new cell. Here, it immediately gets blocked by an obstacle. When the agent observes additional action outcomes, it adds them to the state space. Thus, it now assumes that it can end up in either of two cells if it is somewhere in its current cell and moves towards the center of the cell it just attempted to reach. The same scenario repeats when the agent attempts to move towards the centers of the two remaining adjacent cells but gets blocked both times by an obstacle. Figure 1(c) shows the state space after the first two attempted moves, and Figure 1(d) shows the state space after the third attempted move. The minimax goal distance of the current state of the agent is now infinite. We say that a state is unsolvable if an agent in that state is not guaranteed to reach $s_{goal}$ with finite plan-execution cost. In this case, the parti-game algorithm refines the discretization by splitting all solvable cells that border unsolvable cells and all unsolvable cells that border solvable cells. Each cell is split into two cells perpendicular to its longest axis. (The axis of the split is chosen randomly for square cells.) Figure 1(e) shows the new discretization of the domain. The parti-game algorithm then removes those states (and their actions) from the state space that correspond to the old cells and adds states (and actions) for the new cells, again making the optimistic assumption that each action for the new states deterministically reaches the intended state. This ensures that the minimax goal distance of the current state of the agent becomes finite. Figure 1(f) shows the resulting state space. The parti-game algorithm now repeats the process until either the agent reaches $s_{goal}$ or the domain cannot be discretized any further because the resolution limit is reached.

Figure 2: Example of a nondeterministic action

If all actions either did indeed deterministically reach their intended states or did not change the state of the agent at all (as in the example from Figure 1), then the parti-game algorithm could determine the minimax goal distances of the states with a deterministic search algorithm after it has removed all actions that have an action outcome that leaves the state unchanged (since these actions cannot be part of a plan with minimal worst-case plan-execution cost). However, actions can have additional outcomes, as Figure 2 illustrates. For example, an agent can end up not only in $t_1$ and $t_2$ but also in $t_0$ if it moves from somewhere in its current cell towards the center of $t_1$. The parti-game algorithm therefore needs to determine the minimax goal distances of the states with a minimax search algorithm. Furthermore, the parti-game algorithm repeatedly determines plans that minimize the worst-case plan-execution cost from the current state of the agent to $s_{goal}$. It is therefore important to make the searches fast.

1We compute both the costs of action outcomes and the heuristics of states using an imaginary uniform grid, shown in gray in Figures 1(a) and (e), whose cell size corresponds to the resolution limit of the parti-game algorithm. The cost of an action outcome is then computed as the maximum of the absolute values of the differences of the x and y coordinates between the imaginary grid cells that contain the centers of the new and old states of the agent.
The cost of an action is computed as the maximum of the absolute differences of the x and y coordinates between the imaginary grid cells that contain the center of the new and old state of the agent. Similarly, the heuristic of a state is computed as the maximum of the absolute differences of the x and y coordinates between the imaginary grid cell that contains the center of the current state of the agent and the imaginary grid cell that contains the center of the state in question. Note that the grid is imaginary and never needs to be constructed. Furthermore, it is only used to compute the costs and heuristics and does not restrict either the placement of obstacles or the movement of the agent. The pseudocode uses the following functions to manage the priority queue U: U.Top() returns a state with the smallest priority of all states in U. U.TopKey() returns the smallest priority of all states in U. (If U is empty, then U.TopKey() returns infinity.) U.Pop() deletes the state with the smallest priority in U and returns the state. U.Insert(s, k) inserts s into U with priority k. U.Update(s, k) changes the priority of s in U to k. (It does nothing if the current priority of s already equals k.) Finally, U.Remove(s) removes s from U.

procedure CalculateKey(s)
01 return [min(g(s), rhs(s)) + h(s_curr, s); min(g(s), rhs(s))];

procedure Initialize()
02 U = emptyset;
03 for all s in S: rhs(s) = g(s) = infinity;
04 rhs(s_goal) = 0;
05 U.Insert(s_goal, CalculateKey(s_goal));

procedure UpdateState(u)
06 if (u != s_goal) rhs(u) = min_{a in A(u)} max_{s' in Succ(u, a)} (c(u, a, s') + g(s'));
07 if (u in U) U.Remove(u);
08 if (g(u) != rhs(u)) U.Insert(u, CalculateKey(u));

procedure ComputePlan()
09 while (U.TopKey() < CalculateKey(s_curr) OR rhs(s_curr) != g(s_curr))
10   u = U.Pop();
11   if (g(u) > rhs(u)) /* u is locally overconsistent */
12     g(u) = rhs(u);
13     for all s in Pred(u): UpdateState(s);
14   else /* u is locally underconsistent */
15     g(u) = infinity;
16     for all s in Pred(u) ∪ {u}: UpdateState(s);

procedure Main()
17 s_curr = s_start;
18 Initialize();
19 ComputePlan();
20 while (s_curr != s_goal)
21   /* if rhs(s_curr) = infinity then the agent is not guaranteed to reach s_goal with finite plan-execution cost */
22   Execute a = argmin_{a in A(s_curr)} max_{s' in Succ(s_curr, a)} (c(s_curr, a, s') + g(s'));
23   Set s_curr to the current state of the agent after the action execution;
24   Scan for changed action costs;
25   if any action costs have changed
26     for all actions with changed action costs c(u, a, s')
27       Update the action cost c(u, a, s');
28       UpdateState(u);
29     for all s in U
30       U.Update(s, CalculateKey(s));
31   ComputePlan();

Figure 3: Minimax LPA*
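The priority-queue operations above (Top, TopKey, Pop, Insert, Update, Remove) with lexicographically compared key pairs can be sketched on top of a binary heap with lazy deletion. This is an illustrative sketch, not the authors' implementation; the class and method names are invented:

```python
import heapq

class PriorityQueue:
    """Sketch of the queue U used by Minimax LPA*: states keyed by pairs
    that are compared lexicographically. Updates are handled lazily by
    invalidating stale heap entries instead of removing them in place."""
    def __init__(self):
        self.heap = []       # entries: [key, state, valid-flag]
        self.entries = {}    # state -> its current (valid) entry

    def insert(self, state, key):
        entry = [key, state, True]
        self.entries[state] = entry
        heapq.heappush(self.heap, entry)

    def remove(self, state):
        self.entries.pop(state)[2] = False  # mark the old entry stale

    def update(self, state, key):
        if self.entries[state][0] != key:
            self.remove(state)
            self.insert(state, key)

    def _prune(self):
        while self.heap and not self.heap[0][2]:
            heapq.heappop(self.heap)

    def top_key(self):
        self._prune()
        # An empty queue reports an infinite key, as in the pseudocode.
        return self.heap[0][0] if self.heap else (float('inf'), float('inf'))

    def pop(self):
        self._prune()
        entry = heapq.heappop(self.heap)
        del self.entries[entry[1]]
        return entry[1]
```

The experiments section below states only that binary heaps were used; the lazy-invalidation scheme here is one common way to support Update and Remove on top of them.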
It is therefore important to make the searches fast. In the next sections, we describe Minimax LPA* and how to implement the parti-game algorithm with it. Figures 1(b), (c), (d) and (f) show the state spaces for our example directly after the parti-game algorithm has used Minimax LPA* to determine the minimax goal distance of the current state of the agent. All expanded states (that is, all states whose minimax goal distances have been computed) are shown in gray. Minimax LPA* speeds up the searches by reusing information from previous searches, which is the reason why it expands only three states in Figure 1(d). Minimax LPA* also speeds up the searches by using heuristics to focus them, which is the reason why it expands only four states in Figure 1(f).

3 Minimax LPA*

Minimax LPA* repeatedly determines plans that minimize the worst-case plan-execution cost from s_curr to s_goal as the agent moves towards s_goal in nondeterministic domains where the costs of actions increase or decrease over time. It generalizes two incremental search algorithms, namely our LPA* [1] and DynamicSWSF-FP [7]. Figure 3 shows the algorithm, which we describe in the following. Numbers in curly braces refer to the line numbers in the figure.

3.1 Notation

S denotes the finite set of states. s_start ∈ S is the start state, and s_goal ∈ S is the goal state. A(s) is the set of actions that can be executed in state s. Succ(s, a) is the set of successor states that can result from the execution of a ∈ A(s) in s. Succ(s) = {s' | s' ∈ Succ(s, a) for some a ∈ A(s)} is the set of successor states of s. Pred(s) = {s' | s ∈ Succ(s', a) for some a ∈ A(s')} is the set of predecessor states of s. The agent incurs cost c(s, a, s') if the execution of a in s results in s'. gd(s) is the minimax goal distance of s, defined as the solution of the system of equations: gd(s) = 0 if s = s_goal, and gd(s) = min_{a ∈ A(s)} max_{s' ∈ Succ(s, a)} (c(s, a, s') + gd(s')) for all s ∈ S with s ≠ s_goal. s_curr is the current state of the agent, and the minimal worst-case plan-execution cost from s_curr to s_goal is gd(s_curr).

3.2 Heuristics and Variables

Minimax LPA* searches backward from s_goal to s_curr and uses heuristics h(s_curr, s) to focus its search. The heuristics need to be non-negative and satisfy h(s_curr, s_curr) = 0 and h(s_curr, s') ≤ h(s_curr, s) + c(s, a, s') for all s, s' ∈ S and a ∈ A(s) with s' ∈ Succ(s, a). In other words, the heuristics h(s_curr, s) approximate the best-case plan-execution cost from s_curr to s. Minimax LPA* maintains two variables for each state that it encounters during the search. The g-value of a state estimates its minimax goal distance. It is carried forward from one search to the next one and can be used after each search to determine a plan that minimizes the worst-case plan-execution cost from s_curr to s_goal. The rhs-value of a state also estimates its minimax goal distance. It is a one-step lookahead value based on the g-values of its successors and thus potentially better informed than its g-value. It always satisfies the following relationship (Invariant 1): rhs(s) = 0 if s = s_goal, and rhs(s) = min_{a ∈ A(s)} max_{s' ∈ Succ(s, a)} (c(s, a, s') + g(s')) for all s ∈ S with s ≠ s_goal. A state is called locally consistent iff its g-value is equal to its rhs-value. Minimax LPA* also maintains a priority queue U that always contains exactly the locally inconsistent states (Invariant 2). Their priorities are always identical to their current keys (Invariant 3), where the key k(s) of s is the pair [min(g(s), rhs(s)) + h(s_curr, s); min(g(s), rhs(s))], as calculated by CalculateKey(). The keys are compared according to a lexicographic ordering.

3.3 Algorithm

Minimax LPA* operates as follows. The main function Main() first calls Initialize() {18} to set the g-values and rhs-values of all states to infinity {03}. The only exception is the rhs-value of s_goal, which is set to zero {04}. Thus, s_goal is the only locally inconsistent state and is inserted into the otherwise empty priority queue {02, 05}. (Note that, in an actual implementation, Minimax LPA* needs to initialize a state only once it encounters it during the search and thus does not need to initialize all states up front. This is important because the number of states can be large and only a few of them might be reached during the search.) Then, Minimax LPA* calls ComputePlan() to compute a plan that minimizes the worst-case plan-execution cost from s_curr to s_goal {19}. If the agent has not reached s_goal yet {20}, it executes the first action of the plan {22} and updates s_curr {23}. It then scans for changed action costs {24}. To maintain Invariants 1, 2, and 3, it calls UpdateState() if some action costs have changed {28} to update the rhs-values and keys of the states potentially affected by the changed action costs, as well as their membership in the priority queue if they become locally consistent or inconsistent. It then recalculates the priorities of all states in the priority queue {29-30}. This is necessary because the heuristics change when the agent moves, since they are computed with respect to s_curr. This only changes the priorities of the states in the priority queue but not which states are locally consistent and thus in the priority queue. Finally, it recalculates a plan {31} and repeats the process.

ComputePlan() operates as follows. It repeatedly removes the locally inconsistent state with the smallest key from the priority queue {10} and expands it {11-16}. It distinguishes two cases. A state is called locally overconsistent iff its g-value is larger than its rhs-value. We can prove that the rhs-value of a locally overconsistent state that is about to be expanded is equal to its minimax goal distance. ComputePlan() therefore sets the g-value of the state to its rhs-value {12}. A state is called locally underconsistent iff its g-value is smaller than its rhs-value. In this case, ComputePlan() sets the g-value of the state to infinity {15}. In either case, ComputePlan() ensures that Invariants 1, 2 and 3 continue to hold {13, 16}. It terminates as soon as s_curr is locally consistent and its key is less than or equal to the keys of all locally inconsistent states.

Theorem 1 ComputePlan() of Minimax LPA* expands each state at most twice and thus terminates. Assume that, after ComputePlan() terminates, one starts in s_curr and always executes an action a in the current state s that minimizes max_{s' ∈ Succ(s, a)} (c(s, a, s') + g(s')) until s_goal is reached (ties can be broken arbitrarily). Then, the plan-execution cost is no larger than the minimax goal distance of s_curr.

We can also prove several additional theorems about the efficiency of Minimax LPA*, including the fact that it only expands those states whose g-values are not already correct [5]. To reduce its search time, we optimize Minimax LPA* in several ways, for example, to avoid unnecessary re-computations of the rhs-values [5]. We use these optimizations in the experiments. A more detailed description, the intuition behind Minimax LPA*, examples of its operation, and additional theorems and their proofs can be found in [5].

4 Using Minimax LPA* to Implement the Parti-Game Algorithm

Figure 4 shows how Minimax LPA* can be used to implement the parti-game algorithm in a more efficient way than with uninformed search from scratch, using some of the functions from Figure 3.

procedure Main()
17' s_curr = s_start;
18' while (s_curr != s_goal)
19'   Refine the discretization, if possible (initially: construct the first discretization);
20'   Construct the state space that corresponds to the current discretization;
21'   Initialize();
22'   ComputePlan();
23'   if (rhs(s_curr) = infinity) stop with no solution;
24'   while (s_curr != s_goal AND rhs(s_curr) != infinity)
25'     s_prev = s_curr;
26'     Execute a = argmin_{a in A(s_prev)} max_{s' in Succ(s_prev, a)} (c(s_prev, a, s') + g(s'));
27'     Set s_curr to the new state of the agent after the action execution;
28'     if (s_curr not in Succ(s_prev, a))
29'       Succ(s_prev, a) = Succ(s_prev, a) ∪ {s_curr};
30'       Succ(s_prev) = Succ(s_prev) ∪ {s_curr};
31'       Pred(s_curr) = Pred(s_curr) ∪ {s_prev};
32'       UpdateState(s_prev);
33'       for all s in U
34'         U.Update(s, CalculateKey(s));
35'       ComputePlan();

Figure 4: Parti-game algorithm using Minimax LPA*

Initially, the parti-game algorithm constructs a first (coarse) discretization of the terrain {19'}, constructs the corresponding state space (which includes setting s_curr to the state of the agent, s_goal to the state that includes the goal coordinates, and Succ(s, a), Succ(s), and Pred(s) according to the optimistic assumption that each action deterministically reaches the intended state) {20'}, and uses ComputePlan() to find a first plan from scratch {21'-22'}. If the minimax goal distance of s_curr is infinity, then it stops unsuccessfully {23'}. Otherwise, it repeatedly executes the action that minimizes the worst-case plan-execution cost {26'-27'}. If it observes an unknown action outcome {28'}, then it records it {29'-31'}, ensures that Invariants 1, 2 and 3 continue to hold {32'-34'}, uses ComputePlan() to find a new plan incrementally {35'}, and then continues to execute actions until either s_curr is unsolvable or the agent reaches s_goal {24'}. In the former case, it refines the discretization {19'}, uses ComputePlan() to find a new plan from scratch rather than incrementally (because the discretization changes the state space substantially) {20'-22'}, and then repeats the process.

The heuristic of a state in our version of the parti-game algorithm approximates the Euclidean distance from the center of the current cell of the agent to the center of the cell that corresponds to the state in question. The resulting heuristics have the property that we described in Section 3.2. Figures 1(b), (c), (d) and (f) show the heuristics, g-values and rhs-values of all states directly after the call to ComputePlan(). All expanded states are shown in gray, and all locally inconsistent states (that is, states in the priority queue) are shown in bold.

It happens quite frequently that s_curr is unsolvable and the parti-game algorithm thus has to refine the discretization. If s_curr is unsolvable, Minimax LPA* expands a large number of states because it has to disprove the existence of a plan rather than find one. We speed up Minimax LPA* for the special case where s_curr is unsolvable but every other state is solvable, since it occurs about half of the time when s_curr is unsolvable. If states other than s_curr become unsolvable, some of them need to be predecessors of s_curr. To prove that s_curr is unsolvable but every other state is solvable, Minimax LPA* can therefore show that all predecessors of s_curr are solvable but s_curr itself is not. To show that all predecessors of s_curr are solvable, Minimax LPA* checks that they are locally consistent, their keys are no larger than U.TopKey(), and their rhs-values are finite. To show that s_curr is unsolvable, Minimax LPA* checks that the rhs-value of s_curr is infinite. We use this optimization in the experiments.

5 Experimental Results

An implementation of the parti-game algorithm can use search from scratch or incremental search. It can also use uninformed search (using the zero heuristic) and informed search (using the heuristic that we used in the context of the example from Figure 1). We compare the four resulting combinations. All of them use binary heaps to implement the priority queue and the same optimizations, but the implementations with search from scratch do not contain any code needed only for incremental search. Since all implementations move the agent in the same way, we compare their number of state expansions, their total run times, and their total search times (that is, the part of the run times spent in the search routines), averaged over 25 two-dimensional terrains with 30 percent obstacle density, where the resolution limit is one cell. In each case, the goal coordinates are in the center of the terrain, and the start coordinates are in the vertical center and ten percent to the right of the left edge. We also report the average of the ratios of the three measures for each of the four implementations and the one with incremental heuristic search (which is different from the ratio of the averages), together with their 95-percent confidence intervals.

Implementation of Parti-Game Algorithm with ... | Expansions | Ratio Expansions | Run Time (Search Time)        | Ratio Run Time (Search Time)
Uninformed from Scratch                         | 69,527,969 | 20.55 ± 4.12     | 39 min 51 sec (37 min 43 sec) | 11.83 ± 3.52 (15.29 ± 3.61)
Informed from Scratch                           | 31,303,253 | 8.06 ± 2.59      | 22 min 58 sec (20 min 49 sec) | 6.08 ± 2.50 (7.20 ± 2.70)
Uninformed Incremental                          | 2,628,879  | 1.23 ± 0.03      | 1 min 54 sec (1 min 41 sec)   | 1.04 ± 0.02 (1.19 ± 0.05)
Informed Incremental                            | 2,172,430  | 1.00 ± 0.00      | 1 min 45 sec (1 min 28 sec)   | 1.00 ± 0.00 (1.00 ± 0.00)

The average number of searches, measured by calls to ComputePlan(), is 29,885 until the agent reaches s_goal. The table shows that the search times of the parti-game algorithm are substantial due to the large number of searches performed (even though each search is fast), and that the searches take up most of its run time. Thus, speeding up the searches is important. The table also shows that incremental and heuristic search individually speed up the parti-game algorithm and together speed it up even more. The implementations of the parti-game algorithm in [3] and [6] make slightly different assumptions from ours, for example, minimize state transitions rather than cost.
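All four implementations compute the same minimax goal distances, defined by gd(s) = min over actions of max over outcomes of (cost plus gd of the successor). As an illustration of that backup alone (a toy graph, invented names, and plain value iteration rather than any of the search methods compared here):

```python
INF = float('inf')

def minimax_goal_distances(states, actions, cost, goal, iterations=100):
    """actions[s] is a list of actions, each given as the list of its
    possible successor states; cost(s, s2) is the transition cost.
    Returns gd with gd(s) = min over actions of max over outcomes of
    (cost + gd(successor)), the minimax goal distance."""
    gd = {s: (0.0 if s == goal else INF) for s in states}
    for _ in range(iterations):  # plain value iteration until convergence
        for s in states:
            if s == goal or not actions[s]:
                continue
            gd[s] = min(
                max(cost(s, s2) + gd[s2] for s2 in outcomes)
                for outcomes in actions[s]
            )
    return gd

# Toy nondeterministic domain: from A, one action reaches B for sure, the
# other may end up in G or B; from B the only action reaches G.
states = ['A', 'B', 'G']
actions = {'A': [['B'], ['G', 'B']], 'B': [['G']], 'G': []}
gd = minimax_goal_distances(states, actions, lambda s, t: 1.0, 'G')
```

Under unit costs, gd(A) = 2 here: the nondeterministic action is worth no more than the safe one, because its worst outcome also lands in B.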
Al-Ansari reports that the original implementation of the parti-game algorithm with value iteration performs about 80 percent, and that his implementation with a simple uninformed incremental search method performs about 15 percent, of the state expansions of the implementation with uninformed search from scratch [6]. Our results show that our implementation with Minimax LPA* performs about 5 percent of the state expansions of the implementation with uninformed search from scratch. While these results are not directly comparable, we also have first results in which we ran the original implementation with value iteration and our implementation with Minimax LPA* on a very similar environment, and the original implementation expanded one to two orders of magnitude more states than ours, even though its number of searches and its final number of states were smaller. However, these results are very preliminary, since the time per state expansion differs between the implementations, and it is future work to compare the various implementations of the parti-game algorithm in a common testbed.

References

[1] S. Koenig and M. Likhachev. Incremental A*. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[2] D. Frigioni, A. Marchetti-Spaccamela, and U. Nanni. Fully dynamic algorithms for maintaining shortest paths trees. Journal of Algorithms, 34(2):251-281, 2000.
[3] A. Moore and C. Atkeson. The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning, 21(3):199-233, 1995.
[4] A. Moore and C. Atkeson. Prioritized sweeping: Reinforcement learning with less data and less time. Machine Learning, 13(1):103-130, 1993.
[5] M. Likhachev and S. Koenig. Speeding up reinforcement learning with incremental heuristic minimax search.
Technical Report GIT-COGSCI-2002/5, College of Computing, Georgia Institute of Technology, Atlanta (Georgia), 2002. [6] M. Al-Ansari. Efficient Reinforcement Learning in Continuous Environments. PhD thesis, College of Computer Science, Northeastern University, Boston (Massachusetts), 2001. [7] G. Ramalingam and T. Reps. An incremental algorithm for a generalization of the shortest-path problem. Journal of Algorithms, 21:267–305, 1996.
Kernel-based Extraction of Slow Features: Complex Cells Learn Disparity and Translation Invariance from Natural Images

Alistair Bray and Dominique Martinez*
CORTEX Group, LORIA-INRIA, Nancy, France
bray@loria.fr, dmartine@loria.fr

Abstract

In Slow Feature Analysis (SFA [1]), it has been demonstrated that high-order invariant properties can be extracted by projecting inputs into a nonlinear space and computing the slowest changing features in this space; this has been proposed as a simple general model for learning nonlinear invariances in the visual system. However, this method is highly constrained by the curse of dimensionality, which limits it to simple theoretical simulations. This paper demonstrates that by using a different but closely-related objective function for extracting slowly varying features ([2, 3]), and then exploiting the kernel trick, this curse can be avoided. Using this new method we show that both the complex cell properties of translation invariance and disparity coding can be learnt simultaneously from natural images when complex cells are driven by simple cells also learnt from the image.

The notion of maximising an objective function based upon the temporal predictability of output has been progressively applied in modelling the development of invariances in the visual system. Földiák used it indirectly via a Hebbian trace rule for modelling the development of translation invariance in complex cells [4] (closely related to many other models [5, 6, 7]); this rule has been used to maximise invariance as one component of a hierarchical system for object and face recognition [8]. On the other hand, similar functions have been maximised directly in networks for extracting linear [2] and nonlinear [9, 1] visual invariances. Direct maximisation of such functions has recently been used to model complex cells [10] and as an alternative to maximising sparseness/independence in modelling simple cells [11].
Slow Feature Analysis [1] combines many of the best properties of these methods to provide a good general nonlinear model. That is, it uses an objective function that minimises the first-order temporal derivative of the outputs; it provides a closed-form solution which maximises this function by projecting inputs into a nonlinear space; it exploits sphering (or PCA-whitening) of the data to ensure that all outputs have unit variance and are uncorrelated. However, the method suffers from the curse of dimensionality in that the nonlinear feature space soon becomes very large as the input dimension grows, and yet this feature space must be represented explicitly in order for the essential sphering to occur. The alternative that we propose here is to use the objective function of Stone [2, 9], which maximises output variance over a long period whilst minimising variance over a shorter period; in the linear case, this can be implemented by a biologically plausible mixture of Hebbian and anti-Hebbian learning on the same synapses [2]. In recent work, Stone has proposed a closed-form solution for maximising this function in the linear domain of blind source separation that does not involve data-sphering. This paper describes how this method can be kernelised. The use of the "kernel trick" allows projection of inputs into a nonlinear kernel-induced feature space of very high (possibly infinite) dimension which is never explicitly represented or accessed. This leads to an efficient method that maps to an architecture that could be biologically implemented either by Sigma-Pi neurons, or fixed RBF networks (as described for SFA [1]). We demonstrate that using this method to extract features that vary slowly in natural images leads to the development of both the complex-cell properties of translation invariance and disparity coding simultaneously.

*http://www.loria.fr/equipes/cortex/
1 Finding Slow Features with kernels

Given l time-series vectors x_i (i = 1, ..., l), where each n-dimensional vector x_i is a linear mixture of n unknown but temporally predictable parameters at time i, the problem in [3] is to find an n-dimensional weight vector w so that the output y_i = wᵀx_i at each i is a scaled version of a particular parameter. Many quasi-invariant parameters underlying perceptual data exhibit these properties of short-term predictability and long-term variability. Accordingly, an objective function F can be defined as the ratio between the long-term variance V and the short-term variance S of the output sequence, i.e.

F = V / S = (Σ_i ỹ_i²) / (Σ_i ŷ_i²)    (1)

where ỹ_i and ŷ_i represent the output at i centered using long- and short-term means respectively. The aim is to find the parameters that maximize F, which can be rewritten as:

F = (wᵀ C̃ w) / (wᵀ Ĉ w),  where  C̃ = (1/l) Σ_i x̃_i x̃_iᵀ  and  Ĉ = (1/l) Σ_i x̂_i x̂_iᵀ

where C̃ and Ĉ are n×n covariance matrices estimated from the l inputs. F is a version of the Rayleigh quotient and the problem to be solved is, in analogy to PCA, the right-handed generalized symmetric eigenproblem:

C̃ w = λ Ĉ w    (2)

where λ is the largest eigenvalue and w the corresponding eigenvector. In this case, the component extracted y = wᵀx corresponds to the most predictable component with F = λ. Most importantly, more than one component can be extracted by considering successive eigenvalues and eigenvectors, which are orthogonal in the metrics C̃ and Ĉ, i.e. w_iᵀ C̃ w_j = 0 and w_iᵀ Ĉ w_j = 0 for i ≠ j.

To make this algorithm nonlinear we can first project the data x into some high-dimensional feature space via a nonlinear mapping φ, and then find the weight vector w that maximizes F in this space. In this case, to optimise Eq. (2) the covariance matrices must be estimated in the feature space as

C̃ = (1/l) Σ_i φ̃(x_i) φ̃(x_i)ᵀ,  Ĉ = (1/l) Σ_i φ̂(x_i) φ̂(x_i)ᵀ

where φ̃(x_i) and φ̂(x_i) represent the data centered in the feature space.
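The linear case can be sketched with standard numerical tools: build the two covariance matrices from long- and short-term centered data and solve the generalized symmetric eigenproblem. This is an illustrative reconstruction, not the authors' code; the window-based running means stand in for the centring scheme, and all window sizes and names are invented:

```python
import numpy as np
from scipy.linalg import eigh

def predictable_components(X, short_win=5, long_win=50):
    """Sketch of maximising F = (w^T C_long w) / (w^T C_short w), where
    C_long / C_short are covariances of data centred with long- and
    short-term running means. X is (l, n): l time steps, n dimensions."""
    def centre(X, win):
        # trailing running mean over the last `win` samples (a simple
        # stand-in for the exponential averages used in the paper)
        means = np.array([X[max(0, i - win + 1):i + 1].mean(axis=0)
                          for i in range(len(X))])
        return X - means
    Xt, Xs = centre(X, long_win), centre(X, short_win)
    C_long = Xt.T @ Xt / len(X)
    C_short = Xs.T @ Xs / len(X) + 1e-8 * np.eye(X.shape[1])  # regularise
    # Generalized symmetric eigenproblem C_long w = lambda C_short w;
    # eigh returns ascending eigenvalues, so reverse to put F_max first.
    vals, vecs = eigh(C_long, C_short)
    return vals[::-1], vecs[:, ::-1]
```

On a mixture of a slow sinusoid and white noise, the top eigenvector recovers (up to scale) the direction carrying the slow source, since its short-term variance after centring is tiny.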
The problem with this straightforward approach is that the dimensionality of the feature space quickly becomes huge as the input dimension increases [1]. To prevent this we use the kernel trick: to avoid working with the mapped data directly, we assume that the solution w can be written as an expansion in terms of the mapped training data: w = Σ_{i=1}^{l} α_i φ(x_i). We can now rewrite the numerator (likewise the denominator) in F as wᵀ C̃ w = αᵀ K̃ K̃ᵀ α, where α = (α_1 ... α_l)ᵀ and K is an l×l matrix with entries defined as K_ij = φ(x_i)ᵀ φ(x_j). F can now be written as:

F = (αᵀ K̃ K̃ᵀ α) / (αᵀ K̂ K̂ᵀ α)    (3)

To avoid explicitly computing dot products in the feature space, we introduce kernel functions defined as k(x, y) = φ(x)ᵀ φ(y), which means we just have to evaluate kernels in the input space. Any kernel involved in Support Vector Machines can be used, e.g. linear, polynomial, RBF or sigmoid. By now defining the kernel matrix K with entries

K_ij = k(x_i, x_j)    (4)

we can arrive at the corresponding eigenproblem:

K̃ K̃ᵀ α = λ K̂ K̂ᵀ α    (5)

where λ is again the corresponding largest eigenvalue, equal to F. As for the linear case, more than one source can be extracted by considering successive eigenvalues and eigenvectors. In order to recover a temporal component, we need only to compute the nonlinear projection y = wᵀ φ(x) of a new input x onto w, which is equivalent to y = Σ_{i=1}^{l} α_i k(x_i, x).

Finding a sparse solution

If the eigenproblem is solved on the entire training set then this algorithm also suffers from the curse of dimensionality, since the l×l matrices easily become computationally intractable. A sparse solution using a small subset p of the training data in the expansion is therefore essential: this is called the basis set BS. The output is now y = Σ_{i ∈ BS} α_i k(x_i, x), and the solution must lie in the subspace spanned by BS. The kernel elements K_ij are computed between the p basis vectors x_i and the l training data x_j. Thus, K, K̃ and K̂ are rectangular p×l, but the covariance matrices (K̃ K̃ᵀ) and (K̂ K̂ᵀ) used in the eigenproblem are only p×p.
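The sparse kernel formulation can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the RBF kernel, the window-based centring (in place of the online exponential averages), and all parameter names are assumptions:

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(A, B, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_slow_features(X, basis, short_win=5, long_win=50, gamma=1.0):
    """K is the (p x l) matrix of kernel values between the p basis
    vectors and the l training inputs, so the generalized eigenproblem
    solved is only p x p."""
    K = rbf_kernel(basis, X, gamma)                      # p x l
    def centre(K, win):
        # centre each column with a trailing mean over `win` time steps
        means = np.stack([K[:, max(0, j - win + 1):j + 1].mean(axis=1)
                          for j in range(K.shape[1])], axis=1)
        return K - means
    Kt, Ks = centre(K, long_win), centre(K, short_win)
    A = Kt @ Kt.T / K.shape[1]                           # long-term, p x p
    B = Ks @ Ks.T / K.shape[1] + 1e-8 * np.eye(len(basis))  # short-term
    vals, alphas = eigh(A, B)                            # A a = lambda B a
    alphas = alphas[:, ::-1]                             # best first

    def output(Xnew, j=0):
        # y(x) = sum over basis of alpha_i * k(x_i, x)
        return rbf_kernel(basis, Xnew, gamma).T @ alphas[:, j]
    return vals[::-1], output
```

The feature space itself is never represented; only kernel evaluations against the p basis vectors are needed, both for training and for projecting new inputs.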
This approach can effectively solve very large problems, provided p << l. The question of course is how to choose the basis vectors: it is both necessary and sufficient that they span the space of the solution in the kernel-induced feature space. In a recent version of the algorithm [12] we use the sparse greedy method of [13] as a preprocessing step. This efficiently finds a small basis set that minimises the least-squares error between data points in feature space and those reconstructed in the feature space defined by the basis set. In the simulations below we used a less efficient greedy algorithm that performed equally well here, but requires a considerably larger basis set¹. The complete online algorithm requires minimal memory, making it ideal for very large data sets. The implementation estimates the long- and short-term kernel means online using exponential time averages parameterised using half-lives λ_s, λ_l (as in [9]). Likewise, the covariance matrices K̂K̂ᵀ and K̃K̃ᵀ are updated online at each time step, e.g. K̃K̃ᵀ is updated to K̃K̃ᵀ + k̃k̃ᵀ, where k̃ is the column vector of kernel values centred using the long-term mean and computed for the current time step; there is therefore no need to explicitly compute or store kernel matrices.

2 Simulation Results

The simulation was performed using a grey-level stereo pair of resolution 128x128, shown in Figure 1[a]. A new 2D direction 0° < θ ≤ 360° was selected at every 64 time steps, and the image was translated by one pixel per time step in this direction (with toroidal wrap-around). A set of 20 monocular simple cells was learnt using the algorithm described in [11] that maximises a nonlinear measure of temporal correlation (TRS) between the present and a previous output, based upon the transfer function g(y) = ln cosh(y). We chose this algorithm since it is based on a nonlinear measure of temporal correlation and yet provides a linear sparse-distributed coding, very similar to that of ICA for describing simple cells [14]. We did not use the objective function described above since in the linear case it yields filters similar to the local Fourier series². The filters were optimised for this particular stereo pair; simulations using a greater variation of more natural images resulted in more spatially localised filters very similar to those in [14, 11]. We used only the 20 most predictable filters since results did not improve through use of the full set. The simple cell receptive field was 8x8, and during learning data was provided by both eyes at one position in the image³. The oriented Gabor-like weight vectors for the 20 cells contributing most to the TRS objective function are shown in Figure 1[b], and the result of processing the left image with these linear filters is shown in Figure 1[c]. The complex cells received input from these 20 types of simple cells when processing both the left and right eye images. Complex cells had a spatial receptive field of 4x4.

¹Vectors x are added to BS if, for all y ∈ BS, |k(x, y)| ≤ τ, where threshold τ is slowly annealed from τ₀ = 1, and the size of BS is set at 400.

²An intuitive explanation for this necessity for nonlinearity in the objective function is provided in [11]; in brief, the temporal correlation of the output of a Gabor-like linear filter is low, whilst a similar correlation for a measure of the power in the filter is high.

³The dimension of the PCA-whitened space was reduced from 63 to 40, with Δt = 1 and η = 10⁻³; 10⁵ input vectors were used.

Figure 1: Training on natural images. [a] Stereo Pair. [b] Linear filters that maximise TRS [11]. [c] Output of filters for left image. [d] Output of nonlinear complex cells in binocular simulation. [e] Output of complex cells in monocular simulation.

Figure 2: Testing on simulated pair used in [9]. [a] Artificial stereo pair. [b] Underlying disparity function.
[c] Output of most predictable complex cell trained on Figure 1[a].

Each cell therefore received 320 simple cell inputs (2x4x4x20); these were normalised to have unit variance and zero mean. The most predictable features were extracted for this input vector over 10⁵ time-steps, using the kernel-based method described above, using data at just one position in the image. The basis set was made up of 400 input vectors, and a polynomial kernel of degree 2 was used. The temporal half-lives for estimating the short- and long-term means in S and V were λ_s = 2, λ_l = 200. The algorithm therefore extracts 400 outputs; we display the outputs for the 8 most predictable (determined by highest eigenvalues) in Figure 1[d]; further values were hard to interpret. Below this, in Figure 1[e], we show the complex outputs obtained if we substitute the right image with the left one in the stereo pair, so making the simulation monocular. Consider first the monocular simulation in [e]. It is visually apparent how the most predictable units are strongly selective for regions of iso-orientation (looking quite different to any simple cell response in [c]). In this particular image, it results in different "T"-shaped parts of the Pentagon of considerable size being distinctly isolated. Since in our network the complex cell receptive field size in the image is only 50% greater than that for the simple cells, this implies translation invariance: over the time (or space) that a simple cell of the correct orientation gives a strong but transitory response, the complex cell provides a strong continuous response. That is, its response is invariant to the phase that determines the profile of the simple cell response. Consider now the stereo simulation in [d]. This tendency is still present (e.g. the 3rd output), but it is confounded with another parameter that isolates the complete shape of the Pentagon from the background.
This is most striking in the output provided by the first feature; that is, this parameter is the most predictable in the image (providing an eigenvalue λ = V/S = 7.28, as opposed to λ ≈ 4 for the "T"-shapes in [e]). This parameter is binocular disparity, generated by the variation in depth of the Pentagon roof compared to the ground. The proof of this lies in Figure 2. Here we have taken the artificial stereo pair used in [9], shown in Figure 2[a], which has been generated using the known eggshell disparity function shown in Figure 2[b]. We presented this to the network trained wholly on the Pentagon stereo pair; it can be seen that the most predictable component, shown in Figure 2[c], replicates the disparity function of [b]⁴.

⁴The output is somewhat noisy, partly because the image has few linear features like those in Figure 1[b]; if we train the simple and complex cells on this image we get a much cleaner result.

3 Discussion

The simulation above confirms that the linear properties of simple cells, and two of the nonlinear properties of complex cells (translation invariance and disparity coding), can be extracted simultaneously from natural images through maximising functions of temporal coherence in the input. Although these properties have been dealt with in others' work discussed above, they have been considered either in isolation or through theoretical simulation. It is only because the kernel-based method we present allows us to work efficiently with large amounts of data in a nonlinear feature space derived from high-dimensional input that we have been able to extract both complex cell properties together from realistic image data. The method described above is computationally efficient.
It is also biologically plausible in as much as [a] it uses a reasonable objective function based on temporal coherence of output, and [b] the final computation required to extract these most predictable outputs could be performed either by Sigma-Pi neurons, or fixed RBF networks (as in SFA [1]). However, we do not claim either that the precise formulation of the objective function is biologically exact, or that a biological system would use the same means to arrive at the final architecture that computes the optimal solution: the learning algorithm is certainly different. Our approach is therefore focussed on the constraints provided by [a] and [b]. The method also exploits a distributed representation for maximising the objective function that results from the generalised eigenvector solution. Is this plausible given the emphasis that has been laid on sparse coding early in the visual system [15]? Sparse representations are often the result of constraining different outputs to be uncorrelated, or, stronger, independent. However, as one ascends the perceptual pathway generating more reduced nonlinear representations, even the constraint of uncorrelated output may be too strong, or unnecessary, to create the highly robust representations exploited by the brain. For example, Rolls reports and defends a highly distributed coding of faces in infero-temporal cortical areas, with cells responding to a large proportion of stimuli to some degree ([16], chapter 5). Our method enforces the constraint that successive eigenvectors are orthogonal in the metrics C̃ and Ĉ, and can result in the partly correlated output expected in the robust distributed coding Rolls proposes. However, this would not be the case if the long-term means used for C̃ are estimated with a temporal half-life sufficiently large that these means do not differ from the true expected values.
Finally, although maximising the sparseness of representation may be inappropriate in deeper cortex, one might suggest that the coding of parameters we obtain in our simulation is not highly distributed across outputs: in reality each complex cell responds to a limited range of disparity and orientation. However, it can be seen in Figure 1[d] that there is a clear separation of orientation, and some mixing of disparity and orientation-sensitivity. It is a feature of our method that different outputs must have different measures of predictability (i.e. eigenvalues). In the case of sparse coding of translation invariance, for example, there is no obvious reason why this assumption should be met by cells coding different orientations alone; it can however be enforced by coding different mixtures of orientation and disparity parameters, leading to distinct eigenvalues. There is certainly no practical or biological reason why these parameters should be carried separately in the visual system (see [1] for discussion). In conclusion, this work provides further support for the fruitful approach of extracting non-trivial parameters through maximisation of objective functions based on temporal properties of perceptual input. One of the challenges here is to extend current linear models into the nonlinear domain whilst limiting the extra complexity they bring, which can lead to excess degrees of freedom and computational problems. We have described here a kernel-based method that goes some way towards this, extracting disparity and translation simultaneously for complex cells trained on natural images.

References

[1] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4), 2002. [2] J. V. Stone and A. J. Bray. A learning rule for extracting spatio-temporal invariances. Network: Computation in Neural Systems, 6(3):429-436, 1995. [3] James V. Stone. Blind source separation using temporal predictability.
Neural Computation, (13):1559-1574, 2001. [4] P. Foldiak. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991. [5] H. G. Barrow and A. J. Bray. A model of adaptive development of complex cortical cells. In I. Aleksander and J. Taylor, editors, Artificial Neural Networks II: Proceedings of the International Conference on Artificial Neural Networks. Elsevier Publishers, 1992. [6] K. Fukushima. Self-organisation of shift-invariant receptive fields. Neural Networks, 12:826-834, 1999. [7] M. Stewart Bartlett and T. J. Sejnowski. Learning viewpoint invariant face representations from visual experience in an attractor network. Network: Computation in Neural Systems, 9(3):399-417, 1998. [8] E. T. Rolls and T. Milward. A model of invariant object recognition in the visual system: Learning rules, activation functions, lateral inhibition, and information-based performance measures. Neural Computation, 12:2547-2572, 2000. [9] J. V. Stone. Learning perceptually salient visual parameters using spatiotemporal smoothness constraints. Neural Computation, 8(7):1463-1492, October 1996. [10] C. Kayser, W. Einhäuser, O. Dümmer, P. König, and K. Körding. Extracting slow subspaces from natural videos leads to complex cells. In ICANN 2001, LNCS 2130, pages 1075-1080. Springer-Verlag, Berlin Heidelberg, 2001. [11] J. Hurri and A. Hyvärinen. Simple-cell-like receptive fields maximise temporal coherence in natural video. Submitted, http://www.cis.hut.fi/~jarmo/publications, 2002. [12] D. Martinez and A. Bray. Nonlinear blind source separation using kernels. IEEE Trans. Neural Networks, 14(1):228-235, Jan. 2003. [13] G. Baudat and F. Anouar. Kernel-based methods and function approximation. International Joint Conference on Neural Networks (IJCNN), pages 1244-1249, 2001. [14] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters. Vision Research, 37:3327-3338, 1997. [15] B. A. Olshausen and D. J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607-609, 1996. [16] E. T. Rolls and G. Deco. Computational Neuroscience of Vision. Oxford University Press, 2002.
| 2002 | 175 | 2,186 |
Monaural Speech Separation

Guoning Hu (Biophysics Program, The Ohio State University, Columbus, OH 43210; hu.117@osu.edu) and DeLiang Wang (Department of Computer and Information Science & Center of Cognitive Science, The Ohio State University, Columbus, OH 43210; dwang@cis.ohio-state.edu)

Abstract

Monaural speech separation has been studied in previous systems that incorporate auditory scene analysis principles. A major problem for these systems is their inability to deal with speech in the high-frequency range. Psychoacoustic evidence suggests that different perceptual mechanisms are involved in handling resolved and unresolved harmonics. Motivated by this, we propose a model for monaural separation that deals with low-frequency and high-frequency signals differently. For resolved harmonics, our model generates segments based on temporal continuity and cross-channel correlation, and groups them according to periodicity. For unresolved harmonics, the model generates segments based on amplitude modulation (AM) in addition to temporal continuity and groups them according to AM repetition rates derived from sinusoidal modeling. Underlying the separation process is a pitch contour obtained according to psychoacoustic constraints. Our model is systematically evaluated, and it yields substantially better performance than previous systems, especially in the high-frequency range.

1 Introduction

In a natural environment, speech usually occurs simultaneously with acoustic interference. An effective system for attenuating acoustic interference would greatly facilitate many applications, including automatic speech recognition (ASR) and speaker identification. Blind source separation using independent component analysis [10] or sensor arrays for spatial filtering requires multiple sensors. In many situations, such as telecommunication and audio retrieval, a monaural (one microphone) solution is required, in which intrinsic properties of speech or interference must be considered.
Various algorithms have been proposed for monaural speech enhancement [14]. These methods assume certain properties of interference and have difficulty in dealing with general acoustic interference. Monaural separation has also been studied using phase-based decomposition [3] and statistical learning [17], but with only limited evaluation. While speech enhancement remains a challenge, the auditory system shows a remarkable capacity for monaural speech separation. According to Bregman [1], the auditory system separates the acoustic signal into streams, corresponding to different sources, based on auditory scene analysis (ASA) principles. Research in ASA has inspired considerable work to build computational auditory scene analysis (CASA) systems for sound separation [19] [4] [7] [18]. Such systems generally approach speech separation in two main stages: segmentation (analysis) and grouping (synthesis). In segmentation, the acoustic input is decomposed into sensory segments, each of which is likely to originate from a single source. In grouping, those segments that likely come from the same source are grouped together, based mostly on periodicity. In a recent CASA model by Wang and Brown [18], segments are formed on the basis of similarity between adjacent filter responses (cross-channel correlation) and temporal continuity, while grouping among segments is performed according to the global pitch extracted within each time frame. In most situations, the model is able to remove intrusions and recover low-frequency (below 1 kHz) energy of target speech. However, this model cannot handle high-frequency (above 1 kHz) signals well, and it loses much of target speech in the high-frequency range. In fact, the inability to deal with speech in the high-frequency range is a common problem for CASA systems. We study monaural speech separation with particular emphasis on the high-frequency problem in CASA.
For voiced speech, we note that the auditory system can resolve the first few harmonics in the low-frequency range [16]. It has been suggested that different perceptual mechanisms are used to handle resolved and unresolved harmonics [2]. Consequently, our model employs different methods to segregate resolved and unresolved harmonics of target speech. More specifically, our model generates segments for resolved harmonics based on temporal continuity and cross-channel correlation, and these segments are grouped according to common periodicity. For unresolved harmonics, it is well known that the corresponding filter responses are strongly amplitude-modulated and the response envelopes fluctuate at the fundamental frequency (F0) of target speech [8]. Therefore, our model generates segments for unresolved harmonics based on common AM in addition to temporal continuity. The segments are grouped according to AM repetition rates. We calculate AM repetition rates via sinusoidal modeling, which is guided by target pitch estimated according to characteristics of natural speech. Section 2 describes the overall system. In section 3, systematic results and a comparison with the Wang-Brown system are given. Section 4 concludes the paper.

2 Model description

Our model is a multistage system, as shown in Fig. 1. Description for each stage is given below.

2.1 Initial processing

First, an acoustic input is analyzed by a standard cochlear filtering model with a bank of 128 gammatone filters [15] and subsequent hair cell transduction [12]. This peripheral processing is done in time frames 20 ms long with 10 ms overlap between consecutive frames. As a result, the input signal is decomposed into a group of time-frequency (T-F) units. Each T-F unit contains the response from a certain channel at a certain frame.
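The gammatone filter of [15] has a standard impulse response, t^(n-1) exp(-2πb·ERB(fc)·t) cos(2πfc·t). A sketch with commonly used constants (the exact values here are illustrative, not necessarily those of the paper's implementation):

```python
import numpy as np

def gammatone_ir(fc, fs, n=4, b=1.019, duration=0.025):
    """Impulse response of an n-th order gammatone filter at center
    frequency fc (Hz), sampled at fs.  The ERB scale and bandwidth
    factor b follow common implementations; constants are illustrative."""
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)  # equivalent rectangular bandwidth (Hz)
    g = t**(n - 1) * np.exp(-2 * np.pi * b * erb * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))             # crude peak normalization

# Filter a signal with one channel of the bank (channel at 2.6 kHz).
fs = 16000
sig = np.random.default_rng(0).standard_normal(fs // 2)
response = np.convolve(sig, gammatone_ir(2600.0, fs), mode="same")
```

A full 128-channel bank would simply repeat this for center frequencies spaced on the ERB scale.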
The envelope of the response is obtained by a lowpass filter with passband [0, 1 kHz] and a Kaiser window of 18.25 ms.

(Figure 1. Schematic diagram of the proposed multistage system: the mixture passes through peripheral and mid-level processing, initial segregation, pitch tracking, unit labeling, and final segregation, followed by resynthesis of the segregated speech.)

Mid-level processing is performed by computing a correlogram (autocorrelation function) of the individual responses and their envelopes. These autocorrelation functions reveal response periodicities as well as AM repetition rates. The global pitch is obtained from the summary correlogram. For clean speech, the autocorrelations generally have peaks consistent with the pitch and their summation shows a dominant peak corresponding to the pitch period. With acoustic interference, a global pitch may not be an accurate description of the target pitch, but it is reasonably close. Because a harmonic extends for a period of time and its frequency changes smoothly, target speech likely activates contiguous T-F units. This is an instance of the temporal continuity principle. In addition, since the passbands of adjacent channels overlap, a resolved harmonic usually activates adjacent channels, which leads to high cross-channel correlations. Hence, in initial segregation, the model first forms segments by merging T-F units based on temporal continuity and cross-channel correlation. Then the segments are grouped into a foreground stream and a background stream by comparing the periodicities of unit responses with global pitch. A similar process is described in [18]. Fig. 2(a) and Fig. 2(b) illustrate the segments and the foreground stream. The input is a mixture of a voiced utterance and a cocktail party noise (see Sect. 3). Since the intrusion is not strongly structured, most segments correspond to target speech. In addition, most segments are in the low-frequency range. The initial foreground stream successfully groups most of the major segments.
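The per-channel autocorrelations and the summary correlogram described above can be sketched as follows; the synthetic frames and parameter values are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def frame_autocorr(x, max_lag):
    """Autocorrelation of one frame of one channel's response,
    evaluated at lags 0..max_lag (in samples)."""
    x = x - np.mean(x)
    return np.array([np.dot(x[:len(x) - L], x[L:]) for L in range(max_lag + 1)])

def summary_correlogram(channel_frames, max_lag):
    """Sum autocorrelations across channels; the lag of the dominant peak
    (excluding very small lags) estimates the global pitch period."""
    acs = np.array([frame_autocorr(f, max_lag) for f in channel_frames])
    return acs.sum(axis=0)

fs = 16000
t = np.arange(int(0.02 * fs)) / fs             # one 20-ms frame
f0 = 125.0                                     # 8-ms pitch period (illustrative)
frames = [np.sin(2 * np.pi * f0 * t + p) for p in (0.0, 1.0, 2.0)]
summary = summary_correlogram(frames, max_lag=int(0.0125 * fs))
lo = int(0.002 * fs)                           # search within [2 ms, 12.5 ms]
period = (lo + np.argmax(summary[lo:])) / fs   # close to 1/f0 = 8 ms
```

The short-window autocorrelation slightly biases the peak toward smaller lags, which is why the recovered period is only approximately 8 ms.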
2.2 Pitch tracking

In the presence of acoustic interference, the global pitch estimated in mid-level processing is generally not an accurate description of target pitch. To obtain accurate pitch information, target pitch is first estimated from the foreground stream. At each frame, the autocorrelation functions of T-F units in the foreground stream are summated. The pitch period is the lag corresponding to the maximum of the summation in the plausible pitch range: [2 ms, 12.5 ms]. Then we employ the following two constraints to check its reliability. First, an accurate pitch period at a frame should be consistent with the periodicity of the T-F units at this frame in the foreground stream. At frame j, let τ(j) represent the estimated pitch period, and A(i, j, τ) the autocorrelation function of uij, the unit in channel i. uij agrees with τ(j) if

A(i, j, τ(j)) / A(i, j, τm) > θd,  (1)

where θd = 0.95, the same threshold used in [18], and τm is the lag corresponding to the maximum of A(i, j, τ) within [2 ms, 12.5 ms].

(Figure 2. Results of initial segregation for a speech and cocktail-party mixture. (a) Segments formed; each segment corresponds to a contiguous black region. (b) Foreground stream.)

τ(j) is considered reliable if more than half of the units in the foreground stream at frame j agree with it. Second, pitch periods in natural speech vary smoothly in time [11]. We stipulate the difference between reliable pitch periods at consecutive frames be smaller than 20% of the pitch period, justified from pitch statistics. Unreliable pitch periods are replaced by new values extrapolated from reliable pitch points using temporal continuity. As an example, suppose at two consecutive frames j and j+1 that τ(j) is reliable while τ(j+1) is not. All the channels corresponding to the T-F units agreeing with τ(j) are selected.
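Eq. (1) and the majority vote on reliability translate directly into code. The thresholds follow the text; the synthetic autocorrelations below are illustrative:

```python
import numpy as np

def agrees(ac, tau, theta_d=0.95, lag_range=(32, 200)):
    """Does a unit's autocorrelation ac (indexed by lag in samples) agree
    with pitch period tau?  Compare ac at tau with its maximum over the
    plausible pitch range, as in eq. (1).  At fs = 16 kHz, lags 32..200
    correspond to [2 ms, 12.5 ms]."""
    lo, hi = lag_range
    tau_m = lo + int(np.argmax(ac[lo:hi + 1]))
    return ac[tau] / ac[tau_m] > theta_d

def pitch_is_reliable(unit_acs, tau, theta_d=0.95):
    """tau is reliable if more than half of the foreground units agree."""
    votes = [agrees(ac, tau, theta_d) for ac in unit_acs]
    return sum(votes) > len(votes) / 2

# Three units tuned to an 8-ms period (128 samples) outvote two that are not.
lags = np.arange(201)
good = [np.cos(2 * np.pi * lags / 128.0) * (1 - lags / 400.0) for _ in range(3)]
bad = [np.cos(2 * np.pi * lags / 100.0) for _ in range(2)]
assert pitch_is_reliable(good + bad, tau=128)
```

The smoothness constraint of the second check would simply compare consecutive reliable periods and reject jumps above 20%.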
τ(j+1) is then obtained from the summation of the autocorrelations for the units at frame j+1 in those selected channels. Then the re-estimated pitch is further verified with the second constraint. For more details, see [9]. Fig. 3 illustrates the estimated pitch periods from the speech and cocktail-party mixture, which match the pitch periods obtained from clean speech very well.

2.3 Unit labeling

With estimated pitch periods, eq. (1) provides a criterion to label T-F units according to whether target speech dominates the unit responses or not. This criterion compares an estimated pitch period with the periodicity of the unit response. It is referred to as the periodicity criterion. It works well for resolved harmonics, and is used to label the units of the segments generated in initial segregation. However, the periodicity criterion is not suitable for units responding to multiple harmonics because unit responses are amplitude-modulated. As shown in Fig. 4, for a filter response that is strongly amplitude-modulated (Fig. 4(a)), the target pitch corresponds to a local maximum, indicated by the vertical line, in the autocorrelation instead of the global maximum (Fig. 4(b)). Observe that for a filter responding to multiple harmonics of a harmonic source, the response envelope fluctuates at the rate of F0 [8]. Hence, we propose a new criterion for labeling the T-F units corresponding to unresolved harmonics by comparing AM repetition rates with estimated pitch. This criterion is referred to as the AM criterion. To obtain an AM repetition rate, the entire response of a gammatone filter is half-wave rectified and then band-pass filtered to remove the DC component and other possible harmonics except for the F0 component.

(Figure 3. Estimated target pitch (pitch period in ms vs. time in seconds) for the speech and cocktail-party mixture, marked by "x". The solid line indicates the pitch contour obtained from clean speech.)

(Figure 4. AM effects. (a) Response of a filter with center frequency 2.6 kHz. (b) Corresponding autocorrelation. The vertical line marks the position corresponding to the pitch period of target speech.)

The rectified and filtered signal is then normalized by its envelope to remove the intensity fluctuations of the original signal, where the envelope is obtained via the Hilbert Transform. Because the pitch of natural speech does not change noticeably within a single frame, we model the corresponding normalized signal within a T-F unit by a single sinusoid to obtain the AM repetition rate. Specifically,

fij = arg min over f and φ of Σ_{k∈M} [ r̂(i, jT + k/fS) − sin(2πf k/fS + φ) ]²,  for f ∈ [80 Hz, 500 Hz],  (2)

where a square error measure is used. r̂(i, t) is the normalized filter response, fS is the sampling frequency, M spans a frame, and T = 10 ms is the progressing period from one frame to the next. In the above equation, fij gives the AM repetition rate for unit uij. Note that in the discrete case, a single sinusoid with a sufficiently high frequency can always match these samples perfectly. However, we are interested in finding a frequency within the plausible pitch range. Hence, the solution does not reduce to a degenerate case. With appropriately chosen initial values, this optimization problem can be solved effectively using iterative gradient descent (see [9]). The AM criterion is used to label T-F units that do not belong to any segments generated in initial segregation; such segments, as discussed earlier, tend to miss unresolved harmonics. Specifically, unit uij is labeled as target speech if the final square error is less than half of the total energy of the corresponding signal and the AM repetition rate is close to the estimated target pitch:

| fij τ(j) − 1 | < θf.  (3)

Psychoacoustic evidence suggests that to separate sounds with overlapping spectra requires 6-12% difference in F0 [6].
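A sketch of eq. (2) by brute-force grid search over frequency and phase (the paper instead uses iterative gradient descent), together with the AM criterion of eq. (3); the test signal is synthetic:

```python
import numpy as np

def am_repetition_rate(r_hat, fs,
                       f_grid=np.arange(80.0, 500.5, 1.0),
                       phi_grid=np.linspace(0, 2 * np.pi, 64, endpoint=False)):
    """Fit a unit-amplitude sinusoid to the normalized response r_hat
    (one frame of samples), as in eq. (2), by grid search over f and phi.
    Returns the best frequency and its squared error."""
    k = np.arange(len(r_hat))
    best_err, best_f = np.inf, None
    for f in f_grid:
        theta = 2 * np.pi * f * k / fs
        for phi in phi_grid:
            err = np.sum((r_hat - np.sin(theta + phi)) ** 2)
            if err < best_err:
                best_err, best_f = err, f
    return best_f, best_err

def am_matches_pitch(f_ij, tau_j, theta_f=0.12):
    """AM criterion of eq. (3): the AM rate times the pitch period
    should be within theta_f of 1."""
    return abs(f_ij * tau_j - 1.0) < theta_f

fs = 16000
k = np.arange(int(0.02 * fs))                  # one 20-ms frame
true_f = 200.0
r_hat = np.sin(2 * np.pi * true_f * k / fs + 1.3)
f_est, _ = am_repetition_rate(r_hat, fs)
assert abs(f_est - true_f) < 1.0
```

Restricting f to [80 Hz, 500 Hz] avoids the degenerate high-frequency fits noted in the text.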
Accordingly, we choose θf to be 0.12.

2.4 Final segregation and resynthesis

For adjacent channels responding to unresolved harmonics, although their responses may be quite different, they exhibit similar AM patterns and their response envelopes are highly correlated. Therefore, for T-F units labeled as target speech, segments are generated based on cross-channel envelope correlation in addition to temporal continuity. The spectra of target speech and intrusion often overlap and, as a result, some segments generated in initial segregation contain both units where target speech dominates and those where intrusion dominates. Given unit labels generated in the last stage, we further divide the segments in the foreground stream, SF, so that all the units in a segment have the same label. Then the streams are adjusted as follows. First, since segments for speech usually are at least 50 ms long, segments with the target label are retained in SF only if they are no shorter than 50 ms. Second, segments with the intrusion label are added to the background stream, SB, if they are no shorter than 50 ms. The remaining segments are removed from SF, becoming undecided. Finally, other units are grouped into the two streams by temporal and spectral continuity. First, SB expands iteratively to include undecided segments in its neighborhood. Then, all the remaining undecided segments are added back to SF. Individual units that do not belong to either stream are grouped into SF iteratively if they are labeled as target speech and lie in the neighborhood of SF. The resulting SF is the final segregated stream of target speech. Fig. 5(a) shows the new segments generated in this process for the speech and cocktail-party mixture. Fig. 5(b) illustrates the segregated stream from the same mixture. Fig. 5(c) shows all the units where target speech is stronger than intrusion.
The foreground stream generated by our algorithm contains most of the units where target speech is stronger. In addition, only a small number of units where intrusion is stronger are incorrectly grouped into it. A speech waveform is resynthesized from the final foreground stream. Here, the foreground stream works as a binary mask. It is used to retain the acoustic energy from the mixture that corresponds to 1's and reject the mixture energy corresponding to 0's. For more details, see [19].

3 Evaluation and comparison

Our model is evaluated with a corpus of 100 mixtures composed of 10 voiced utterances mixed with 10 intrusions collected by Cooke [4]. The intrusions have considerable variety. Specifically, they are: N0 - 1 kHz pure tone, N1 - white noise, N2 - noise bursts, N3 - "cocktail party" noise, N4 - rock music, N5 - siren, N6 - trill telephone, N7 - female speech, N8 - male speech, and N9 - female speech. Given our decomposition of an input signal into T-F units, we suggest the use of an ideal binary mask as the ground truth for target speech. The ideal binary mask is constructed as follows: a T-F unit is assigned one if the target energy in the corresponding unit is greater than the intrusion energy, and zero otherwise. Theoretically speaking, an ideal binary mask gives a performance ceiling for all binary masks. Figure 5(c) illustrates the ideal mask for the speech and cocktail-party mixture. Ideal masks also suit situations where more than one target needs to be segregated or the target changes dynamically. The use of ideal masks is supported by the auditory masking phenomenon: within a critical band, a weaker signal is masked by a stronger one [13]. In addition, an ideal mask gives excellent resynthesis for a variety of sounds and is similar to a prior mask used in a recent ASR study that yields excellent recognition performance [5].
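The ideal binary mask is straightforward to state in code; the energies below are random stand-ins for the per-unit target and intrusion energies:

```python
import numpy as np

def ideal_binary_mask(target_energy, intrusion_energy):
    """One entry per T-F unit: 1 where target energy exceeds intrusion
    energy, 0 otherwise (channels x frames)."""
    return (target_energy > intrusion_energy).astype(float)

rng = np.random.default_rng(1)
target = rng.random((128, 50))      # 128 channels x 50 frames (illustrative)
intrusion = rng.random((128, 50))
mask = ideal_binary_mask(target, intrusion)

# Masking the mixture's T-F energy retains units dominated by the target
# and rejects the rest, as described for resynthesis in the text.
mixture = target + intrusion
retained = mask * mixture
assert np.all(retained[mask == 0] == 0)
```

A full resynthesis would additionally invert the gammatone decomposition, which is omitted here.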
The speech waveform resynthesized from the final foreground stream is used for evaluation, and it is denoted by S(t). The speech waveform resynthesized from the ideal binary mask is denoted by I(t). Furthermore, let e1(t) denote the signal present in I(t) but missing from S(t), and e2(t) the signal present in S(t) but missing from I(t). Then, the relative energy loss, REL, and the relative noise residue, RNR, are calculated as follows:

REL = Σt e1²(t) / Σt I²(t),  (4a)

RNR = Σt e2²(t) / Σt S²(t).  (4b)

(Figure 5. Results of final segregation for the speech and cocktail-party mixture. (a) New segments formed in the final segregation. (b) Final foreground stream. (c) Units where target speech is stronger than the intrusion.)

Table 1: REL and RNR

           Proposed model       Wang-Brown model
Intrusion  REL (%)  RNR (%)    REL (%)  RNR (%)
N0          2.12     0.02       6.99     0
N1          4.66     3.55      28.96     1.61
N2          1.38     1.30       5.77     0.71
N3          3.83     2.72      21.92     1.92
N4          4.00     2.27      10.22     1.41
N5          2.83     0.10       7.47     0
N6          1.61     0.30       5.99     0.48
N7          3.21     2.18       8.61     4.23
N8          1.82     1.48       7.27     0.48
N9          8.57    19.33      15.81    33.03
Average     3.40     3.32      11.91     4.39

The results from our model are shown in Table 1. Each value represents the average of one intrusion with 10 voiced utterances. A further average across all intrusions is also shown in the table. On average, our system retains 96.60% of target speech energy, and the relative residual noise is kept at 3.32%. As a comparison, Table 1 also shows the results from the Wang-Brown model [18], whose performance is representative of current CASA systems. As shown in the table, our model reduces REL significantly. In addition, REL and RNR are balanced in our system. Finally, to compare waveforms directly we measure a form of signal-to-noise ratio (SNR) in decibels, using the resynthesized signal from the ideal binary mask as ground truth:

SNR = 10 log10 [ Σt I²(t) / Σt (I(t) − S(t))² ].  (5)
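Eqs. (4a), (4b), and the SNR measure can be sketched as follows; here e1 and e2 are built directly from sample masks rather than derived from full resyntheses:

```python
import numpy as np

def separation_metrics(I, S, e1, e2):
    """REL and RNR (eqs. 4a-4b) and SNR (eq. 5), given the ideal-mask
    resynthesis I(t), the system output S(t), the part e1 of I missing
    from S, and the part e2 of S absent from I."""
    rel = np.sum(e1**2) / np.sum(I**2)
    rnr = np.sum(e2**2) / np.sum(S**2)
    snr = 10 * np.log10(np.sum(I**2) / np.sum((I - S)**2))
    return rel, rnr, snr

rng = np.random.default_rng(0)
base = rng.standard_normal(8000)
m_ideal = rng.random(8000) < 0.6              # samples kept by the ideal mask
m_sys = m_ideal & (rng.random(8000) < 0.9)    # system misses some, adds none
I, S = base * m_ideal, base * m_sys
e1 = base * (m_ideal & ~m_sys)                # present in I, missing from S
e2 = base * (m_sys & ~m_ideal)                # present in S, missing from I

rel, rnr, snr = separation_metrics(I, S, e1, e2)
```

With these synthetic masks the system adds no spurious energy, so RNR comes out exactly zero while REL reflects the roughly 10% of ideal-mask samples the system missed.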
The SNR for each intrusion averaged across 10 target utterances is shown in Fig. 6, together with the results from the Wang-Brown system and the SNR of the original mixtures. Our model achieves an average SNR gain of around 12 dB and a 5 dB improvement over the Wang-Brown model.

(Figure 6. SNR results for segregated speech, by intrusion type N0-N9. White bars show the results from the proposed model, gray bars those from the Wang-Brown system, and black bars those of the mixtures.)

4 Discussion

The main feature of our model lies in using different mechanisms to deal with resolved and unresolved harmonics. As a result, our model is able to recover target speech and reduce noise interference in the high-frequency range where harmonics of target speech are unresolved. The proposed system considers the pitch contour of the target source only. However, it is possible to track the pitch contour of the intrusion if it has a harmonic structure. With two pitch contours, one could label a T-F unit more accurately by comparing whether its periodicity is more consistent with one or the other. Such a method is expected to lead to better performance for the two-speaker situation, e.g. N7 through N9. As indicated in Fig. 6, the performance gain of our system for such intrusions is relatively limited. Our model is limited to separation of voiced speech. In our view, unvoiced speech poses the biggest challenge for monaural speech separation. Other grouping cues, such as onset, offset, and timbre, have been demonstrated to be effective for human ASA [1], and may play a role in grouping unvoiced speech. In addition, one should consider the acoustic and phonetic characteristics of individual unvoiced consonants. We plan to investigate these issues in future study.

Acknowledgments

We thank G. J. Brown and M. Wu for helpful comments. Preliminary versions of this work were presented in 2001 IEEE WASPAA and 2002 IEEE ICASSP.
This research was supported in part by an NSF grant (IIS-0081058) and an AFOSR grant (F4962001-1-0027).

References

[1] A. S. Bregman, Auditory scene analysis, Cambridge MA: MIT Press, 1990. [2] R. P. Carlyon and T. M. Shackleton, "Comparing the fundamental frequencies of resolved and unresolved harmonics: evidence for two pitch mechanisms?" J. Acoust. Soc. Am., Vol. 95, pp. 3541-3554, 1994. [3] G. Cauwenberghs, "Monaural separation of independent acoustical components," in Proc. IEEE Symp. Circuits and Systems, 1999. [4] M. Cooke, Modeling auditory processing and organization, Cambridge U.K.: Cambridge University Press, 1993. [5] M. Cooke, P. Green, L. Josifovski, and A. Vizinho, "Robust automatic speech recognition with missing and unreliable acoustic data," Speech Comm., Vol. 34, pp. 267-285, 2001. [6] C. J. Darwin and R. P. Carlyon, "Auditory grouping," in Hearing, B. C. J. Moore, Ed., San Diego CA: Academic Press, 1995. [7] D. P. W. Ellis, Prediction-driven computational auditory scene analysis, Ph.D. Dissertation, MIT Department of Electrical Engineering and Computer Science, 1996. [8] H. Helmholtz, On the sensations of tone, Braunschweig: Vieweg & Son, 1863. (A. J. Ellis, English Trans., Dover, 1954.) [9] G. Hu and D. L. Wang, "Monaural speech segregation based on pitch tracking and amplitude modulation," Technical Report TR6, Ohio State University Department of Computer and Information Science, 2002. (available at www.cis.ohio-state.edu/~hu) [10] A. Hyvärinen, J. Karhunen, and E. Oja, Independent component analysis, New York: Wiley, 2001. [11] W. J. M. Levelt, Speaking: From intention to articulation, Cambridge MA: MIT Press, 1989. [12] R. Meddis, "Simulation of auditory-neural transduction: further studies," J. Acoust. Soc. Am., Vol. 83, pp. 1056-1063, 1988. [13] B. C. J. Moore, An Introduction to the Psychology of Hearing, 4th Ed., San Diego CA: Academic Press, 1997. [14] D. O'Shaughnessy, Speech communications: human and machine, 2nd Ed., New York: IEEE Press, 2000. [15] R. D. Patterson, I. Nimmo-Smith, J. Holdsworth, and P. Rice, "An efficient auditory filterbank based on the gammatone function," APU Report 2341, MRC Applied Psychology Unit, Cambridge U.K., 1988. [16] R. Plomp and A. M. Mimpen, "The ear as a frequency analyzer II," J. Acoust. Soc. Am., Vol. 43, pp. 764-767, 1968. [17] S. Roweis, "One microphone source separation," in Advances in Neural Information Processing Systems 13 (NIPS'00), 2001. [18] D. L. Wang and G. J. Brown, "Separation of speech from interfering sounds based on oscillatory correlation," IEEE Trans. Neural Networks, Vol. 10, pp. 684-697, 1999. [19] M. Weintraub, A theory and computational model of auditory monaural sound separation, Ph.D. Dissertation, Stanford University Department of Electrical Engineering, 1985.
| 2002 | 176 | 2,187 |
Multiplicative Updates for Nonnegative Quadratic Programming in Support Vector Machines

Fei Sha (1), Lawrence K. Saul (1), and Daniel D. Lee (2)
(1) Department of Computer and Information Science, (2) Department of Electrical and System Engineering, University of Pennsylvania, 200 South 33rd Street, Philadelphia, PA 19104
{feisha,lsaul}@cis.upenn.edu, ddlee@ee.upenn.edu

Abstract

We derive multiplicative updates for solving the nonnegative quadratic programming problem in support vector machines (SVMs). The updates have a simple closed form, and we prove that they converge monotonically to the solution of the maximum margin hyperplane. The updates optimize the traditionally proposed objective function for SVMs. They do not involve any heuristics such as choosing a learning rate or deciding which variables to update at each iteration. They can be used to adjust all the quadratic programming variables in parallel with a guarantee of improvement at each iteration. We analyze the asymptotic convergence of the updates and show that the coefficients of non-support vectors decay geometrically to zero at a rate that depends on their margins. In practice, the updates converge very rapidly to good classifiers.

1 Introduction

Support vector machines (SVMs) currently provide state-of-the-art solutions to many problems in machine learning and statistical pattern recognition [18]. Their superior performance is owed to the particular way they manage the tradeoff between bias (underfitting) and variance (overfitting). In SVMs, kernel methods are used to map inputs into a higher, potentially infinite, dimensional feature space; the decision boundary between classes is then identified as the maximum margin hyperplane in the feature space. While SVMs provide the flexibility to implement highly nonlinear classifiers, the maximum margin criterion helps to control the capacity for overfitting. In practice, SVMs generalize very well — even better than their theory suggests.
Computing the maximum margin hyperplane in SVMs gives rise to a problem in nonnegative quadratic programming. The resulting optimization is convex, but due to the nonnegativity constraints, it cannot be solved in closed form, and iterative solutions are required. There is a large literature on iterative algorithms for nonnegative quadratic programming in general and for SVMs as a special case [3, 17]. Gradient-based methods are the simplest possible approach, but their convergence depends on careful selection of the learning rate, as well as constant attention to the nonnegativity constraints, which may not be naturally enforced. Multiplicative updates based on exponentiated gradients (EG) [5, 10] have been investigated as an alternative to traditional gradient-based methods. Multiplicative updates are naturally suited to sparse nonnegative optimizations, but EG updates—like their additive counterparts—suffer the drawback of having to choose a learning rate. Subset selection methods constitute another approach to the problem of nonnegative quadratic programming in SVMs. Generally speaking, these methods split the variables at each iteration into two sets: a fixed set in which the variables are held constant, and a working set in which the variables are optimized by an internal subroutine. At the end of each iteration, a heuristic is used to transfer variables between the two sets and improve the objective function. An extreme version of this approach is the method of Sequential Minimal Optimization (SMO) [15], which updates only two variables per iteration. In this case, there exists an analytical solution for the updates, so that one avoids the expense of a potentially iterative optimization within each iteration of the main loop. In general, despite the many proposed approaches for training SVMs, solving the quadratic programming problem remains a bottleneck in their implementation.
(Some researchers have even advocated changing the objective function in SVMs to simplify the required optimization [8, 13].) In this paper, we propose a new iterative algorithm, called Multiplicative Margin Maximization (M3), for training SVMs. The M3 updates have a simple closed form and converge monotonically to the solution of the maximum margin hyperplane. They do not involve heuristics such as the setting of a learning rate or the switching between fixed and working subsets; all the variables are updated in parallel. They provide an extremely straightforward way to implement traditional SVMs. Experimental and theoretical results confirm the promise of our approach.

2 Nonnegative quadratic programming

We begin by studying the general problem of nonnegative quadratic programming. Consider the minimization of the quadratic objective function
$$F(v) = \tfrac{1}{2} v^T A v + b^T v, \tag{1}$$
subject to the constraints vi ≥ 0 for all i. We assume that the matrix A is symmetric and positive semidefinite, so that the objective function F(v) is bounded below and its optimization is convex. Due to the nonnegativity constraints, however, there does not exist an analytical solution for the global minimum (or minima), and an iterative solution is needed.

2.1 Multiplicative updates

Our iterative solution is expressed in terms of the positive and negative components of the matrix A in eq. (1). In particular, let A+ and A− denote the nonnegative matrices:
$$A^+_{ij} = \begin{cases} A_{ij} & \text{if } A_{ij} > 0, \\ 0 & \text{otherwise,} \end{cases} \qquad A^-_{ij} = \begin{cases} |A_{ij}| & \text{if } A_{ij} < 0, \\ 0 & \text{otherwise.} \end{cases} \tag{2}$$
It follows trivially that A = A+ − A−. In terms of these nonnegative matrices, our proposed updates (to be applied in parallel to all the elements of v) take the form:
$$v_i \leftarrow v_i \left[ \frac{-b_i + \sqrt{b_i^2 + 4 (A^+ v)_i (A^- v)_i}}{2 (A^+ v)_i} \right]. \tag{3}$$
The iterative updates in eq. (3) are remarkably simple to implement. Their somewhat mysterious form will be clarified as we proceed. Let us begin with two simple observations. First, eq.
(3) prescribes a multiplicative update for the ith element of v in terms of the ith elements of the vectors b, A+v, and A−v. Second, since the elements of v, A+, and A− are nonnegative, the overall factor multiplying vi on the right hand side of eq. (3) is always nonnegative. Hence, these updates never violate the constraints of nonnegativity.

2.2 Fixed points

We can show further that these updates have fixed points wherever the objective function F(v) achieves its minimum value. Let v∗ denote a global minimum of F(v). At such a point, one of two conditions must hold for each element v∗i: either (i) v∗i > 0 and (∂F/∂vi)|v∗ = 0, or (ii) v∗i = 0 and (∂F/∂vi)|v∗ ≥ 0. The first condition applies to the positive elements of v∗, whose corresponding terms in the gradient must vanish. These derivatives are given by:
$$\left. \frac{\partial F}{\partial v_i} \right|_{v^*} = (A^+ v^*)_i - (A^- v^*)_i + b_i. \tag{4}$$
The second condition applies to the zero elements of v∗. Here, the corresponding terms of the gradient must be nonnegative, thus pinning v∗i to the boundary of the feasibility region. The multiplicative updates in eq. (3) have fixed points wherever the conditions for global minima are satisfied. To see this, let
$$\gamma_i \,\triangleq\, \frac{-b_i + \sqrt{b_i^2 + 4 (A^+ v^*)_i (A^- v^*)_i}}{2 (A^+ v^*)_i} \tag{5}$$
denote the factor multiplying the ith element of v in eq. (3), evaluated at v∗. Fixed points of the multiplicative updates occur when one of two conditions holds for each element vi: either (i) v∗i > 0 and γi = 1, or (ii) v∗i = 0. It is straightforward to show from eqs. (4–5) that (∂F/∂vi)|v∗ = 0 implies γi = 1. Thus the conditions for global minima establish the conditions for fixed points of the multiplicative updates.

2.3 Monotonic convergence

The updates not only have the correct fixed points; they also lead to monotonic improvement in the objective function F(v). This is established by the following theorem:

Theorem 1 The function F(v) in eq. (1) decreases monotonically to the value of its global minimum under the multiplicative updates in eq. (3).
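To make the update rule concrete, here is a minimal NumPy sketch (not from the paper) of eq. (3) applied in parallel on an invented toy problem; the tiny epsilon in the denominator is a practical guard, not part of the derivation:

```python
import numpy as np

def multiplicative_qp(A, b, v0, iters=500):
    """Minimize F(v) = 0.5 v^T A v + b^T v subject to v >= 0
    using the parallel multiplicative updates of eq. (3)."""
    Ap = np.where(A > 0, A, 0.0)        # A+ : positive entries of A
    Am = np.where(A < 0, -A, 0.0)       # A- : magnitudes of negative entries
    v = v0.astype(float).copy()
    for _ in range(iters):
        num = -b + np.sqrt(b * b + 4.0 * (Ap @ v) * (Am @ v))
        # epsilon avoids 0/0 once a coefficient reaches exactly zero
        v = v * num / (2.0 * (Ap @ v) + 1e-300)
    return v

# toy problem (invented for illustration): the unconstrained minimum has a
# negative component, so the constrained solution pins the second coordinate to zero
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([-1.0, 1.0])
F = lambda u: 0.5 * u @ A @ u + b @ u
v0 = np.ones(2)
v = multiplicative_qp(A, b, v0)         # converges toward v* = (0.5, 0)
```

One can check by hand that v* = (0.5, 0) satisfies the fixed-point conditions above: the gradient vanishes in the first coordinate and is nonnegative in the second.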
The proof of this theorem (sketched in Appendix A) relies on the construction of an auxiliary function which provides an upper bound on F(v). Similar methods have been used to prove the convergence of many algorithms in machine learning [1, 4, 6, 7, 12, 16].

3 Support vector machines

We now consider the problem of computing the maximum margin hyperplane in SVMs [3, 17, 18]. Let {(xi, yi)}_{i=1}^N denote labeled examples with binary class labels yi = ±1, and let K(xi, xj) denote the kernel dot product between inputs. In this paper, we focus on the simple case where, in the high dimensional feature space, the classes are linearly separable and the hyperplane is required to pass through the origin.¹ In this case, the maximum margin hyperplane is obtained by minimizing the loss function:
$$L(\alpha) = -\sum_i \alpha_i + \frac{1}{2} \sum_{ij} \alpha_i \alpha_j y_i y_j K(x_i, x_j), \tag{6}$$
subject to the nonnegativity constraints αi ≥ 0. Let α∗ denote the location of the minimum of this loss function. The maximal margin hyperplane has normal vector w = Σ_i α∗i yi xi and satisfies the margin constraints yi K(w, xi) ≥ 1 for all examples in the training set.

¹The extensions to non-realizable data sets and to hyperplanes that do not pass through the origin are straightforward. They will be treated in a longer paper.

Table 1: Misclassification error rates on the sonar and breast cancer data sets after 512 iterations of the multiplicative updates.

                   Polynomial       Radial basis function
Data               k=4      k=6     σ=0.3    σ=1.0    σ=3.0
Sonar              9.6%     9.6%    7.6%     6.7%     10.6%
Breast cancer      5.1%     3.6%    4.4%     4.4%     4.4%

3.1 Multiplicative updates

The loss function in eq. (6) is a special case of eq. (1) with Aij = yi yj K(xi, xj) and bi = −1. Thus, the multiplicative updates for computing the maximal margin hyperplane in hard margin SVMs are given by:
$$\alpha_i \leftarrow \alpha_i \left[ \frac{1 + \sqrt{1 + 4 (A^+ \alpha)_i (A^- \alpha)_i}}{2 (A^+ \alpha)_i} \right], \tag{7}$$
where A± are defined as in eq. (2). We will refer to the learning algorithm for hard margin SVMs based on these updates as Multiplicative Margin Maximization (M3).
It is worth comparing the properties of these updates to those of other approaches. Like multiplicative updates based on exponentiated gradients (EG) [5, 10], the M3 updates are well suited to sparse nonnegative optimizations²; unlike EG updates, however, they do not involve a learning rate, and they come with a guarantee of monotonic improvement. Like the updates for Sequential Minimal Optimization (SMO) [15], the M3 updates have a simple closed form; unlike SMO updates, however, they can be used to adjust all the quadratic programming variables in parallel (or any subset thereof), not just two at a time. Finally, we emphasize that the M3 updates optimize the traditional objective function for SVMs; they do not compromise the goal of computing the maximal margin hyperplane.

3.2 Experimental results

We tested the effectiveness of the multiplicative updates in eq. (7) on two real world problems: binary classification of aspect-angle dependent sonar signals [9] and breast cancer data [14]. Both data sets, available from the UCI Machine Learning Repository [2], have been widely used to benchmark many learning algorithms, including SVMs [5]. The sonar and breast cancer data sets consist of 208 and 683 labeled examples, respectively. Training and test sets for the breast cancer experiments were created by 80%/20% splits of the available data. We experimented with both polynomial and radial basis function kernels. The polynomial kernels had degrees k = 4 and k = 6, while the radial basis function kernels had variances of σ = 0.3, 1.0 and 3.0. The coefficients αi were uniformly initialized to a value of one in all experiments. Misclassification rates on the test data sets after 512 iterations of the multiplicative updates are shown in Table 1. As expected, the results match previously published error rates on these data sets [5], showing that the M3 updates do in practice converge to the maximum margin hyperplane.
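As an illustration (not the authors' code), eq. (7) can be run directly on a small separable problem; the toy data and linear kernel here are invented for the example, and the epsilon guard is a practical addition:

```python
import numpy as np

def m3_train(X, y, kernel, iters=200):
    """Hard-margin SVM via the M3 updates of eq. (7),
    with A_ij = y_i y_j K(x_i, x_j) and all alphas updated in parallel."""
    K = kernel(X, X)
    A = np.outer(y, y) * K
    Ap = np.where(A > 0, A, 0.0)
    Am = np.where(A < 0, -A, 0.0)
    a = np.ones(len(y))                 # uniform initialization, as in the paper
    for _ in range(iters):
        num = 1.0 + np.sqrt(1.0 + 4.0 * (Ap @ a) * (Am @ a))
        a = a * num / (2.0 * (Ap @ a) + 1e-300)
    return a

def linear_kernel(X1, X2):
    return X1 @ X2.T

# invented toy data, separable by a hyperplane through the origin
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
alpha = m3_train(X, y, linear_kernel)
w = (alpha * y) @ X                     # normal vector of the learned hyperplane
margins = y * (X @ w)                   # should satisfy y_i K(w, x_i) >= 1
```

On this symmetric example every point ends up a support vector, so all margins converge to exactly 1.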
Figure 1 shows the rapid convergence of the updates to good classifiers in just one or two iterations.

²In fact, the multiplicative updates by nature cannot directly set a variable to zero. However, a variable can be clamped to zero whenever its value falls below some threshold (e.g., machine precision) and when a zero value would satisfy the Karush-Kuhn-Tucker conditions.

Figure 1: Rapid convergence of the multiplicative updates in eq. (7). The plots show results after different numbers of iterations on the breast cancer data set with the radial basis function kernel (σ = 3). The horizontal axes index the coefficients αi of the 546 training examples; the vertical axes show their values. For ease of visualization, the training examples were ordered so that support vectors appear to the left and non-support vectors, to the right. The coefficients αi were uniformly initialized to a value of one. Note the rapid attenuation of non-support vector coefficients after one or two iterations. Intermediate error rates on the training set (εt) and test set (εg) are also shown:

iteration     00    01    02    04    08    16    32    64
εt (%)       2.9   2.4   1.1   0.5   0.0   0.0   0.0   0.0
εg (%)       3.6   2.2   4.4   4.4   4.4   4.4   4.4   4.4

3.3 Asymptotic convergence

The rapid decay of non-support vector coefficients in Fig. 1 motivated us to analyze their rates of asymptotic convergence. Suppose we perturb just one of the non-support vector coefficients in eq. (6)—say αi—away from the fixed point to some small nonzero value δαi. If we hold all the variables but αi fixed and apply its multiplicative update, then the new displacement δα′i after the update is given asymptotically by (δα′i) ≈ (δαi) γi, where
$$\gamma_i = \frac{1 + \sqrt{1 + 4 (A^+ \alpha^*)_i (A^- \alpha^*)_i}}{2 (A^+ \alpha^*)_i}, \tag{8}$$
and Aij = yi yj K(xi, xj). (Eq. (8) is merely the specialization of eq. (5) to SVMs.)
We can thus bound the asymptotic rate of convergence—in this idealized but instructive setting—by computing an upper bound on γi, which determines how fast the perturbed coefficient decays to zero. (Smaller γi implies faster decay.) In general, the asymptotic rate of convergence is determined by the overall positioning of the data points and the classification hyperplane in the feature space. The following theorem, however, provides a simple bound in terms of easily understood geometric quantities.

Theorem 2 Let $d_i = |K(x_i, w)| / \sqrt{K(w, w)}$ denote the perpendicular distance in the feature space from xi to the maximum margin hyperplane, and let $d = \min_j d_j = 1 / \sqrt{K(w, w)}$ denote the one-sided margin of the classifier. Also, let $\ell_i = \sqrt{K(x_i, x_i)}$ denote the distance of xi to the origin in the feature space, and let $\ell = \max_j \ell_j$ denote the largest such distance. Then a bound on the asymptotic rate of convergence γi is given by:
$$\gamma_i \le \left[ 1 + \frac{(d_i - d)\, d}{2\, \ell_i \ell} \right]^{-1}. \tag{9}$$

Figure 2: Quantities used to bound the asymptotic rate of convergence in eq. (9); see text. Solid circles denote support vectors; empty circles denote non-support vectors.

The proof of this theorem is sketched in Appendix B. Figure 2 gives a schematic representation of the quantities that appear in the bound. The bound has a simple geometric intuition: the more distant a non-support vector from the classification hyperplane, the faster its coefficient decays to zero. This is a highly desirable property for large numerical calculations, suggesting that the multiplicative updates could be used to quickly prune away outliers and reduce the size of the quadratic programming problem. Note that while the bound is insensitive to the scale of the inputs, its tightness does depend on their relative locations in the feature space.

4 Conclusion

SVMs represent one of the most widely used architectures in machine learning.
In this paper, we have derived simple, closed form multiplicative updates for solving the nonnegative quadratic programming problem in SVMs. The M3 updates are straightforward to implement and have a rigorous guarantee of monotonic convergence. It is intriguing that multiplicative updates derived from auxiliary functions appear in so many other areas of machine learning, especially those involving sparse, nonnegative optimizations. Examples include the Baum-Welch algorithm [1] for discrete hidden Markov models, generalized iterative scaling [6] and adaBoost [4] for logistic regression, and nonnegative matrix factorization [11, 12] for dimensionality reduction and feature extraction. In these areas, simple multiplicative updates with guarantees of monotonic convergence have emerged over time as preferred methods of optimization. Thus it seems worthwhile to explore their full potential for SVMs.

References

[1] L. Baum. An inequality and associated maximization technique in statistical estimation of probabilistic functions of Markov processes. Inequalities, 3:1–8, 1972.
[2] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[3] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121–167, 1998.
[4] M. Collins, R. Schapire, and Y. Singer. Logistic regression, adaBoost, and Bregman distances. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000.
[5] N. Cristianini, C. Campbell, and J. Shawe-Taylor. Multiplicative updatings for support vector machines. In Proceedings of ESANN'99, pages 189–194, 1999.
[6] J. N. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. Annals of Mathematical Statistics, 43:1470–1480, 1972.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1–37, 1977.
[8] C. Gentile.
A new approximate maximal margin classification algorithm. Journal of Machine Learning Research, 2:213–242, 2001.
[9] R. P. Gorman and T. J. Sejnowski. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks, 1(1):75–89, 1988.
[10] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[11] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788–791, 1999.
[12] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13, Cambridge, MA, 2001. MIT Press.
[13] O. L. Mangasarian and D. R. Musicant. Lagrangian support vector machines. Journal of Machine Learning Research, 1:161–177, 2001.
[14] O. L. Mangasarian and W. H. Wolberg. Cancer diagnosis via linear programming. SIAM News, 23(5):1–18, 1990.
[15] J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods — Support Vector Learning, pages 185–208, Cambridge, MA, 1999. MIT Press.
[16] L. K. Saul and D. D. Lee. Multiplicative updates for classification by mixture models. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems, volume 14, Cambridge, MA, 2002. MIT Press.
[17] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[18] V. Vapnik. Statistical Learning Theory. Wiley, N.Y., 1998.

A Proof of Theorem 1

The proof of monotonic convergence in the objective function F(v), eq. (1), is based on the derivation of an auxiliary function. Similar techniques have been used for many models in statistical learning [1, 4, 6, 7, 12, 16].
An auxiliary function G(ṽ, v) has the two crucial properties that F(ṽ) ≤ G(ṽ, v) and F(v) = G(v, v) for all nonnegative ṽ, v. From such an auxiliary function, we can derive the update rule v′ = arg min_ṽ G(ṽ, v), which never increases (and generally decreases) the objective function F(v):
$$F(v') \le G(v', v) \le G(v, v) = F(v). \tag{10}$$
By iterating this procedure, we obtain a series of estimates that improve the objective function. For nonnegative quadratic programming, we derive an auxiliary function G(ṽ, v) by decomposing F(v) in eq. (1) into three terms and then bounding each term separately:
$$F(v) = \frac{1}{2} \sum_{ij} A^+_{ij} v_i v_j - \frac{1}{2} \sum_{ij} A^-_{ij} v_i v_j + \sum_i b_i v_i, \tag{11}$$
$$G(\tilde{v}, v) = \frac{1}{2} \sum_i \frac{(A^+ v)_i}{v_i}\, \tilde{v}_i^2 - \frac{1}{2} \sum_{ij} A^-_{ij} v_i v_j \left( 1 + \log \frac{\tilde{v}_i \tilde{v}_j}{v_i v_j} \right) + \sum_i b_i \tilde{v}_i. \tag{12}$$
It can be shown that F(ṽ) ≤ G(ṽ, v). The minimization of G(ṽ, v) is performed by setting its derivative to zero, leading to the multiplicative updates in eq. (3). The updates move each element vi in the same direction as −∂F/∂vi, with fixed points occurring only if v∗i = 0 or ∂F/∂vi = 0. Since the overall optimization is convex, all minima of F(v) are global minima. The updates converge to the unique global minimum if it exists.

B Proof of Theorem 2

The proof of the bound on the asymptotic rate of convergence relies on the repeated use of equalities and inequalities that hold at the fixed point α∗. For example, if α∗i = 0 is a non-support vector coefficient, then (∂L/∂αi)|α∗ ≥ 0 implies (A⁺α∗)i − (A⁻α∗)i ≥ 1. As shorthand, let z⁺i = (A⁺α∗)i and z⁻i = (A⁻α∗)i. Then we have the following result:
$$\frac{1}{\gamma_i} = \frac{2 z^+_i}{1 + \sqrt{1 + 4 z^+_i z^-_i}} \tag{13}$$
$$\ge \frac{2 z^+_i}{1 + \sqrt{(z^+_i - z^-_i)^2 + 4 z^+_i z^-_i}} \tag{14}$$
$$= \frac{2 z^+_i}{1 + z^+_i + z^-_i} \;=\; 1 + \frac{z^+_i - z^-_i - 1}{z^+_i + z^-_i + 1} \tag{15}$$
$$\ge 1 + \frac{z^+_i - z^-_i - 1}{2 z^+_i}. \tag{16}$$
To prove the theorem, we need to express this result in terms of kernel dot products. We can rewrite the variables in the numerator of eq.
(16) as:
$$z^+_i - z^-_i = \sum_j A_{ij} \alpha^*_j = \sum_j y_i y_j K(x_i, x_j)\, \alpha^*_j = y_i K(x_i, w) = |K(x_i, w)|, \tag{17}$$
where w = Σ_j α∗j yj xj is the normal vector to the maximum margin hyperplane. Likewise, we can obtain a bound on the denominator of eq. (16) by:
$$z^+_i = \sum_j A^+_{ij} \alpha^*_j \tag{18}$$
$$\le \max_k A^+_{ik} \sum_j \alpha^*_j \tag{19}$$
$$\le \max_k |K(x_i, x_k)| \sum_j \alpha^*_j \tag{20}$$
$$\le \sqrt{K(x_i, x_i)}\, \max_k \sqrt{K(x_k, x_k)} \sum_j \alpha^*_j \tag{21}$$
$$= \sqrt{K(x_i, x_i)}\, \max_k \sqrt{K(x_k, x_k)}\, K(w, w). \tag{22}$$
Eq. (21) is an application of the Cauchy-Schwarz inequality for kernels, while eq. (22) exploits the observation that:
$$K(w, w) = \sum_{jk} A_{jk} \alpha^*_j \alpha^*_k = \sum_j \alpha^*_j \sum_k A_{jk} \alpha^*_k = \sum_j \alpha^*_j. \tag{23}$$
The last step in eq. (23) is obtained by recognizing that α∗j is nonzero only for the coefficients of support vectors, and that in this case the optimality condition (∂L/∂αj)|α∗ = 0 implies Σ_k Ajk α∗k = 1. Finally, substituting eqs. (17) and (22) into eq. (16) gives:
$$\frac{1}{\gamma_i} \ge 1 + \frac{|K(x_i, w)| - 1}{2 \sqrt{K(x_i, x_i)}\, \max_k \sqrt{K(x_k, x_k)}\, K(w, w)}. \tag{24}$$
This reduces in a straightforward way to the claim of the theorem.
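As a numerical sanity check (not from the paper), the auxiliary-function bounds of eqs. (10–12) can be verified directly: for a random symmetric matrix A and strictly positive vectors v and ṽ, we should find G(ṽ, v) ≥ F(ṽ), with equality at ṽ = v. A minimal NumPy sketch:

```python
import numpy as np

def F(v, A, b):
    # objective of eq. (1)
    return 0.5 * v @ A @ v + b @ v

def G(vt, v, A, b):
    # auxiliary function of eq. (12); requires v, vt strictly positive
    Ap = np.where(A > 0, A, 0.0)
    Am = np.where(A < 0, -A, 0.0)
    quad = 0.5 * np.sum((Ap @ v) / v * vt ** 2)
    outer = np.outer(v, v)
    log_term = 1.0 + np.log(np.outer(vt, vt) / outer)
    return quad - 0.5 * np.sum(Am * outer * log_term) + b @ vt

rng = np.random.default_rng(0)
R = rng.standard_normal((5, 5))
A = (R + R.T) / 2.0                     # random symmetric test matrix
b = rng.standard_normal(5)
v = rng.random(5) + 0.1
vt = rng.random(5) + 0.1
gap = G(vt, v, A, b) - F(vt, A, b)      # should be nonnegative
match = abs(G(v, v, A, b) - F(v, A, b)) # should vanish at vt = v
```

The two term-by-term bounds (an AM-GM bound on the positive quadratic term and the inequality z ≥ 1 + log z on the negative term) hold for any symmetric decomposition A = A⁺ − A⁻, which is why no positive-definiteness is imposed on the test matrix.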
Kernel Design Using Boosting Koby Crammer Joseph Keshet Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel {kobics,jkeshet,singer}@cs.huji.ac.il

Abstract

The focus of the paper is the problem of learning kernel operators from empirical data. We cast the kernel design problem as the construction of an accurate kernel from simple (and less accurate) base kernels. We use the boosting paradigm to perform the kernel construction process. To do so, we modify the booster so as to accommodate kernel operators. We also devise an efficient weak-learner for simple kernels that is based on generalized eigenvector decomposition. We demonstrate the effectiveness of our approach on synthetic data and on the USPS dataset. On the USPS dataset, the performance of the Perceptron algorithm with learned kernels is systematically better than with a fixed RBF kernel.

1 Introduction and Problem Setting

The last decade brought a voluminous amount of work on the design, analysis and experimentation of kernel machines. Algorithms based on kernels can be used for various machine learning tasks such as classification, regression, ranking, and principal component analysis. The most prominent learning algorithm that employs kernels is the Support Vector Machine (SVM) [1, 2], designed for classification and regression. A key component in a kernel machine is a kernel operator, which computes for any pair of instances their inner-product in some abstract vector space. Intuitively and informally, a kernel operator is a means for measuring similarity between instances. Almost all of the work that employed kernel operators concentrated on various machine learning problems that involved a predefined kernel. A typical approach when using kernels is to choose a kernel before learning starts. Examples of popular predefined kernels are the Radial Basis Function and the polynomial kernels (see for instance [1]).
Despite the simplicity required in modifying a learning algorithm to a "kernelized" version, the success of such algorithms is not well understood yet. More recently, special efforts have been devoted to crafting kernels for specific tasks such as text categorization [3] and protein classification problems [4]. Our work attempts to give a computational alternative to predefined kernels by learning kernel operators from data. We start with a few definitions. Let X be an instance space. A kernel is an inner-product operator K : X × X → ℝ. An explicit way to describe K is via a mapping φ : X → H from X to an inner-product space H such that K(x, x′) = φ(x) · φ(x′). Given a kernel operator and a finite set of instances S = {xi, yi}_{i=1}^m, the kernel matrix (a.k.a. the Gram matrix) is the matrix of all possible inner-products of pairs from S, K_{i,j} = K(xi, xj). We therefore refer to the general form of K as the kernel operator and to the application of the kernel operator to a set of pairs of instances as the kernel matrix. The specific setting of kernel design we consider assumes that we have access to a base kernel learner and we are given a target kernel K⋆ manifested as a kernel matrix on a set of examples. Upon calling the base kernel learner, it returns a kernel operator denoted Kj. The goal thereafter is to find a weighted combination of kernels K̂(x, x′) = Σ_j αj Kj(x, x′) that is similar, in a sense that will be defined shortly, to the target kernel, K̂ ∼ K⋆. Cristianini et al. [5], in their pioneering work on kernel target alignment, employed as the notion of similarity the inner-product between the kernel matrices, ⟨K, K′⟩_F = Σ_{i,j=1}^m K(xi, xj) K′(xi, xj). Given this definition, they defined the kernel-similarity, or alignment, to be the above inner-product normalized by the norm of each kernel,
$$\hat{A}(S, \hat{K}, K^\star) = \frac{\langle \hat{K}, K^\star \rangle_F}{\sqrt{\langle \hat{K}, \hat{K} \rangle_F \, \langle K^\star, K^\star \rangle_F}},$$
where S is, as above, a finite sample of m instances. Put another way, the kernel alignment Cristianini et al.
employed is the cosine of the angle between the kernel matrices, where each matrix is "flattened" into a vector of dimension m². Therefore, this definition implies that the alignment is bounded above by 1 and can attain this value iff the two kernel matrices are identical. Given a (column) vector of m labels y, where yi ∈ {−1, +1} is the label of the instance xi, Cristianini et al. used the outer-product of y as the target kernel, K⋆ = yyᵀ. Therefore, an optimal alignment is achieved if K̂(xi, xj) = yi yj. Clearly, if such a kernel is used for classifying instances from X, then the kernel itself suffices to construct an excellent classifier f : X → {−1, +1} by setting f(x) = sign(yi K(xi, x)), where (xi, yi) is any instance-label pair. Cristianini et al. then devised a procedure that works with both labelled and unlabelled examples to find a Gram matrix which attains a good alignment with K⋆ on the labelled part of the matrix. While this approach can clearly construct powerful kernels, a few problems arise from the notion of kernel alignment they employed. For instance, a kernel operator such that sign(K(xi, xj)) is equal to yi yj but whose magnitude |K(xi, xj)| is not necessarily 1 might achieve a poor alignment score while it can constitute a classifier whose empirical loss is zero. Furthermore, the task of finding a good kernel when it is not always possible to find a kernel whose sign on each pair of instances is equal to the product of the labels (termed the soft-margin case in [5, 6]) becomes rather tricky. We thus propose a different approach which attempts to overcome some of the difficulties above. Like Cristianini et al., we assume that we are given a set of labelled instances S = {(xi, yi) | xi ∈ X, yi ∈ {−1, +1}, i = 1, . . . , m}. We are also given a set of unlabelled examples S̃ = {x̃i}_{i=1}^{m̃}. If such a set is not provided, we can simply use the labelled instances (without the labels themselves) as the set S̃.
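For concreteness, the empirical alignment defined above can be computed in a few lines; the toy label vector here is invented for the example:

```python
import numpy as np

def alignment(K1, K2):
    # A(S, K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F)
    num = np.sum(K1 * K2)
    return num / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

y = np.array([1.0, 1.0, -1.0, -1.0])    # invented toy labels
K_target = np.outer(y, y)               # ideal target kernel yy^T
a_self = alignment(K_target, K_target)  # identical matrices align perfectly
a_eye = alignment(K_target, np.eye(4))  # identity kernel: 4 / (4 * 2) = 0.5
```

The second call illustrates the bound discussed above: only an identical (up to scale) kernel matrix attains alignment 1.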
The set S̃ is used for constructing the primitive kernels that are combined to constitute the learned kernel K̂. The labelled set is used to form the target kernel matrix, and its instances are used for evaluating the learned kernel K̂. This approach, known as transductive learning, was suggested in [5, 6] for kernel alignment tasks when the distribution of the instances in the test data is different from that of the training data. This setting is particularly handy for datasets where the test data was collected in a different scheme than the training data. We next discuss the notion of kernel goodness employed in this paper. This notion builds on the objective function that several variants of boosting algorithms maintain [7, 8]. We therefore first discuss in brief the form of boosting algorithms for kernels.

2 Using Boosting to Combine Kernels

Numerous interpretations of AdaBoost and its variants cast the boosting process as a procedure that attempts to minimize, or make small, a continuous bound on the classification error (see for instance [9, 7] and the references therein). A recent work by Collins et al. [8] unifies the boosting process for two popular loss functions, the exponential-loss (denoted henceforth as ExpLoss) and logarithmic-loss (denoted as LogLoss), that bound the empirical classification error.

Input: Labelled and unlabelled sets of examples: S = {(xi, yi)}_{i=1}^m ; S̃ = {x̃i}_{i=1}^{m̃}
Initialize: K ← 0 (all zeros matrix)
For t = 1, 2, . . . , T:
• Calculate a distribution over pairs 1 ≤ i, j ≤ m:
    Dt(i, j) = exp(−yi yj K(xi, xj))             (ExpLoss)
    Dt(i, j) = 1 / (1 + exp(−yi yj K(xi, xj)))   (LogLoss)
• Call the base-kernel-learner with (Dt, S, S̃) and receive Kt
• Calculate:
    S⁺t = {(i, j) | yi yj Kt(xi, xj) > 0} ;  S⁻t = {(i, j) | yi yj Kt(xi, xj) < 0}
    W⁺t = Σ_{(i,j)∈S⁺t} Dt(i, j) |Kt(xi, xj)| ;  W⁻t = Σ_{(i,j)∈S⁻t} Dt(i, j) |Kt(xi, xj)|
• Set: αt = ½ ln(W⁺t / W⁻t) ;  K ← K + αt Kt
Return: kernel operator K : X × X → ℝ

Figure 1: The skeleton of the boosting algorithm for kernels.
Given the prediction of a classifier f on an instance x and a label y ∈ {−1, +1}, the ExpLoss and the LogLoss are defined as,
ExpLoss(f(x), y) = exp(−y f(x)),
LogLoss(f(x), y) = log(1 + exp(−y f(x))).
Collins et al. described a single algorithm for the two losses above that can be used within the boosting framework to construct a strong-hypothesis, which is a classifier f(x). This classifier is a weighted combination of (possibly very simple) base classifiers. (In the boosting framework, the base classifiers are referred to as weak-hypotheses.) The strong-hypothesis is of the form f(x) = Σ_{t=1}^T αt ht(x). Collins et al. discussed a few ways to select the weak-hypotheses ht and to find a good set of weights αt. Our starting point in this paper is the first sequential algorithm from [8], which enables the construction of weak-hypotheses on-the-fly. We would like to note, however, that it is possible to use other variants of boosting to design kernels. In order to use boosting to design kernels, we extend the algorithm to operate over pairs of instances. Building on the notion of alignment from [5, 6], we say that the inner-product of x1 and x2 is aligned with the labels y1 and y2 if sign(K(x1, x2)) = y1 y2. Furthermore, we would like to make the magnitude of K(x, x′) as large as possible. We therefore use one of the following two alignment losses for a pair of examples (x1, y1) and (x2, y2),
ExpLoss(K(x1, x2), y1 y2) = exp(−y1 y2 K(x1, x2)),
LogLoss(K(x1, x2), y1 y2) = log(1 + exp(−y1 y2 K(x1, x2))).
Put another way, we view a pair of instances as a single example and cast the pairs of instances that attain the same label as positively labelled examples, while pairs of opposite labels are cast as negatively labelled examples. Clearly, this approach can be applied to both losses. In the boosting process we therefore maintain a distribution over pairs of instances.
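A minimal NumPy rendering of the Fig. 1 skeleton with the ExpLoss weighting may help fix ideas; the toy data and the trivial base learner below are invented (a real base learner is the subject of the next section), and the early-exit when W⁻t vanishes is a practical guard:

```python
import numpy as np

def boost_kernel(X, y, base_learner, T=10):
    """Skeleton of the Fig. 1 booster with ExpLoss pair weights.
    base_learner(D, X, y) must return a base kernel matrix K_t on S."""
    m = len(y)
    yy = np.outer(y, y)
    K = np.zeros((m, m))
    for t in range(T):
        D = np.exp(-yy * K)                    # D_t(i, j) for the ExpLoss
        Kt = base_learner(D, X, y)
        signed = yy * Kt
        Wp = np.sum(D[signed > 0] * np.abs(Kt[signed > 0]))
        Wm = np.sum(D[signed < 0] * np.abs(Kt[signed < 0]))
        if Wm <= 0:                            # perfectly aligned base kernel:
            K += Kt                            # cap its weight and stop
            break
        K += 0.5 * np.log(Wp / Wm) * Kt        # alpha_t = 0.5 ln(W+ / W-)
    return K

# invented separable toy data; the base learner just proposes the linear Gram matrix
X = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.9, -0.1]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = boost_kernel(X, y, lambda D, X_, y_: X_ @ X_.T)
pair_loss = np.sum(np.exp(-np.outer(y, y) * K))  # total ExpLoss over all pairs
```

On this toy set the total pair loss drops below its value at K = 0 (which is m² = 16), mirroring the bound-minimization view of boosting described above.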
The weight of each pair reflects how difficult it is to predict whether the labels of the two instances are the same or different. The core boosting algorithm follows lines similar to boosting algorithms for classification. The pseudo-code of the booster is given in Fig. 1. The pseudo-code is an adaptation to the problem of kernel design of the sequential-update algorithm from [8]. As with other boosting algorithms, the base-learner, which in our case is in charge of returning a good kernel with respect to the current distribution, is left unspecified. We therefore turn our attention to the algorithmic implementation of the base-learning algorithm for kernels.

3 Learning Base Kernels

The base kernel learner is provided with a training set S and a distribution Dt over pairs of instances from the training set. It is also provided with a set of unlabelled examples S̃. Without any knowledge of the topology of the space of instances, a learning algorithm is likely to fail. Therefore, we assume the existence of an initial inner-product over the input space. We assume for now that this initial inner-product is the standard scalar product over vectors in ℝⁿ. We later discuss a way to relax the assumption on the form of the inner-product. Equipped with an inner-product, we define the family of base kernels to be the possible outer-products Kw = wwᵀ between a vector w ∈ ℝⁿ and itself.

Input: A distribution Dt. Labelled and unlabelled sets: S = {(xi, yi)}_{i=1}^m ; S̃ = {x̃i}_{i=1}^{m̃}
Compute:
• Calculate: A ∈ ℝ^{m×m̃}, A_{i,r} = xi · x̃r ;  B ∈ ℝ^{m×m}, B_{i,j} = Dt(i, j) yi yj ;  K ∈ ℝ^{m̃×m̃}, K_{r,s} = x̃r · x̃s
• Find the generalized eigenvector v ∈ ℝ^{m̃} for the problem AᵀBAv = λKv which attains the largest eigenvalue λ
• Set: w = (Σ_r vr x̃r) / ∥Σ_r vr x̃r∥
Return: Kernel operator Kw = wwᵀ

Figure 2: The base kernel learning algorithm.

Using this definition we get, Kw(xi, xj) = (xi · w)(xj · w).
Therefore, the similarity between two instances xi and xj is high iff both xi and xj are similar (w.r.t. the standard inner-product) to a third vector w. Analogously, if both xi and xj seem to be dissimilar to the vector w, then they are similar to each other. Despite the restrictive form of the inner-products, this family is still too rich for our setting, and we further impose two restrictions on the inner products. First, we assume that w is restricted to a linear combination of vectors from S̃. Second, since scaling of the base kernels is performed by the booster, we constrain the norm of w to be 1. The resulting class of kernels is therefore,
C = {Kw = wwᵀ | w = Σ_{r=1}^{m̃} βr x̃r, ∥w∥ = 1}.
In the boosting process we need to choose a specific base kernel Kw from C. We therefore need to devise a notion of how good a candidate base kernel is, given a labelled set S and a distribution function Dt. In this work we use the simplest version suggested by Collins et al. This version can be viewed as a linear approximation of the loss function. We define the score of a kernel Kw w.r.t. the current distribution Dt to be,
$$\mathrm{Score}(K_w) = \sum_{i,j} D_t(i, j)\, y_i y_j K_w(x_i, x_j). \tag{1}$$
The higher the value of the score, the better Kw fits the training data. Note that if Dt(i, j) = 1/m² (as is D0), then Score(Kw) is proportional to the alignment, since ∥w∥ = 1. Under mild assumptions the score can also provide a lower bound on the loss function. To see this, let c be the negated derivative of the loss function at margin zero, c = −Loss′(0). If all the training examples xi ∈ S lie in a ball of radius √c, we get that Loss(Kw(xi, xj), yi yj) ≥ 1 − c Kw(xi, xj) yi yj ≥ 0, and therefore,
Σ_{i,j} Dt(i, j) Loss(Kw(xi, xj), yi yj) ≥ 1 − c Σ_{i,j} Dt(i, j) Kw(xi, xj) yi yj.
Using the explicit form of Kw in the Score function (Eq. (1)) we get, Score(Kw) = Σ_{i,j} D(i, j) yi yj (w · xi)(w · xj).
Further developing the above equation using the constraint that w = Σ_{r=1}^{m̃} βr x̃r we get,

Score(Kw) = Σ_{r,s} βs βr Σ_{i,j} D(i, j) yi yj (xi · x̃r)(xj · x̃s) .

To compute the base kernel score efficiently, without an explicit enumeration, we exploit the fact that if the initial distribution D0 is symmetric (D0(i, j) = D0(j, i)) then all the distributions generated along the run of the boosting process, Dt, are also symmetric. We now define a matrix A ∈ R^{m×m̃} where A_{i,r} = xi · x̃r and a symmetric matrix B ∈ R^{m×m} with B_{i,j} = Dt(i, j) yi yj. Simple algebraic manipulations yield that the score function can be written as the following quadratic form,

Score(β) = βᵀ(AᵀBA)β ,

where β is an m̃-dimensional column vector. Note that since B is symmetric so is AᵀBA. Finding a good base kernel is equivalent to finding a vector β which maximizes this quadratic form under the norm equality constraint ‖w‖² = ‖Σ_{r=1}^{m̃} βr x̃r‖² = βᵀKβ = 1, where K_{r,s} = x̃r · x̃s. Finding the maximum of Score(β) subject to the norm constraint is a well-known maximization problem known as the generalized eigenvector problem (cf. [10]). Applying simple algebraic manipulations it is easy to show that the matrix AᵀBA is positive semidefinite. Assuming that the matrix K is invertible, the vector β which maximizes the quadratic form is proportional to the eigenvector of K⁻¹AᵀBA associated with the largest generalized eigenvalue. Denoting this vector by v we get that w ∝ Σ_{r=1}^{m̃} vr x̃r. Adding the norm constraint we get that w = (Σ_{r=1}^{m̃} vr x̃r) / ‖Σ_{r=1}^{m̃} vr x̃r‖. The skeleton of the algorithm for finding a base kernel is given in Fig. 2. To conclude the description of the kernel learning algorithm, we describe how to extend the algorithm to be employed with general kernel functions. Kernelizing the Kernel: As described above, we assumed that the standard scalar product constitutes the template for the class of base kernels C.
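Once the coefficient vector v is in hand, the base kernel never needs w explicitly: its norm and its evaluations are computable from v and a kernel κ alone, since w = Σ_r vr φ(x̃r). A minimal numpy sketch (the RBF choice and all names are ours, for illustration only):

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """kappa(a, b) = exp(-gamma ||a - b||^2), one possible choice of kernel."""
    return float(np.exp(-gamma * np.sum((a - b) ** 2)))

def w_norm_sq(v, X_tilde, kappa=rbf):
    # ||w||^2 = sum_{r,s} v_r v_s kappa(xt_r, xt_s) for w = sum_r v_r phi(xt_r)
    G = np.array([[kappa(a, b) for b in X_tilde] for a in X_tilde])
    return float(v @ G @ v)

def base_kernel(xi, xj, v, X_tilde, kappa=rbf):
    # K_w(xi, xj) = (phi(xi).w)(phi(xj).w), with w kept implicit
    ki = np.array([kappa(xi, a) for a in X_tilde])
    kj = np.array([kappa(xj, a) for a in X_tilde])
    return float((v @ ki) * (v @ kj))
```

With κ taken as the plain dot product, these quantities coincide with the explicit ‖w‖² and (xi·w)(xj·w).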
However, since the procedure for choosing a base kernel depends on S and S̃ only through the inner-products matrix A, we can replace the scalar product itself with a general kernel operator κ: X × X → R, where κ(xi, xj) = φ(xi) · φ(xj). Using a general kernel function κ we cannot, however, compute the vector w explicitly. We therefore need to show that the norm of w, and the evaluation of Kw on any two examples, can still be performed efficiently. First note that given the vector v we can compute the norm of w as follows,

‖w‖² = (Σ_r vr φ(x̃r))ᵀ(Σ_s vs φ(x̃s)) = Σ_{r,s} vr vs κ(x̃r, x̃s) .

Next, given two vectors xi and xj the value of their inner-product is,

Kw(xi, xj) = Σ_{r,s} vr vs κ(xi, x̃r) κ(xj, x̃s) .

Therefore, although we cannot compute the vector w explicitly we can still compute its norm and evaluate any of the kernels from the class C.

4 Experiments

Synthetic data: We generated binary-labelled data using as input space the vectors in R¹⁰⁰. The labels, in {−1, +1}, were picked uniformly at random. Let y designate the label of a particular example. Then, the first two components of each instance were drawn from a two-dimensional normal distribution, N(µ, ∆P∆⁻¹), with the following parameters,

µ = y (0.03, 0.03)ᵀ ,  ∆ = (1/√2) [1 −1; 1 1] ,  P = [0.1 0; 0 0.01] .

That is, the label of each example determined the mean of the distribution from which the first two components were generated.

Figure 3: Results on a toy data set prior to learning a kernel (first and third from left) and after learning (second and fourth). For each of the two settings we show the first two components of the training data (left) and the matrix of inner products between the train and the test data (right).

The rest of the components in the vector (98
altogether) were generated independently using the normal distribution with zero mean and a standard deviation of 0.05. We generated 100 training and test sets of size 300 and 200 respectively. We used the standard dot-product as the initial kernel operator. In each experiment we first learned a linear classifier that separates the classes using the Perceptron [11] algorithm. We ran the algorithm for 10 epochs on the training set. After each epoch we evaluated the performance of the current classifier on the test set. We then used the boosting algorithm for kernels with the LogLoss for 30 rounds to build a kernel for each random training set. After learning the kernel we re-trained a classifier with the Perceptron algorithm and recorded the results. A summary of the online performance is given in Fig. 4. The plot on the left-hand side of the figure shows the instantaneous error (achieved during the run of the algorithm). Clearly, the Perceptron algorithm with the learned kernel converges much faster than with the original kernel. The middle plot shows the test error after each epoch. The plot on the right shows the test error on a noisy test set in which we added Gaussian noise of zero mean and a standard deviation of 0.03 to the first two features. In all plots, each bar indicates a 95% confidence level. It is clear from the figure that the original kernel is much slower to converge than the learned kernel. Furthermore, though the kernel learning algorithm was not exposed to the test-set noise, the learned kernel better reflects the structure of the feature space, which makes it more robust to noise. Fig. 3 further illustrates the benefits of using a boutique kernel. The first and third plots from the left correspond to results obtained using the original kernel and the second and fourth plots show results using the learned kernel. The left plots show the empirical distribution of the two informative components on the test data.
For the learned kernel we took each input vector and projected it onto the two eigenvectors of the learned kernel operator matrix that correspond to the two largest eigenvalues. Note that the distribution after the projection is bimodal and well separated along the first eigen direction (x-axis) and shows rather little deviation along the second eigen direction (y-axis). This indicates that the kernel learning algorithm indeed found the most informative projection for separating the labelled data with large margin. It is worth noting that, in this particular setting, any algorithm which chooses a single feature at a time is prone to failure since both the first and second features are mandatory for correctly classifying the data. The two plots on the right-hand side of Fig. 3 use a gray-level color-map to designate the value of the inner-product between each pair of instances, one from the training set (y-axis) and the other from the test set. The examples were ordered such that the first group consists of the positively labelled instances while the second group consists of the negatively labelled instances. Since most of the features are non-relevant the original inner-products are noisy and do not exhibit any structure. In contrast, the inner-products using the learned kernel yield a 2 × 2 block matrix indicating that the inner-products between instances sharing the same label obtain large positive values. Similarly, for instances of opposite labels the inner-products are large and negative.

Figure 4: The online training error (left) and test error (middle) on clean synthetic data using a standard kernel and a learned kernel. Right: the online test error for the two kernels on a noisy test set.

The form of the inner-products matrix of the learned kernel indicates that the learning problem itself becomes much easier. Indeed, the Perceptron algorithm with the standard kernel required around 94 training examples on average before converging to a hyperplane which perfectly separates the training data, while the Perceptron algorithm with the learned kernel required a single example to reach a perfect separation on all 100 random training sets.

USPS dataset: The USPS (US Postal Service) dataset is known as a challenging classification problem in which the training set and the test set were collected in a different manner. The USPS contains 7,291 training examples and 2,007 test examples. Each example is represented as a 16 × 16 matrix where each entry is a pixel that can take values in {0, …, 255}. Each example is associated with a label in {0, …, 9} which is the digit content of the image. Since the kernel learning algorithm is designed for binary problems, we broke the 10-class problem into 45 binary problems by comparing all pairs of classes. The interesting question of how to learn kernels for multiclass problems is beyond the scope of this short paper. We thus concentrate on the binary error results for the 45 binary problems described above. For the original kernel we chose an RBF kernel with σ = 1, which is the value employed in the experiments reported in [12]. We used the kernelized version of the kernel design algorithm to learn a different kernel operator for each of the binary problems. We then used a variant of the Perceptron [11] with the original RBF kernel and with the learned kernels. One of the motivations for using the Perceptron is its simplicity, which can underscore differences in the kernels. We ran the kernel learning algorithm with LogLoss and ExpLoss, using both the training set and the test set as S̃.
Thus, we obtained four different sets of kernels where each set consists of 45 kernels. By examining the training loss, we set the number of rounds of boosting to be 30 for the LogLoss and 50 for the ExpLoss when using the training set. When using the test set, the number of rounds of boosting was set to 100 for both losses. Since the algorithm exhibits a slower rate of convergence with the test data, we chose a higher value without attempting to optimize the actual value. The left plot of Fig. 5 is a scatter plot comparing the test error of each of the binary classifiers when trained with the original RBF kernel versus the performance achieved on the same binary problem with a learned kernel. The kernels were built using boosting with the LogLoss and S̃ was the training data. In almost all of the 45 binary classification problems, the learned kernels yielded lower error rates when combined with the Perceptron algorithm. The right plot of Fig. 5 compares two learned kernels: the first was built using the training instances as the templates constituting S̃ while the second used the test instances. Although the difference between the two versions is not as significant as the difference on the left plot, we still achieve an overall improvement in about 25% of the binary problems by using the test instances.

Figure 5: Left: a scatter plot comparing the error rate of 45 binary classifiers trained using an RBF kernel (x-axis) and a learned kernel with training instances. Right: a similar scatter plot for a learned kernel constructed from training instances (x-axis) and test instances.

5 Discussion

In this paper we showed how to use the boosting framework to design kernels.
Our approach is especially appealing in transductive learning tasks where the test data distribution is different from the distribution of the training data. For example, in speech recognition tasks the training data is often clean and well recorded while the test data often passes through a noisy channel that distorts the signal. An interesting and challenging question that stems from this research is how to extend the framework to accommodate more complex decision tasks such as multiclass and regression problems. Finally, we would like to note that alternative approaches to the kernel design problem have been devised in parallel and independently. See [13, 14] for further details. Acknowledgements: Special thanks to Cyril Goutte and to John Shawe-Taylor for pointing out the connection to the generalized eigenvector problem. Thanks also to the anonymous reviewers for constructive comments. References [1] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998. [2] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000. [3] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444, 2002. [4] C. Leslie, E. Eskin, and W. Stafford Noble. The spectrum kernel: A string kernel for SVM protein classification. In Proceedings of the Pacific Symposium on Biocomputing, 2002. [5] Nello Cristianini, Andre Elisseeff, John Shawe-Taylor, and Jaz Kandola. On kernel target alignment. In Advances in Neural Information Processing Systems 14, 2001. [6] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semi-definite programming. In Proc. of the 19th Intl. Conf. on Machine Learning, 2002. [7] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337–374, April 2000.
[8] Michael Collins, Robert E. Schapire, and Yoram Singer. Logistic regression, AdaBoost and Bregman distances. Machine Learning, 47(2/3):253–285, 2002. [9] Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Functional gradient techniques for combining hypotheses. In Advances in Large Margin Classifiers. MIT Press, 1999. [10] Roger A. Horn and Charles R. Johnson. Matrix Analysis. Cambridge University Press, 1985. [11] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386–407, 1958. [12] B. Schölkopf, S. Mika, C.J.C. Burges, P. Knirsch, K. Müller, G. Rätsch, and A.J. Smola. Input space vs. feature space in kernel-based methods. IEEE Trans. on NN, 10(5):1000–1017, 1999. [13] O. Bousquet and D.J.L. Herrmann. On the complexity of learning the kernel matrix. NIPS, 2002. [14] C.S. Ong, A.J. Smola, and R.C. Williamson. Superkernels. NIPS, 2002.
Rational Kernels Corinna Cortes Patrick Haffner Mehryar Mohri AT&T Labs – Research 180 Park Avenue, Florham Park, NJ 07932, USA {corinna, haffner, mohri}@research.att.com Abstract We introduce a general family of kernels based on weighted transducers or rational relations, rational kernels, that can be used for analysis of variable-length sequences or more generally weighted automata, in applications such as computational biology or speech recognition. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. We also describe several general families of positive definite symmetric rational kernels. These general kernels can be combined with Support Vector Machines to form efficient and powerful techniques for spoken-dialog classification: highly complex kernels become easy to design and implement and lead to substantial improvements in the classification accuracy. We also show that the string kernels considered in applications to computational biology are all specific instances of rational kernels. 1 Introduction In many applications such as speech recognition and computational biology, the objects to study and classify are not just fixed-length vectors, but variable-length sequences, or even large sets of alternative sequences and their probabilities. Consider for example the problem that originally motivated the present work, that of classifying speech recognition outputs in a large spoken-dialog application. For a given speech utterance, the output of a large-vocabulary speech recognition system is a weighted automaton called a word lattice compactly representing the possible sentences and their respective probabilities based on the models used. Such lattices, while containing sometimes just a few thousand transitions, may contain hundreds of millions of paths each labeled with a distinct sentence.
The application of discriminant classification algorithms to word lattices, or more generally weighted automata, raises two issues: that of handling variable-length sequences, and that of applying a classifier to a distribution of alternative sequences. We describe a general technique that solves both of these problems. Kernel methods are widely used in statistical learning techniques such as Support Vector Machines (SVMs) [18] due to their computational efficiency in high-dimensional feature spaces. This motivates the introduction and study of kernels for weighted automata. We present a general family of kernels based on weighted transducers or rational relations, rational kernels, which apply to weighted automata. We show that rational kernels can be computed efficiently using a general algorithm of composition of weighted transducers and a general single-source shortest-distance algorithm. We also briefly describe some specific rational kernels and their applications to spoken-dialog classification. These kernels are symmetric and positive definite and can thus be combined with SVMs to form efficient and powerful classifiers. An important benefit of our approach is its generality and its simplicity: the same efficient algorithm can be used to compute arbitrarily complex rational kernels. This makes highly complex kernels easy to use and helps us achieve substantial improvements in classification accuracy.

SEMIRING      | SET              | ⊕      | ⊗ | 0̄  | 1̄
Boolean       | {0, 1}           | ∨      | ∧ | 0  | 1
Probability   | R₊               | +      | × | 0  | 1
Log           | R ∪ {−∞, +∞}     | ⊕_log  | + | +∞ | 0
Tropical      | R ∪ {−∞, +∞}     | min    | + | +∞ | 0

Table 1: Semiring examples. ⊕_log is defined by: x ⊕_log y = −log(e^{−x} + e^{−y}).

2 Weighted automata and transducers

In this section, we present the algebraic definitions and notation necessary to introduce rational kernels.

Definition 1 ([7]) A system (𝕂, ⊕, ⊗, 0̄, 1̄) is a semiring if: (𝕂, ⊕, 0̄) is a commutative monoid with identity element 0̄; (𝕂, ⊗, 1̄) is a monoid with identity element 1̄; ⊗ distributes over ⊕; and 0̄ is an annihilator for ⊗: for all a ∈ 𝕂, a ⊗ 0̄ = 0̄ ⊗ a = 0̄.

Thus, a semiring is a ring that may lack negation. Table 1 lists some familiar examples of semirings. In addition to the Boolean semiring and the probability semiring used to combine probabilities, two semirings often used in applications are the log semiring, which is isomorphic to the probability semiring via a −log morphism, and the tropical semiring, which is derived from the log semiring using the Viterbi approximation.

Definition 2 A weighted finite-state transducer T over a semiring 𝕂 is an 8-tuple T = (Σ, Δ, Q, I, F, E, λ, ρ) where: Σ is the finite input alphabet of the transducer; Δ is the finite output alphabet; Q is a finite set of states; I ⊆ Q the set of initial states; F ⊆ Q the set of final states; E ⊆ Q × (Σ ∪ {ε}) × (Δ ∪ {ε}) × 𝕂 × Q a finite set of transitions; λ: I → 𝕂 the initial weight function; and ρ: F → 𝕂 the final weight function mapping F to 𝕂.

Weighted automata can be formally defined in a similar way by simply omitting the input or the output labels. Given a transition e ∈ E, we denote by p[e] its origin or previous state, n[e] its destination state or next state, and w[e] its weight. A path π = e₁ ⋯ e_k is an element of E* with consecutive transitions: n[e_{i−1}] = p[e_i], i = 2, …, k. We extend n and p to paths by setting: n[π] = n[e_k] and p[π] = p[e₁]. The weight function w can also be extended to paths by defining the weight of a path as the ⊗-product of the weights of its constituent transitions: w[π] = w[e₁] ⊗ ⋯ ⊗ w[e_k]. We denote by P(q, q′) the set of paths from q to q′ and by P(q, x, y, q′) the set of paths from q to q′ with input label x ∈ Σ* and output label y (transducer case). These definitions can be extended to subsets R, R′ ⊆ Q by: P(R, x, y, R′) = ∪_{q ∈ R, q′ ∈ R′} P(q, x, y, q′). A transducer T is regulated if the output weight associated by T to any pair of input-output strings (x, y) by:

[[T]](x, y) = ⊕_{π ∈ P(I, x, y, F)} λ(p[π]) ⊗ w[π] ⊗ ρ(n[π])   (1)

is well-defined and in 𝕂. [[T]](x, y) = 0̄ when P(I, x, y, F) = ∅. In the following, we will assume that all the transducers considered are regulated. Weighted transducers are closed under ⊕, ⊗ and Kleene-closure. In particular, the ⊕-sum and ⊗-multiplication of two transducers T₁ and T₂ are defined for each pair (x, y) by:

[[T₁ ⊕ T₂]](x, y) = [[T₁]](x, y) ⊕ [[T₂]](x, y)   (2)
[[T₁ ⊗ T₂]](x, y) = ⊕_{x₁x₂ = x, y₁y₂ = y} [[T₁]](x₁, y₁) ⊗ [[T₂]](x₂, y₂)   (3)

3 Rational kernels

This section introduces rational kernels, presents a general algorithm for computing them efficiently and describes several examples of rational kernels.

3.1 Definition

Definition 3 A kernel K is rational if there exist a weighted transducer T = (Σ, Δ, Q, I, F, E, λ, ρ) over the semiring 𝕂 and a function ψ: 𝕂 → R such that for all x, y ∈ Σ*:

K(x, y) = ψ([[T]](x, y))   (4)

In general, ψ is an arbitrary function mapping 𝕂 to R. In some cases, it may be desirable to assume that it is a semiring morphism as in Section 3.6. It is often the identity function when 𝕂 = R and may be a projection when the semiring 𝕂 is the cross-product of R and another semiring (𝕂 = R × 𝕂′).
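The algebra above (the semirings of Table 1, ⊗-products along a path, ⊕-sums over a set of paths as in Eq. (1)) can be illustrated with a small sketch; this is a toy illustration, not the authors' implementation, and initial/final weight functions are omitted for brevity:

```python
import math

# Each semiring is represented as (plus, times, zero, one) over its weight set;
# these are the Probability, Log, and Tropical rows of Table 1.
PROBABILITY = (lambda x, y: x + y, lambda x, y: x * y, 0.0, 1.0)
LOG = (lambda x, y: -math.log(math.exp(-x) + math.exp(-y)),  # x (+)_log y
       lambda x, y: x + y, math.inf, 0.0)
TROPICAL = (min, lambda x, y: x + y, math.inf, 0.0)

def path_weight(weights, semiring):
    """(x)-product of the transition weights along one path."""
    plus, times, zero, one = semiring
    w = one
    for wi in weights:
        w = times(w, wi)
    return w

def paths_weight(paths, semiring):
    """(+)-sum over a set of paths, in the spirit of Eq. (1)."""
    plus, times, zero, one = semiring
    total = zero
    for p in paths:
        total = plus(total, path_weight(p, semiring))
    return total
```

In the tropical semiring this computes the weight of the lightest path; in the probability semiring, the total probability mass of the path set.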
Rational kernels can be naturally extended to kernels over weighted automata. In the following, to simplify the presentation, we will restrict ourselves to the case of acyclic weighted automata, which is the case of interest for our applications, but our results apply similarly to arbitrary weighted automata. Let A and B be two acyclic weighted automata over the semiring 𝕂, then K(A, B) is defined by:

K(A, B) = ψ(⊕_{x,y} [[A]](x) ⊗ [[T]](x, y) ⊗ [[B]](y))   (5)

More generally, the results mentioned in the following for strings all apply similarly to acyclic weighted automata. Since the set of weighted transducers over a semiring 𝕂 is also closed under ⊕-sum and ⊗-product [2, 3], it follows that rational kernels over a semiring 𝕂 are closed under sum and product. We denote by K₁ + K₂ the sum and by K₁ · K₂ the product of two rational kernels K₁ and K₂. Let T₁ and T₂ be the associated transducers of these kernels; we have for example:

(K₁ + K₂)(x, y) = ψ([[T₁ ⊕ T₂]](x, y)) = K₁(x, y) + K₂(x, y)   (6)

In learning techniques such as those based on SVMs, we are particularly interested in positive definite symmetric kernels, which guarantee the existence of a corresponding reproducing kernel Hilbert space. Not all rational kernels are positive definite symmetric but in the following sections we will describe some general classes of rational kernels that have this property. Positive definite symmetric kernels can be used to construct other families of kernels that also meet these conditions [17]. Polynomial kernels of degree p are formed from the expression (K + a)^p, and Gaussian kernels can be formed as exp(−d²/σ²) with d²(x, y) = K(x, x) + K(y, y) − 2K(x, y). Since the class of symmetric positive definite kernels is closed under sum [1], the sum of two positive definite rational kernels is also a positive definite rational kernel. In what follows, we will focus on the algorithm for computing rational kernels. The algorithm for computing K(x, y), or K(A, B), for any two acyclic weighted automata, is based on two general algorithms that we briefly present: composition of weighted transducers to combine A, T, and B, and a general shortest-distance algorithm in a semiring 𝕂 to compute the ⊕-sum of the weights of the successful paths of the combined machine.

3.2 Composition of weighted transducers

Composition is a fundamental operation on weighted transducers that can be used in many applications to create complex weighted transducers from simpler ones. Let 𝕂 be a commutative semiring and let T₁ and T₂ be two weighted transducers defined over 𝕂 such that the input alphabet of T₂ coincides with the output alphabet of T₁. Then, the composition of T₁ and T₂ is a weighted transducer T₁ ∘ T₂ which, when it is regulated, is defined for all (x, y) by [2, 3, 15, 7]:¹

[[T₁ ∘ T₂]](x, y) = ⊕_z [[T₁]](x, z) ⊗ [[T₂]](z, y)   (7)

Figure 1: (a) Weighted transducer T₁ over the log semiring. (b) Weighted transducer T₂ over the log semiring. (c) Construction of the result of composition T₁ ∘ T₂. Initial states are represented by bold circles, final states by double circles. Inside each circle, the first number indicates the state number, the second, at final states only, the value of the final weight function ρ at that state. Arrows represent transitions and are labeled with symbols followed by their corresponding weight.

Note that a transducer can be viewed as a matrix over a countable set Σ* × Δ* and composition as the corresponding matrix-multiplication. There exists a general and efficient composition algorithm for weighted transducers which takes advantage of the sparsity of the input transducers [14, 12]. States in the composition T₁ ∘ T₂ of two weighted transducers T₁ and T₂ are identified with pairs of a state of T₁ and a state of T₂. Leaving aside transitions with ε inputs or outputs, the following rule specifies how to compute a transition of T₁ ∘ T₂ from appropriate transitions of T₁ and T₂:²

(q₁, a, b, w₁, q₂) and (q₁′, b, c, w₂, q₂′) ⟹ ((q₁, q₁′), a, c, w₁ ⊗ w₂, (q₂, q₂′))   (8)

In the worst case, all transitions of T₁ leaving a state q₁ match all those of T₂ leaving state q₁′, thus the space and time complexity of composition is quadratic: O((|Q₁| + |E₁|)(|Q₂| + |E₂|)). Fig. 1(c) illustrates the algorithm when applied to the transducers of Fig. 1(a)-(b) defined over the log semiring. The intersection of two weighted automata is a special case of composition. It corresponds to the case where the input and output label of each transition are identical.

3.3 Single-source shortest-distance algorithm over a semiring

Given a weighted automaton or transducer M, the shortest-distance from state q to the set of final states F is defined as the ⊕-sum of all the paths from q to F:

d[q] = ⊕_{π ∈ P(q, F)} w[π] ⊗ ρ(n[π])   (9)

when this sum is well-defined and in 𝕂, which is always the case when the semiring is k-closed or when M is acyclic [11], the case of interest in what follows. There exists a general algorithm for computing the shortest-distance d[q] in linear time O(|Q| + (T_⊕ + T_⊗)|E|), where T_⊕ denotes the maximum time to compute ⊕ and T_⊗ the time to compute ⊗ [11]. The algorithm is a generalization of Lawler's algorithm [8] to the case of an arbitrary semiring 𝕂. It is based on a generalized relaxation of the outgoing transitions of each state of M visited in reverse topological order [11].

¹We use a matrix notation for the definition of composition as opposed to a functional notation. This is a deliberate choice motivated by an improved readability in many applications.

²See [14, 12] for a detailed presentation of the algorithm including the use of a transducer filter for dealing with ε-multiplicity in the case of non-idempotent semirings.

Figure 2: Weighted transducers associated to two rational kernels. (a) Edit-distance kernel. (b) Gappy n-gram count kernel, with n = 2.

3.4 Algorithm

Let K be a rational kernel and let T be the associated weighted transducer. Let A and B be two acyclic weighted automata. A and B may represent just two strings x, y ∈ Σ* or may be any other complex weighted acceptors. By definition of rational kernels (Eq. (5)) and the shortest-distance (Eq. (9)), K(A, B) can be computed by:

1. Constructing the acyclic composed transducer M = A ∘ T ∘ B.
2. Computing d[M], the shortest-distance from the initial states of M to its final states using the shortest-distance algorithm described in the previous section.
3. Computing ψ(d[M]).

Thus, the total complexity of the algorithm is O(|T||A||B| + Φ), where |T|, |A|, and |B| denote respectively the sizes of T, A and B, and Φ the worst-case complexity of computing ψ(x), x ∈ 𝕂. If we assume that ψ can be computed in constant time as in many applications, then the complexity of the computation of K(A, B) is quadratic with respect to A and B: O(|T||A||B|).
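Step 2 of the algorithm above, the shortest distance over the acyclic machine M, reduces to a single relaxation pass in reverse topological order. A minimal generic sketch (the dictionary representation of the machine is ours, not the authors'):

```python
def shortest_distance(topo, arcs, rho, plus, times, zero):
    """d[q] = (+)-sum over paths from q to F of w[path] (x) rho(final), Eq. (9),
    for an acyclic machine. topo: states in topological order;
    arcs: {state: [(weight, next_state), ...]}; rho: {final_state: final_weight}."""
    d = {}
    for q in reversed(topo):            # sinks first, so d[next] is ready below
        d[q] = rho.get(q, zero)         # empty path contributes the final weight
        for w, nq in arcs.get(q, []):   # relax each outgoing transition of q
            d[q] = plus(d[q], times(w, d[nq]))
    return d
```

With (plus, times, zero) = (+, ×, 0) this sums path probabilities; with (min, +, ∞) it computes the tropical (Viterbi) shortest distance.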
3.5 Edit-distance kernels

Recently, several kernels, string kernels, have been introduced in computational biology for input vectors representing biological sequences [4, 19]. String kernels are specific instances of rational kernels. Fig. 2(a) shows the weighted transducer over the tropical semiring associated to a classical type of string kernel. The kernel corresponds to an edit-distance based on a symbol substitution with cost 1, deletion with cost 2, and insertion of cost 3. All classical edit-distances can be represented by weighted transducers over the tropical semiring [13, 10]. The kernel computation algorithm just described can be used to compute efficiently the edit-distance of two strings or two sets of strings represented by automata.³

3.6 Rational kernels of the type T ∘ T⁻¹

There exists a general method for constructing a positive definite and symmetric rational kernel from a weighted transducer T when ψ: 𝕂 → R is a semiring morphism, which implies in particular that 𝕂 is commutative. Denote by T⁻¹ the inverse of T, that is the transducer obtained from T by transposing the input and output labels of each transition. Then the composed transducer S = T ∘ T⁻¹ is symmetric and, when it is regulated, defines a positive definite symmetric rational kernel K. Indeed, since ψ is a semiring morphism, by definition of composition:

K(x, y) = ψ([[S]](x, y)) = Σ_z ψ([[T]](x, z)) · ψ([[T]](y, z))

which shows that K is symmetric. For any non-negative integer n and for all x, y we define a symmetric kernel K_n by:

K_n(x, y) = Σ_{|z| ≤ n} ψ([[T]](x, z)) · ψ([[T]](y, z))

where the sum runs over all strings z of length less than or equal to n. Let z₁, z₂, … be an arbitrary ordering of these strings. For any l ≥ 1 and any x₁, …, x_l ∈ Σ*, define the matrix M_n by: (M_n)_{ij} = K_n(x_i, x_j). Then, M_n = V_n V_nᵀ with V_n defined by (V_n)_{ij} = ψ([[T]](x_i, z_j)). Thus, the eigenvalues of M_n are all non-negative, which implies that K_n is positive definite [1]. Since K is a point-wise limit of K_n, K(x, y) = lim_{n→∞} K_n(x, y), K is also positive definite [1].

³We have proved and will present elsewhere a series of results related to kernels based on the notion of edit-distance. In particular, we have shown that the classical edit-distance with equal costs for insertion, deletion and substitution is not negative definite [1] and that the Gaussian kernel defined from it is not positive definite.

4 Application to spoken-dialog classification

Rational kernels can be used in a variety of applications ranging from computational biology to optical character recognition. This section singles out one specific application, that of topic classification applied to the output of a speech recognizer. We will show how the use of weighted transducers rationalizes the design and optimization of kernels. Simple equations and graphs replace complex diagrams and intricate algorithms often used for the definition and analysis of string kernels. As mentioned in the introduction, the output of a speech recognition system associated to a speech utterance is a weighted automaton called a word lattice representing a set of alternative sentences and their respective probabilities based on the models used. Rational kernels help address both the problem of handling variable-length sentences and that of applying a classification algorithm to such distributions of alternatives. The traditional solution to sentence classification is the "bag-of-words" approach used in information retrieval. Because of the very large dimension of the input space, the use of large-margin classifiers such as SVMs [6] and AdaBoost [16] was found to be appropriate in such applications. One approach adopted in various recent studies to measure the topic-similarity of two sentences consists of counting their common non-contiguous n-grams, i.e., their common substrings of words with possible insertions. These n-grams can be extracted explicitly from each sentence [16] or matched implicitly through a string kernel [9].
We will show that such kernels are rational and will describe how they can be easily constructed and computed using the general algorithms given in the previous section. More generally, we will show how rational kernels can be used to compute the expected counts of common non-contiguous n-grams of two weighted automata and thus define the topic-similarity of two lattices. This will demonstrate the simplicity, power, and flexibility of our framework for the design of kernels.

4.1 Application of T ∘ T⁻¹ kernels

Consider a word lattice X over the probability semiring. X can be viewed as a probability distribution P_X over all strings x ∈ Σ*. The expected count or number of occurrences of an n-gram sequence z in a string x for the probability distribution P_X is: c(z) = Σ_x P_X(x) |x|_z, where |x|_z denotes the number of occurrences of z in x. It is easy to construct a weighted transducer T_n that outputs the set of n-grams of an input lattice with their corresponding expected counts. Fig. 3(a) shows that transducer when the alphabet is reduced to Σ = {a, b} and n = 2. Similarly, the transducer T′_n of Fig. 3(b) can be used to output non-contiguous or gappy n-grams with their expected counts.⁴ Long gaps are penalized with a decay factor 0 < λ < 1: a gap of length k reduces the count by λᵏ. A transducer counting variable-length n-grams is obtained by simply taking the sum of these transducers: T = T₁ + ⋯ + T_n. In the remainder of this section, we will omit the subscript n and the factor λ, since our results are independent of the choice of these parameters.

⁴The transducers shown in the figures of this section are all defined over the probability semiring; thus a transition corresponding to a gap in T′_n is weighted by λ.

Figure 3: n-gram transducers (n = 2) defined over the probability semiring. (a) Bigram counter transducer T₂: deletion self-loops a:ε, b:ε at the first and last states, with identity transitions a:a, b:b in between. (b) Gappy bigram counter T′₂: as in (a), but with λ-weighted deletion self-loops a:ε/λ, b:ε/λ at the middle state.
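The expected counts c(z) = Σ_x P_X(x) |x|_z can be sketched directly for a toy distribution over alternative sentences (made-up data standing in for a recognizer's word lattice; the transducer machinery computes the same quantity without enumerating the alternatives):

```python
from collections import defaultdict

# Toy "word lattice": alternative sentences with posterior probabilities,
# standing in for a recognizer's weighted automaton (made-up data).
lattice = [
    (0.6, ("how", "may", "i", "help", "you")),
    (0.4, ("how", "many", "i", "help", "you")),
]

def expected_bigram_counts(lattice):
    """c(z) = sum_x P(x) * |x|_z: expected number of occurrences of bigram z."""
    c = defaultdict(float)
    for prob, words in lattice:
        for i in range(len(words) - 1):
            c[words[i], words[i + 1]] += prob
    return c

c = expected_bigram_counts(lattice)
# Bigrams shared by both alternatives accumulate mass from each:
# c[("i", "help")] == 1.0, while c[("how", "may")] == 0.6
```

Enumerating alternatives is exponential in general; the counting transducers extract the same expected counts directly from the lattice.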
Thus the topic-similarity of two strings or lattices X and Y based on the expected counts of their common substrings is given by:

K(X, Y) = [[X ∘ (T ∘ T⁻¹) ∘ Y]]   (10)

The kernel K is of the type studied in Section 3.6 and thus is symmetric and positive definite.

4.2 Computation

The specific form of the kernel and the associativity of composition provide us with several alternatives for computing K.

General algorithm. We can use the general algorithm described in Section 3.4 to compute K by precomputing the transducer T ∘ T⁻¹. Fig. 2(b) shows the result of that composition in the case of gappy bigrams. Using that algorithm, the complexity of the computation of the kernel K(X, Y) as described in the previous section is quadratic, O(|X| |Y|). This particular case has been treated by ad hoc algorithms with a similar complexity, but those only work with strings [9, 5] and not with weighted automata or lattices.

Other factoring. Thanks to the associativity of composition, we can consider a different factoring of the composition cascade defining K:

K(X, Y) = [[(X ∘ T) ∘ (T⁻¹ ∘ Y)]]   (11)

This factoring suggests computing X ∘ T and T⁻¹ ∘ Y first and then composing the resulting transducers, rather than constructing T ∘ T⁻¹. The choice between the two methods does not affect the overall time complexity of the algorithm, but in practice one method may be preferable over the other. We show elsewhere that in the specific case of counting transducers such as those described in the previous sections, the kernel computation can in fact be performed in linear time, that is in O(|X| + |Y|), in particular by using the notion of failure functions.

4.3 Experimental results

We used the T ∘ T⁻¹-type kernel with SVMs for call-classification in the spoken language understanding (SLU) component of the AT&T How May I Help You natural dialog system.
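For plain strings, the quantity computed in this section can be sketched by brute force (an illustrative stand-in for the transducer composition, far slower than the automata algorithms on long strings):

```python
from collections import defaultdict
from itertools import combinations

def gappy_counts(s, n=2, lam=0.5):
    """Gappy n-gram counts of string s: an occurrence at positions
    i_1 < ... < i_n is discounted by lam ** (number of skipped symbols),
    mirroring the gappy counter transducer T'_n."""
    c = defaultdict(float)
    for idx in combinations(range(len(s)), n):
        gap = (idx[-1] - idx[0] + 1) - n   # symbols skipped inside the match
        c[tuple(s[i] for i in idx)] += lam ** gap
    return c

def gappy_kernel(x, y, n=2, lam=0.5):
    """K(x, y) = sum over common gappy n-grams z of c_x(z) * c_y(z):
    the brute-force counterpart of composing x with T o T^-1 and y."""
    cx, cy = gappy_counts(x, n, lam), gappy_counts(y, n, lam)
    return sum(w * cy.get(z, 0.0) for z, w in cx.items())

# "aab" contains the bigram (a, b) once contiguously and once with a
# one-symbol gap (discounted by lam = 0.5), so:
# gappy_kernel("ab", "aab") == 1.0 * (1.0 + 0.5) == 1.5
```

The symmetry and positive definiteness shown earlier are inherited automatically, since the kernel is an inner product of count vectors.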
In this system, users ask questions about their bill or calling plans, and the objective is to assign to each question a class out of a finite set of 38 classes made of call-types and named entities, such as Billing Services or Calling Plans. In our experiments, we used 7,449 utterances as our training data and 2,228 utterances as our test data. The feature space corresponding to our lattice kernel is that of all possible trigrams over a vocabulary of 5,405 words. Training required just a few minutes on a single processor of a 1 GHz Intel Pentium Linux cluster with 2 GB of memory and 256 KB cache. The implementation took only a few hours and was entirely based on the FSM library. Compared to the standard approach of using trigram counts over the best recognized sentence, our experiments with a trigram rational kernel showed a reduction in error rate at a given rejection level.

5 Conclusion

In our classification experiments in spoken-dialog applications, we found rational kernels to be a very powerful exploration tool for constructing and generalizing highly efficient string and weighted automata kernels. In the design of learning machines such as SVMs, rational kernels give us access to the existing set of efficient and general weighted automata algorithms [13]. Prior knowledge about the task can be crafted into the kernel using graph editing tools or weighted regular expressions, in a way that is often more intuitive and easy to modify than complex matrices or formal algorithms.

References

[1] Christian Berg, Jens Peter Reus Christensen, and Paul Ressel. Harmonic Analysis on Semigroups. Springer-Verlag: Berlin-New York, 1984.
[2] Jean Berstel. Transductions and Context-Free Languages. Teubner Studienbücher: Stuttgart, 1979.
[3] Samuel Eilenberg. Automata, Languages and Machines, volume A-B. Academic Press, 1974.
[4] David Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California at Santa Cruz, 1999.
[5] Ralf Herbrich. Learning Kernel Classifiers. MIT Press, Cambridge, 2002.
[6] Thorsten Joachims. Text categorization with support vector machines: learning with many relevant features. In Proc. of ECML-98. Springer-Verlag, 1998.
[7] Werner Kuich and Arto Salomaa. Semirings, Automata, Languages. Number 5 in EATCS Monographs on Theoretical Computer Science. Springer-Verlag, Berlin, Germany, 1986.
[8] Eugene L. Lawler. Combinatorial Optimization: Networks and Matroids. Holt, Rinehart, and Winston, 1976.
[9] Huma Lodhi, John Shawe-Taylor, Nello Cristianini, and Christopher J. C. H. Watkins. Text classification using string kernels. In NIPS, pages 563–569, 2000.
[10] Mehryar Mohri. Edit-Distance of Weighted Automata. In Jean-Marc Champarnaud and Denis Maurel, editors, Seventh International Conference, CIAA 2002, volume to appear of Lecture Notes in Computer Science, Tours, France, July 2002. Springer-Verlag, Berlin-NY.
[11] Mehryar Mohri. Semiring Frameworks and Algorithms for Shortest-Distance Problems. Journal of Automata, Languages and Combinatorics, 7(3):321–350, 2002.
[12] Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. Weighted automata in text and speech processing. In ECAI-96 Workshop, Budapest, Hungary. ECAI, 1996.
[13] Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. The Design Principles of a Weighted Finite-State Transducer Library. Theoretical Computer Science, 231:17–32, January 2000. http://www.research.att.com/sw/tools/fsm.
[14] Fernando C. N. Pereira and Michael D. Riley. Speech recognition by composition of weighted finite automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, pages 431–453. MIT Press, Cambridge, Massachusetts, 1997.
[15] Arto Salomaa and Matti Soittola. Automata-Theoretic Aspects of Formal Power Series. Springer-Verlag: New York, 1978.
[16] Robert E. Schapire and Yoram Singer. Boostexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, 2000.
[17] Bernhard Schölkopf and Alex Smola. Learning with Kernels. MIT Press: Cambridge, MA, 2002.
[18] Vladimir N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[19] Chris Watkins. Dynamic alignment kernels. Technical Report CSD-TR-98-11, Royal Holloway, University of London, 1999.
Learning a Forward Model of a Reflex

Bernd Porr and Florentin Wörgötter
Computational Neuroscience, Psychology, University of Stirling, FK9 4LR Stirling, UK
{bp1,faw1}@cn.stir.ac.uk

Abstract

We develop a systems-theoretical treatment of a behavioural system that interacts with its environment in a closed-loop situation such that its motor actions influence its sensor inputs. The simplest form of feedback is a reflex. Reflexes always occur “too late”; i.e., only after a (unpleasant, painful, dangerous) reflex-eliciting sensor event has occurred. This defines an objective problem which can be solved if another sensor input exists which can predict the primary reflex and can generate an earlier reaction. In contrast to previous approaches, our linear learning algorithm allows for an analytical proof that this system learns to apply feed-forward control, with the result that slow feedback loops are replaced by their equivalent feed-forward controller, creating a forward model. In other words, learning turns the reactive system into a pro-active system. By means of a robot implementation we demonstrate the applicability of the theoretical results, which can be used in a variety of different areas in physics and engineering.

1 Introduction

Feedback loops are prevalent in animal behaviour, where they are normally called a “reflex”. However, the reflex has the disadvantage of always being too late. Thus, an objective goal is to avoid a reflex (feedback) reaction. This can be done by an anticipatory (feed-forward) action; for example, when retracting a limb in response to heat radiation without actually having to touch the hot surface, which would elicit a pain-induced reflex. While this has been interpreted as successful forward control [1], the question arises how such a behavioural system can be robustly generated.
In this article we introduce a linear algorithm for temporal sequence learning between two sensor events and provide an analytical proof that this process turns a pre-wired reflex loop into its equivalent feed-forward controller. After learning, the system will respond with an anticipatory action, thereby avoiding the reflex.

Figure 1: Diagram of the system in its environment (in Laplace notation). The input signal is the disturbance D, reaching both sensor inputs x0 and x1 at different times, as indicated by the temporal delay T. The environmental transfer functions are denoted P0 and P1. The h_k are linear transfer functions yielding the filtered inputs u_k, which converge with weights ρ_k onto the output neuron v.

2 The learning rule and its environment

Fig. 1 shows the general situation which arises when temporal sequence learning takes place in a system which interacts with its environment [2]. We distinguish two loops: the inner loop represents the reflex, which has fixed, unchanging properties; the outer loop represents the to-be-learned anticipatory action. Sequence learning requires causally related input events at both sensors (e.g., heat radiation and pain), where T denotes the time delay between both inputs. The outer loop receives the earlier (anticipatory) input x1. The delayed and un-delayed signals are processed by a linear transform h_k (e.g., a low- or band-pass filter); subsequently their sum is taken with weights ρ_k on a single neuron. Note that all input signals are filtered. The system is therefore completely isotropic. Line x1 is fanned out in order to adjust to the a priori unknown delay T by the combination of different transforms h_1, …, h_N (see below). The output of the neuron is, in the Laplace domain, given by:

V = Σ_{k=0}^{N} ρ_k U_k  with  U_k = H_k X_k   (1)

where the ρ_k are the synaptic weights. In the following we will drop the function argument s for the sake of brevity wherever possible. The transfer functions P0 and P1 in Fig. 1 denote how the environment influences the different signals. The goal of sequence learning is that the outer loop should, after learning, functionally replace the inner loop such that the reflex will cease to be triggered. In this case we obtain X0 = 0, which we call the “desired state” of the system. This allows calculating the general requirements for the outer loop without having to specify the actual learning process. The reflex pathway is described by

X0 = P0 (V + D e^{−sT})   (2)

where e^{−sT} represents the delay T in Laplace notation. The signal on the anticipatory (outer) pathway has the representation

V1 = D P1 F   (3)

where F = Σ_{k=1}^{N} ρ_k H_k is the learned transfer function which generates the anticipatory response triggered by the input x1. We want to express F by the environmental transfer functions P0 and P1. F is solved for the condition X0 = 0, where the reflex is no longer triggered. Eliminating V and X0 we get:

F = −P1^{−1} e^{−sT} / (1 + P0 e^{−sT})   (4)

Eq. 4 can be further simplified. Following standard control theory [3], we neglect the denominator, because it does not add additional poles to the transfer function F. Such a pole appears only for P0 = −e^{sT}. A transfer function e^{sT}, however, is meaningless because it violates temporal causality. Thus, the denominator can at most add phase shifts to the system's behaviour. As a consequence, we may set the denominator to 1, and the behaviour of F is determined by:

F = −P1^{−1} e^{−sT}   (5)

The interpretation of the last equation is straightforward. The learning goal of X0 = 0 requires compensating the disturbance D. The disturbance, however, enters the system only after having been filtered by the environmental transfer function P1. Thus, compensation of D requires reversing this filtering by a term P1^{−1}, which is the inverse environmental transfer function (hence “inverse controller”). The second term e^{−sT} in Eq. 5 compensates for the delay between the two sensor signals originating from the disturbance D.

Having outlined the general setup in terms of our linear approach and system-theoretic notation, we devote the remaining three sections to the following topics: 2.1. The learning rule and convergence to a given solution F under this rule. 2.2. The construction of (approximate) solutions
F. 3. Implementation of the system in a (real-world) robot experiment.

2.1 The learning rule and convergence.

Here, we assume that a set of functions h_k exists (as will be specified below) for which a solution can be approximated by F ≈ Σ_{k=1}^{N} ρ_k H_k. We will now specify the learning rule by which the development of the weight values is controlled, and show that any deviation from the given solution F is eliminated due to learning. In terms of the time-domain functions u_k and v corresponding to U_k and V, our learning rule is given by:

dρ_k/dt = μ u_k (dv/dt)   (6)

Thus, the weight change depends on the correlation between u_k and the time derivative of v. Since the structure of the system is completely isotropic (see Fig. 1) and learning can take place at any synapse, we shall call our learning algorithm isotropic sequence order learning (“ISO-learning”). The positive constant μ is taken small enough such that all weight changes occur on a much longer time scale (i.e., very slowly) as compared to the decay of the responses u_k. This rule is related to the one used in “temporal difference” learning [4]. The total weight change can be calculated by [5]:

Δρ_k = (μ / 2π) ∫_{−∞}^{+∞} U_k*(ω) iω V(ω) dω   (7)

where iω V(ω) represents the derivative of v in the Laplace (frequency) domain. We assume that the reflex pathway is unchanging, with a fixed weight ρ0 (negative feedback). Note that its open-loop transfer characteristic must carry a low-pass component; otherwise the reflex loop would be unstable. We keep the desired state X0 = 0 as before. Furthermore, we assume that for a given set of h_k we have found a set of weights ρ_k, 1 ≤ k ≤ N, which solves Eq. 5. We will show that a perturbation of the weights will be compensated by applying the learning procedure. Since we do not make any assumption as to the size of the perturbation, this is indicative of convergence in general. To this end, we substitute ρ_j → ρ_j + δ. Stability of the solution is expected if the weight change opposes the perturbation, thus if δ Δρ_j < 0. Here, we however assume an “adiabatic” environment in which the system internally relaxes on a time scale much shorter than the time scale on which the disturbances occur. To be specific, a disturbance/perturbation may occur near t = 0. In calculating the weight change (7) due to this disturbance signal, we disregard any subsequent disturbances as well as perturbations following the steady-state condition. We use the relations for X0 and X1 and insert them into Eq. 7: writing the perturbed weights as ρ_k + δ δ_{kj} (8) and inserting Eqs. 2 and 8 into Eq. 1 yields the perturbed output (Eqs. 9–11). The first part of the resulting integral describes the unperturbed equilibrium state and can be dropped (Eq. 12). Furthermore, we assume orthogonality (see also below), given by:

∫_{−∞}^{+∞} U_j*(ω) U_k(ω) dω = 0  for j ≠ k   (13)

so that only the perturbed channel j contributes to the weight change (Eqs. 14, 15). We now apply Plancherel's theorem [5] in order to transfer the integral into the time domain and prove that it is negative. This assures stability and, hence, convergence, because we know that μ is small, preventing oscillatory behaviour. We have:

Δρ_j = μ δ ∫ γ(t) ġ(t) dt   (16)

where we call γ the autocorrelation function of u_j = h_j ∗ x (∗ denotes a convolution), i.e. the inverse transform of |U_j|², and ġ is the temporal derivative of the impulse response of the inverse transform of the remaining second term in Eq. 15. Since we know that the reflex loop must carry a low-pass component, we can in general state that this second term represents a (non-standard) high-pass. Its derivative ġ has a very high negative value for t → 0 (ideally −∞) and vanishes soon thereafter. The autocorrelation γ is positive around t = 0. Thus, the integral in question will remain negative for almost all realistic choices of h_j. As an important special case, we find that this especially holds if we assume a delta-pulse disturbance at t = 0, corresponding to D(t) = δ(t).

2.2 Construction of solutions.

Here, we use a set of well-known functions (band-pass filters) and show explicitly that a solution which approximates the inverse controller (Eq. 5) can be constructed for N = 1, and discuss how the approximation is improved for higher values of N. The transfer functions of the band-pass filters h_k which we use are specified in the Laplace domain:

H_k(s) = 1 / [(s − p_k)(s − p_k*)]

where p_k* represents the complex conjugate of the pole p_k. Real and imaginary parts of the poles are given by Re(p) = −π f / Q and Im(p) = 2π f, where f is the frequency of the oscillation.
The damping characteristic of the resonator is reflected by Q. Concerning convergence, one finds in Eq. 16 that with such a set of functions γ(t) decays to zero for large t and that ġ converges fast to zero for t > 0. Band-pass functions are not orthogonal to each other, but numerically we found that they can be approximately treated as being orthogonal. In fact, only a small drift of the weights is observed, which could be compensated if required. In practice, however, this becomes unimportant, as discussed below. The use of resonators is also motivated by biology [6], and band-pass filtered response characteristics are prevalent in neuronal systems, which have also been used in other neuro-theoretical approaches [7].

We return to Eq. 5. Let us first assume that the environment does not filter the disturbance, thus P1 = 1. Then, for the case N = 1, an approximate solution of Eq. 5 can easily be constructed by developing e^{sT} into a Taylor series and obtaining the parameters through comparing coefficients in:

e^{sT} ≈ 1 + sT + (sT)²/2 = −(s − p)(s − p*)/ρ1   (17)

Accordingly, we get for the parameters of h1: ρ1 = −2/T² and p = (−1 + i)/T, i.e., f = 1/(2πT) and Q = 1/2. For un-filtered throughput P1 = 1, this result shows that there exists a resonator with a weight ρ1 which approximates −e^{−sT} to second order. The approximation continues to improve for higher orders of the expansion and larger N, which we pursued up to fourth order, but the set of equations becomes rather cluttered. In general, P1 represents an environmental transfer function which is passive and “well-behaved”. Thus, in most cases it can be represented by just another passive low- or band-pass filter (sum of complex-conjugated poles). Under this assumption, a solution can also be constructed for the complete term −P1^{−1} e^{−sT} by a combination of N > 1 resonators. As mentioned above, constructing solutions becomes impractical for N > 1, and it would require knowing P1 a priori. Note, if you knew P1^{−1}, you would already have reached your goal of designing the inverse controller, and learning would be obsolete. Thus, normally a set of resonators must be predefined in a somewhat arbitrary way, and their weights shall be learned. The uniqueness of the solution assured by orthogonality becomes secondary in practice because, without prior knowledge of P1 and P1^{−1}, one has to use an over-complete set of h_k in order to make sure that a solution can be found. In practice, this means that a large enough set of filters must be used, which normally leads to a manifold of solutions. Now obviously the question arises if satisfactory solutions exist under these relaxed conditions and if they remain stable.

Figure 2: Robot experiment: (a) The robot has two output neurons, for speed and steering angle. The retraction mechanism is implemented by three resonators which connect the collision sensors (CS) to the speed and steering-angle neurons with fixed weights (reflex). Each range finder (RF) is fed into a filter bank of ten resonators whose outputs converge with variable weights on both neurons. A more detailed technical description together with a set of movies can be found at: http://www.cn.stir.ac.uk/predictor/real – movie 1. (b,d) Parts of the motion trajectory for one trial in an arena with three obstacles (shaded). Circles denote collisions. (c) Development of the weights from the left range finder sensor to the steering-angle neuron.
4 , you had already reached your goal of designing the inverse controller and learning would be obsolete. Thus, normally a set of resonators must be predefined in a somewhat arbitrary way and their weights shall be learned. The uniqueness of the solution assured by orthogonality becomes secondary in practise, because – without prior knowledge of and 4 – one has to use an over-complete set of , in order to make sure that a solution can be found. In practise, this means that a large enough set of filters must be used which normally leads to a manifold of solutions. Now obviously the question arises if satisfactory solutions exist under these relaxed conditions and if they remain stable. Figure 2: Robot experiment: (a) The robot has 2 output neurons for speed ( ) and steering angle ( ). The retraction mechanism is implemented by 3 resonators ( . , = Hz) which connect the collision sensors (CS) to the neurons (speed) and (steering angle) with fixed weights (reflex). Each range finder (RF) is fed into a filter bank of 10 resonators with = . = . Hz where its output converges with variable weights on both the and -neuron. A more detailed technical description together with a set of movies can be found at: http://www.cn.stir.ac.uk/predictor/real – movie 1. (b,d) Parts of the motion trajectory for one trial in an arena of . .,. with three obstacles (shaded). Circles denote collisions. (c) Development of the weights from the left range finder sensor to the the neuron . 3 Implementation in a robot experiment. In this section, we show a robot experiment where we apply a conventional filter bank approach using rather few filters with constant and logarithmically spaced frequencies and demonstrate that the algorithms still produces the desired behaviour. The task in this robot experiment is collision avoidance [8]. The built-in reflex-behaviour is a retraction reaction after the robot has hit an obstacle which represents the inner loop feedback mechanism1. 
The robot has three collision sensors (CS) and two range finders (RF), which produce the predictive signals. When driving around, there is always a causal relation between the earlier-occurring range finder signals and the later-occurring collision, which drives the learning process. Fig. 2b shows that early during learning many collisions (circles) occur. After a collision, a fast reflex-like retraction-and-turning reaction is elicited. On the other hand, the robot movement trace is free of collisions after successful learning of the temporal correlation between range finder and collision signals (Fig. 2d), and the trajectory is maximally smooth. The robot always found a stable solution, but those were, as expected, not unique. This is partly due to the different initial conditions, but also due to the over-complete set of filters. Possible solutions which we have observed are that the robot after learning simply stops in front of an obstacle, and that it slightly oscillates back and forth. The more common solution is that the robot continuously drives around and uses mainly its steering to avoid obstacles. Note that this rather complex behaviour is established by only two neurons. Fig. 2c shows that the weight change slows down after the last collision has happened (dotted line in c). The still-existing smaller weight change is due to the fact that after functional silencing of the reflex input x0 (no more collisions), temporally correlated inputs still exist, namely between the left and right range finders. Thus, learning is now governed by these correlations instead and is driven by the earliest response of one of them, which finally leads to the desired stabilisation.

¹In fact, it is also possible to construct an attraction case if the reflex performs an initial attraction reaction.

4 Discussion

Replacing a feedback loop with its equivalent feed-forward controller is of central relevance for efficient control, particularly in slow feedback systems where long loop delays exist. So far, feed-forward control is in general model-based and, thus, often not robust [9]. On the other hand, it has been suggested earlier by studies of limb movement control that temporal sequence learning could be used to solve the inverse controller problem [1].

Figure 3: Differences between the Sutton and Barto models (a,c) and ISO-learning (b) for the case N = 1. (a) shows the drive-reinforcement model by Sutton and Barto [4] and (c) the temporal difference (TD) learning by Sutton and Barto [10]. Note that the obsolete summation point in (a) allows adding the reward signal in (c). (b) shows ISO-learning as in Fig. 1 with N = 1; additionally, the circuit for the weight change (learning) is shown. The input filters in the Sutton and Barto models (a,c) are first-order low-pass filters (eligibility trace). The junction symbols represent addition and multiplication, respectively, and d/dt is the derivative.

Widely used models of derivative-based temporal sequence learning are those by Sutton and Barto, which have the aim to model experiments of classical conditioning [4, 11, 10]. Fig. 3 shows their models in comparison to ISO-learning. All models strengthen the weight if the predictive input precedes the reflex input (or the reward, respectively). All models use filters at the inputs. However, in the Sutton and Barto models these filtered input signals are only used as an input for the learning circuit (Fig. 3a,c), whereas the output is a superposition of the original input signals.
Learning is therefore achieved by correlating the filtered input with the derivative of the (un-filtered) output signal. Thus, filtered signals are correlated with un-filtered signals. In contrast to the Sutton and Barto models, our model is completely isotropic and uses the filtered signals for both the learning circuit and the output, since the filtered signals are also responsible for an appropriate behaviour of the organism. These different wirings reflect the different learning goals: in our model, the weight stabilises when the input has become silent (the reflex has been avoided). In the Sutton and Barto models, the weight stabilises if the output has reached a specific condition. In the drive-reinforcement model, this is the case if the output signal caused by the predictive input has a similar strength to the output triggered by the reflex input. This reflects the Rescorla/Wagner rule [12]. In the case of TD-learning, learning stops if the prediction error between reward and output is zero, thus if the output optimally predicts the reward. In general, our model is closely related to any correlation-based sequence learning [4, 13] and is not related to any form of reinforcement learning [10, 14], as it does not need a special reward or punishment signal.

The current study demonstrates analytically the convergence of ISO-learning in a closed-loop paradigm in conjunction with some rather general assumptions concerning the structure of such a system. Thus, this type of learning is able to generate a model-free inverse controller of a reflex, which improves the performance of conventional feedback control, while the feedback still serves as a fall-back. Apart from biological implications, this promises a broad field of applications in physics and engineering.

References

[1] Daniel M. Wolpert and Zoubin Ghahramani. Computational principles of movement neuroscience. Nature Neuroscience supplement, 3:1212–1217, 2000.
[2] P. Read Montague, Peter Dayan, and Terrence J. Sejnowski. Bee foraging in uncertain environments using predictive hebbian learning. Nature, 377:725–728, 1995.
[3] W.E. Sollecito and S.G. Reque. Stability. In Jerry Fitzgerald, editor, Fundamentals of System Analysis, chapter 21. Wiley, New York, 1981.
[4] R.S. Sutton and A.G. Barto. Towards a modern theory of adaptive networks: expectation and prediction. Psychol. Review, 88:135–170, 1981.
[5] John L. Stewart. Fundamentals of Signal Theory. McGraw-Hill, New York, 1960.
[6] Gordon M. Shepherd, editor. The Synaptic Organisation of the Brain. Oxford University Press, New York, 1990.
[7] Steven Grossberg. A spectral network model of pitch perception. J Acoust Soc Am, 98(2):862–879, 1995.
[8] P.F.M.J. Verschure and T. Voegtlin. A bottom-up approach towards the acquisition, retention, and expression of sequential representations: Distributed adaptive control III. Neural Networks, 11:1531–1549, 1998.
[9] William J. Palm. Modeling, Analysis and Control of Dynamic Systems. Wiley, New York, 2000.
[10] R.S. Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3(1):9–44, 1988.
[11] R.S. Sutton and A.G. Barto. Simulation of anticipatory responses in classical conditioning by a neuron-like adaptive element. Behav. Brain. Res., 4(3):221–235, 1982.
[12] R.A. Rescorla and A.R. Wagner. A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and nonreinforcement. In A.H. Black and W.F. Prokasy, editors, Classical Conditioning 2: Current Theory and Research, pages 64–99. ACC, New York, 1972.
[13] A. Harry Klopf. A drive-reinforcement model of single neuron function. In John S. Denker, editor, Neural Networks for Computing, volume 151 of AIP Conference Proceedings, New York, 1986. American Institute of Physics.
[14] Christopher J.C.H. Watkins and Peter Dayan. Q-learning. Machine Learning, 8:279–292, 1992.
Linear Combinations of Optic Flow Vectors for Estimating Self-Motion – a Real-World Test of a Neural Model

Matthias O. Franz, MPI für biologische Kybernetik, Spemannstr. 38, D-72076 Tübingen, Germany, mof@tuebingen.mpg.de
Javaan S. Chahl, Center of Visual Sciences, RSBS, Australian National University, Canberra, ACT, Australia, javaan@zappa.anu.edu.au

Abstract

The tangential neurons in the fly brain are sensitive to the typical optic flow patterns generated during self-motion. In this study, we examine whether a simplified linear model of these neurons can be used to estimate self-motion from the optic flow. We present a theory for the construction of an estimator consisting of a linear combination of optic flow vectors that incorporates prior knowledge both about the distance distribution of the environment, and about the noise and self-motion statistics of the sensor. The estimator is tested on a gantry carrying an omnidirectional vision sensor. The experiments show that the proposed approach leads to accurate and robust estimates of rotation rates, whereas translation estimates turn out to be less reliable.

1 Introduction

The tangential neurons in the fly brain are known to respond in a directionally selective manner to wide-field motion stimuli. A detailed mapping of their local motion sensitivities and preferred motion directions shows a striking similarity to certain self-motion-induced flow fields (an example is shown in Fig. 1). This suggests a possible involvement of these neurons in the extraction of self-motion parameters from the optic flow, which might be useful, for instance, for stabilizing the fly's head during flight manoeuvres. A recent study [2] has shown that a simplified computational model of the tangential neurons as a weighted sum of flow measurements was able to reproduce the observed response fields.
The weights were chosen according to an optimality principle which minimizes the output variance of the model caused by noise and distance variability between different scenes. The question of how the output of such processing units could be used for self-motion estimation was left open, however. In this paper, we want to fill a part of this gap by presenting a classical linear estimation approach that extends a special case of the previous model to the complete self-motion problem. We again use linear combinations of local flow measurements but, instead of prescribing a fixed motion axis and minimizing the output variance, we require that the quadratic error in the estimated self-motion parameters be as small as possible. From this optimization principle, we derive weight sets that lead to motion sensitivities similar to those observed in tangential neurons. In contrast to the previous model, this approach also yields the preferred motion directions and the motion axes to which the neural models are tuned. We subject the obtained linear estimator to a rigorous real-world test on a gantry carrying an omnidirectional vision sensor.

Figure 1: Mercator map of the response field of the neuron VS7. The orientation of each arrow gives the local preferred direction (LPD), and its length denotes the relative local motion sensitivity (LMS). VS7 responds maximally to rotation around an axis at an azimuth of about 30◦ and an elevation of about −15◦ (after [1]).

2 Modeling fly tangential neurons as optimal linear estimators for self-motion

2.1 Sensor and neuron model

In order to simplify the mathematical treatment, we assume that the N elementary motion detectors (EMDs) of our model eye are arranged on the unit sphere. The viewing direction of a particular EMD with index i is denoted by the radial unit vector di.
At each viewing direction, we define a local two-dimensional coordinate system on the sphere consisting of two orthogonal tangential unit vectors u_i and v_i (Fig. 2a). We assume that we measure the local flow component along both unit vectors subject to additive noise. Formally, this means that we obtain at each viewing direction two measurements x_i and y_i along u_i and v_i, respectively, given by

x_i = p_i · u_i + n_{x,i}   and   y_i = p_i · v_i + n_{y,i},   (1)

where n_{x,i} and n_{y,i} denote additive noise components and p_i the local optic flow vector. When the spherical sensor translates with T while rotating with R about an axis through the origin, the self-motion-induced image flow p_i at d_i is [3]

p_i = −µ_i (T − (T · d_i) d_i) − R × d_i.   (2)

µ_i is the inverse distance between the origin and the object seen in direction d_i, the so-called "nearness". The entire collection of flow measurements x_i and y_i comprises the input to the simplified neural model of a tangential neuron, which consists of a weighted sum of all local measurements (Fig. 2b)

θ̂ = Σ_{i=1}^N w_{x,i} x_i + Σ_{i=1}^N w_{y,i} y_i   (3)

with local weights w_{x,i} and w_{y,i}. In this model, the local motion sensitivity (LMS) is defined as w_i = ‖(w_{x,i}, w_{y,i})‖; the local preferred motion direction (LPD) is parallel to the vector (1/w_i)(w_{x,i}, w_{y,i}). The resulting LMSs and LPDs can be compared to measurements on real tangential neurons. As our basic hypothesis, we assume that the output of such model neurons is used to estimate the self-motion of the sensor.

Figure 2: a. Sensor model: At each viewing direction d_i, there are two measurements x_i and y_i of the optic flow p_i along two directions u_i and v_i on the unit sphere. b. Simplified model of a tangential neuron: The optic flow and the local noise signal are projected onto a unit vector field. The weighted projections are linearly integrated to give the estimator output.
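As a concrete illustration of Eqs. (1) and (2), the following minimal numpy sketch (not from the paper; the viewing direction, motion parameters, noise level, and choice of tangential frame are arbitrary assumptions for the example) simulates one noisy EMD measurement pair:

```python
import numpy as np

def flow_at(d, T, R, mu):
    """Self-motion-induced flow at viewing direction d (Eq. 2):
    p = -mu * (T - (T . d) d) - R x d."""
    return -mu * (T - np.dot(T, d) * d) - np.cross(R, d)

def local_frame(d):
    # Two tangential unit vectors u, v orthogonal to d
    # (an arbitrary but consistent choice, not the paper's exact frame).
    a = np.array([0.0, 0.0, 1.0]) if abs(d[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(a, d); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return u, v

rng = np.random.default_rng(0)
d = np.array([1.0, 0.0, 0.0])   # viewing direction on the unit sphere
T = np.array([0.0, 0.0, 0.3])   # translation (m/s)
R = np.array([0.0, 0.2, 0.0])   # rotation (rad/s)
mu = 0.5                        # nearness = 1 / distance

p = flow_at(d, T, R, mu)        # flow is tangential: p . d = 0
u, v = local_frame(d)
sigma_n = 0.01
x = p @ u + sigma_n * rng.standard_normal()   # noisy measurement along u (Eq. 1)
y = p @ v + sigma_n * rng.standard_normal()   # noisy measurement along v
```

Since the flow of Eq. (2) is constructed from components orthogonal to d_i, the simulated p always lies in the tangent plane spanned by u and v.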
Since the output is a scalar, we need in the simplest case an ensemble of six neurons to encode all six rotational and translational degrees of freedom. The local weights of each neuron are chosen to yield an optimal linear estimator for the respective self-motion component.

2.2 Prior knowledge
An estimator for self-motion consisting of a linear combination of flow measurements necessarily has to neglect the dependence of the optic flow on the object distances. As a consequence, the estimator output will be different from scene to scene, depending on the current distance and noise characteristics. The best the estimator can do is to add up as many flow measurements as possible, hoping that the individual distance deviations of the current scene from the average will cancel each other. Clearly, viewing directions with low distance variability and small noise content should receive a higher weight in this process. In this way, prior knowledge about the distance and noise statistics of the sensor and its environment can improve the reliability of the estimate. If the current nearness at viewing direction d_i differs from the average nearness µ̄_i over all scenes by ∆µ_i, the measurement x_i can be written as (see Eqns. (1) and (2))

x_i = −(µ̄_i u_i^⊤, (u_i × d_i)^⊤) (T^⊤, R^⊤)^⊤ + n_{x,i} − ∆µ_i u_i^⊤ T,   (4)

where the last two terms vary from scene to scene, even when the sensor undergoes exactly the same self-motion. To simplify the notation, we stack all 2N measurements over the entire EMD array in the vector x = (x_1, y_1, x_2, y_2, ..., x_N, y_N)^⊤. Similarly, the self-motion components along the x-, y- and z-directions of the global coordinate system are combined in the vector θ = (T_x, T_y, T_z, R_x, R_y, R_z)^⊤, the scene-dependent terms of Eq. (4) in the 2N-vector n = (n_{x,1} − ∆µ_1 u_1^⊤T, n_{y,1} − ∆µ_1 v_1^⊤T, ...)^⊤, and the scene-independent terms in the 2N×6-matrix F = ((−µ̄_1 u_1^⊤, −(u_1 × d_1)^⊤), (−µ̄_1 v_1^⊤, −(v_1 × d_1)^⊤), ...)^⊤.
The entire ensemble of measurements over the sensor can thus be written as

x = Fθ + n.   (5)

Assuming that T, n_{x,i}, n_{y,i} and µ_i are uncorrelated, the covariance matrix C of the scene-dependent measurement component n is given by

C_ij = C_{n,ij} + C_{µ,ij} u_i^⊤ C_T u_j   (6)

with C_n being the covariance of n, C_µ of µ and C_T of T. These three covariance matrices, together with the average nearness µ̄_i, constitute the prior knowledge required for deriving the optimal estimator.

2.3 Optimized neural model
Using the notation of Eq. (5), we write the linear estimator as

θ̂ = Wx.   (7)

W denotes a 6×2N weight matrix where each of the six rows corresponds to one model neuron (see Eq. (3)) tuned to a different component of θ. The optimal weight matrix is chosen to minimize the mean square error e of the estimator given by

e = E(‖θ − θ̂‖²) = tr[W C W^⊤],   (8)

where E denotes the expectation. We additionally impose the constraint that the estimator should be unbiased for n = 0, i.e., θ̂ = θ. From Eqns. (5) and (7) we obtain the constraint equation

W F = 1_{6×6}.   (9)

The solution minimizing the associated Euler-Lagrange functional (Λ is a 6×6 matrix of Lagrange multipliers)

J = tr[W C W^⊤] + tr[Λ^⊤(1_{6×6} − W F)]   (10)

can be found analytically and is given by

W = (1/2) Λ F^⊤ C^{−1}   with   Λ = 2 (F^⊤ C^{−1} F)^{−1}.   (11)

When computed for the typical inter-scene covariances of a flying animal, the resulting weight sets are able to reproduce the characteristics of the LMS and LPD distribution of the tangential neurons [2]. Having shown the good correspondence between model neurons and measurement, the question remains whether the output of such an ensemble of neurons can be used for some real-world task. This is by no means evident given the fact that - in contrast to most approaches in computer vision - the distance distribution of the current scene is completely ignored by the linear estimator.
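The closed-form solution of Eqs. (8)-(11) is easy to check numerically. Below is a minimal sketch (our own, not the authors' code) with a randomly generated F and C standing in for the real sensor statistics; the unbiasedness constraint of Eq. (9) guarantees exact recovery in the noise-free case:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50                                   # EMDs, giving 2N flow measurements
F = rng.standard_normal((2 * N, 6))      # stands in for the scene-independent matrix F
A = rng.standard_normal((2 * N, 2 * N))
C = A @ A.T + np.eye(2 * N)              # stands in for the covariance of Eq. (6)

# Optimal unbiased linear estimator, Eqs. (9)-(11):
Ci = np.linalg.inv(C)
Lam = 2.0 * np.linalg.inv(F.T @ Ci @ F)  # Lambda = 2 (F^T C^-1 F)^-1
W = 0.5 * Lam @ F.T @ Ci                 # W = 1/2 Lambda F^T C^-1, one row per neuron

# Unbiasedness (Eq. 9): W F = 6x6 identity, so noise-free estimates are exact
theta = np.array([0.0, 0.0, 0.3, 0.0, 0.1, 0.0])   # arbitrary self-motion
assert np.allclose(W @ F, np.eye(6))
assert np.allclose(W @ (F @ theta), theta)
```

With noise present, W instead trades off the constraint against the output variance tr[W C W^⊤], which is what shifts weight toward low-variance viewing directions.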
3 Experiments
3.1 Linear estimator for an office robot
As our test scenario, we consider the situation of a mobile robot in an office environment. This scenario allows for measuring the typical motion patterns and the associated distance statistics, which otherwise would be difficult to obtain for a flying agent.

Figure 3: Distance statistics of an indoor robot (0° azimuth corresponds to the forward direction): a. Average distances from the origin in the visual field (N = 26); darker areas represent larger distances. b. Distance standard deviation in the visual field (N = 26); darker areas represent stronger deviations.

The distance statistics were recorded using a rotating laser scanner. The 26 measurement points were chosen along typical trajectories of a mobile robot while wandering around and avoiding obstacles in an office environment. The recorded distance statistics therefore reflect properties both of the environment and of the specific movement patterns of the robot. From these measurements, the average nearness µ̄_i and its covariance C_µ were computed (cf. Fig. 3; we used distance instead of nearness for easier interpretation). The distance statistics show a pronounced anisotropy which can be attributed to three main causes: (1) Since the robot tries to turn away from the obstacles, the distance in front of and behind the robot tends to be larger than on its sides (Fig. 3a). (2) The camera on the robot usually moves at a fixed height above ground on a flat surface.
As a consequence, distance variation is particularly small at very low elevations (Fig. 3b). (3) The office environment also contains corridors. When the robot follows the corridor while avoiding obstacles, distance variations in the frontal region of the visual field are very large (Fig. 3b). The estimation of the translation covariance C_T is straightforward since our robot can only translate in the forward direction, i.e. along the z-axis. C_T is therefore 0 everywhere except for the lower right diagonal entry, which is the square of the average forward speed of the robot (here: 0.3 m/s). The EMD noise was assumed to be zero-mean, uncorrelated and uniform over the image, which results in a diagonal C_n with identical entries. The noise standard deviation of 0.34 deg./s was determined by presenting a series of natural images moving at 1.1 deg./s to the flow algorithm used in the implementation of the estimator (see Sect. 3.2). µ̄, C_µ, C_T and C_n constitute the prior knowledge necessary for computing the estimator (Eqns. (6) and (11)). Examples of the optimal weight sets for the model neurons (corresponding to a row of W) are shown in Fig. 4. The resulting model neurons show characteristics very similar to those observed in real tangential neurons, however with specific adaptations to the indoor robot scenario. All model neurons have in common that image regions near the rotation or translation axis receive less weight.

Figure 4: Model neurons computed as part of the linear estimator. Notation is identical to Fig. 1. The depicted region of the visual field extends from −15° to 180° azimuth and from −75° to 75° elevation. The model neurons are tuned a. to forward translation, and b. to rotations about the vertical axis.
In these regions, the self-motion components to be estimated generate only small flow vectors which are easily corrupted by noise. Equation (11) predicts that the estimator will preferentially sample image regions with smaller distance variations. In our measurements, this is mainly the case on the ground around the robot (Fig. 3). The rotation-selective model neurons weight image regions with larger distances more highly, since distance variations at large distances have a smaller effect. In our example, distances are largest in front of and behind the robot, so that the rotation-selective neurons assign the highest weights to these regions (Fig. 3b).

3.2 Gantry experiments
The self-motion estimates from the model neuron ensemble were tested on a gantry with three translational and one rotational (yaw) degree of freedom. Since the gantry had a position accuracy below 1 mm, the programmed position values were taken as ground truth for evaluating the estimator's accuracy. As vision sensor, we used a camera mounted above a mirror with a circularly symmetric hyperbolic profile. This setup allowed for a 360° horizontal field of view extending from 90° below to 45° above the horizon. Such a large field of view considerably improves the estimator's performance, since the individual distance deviations in the scene are more likely to be averaged out. More details about the omnidirectional camera can be found in [4]. In each experiment, the camera was moved to 10 different start positions in the lab with largely varying distance distributions. After recording an image of the scene at the start position, the gantry translated and rotated at various prescribed speeds and directions and took a second image. After the recorded image pairs (10 for each type of movement) were unwarped, we computed the optic flow input for the model neurons using a standard gradient-based scheme [5].
Figure 5: Gantry experiments. Results are given in arbitrary units; true rotation values are denoted by a dashed line, translation by a dash-dot line. Grey bars denote translation estimates, white bars rotation estimates. a. Estimated vs. real self-motion; b. Estimates of the same self-motion at different locations; c. Estimates for constant rotation and varying translation; d. Estimates for constant translation and varying rotation.

The average error of the rotation rate estimates over all trials (N = 450) was 0.7°/s (5.7% rel. error, Fig. 5a); the error in the estimated translation speeds (N = 420) was 8.5 mm/s (7.5% rel. error). The estimated rotation axis had an average error of magnitude 1.7°, the estimated translation direction 4.5°. The larger error of the translation estimates is mainly caused by the direct dependence of the translational flow on distance (see Eq. (2)), whereas the rotation estimates are only indirectly affected by distance errors via the current translational flow component, which is largely filtered out by the LPD arrangement. The larger sensitivity of the translation estimates can be seen by moving the sensor at the same translation and rotation speeds in various locations. The rotation estimates remain consistent over all locations, whereas the translation estimates show a higher variance and also a location-dependent bias, e.g., very close to laboratory walls (Fig. 5b). A second problem for translation estimation comes from the different properties of rotational and translational flow fields: due to its distance dependence, the translational flow field shows a much wider range of values than a rotational flow field.
The smaller translational flow vectors are often swamped by simultaneous rotation or noise, and the larger ones tend to be in the upper saturation range of the optic flow algorithm used. This can be demonstrated by simultaneously translating and rotating the sensor. Again, rotation estimates remain consistent while translation estimates are strongly affected by rotation (Fig. 5c and d).

4 Conclusion
Our experiments show that it is indeed possible to obtain useful self-motion estimates from an ensemble of linear model neurons. Although a linear approach necessarily has to ignore the distances of the currently perceived scene, an appropriate choice of local weights and a large field of view are capable of reducing the influence of noise and the particular scene distances on the estimates. In particular, rotation estimates were highly accurate - in a range comparable to gyroscopic estimates - and consistent across different scenes and different simultaneous translations. Translation estimates, however, turned out to be less accurate and less robust against changing scenes and simultaneous rotation. The components of the estimator are simplified model neurons which have been shown to reproduce the essential receptive field properties of the fly's tangential neurons [2]. Our study suggests that the output of such neurons could be directly used for self-motion estimation by simply combining them linearly at a later integration stage. As our experiments have shown, the achievable accuracy would probably be more than enough for head stabilization under closed-loop conditions. Finally, we have to point out a basic limitation of the proposed theory: it assumes linear EMDs as input to the neurons (see Eq. (1)). The output of fly EMDs, however, is only linear for very small image motions. It quickly saturates at a plateau value at higher image velocities.
In this range, the tangential neuron can only indicate the presence and the sign of a particular self-motion component, not the current rotation or translation velocity. A linear combination of output signals, as in our model, is then no longer feasible but would require some form of population coding. In addition, a detailed comparison between the linear model and real neurons shows characteristic differences indicating that tangential neurons usually operate in the plateau range rather than in the linear range of the EMDs [2]. As a consequence, our study can only give a hint at what might happen at small image velocities. The case of higher image velocities has to await further research.

Acknowledgments
The gantry experiments were done at the Center of Visual Sciences in Canberra. The authors wish to thank J. Hill, M. Hofmann and M. V. Srinivasan for their help. Financial support was provided by the Human Frontier Science Program and the Max-Planck-Gesellschaft.

References
[1] Krapp, H. G., Hengstenberg, B., & Hengstenberg, R. (1998). Dendritic structure and receptive field organization of optic flow processing interneurons in the fly. J. of Neurophysiology, 79, 1902-1917.
[2] Franz, M. O. & Krapp, H. G. (2000). Wide-field, motion-sensitive neurons and matched filters for optic flow fields. Biol. Cybern., 83, 185-197.
[3] Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biol. Cybern., 56, 247-254.
[4] Chahl, J. S., & Srinivasan, M. V. (1997). Reflective surfaces for panoramic imaging. Applied Optics, 36(31), 8275-8285.
[5] Srinivasan, M. V. (1994). An image-interpolation technique for the computation of optic flow and egomotion. Biol. Cybern., 71, 401-415.
Kernel Dependency Estimation Jason Weston, Olivier Chapelle, André Elisseeff, Bernhard Schölkopf and Vladimir Vapnik* Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany *NEC Research Institute, Princeton, NJ 08540 USA

Abstract
We consider the learning problem of finding a dependency between a general class of objects and another, possibly different, general class of objects. The objects can be, for example, vectors, images, strings, trees or graphs. Such a task is made possible by employing similarity measures in both input and output spaces using kernel functions, thus embedding the objects into vector spaces. We experimentally validate our approach on several tasks: mapping strings to strings, pattern recognition, and reconstruction from partial images.

1 Introduction
In this article we consider the rather general learning problem of finding a dependency between inputs x ∈ X and outputs y ∈ Y given a training set (x_1, y_1), ..., (x_m, y_m) ∈ X × Y, where X and Y are nonempty sets. This includes conventional pattern recognition and regression estimation. It also encompasses more complex dependency estimation tasks, e.g. the mapping of a certain class of strings to a certain class of graphs (as in text parsing) or the mapping of text descriptions to images. In this setting, we define learning as estimating the function f(x, α*) from the set of functions {f(·, α), α ∈ A} which provides the minimum value of the risk function

R(α) = ∫_{X×Y} L(y, f(x, α)) dP(x, y)   (1)

where P is the (unknown) joint distribution of x and y, and L(y, η) is a loss function, a measure of distance between the estimate η and the true output y at a point x. Hence in this setting one is given a priori knowledge of the similarity measure used in the space Y in the form of a loss function. In pattern recognition this is often the zero-one loss; in regression, squared loss is often chosen.
However, for other types of outputs, for example if one were required to learn a mapping to images, or to a mixture of drugs (a drug cocktail) to prescribe to a patient, then more complex costs would apply. We would like to be able to encode these costs into the method of estimation we choose. The framework we attempt to address is rather general. Few algorithms have been constructed which can work in such a domain - in fact, the only algorithm that we are aware of is k-nearest neighbors. Most algorithms have focused on the pattern recognition and regression problems and cannot deal with more general outputs. Conversely, specialist algorithms have been made for structured outputs, for example algorithms which calculate parse trees for natural language sentences; however, these algorithms are specialized for their tasks. Recently, kernel methods [12, 11] have been extended to deal with inputs that are structured objects such as strings or trees by linearly embedding the objects using the so-called kernel trick [5, 7]. These objects are then used in pattern recognition or regression domains. In this article we show how to construct a general algorithm for dealing with dependencies between both general inputs and general outputs. The algorithm ends up in a formulation which has a kernel function for the inputs and a kernel function (which will correspond to choosing a particular loss function) for the outputs. This also enables us (in principle) to encode specific prior information about the outputs (such as special cost functions and/or invariances) in an elegant way, although this is not experimentally validated in this work. The paper is organized as follows. In Section 2 it is shown how to use kernel functions to measure similarity between outputs as well as inputs. This leads to the derivation of the Kernel Dependency Estimation (KDE) algorithm in Section 3. Section 4 validates the method experimentally and Section 5 concludes.
2 Loss functions and kernels
An informal way of looking at the learning problem is the following. Generalization occurs when, given a previously unseen x ∈ X, we find a suitable y ∈ Y such that (x, y) should be "similar" to (x_1, y_1), ..., (x_m, y_m). For outputs one is usually given a loss function for measuring similarity (this can be, but is not always, inherent to the problem domain). For inputs, one way of measuring similarity is by using a kernel function. A kernel k is a symmetric function which is an inner product in some Hilbert space F, i.e., there exists a map Φ_k : X → F such that k(x, x′) = (Φ_k(x) · Φ_k(x′)). We can think of the patterns as Φ_k(x), Φ_k(x′), and carry out geometric algorithms in the inner product space ("feature space") F. Many successful algorithms are now based on this approach, see e.g. [12, 11]. Typical kernel functions are polynomials k(x, x′) = (x · x′ + 1)^p and RBFs k(x, x′) = exp(−‖x − x′‖²/2σ²), although many other types (including ones which take into account prior information about the learning problem) exist. Note that, like distances between examples in input space, it is also possible to think of the loss function as a distance measure in output space; we will denote this space 𝓛. We can measure inner products in this space using a kernel function. We will denote this as ℓ(y, y′) = (Φ_ℓ(y) · Φ_ℓ(y′)), where Φ_ℓ : Y → 𝓛. This map makes it possible to consider a large class of nonlinear loss functions.¹ As in the traditional kernel trick for the inputs, the nonlinearity is only taken into account when computing the kernel matrix. The rest of the training is "simple" (e.g., a convex program, or methods of linear algebra such as matrix diagonalization). It also makes it possible to consider structured objects as outputs, such as the ones described in [5]: strings, trees, graphs and so forth. One embeds the output objects in the space 𝓛 using a kernel.
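The point above, that a kernel on the outputs induces a (possibly nonlinear) loss via the feature-space distance ‖Φ_ℓ(y) − Φ_ℓ(y′)‖² = ℓ(y, y) + ℓ(y′, y′) − 2ℓ(y, y′), can be sketched numerically (toy illustration, not the authors' code):

```python
import numpy as np

def rbf(y, yp, sigma=1.0):
    # RBF kernel: an inner product in an implicit feature space
    return np.exp(-np.sum((np.asarray(y) - np.asarray(yp)) ** 2) / (2 * sigma ** 2))

def kernel_loss(ell, y, yp):
    # Squared feature-space distance induced by the output kernel ell:
    # ||Phi(y) - Phi(y')||^2 = ell(y,y) + ell(y',y') - 2 ell(y,y')
    return ell(y, y) + ell(yp, yp) - 2 * ell(y, yp)

identical = kernel_loss(rbf, [0.0], [0.0])      # 0 for coinciding outputs
far_apart = kernel_loss(rbf, [0.0], [100.0])    # approaches 2 for very distant outputs
```

Any kernel substituted for `rbf` (a string kernel, a tree kernel) yields a loss in the same way, which is what lets KDE handle structured outputs uniformly.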
Let us define some kernel functions for output spaces.

¹For instance, assuming the outputs live in ℝⁿ, using an RBF kernel one obtains the loss function ‖Φ_ℓ(y) − Φ_ℓ(y′)‖² = 2 − 2 exp(−‖y − y′‖²/2σ²). This is a nonlinear loss function which takes the value 0 if y and y′ coincide, and 2 if they are maximally different. The rate of increase in between (i.e., the "locality") is controlled by σ.

In M-class pattern recognition, given Y = {1, ..., M}, one often uses the distance L(y, y′) = 1 − [y = y′], where [y = y′] is 1 if y = y′ and 0 otherwise. To construct a corresponding inner product it is necessary to embed this distance into a Euclidean space, which can be done using the following kernel:

ℓ_pat(y, y′) = (1/2) [y = y′],   (2)

as L(y, y′)² = ‖Φ_ℓ(y) − Φ_ℓ(y′)‖² = ℓ(y, y) + ℓ(y′, y′) − 2ℓ(y, y′) = 1 − [y = y′]. It corresponds to embedding into an M-dimensional Euclidean space via the map Φ_ℓ(y) = (0, 0, ..., 1/√2, ..., 0), where the yth coordinate is nonzero. It is also possible to describe multi-label classification (where any one example belongs to an arbitrary subset of the M classes) in a similar way. For regression estimation, one can use the usual inner product

ℓ_reg(y, y′) = (y · y′).   (3)

For outputs such as strings and other structured objects we require the corresponding string kernels and kernels for structured objects [5, 7]. We give one example here, the string subsequence kernel employed in [7] for text categorization. This kernel is an inner product in a feature space consisting of all ordered subsequences of length r, denoted Σ^r. The subsequences, which do not have to be contiguous, are weighted by an exponentially decaying factor λ of their full length in the text:

k(s, t) = Σ_{u ∈ Σ^r} Σ_{i : u = s[i]} Σ_{j : u = t[j]} λ^{l(i) + l(j)},   (4)

where u = s[i] denotes that u is the subsequence of s with indices 1 ≤ i_1 < ... < i_{|u|} ≤ |s|, and l(i) = i_{|u|} − i_1 + 1. A fast way to compute this kernel is described in [7].
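For intuition, a naive version of the subsequence kernel (4) can be written by enumerating all length-r subsequences directly. This is exponential in string length and only feasible for short strings; the dynamic-programming recursion of [7] should be used in practice. The function names here are our own:

```python
from itertools import combinations
from collections import Counter

def subseq_features(s, r, lam):
    """Map string s to its length-r subsequence features; each occurrence
    contributes lam ** span, with span = i_r - i_1 + 1 (naive enumeration)."""
    feats = Counter()
    for idx in combinations(range(len(s)), r):
        u = ''.join(s[i] for i in idx)
        feats[u] += lam ** (idx[-1] - idx[0] + 1)
    return feats

def subseq_kernel(s, t, r=2, lam=0.5):
    # Inner product of the feature maps; the per-occurrence weights multiply
    # out to lam ** (l(i) + l(j)), matching Eq. (4)
    fs, ft = subseq_features(s, r, lam), subseq_features(t, r, lam)
    return sum(fs[u] * ft[u] for u in fs if u in ft)
```

For example, `subseq_kernel("ab", "ab")` counts the single shared subsequence "ab" with weight λ²·λ² = λ⁴, and the kernel is symmetric by construction.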
Sometimes one would also like to apply a loss given by an (arbitrary) distance matrix D of the loss between training examples, i.e. where D_ij = L(y_i, y_j). In general it is not always obvious how to find an embedding of such data in a Euclidean space (in order to apply kernels). However, one such method is to compute the inner product with [11, Proposition 2.27]:

ℓ(y_i, y_j) = −(1/2) ( |D_ij|² − Σ_{p=1}^m c_p |D_ip|² − Σ_{q=1}^m c_q |D_qj|² + Σ_{p,q=1}^m c_p c_q |D_pq|² )   (5)

where the coefficients c_i satisfy Σ_i c_i = 1 (e.g. using c_i = 1/m for all i, which amounts to using the centre of mass as the origin). Other methods exist for dealing with problems of embedding distances when equation (5) will not suffice.

3 Algorithm
Now we will describe the algorithm for performing KDE. We wish to minimize the risk function (1) using the feature space F induced by the kernel k and the loss function measured in the space 𝓛 induced by the kernel ℓ. To do this we must learn the mapping from Φ_k(x) to Φ_ℓ(y). Our solution is the following: decompose Φ_ℓ(y) into p orthogonal directions using kernel principal components analysis (KPCA) (see, e.g. [11, Chapter 14]). One can then learn the mapping from Φ_k(x) to each direction independently using a standard kernel regression method, e.g. SVM regression [12] or kernel ridge regression [9]. Finally, to output an estimate y given a test example x, one must solve a pre-image problem, as the solution of the algorithm is initially a solution in the space 𝓛. We will now describe each step in detail.

1) Decomposition of outputs. Let us construct the kernel matrix L on the training data such that L_ij = ℓ(y_i, y_j), and perform kernel principal components analysis on L. This can be achieved by centering the data in feature space using V = (I − (1/m) 1_m 1_m^⊤) L (I − (1/m) 1_m 1_m^⊤), where I is the m-dimensional identity matrix and 1_m is an m-dimensional vector of ones. One then solves the eigenvalue problem λα = Vα, where α^n is the nth eigenvector of V, which we normalize such that 1 = (α^n · Vα^n) = λ_n (α^n · α^n).
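With c_i = 1/m, Eq. (5) is the classical double-centering construction from multidimensional scaling. A small numerical check (the 2-D outputs here are hypothetical, used only to generate a valid Euclidean distance matrix):

```python
import numpy as np

# Hypothetical 2-D outputs, used only to build a Euclidean distance matrix D
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
D2 = D ** 2

# Eq. (5) with c_i = 1/m, i.e. classical double centering
L = -0.5 * (D2
            - D2.mean(axis=1, keepdims=True)
            - D2.mean(axis=0, keepdims=True)
            + D2.mean())

# The recovered inner products reproduce the original distances:
# L_ii + L_jj - 2 L_ij = D_ij^2
assert np.allclose(np.diag(L)[:, None] + np.diag(L)[None, :] - 2 * L, D2)
```

For a genuinely Euclidean D, the resulting matrix L is the Gram matrix of the mean-centered points, so it is positive semidefinite and usable as an output kernel.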
We can then compute the projection of Φ_ℓ(y) onto the nth principal component v^n = Σ_{i=1}^m α_i^n Φ_ℓ(y_i) by (v^n · Φ_ℓ(y)) = Σ_{i=1}^m α_i^n ℓ(y_i, y).

2) Learning the map. We can now learn the map from Φ_k(x) to ((v¹ · Φ_ℓ(y)), ..., (v^p · Φ_ℓ(y))), where p is the number of principal components. One can learn the map by estimating each output independently. In our experiments we use kernel ridge regression [9]; note that this requires only a single matrix inversion to learn all p directions. That is, we minimize with respect to w the function (1/m) Σ_{i=1}^m (y_i − (w · Φ_k(x_i)))² + γ‖w‖² in its dual form. We thus learn each output direction (v^n · Φ_ℓ(y)) using the kernel matrix K_ij = k(x_i, x_j) and the training labels ỹ_i = (v^n · Φ_ℓ(y_i)), with estimator

f_n(x) = Σ_{i=1}^m β_i k(x_i, x).   (6)

3) Solving the pre-image problem. During the testing phase, to obtain the estimate y for a given x it is now necessary to find the pre-image of the given output Φ_ℓ(y). This can be achieved by finding

ŷ(x) = argmin_{y ∈ Y} ‖ ((v¹ · Φ_ℓ(y)), ..., (v^p · Φ_ℓ(y))) − (f_1(x), ..., f_p(x)) ‖.

For the kernel (3) it is possible to compute the solution explicitly. For other problems, searching over a set of candidate solutions may be enough, e.g. the set of training set outputs y_1, ..., y_m; in our experiments we use this set. When more accurate solutions are required, several algorithms exist for finding approximate pre-images, e.g. via fixed-point iteration methods; see [10] or [11, Chapter 18] for an overview. For the simple case of vectorial outputs with the linear kernel (3), if the output has only one dimension the method of KDE boils down to the same solution as ridge regression, since the matrix L has rank 1 in this case. However, when there are d outputs, the rank of L is d and the method trains ridge regression d times, but the kernel PCA step first decorrelates the outputs.
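The three steps can be condensed into a short numpy sketch. This is our own condensation with hypothetical names, not the authors' code; it uses kernel ridge regression for step 2 and, as in the experiments, searches the training outputs for the pre-image in step 3:

```python
import numpy as np

def kde_fit_predict(K, L, K_test, p=2, gamma=1e-2):
    """Sketch of KDE (hypothetical helper).
    K: m x m input kernel matrix; L: m x m output kernel matrix;
    K_test: n x m matrix of k(x_test, x_i) values."""
    m = K.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m
    V = J @ L @ J                          # step 1: centre outputs in feature space
    lam, A = np.linalg.eigh(V)             # eigendecomposition of V
    top = np.argsort(lam)[::-1][:p]        # keep the p leading components
    lam, A = lam[top], A[:, top]
    A = A / np.sqrt(lam)                   # normalise so (alpha^n . V alpha^n) = 1
    Z = V @ A                              # training projections (v^n . Phi_l(y_i))
    B = np.linalg.solve(K + gamma * np.eye(m), Z)   # step 2: kernel ridge regression
    F = K_test @ B                         # predicted projections f_1(x), ..., f_p(x)
    # step 3: pre-image by searching the training outputs
    dist = ((F[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return dist.argmin(axis=1)             # index of the chosen training output
```

A quick sanity check: with matching input and output kernels and a small ridge, predicting on the training inputs should return each example's own output.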
Thus, in the special case of multiple-output regression with a linear kernel, the method is also related to the work of [2] (see [4, page 73] for an overview of other multiple-output regression methods). In the case of classification, the method is related to Kernel Fisher Discriminant Analysis (KFD) [8].

4 Experiments
In the following we validate our method with several experiments. In the experiments we chose the parameters of KDE from the following: the RBF width σ ∈ {10⁻³, 10⁻², 10⁻¹, 10⁰, 10¹, 10², 10³} and the ridge parameter γ ∈ {10⁻⁴, 10⁻³, 10⁻², 10⁻¹, 10⁰, 10¹}. We chose them by five-fold cross validation.

4.1 Mapping from strings to strings
Toy problem. Three classes of strings consist of letters from the same alphabet of 4 letters (a, b, c, d), and strings from all classes are generated with a random length between 10 and 15. Strings from the first class are generated by a model where transitions from any letter to any other letter are equally likely. The output is the string abad, corrupted with the following noise. There is a probability of 0.3 of a random insertion of a random letter, and a probability of 0.15 of two random insertions. After the potential insertions there is a probability of 0.3 of a random deletion, and a probability of 0.15 of two random deletions. In the second class, transitions from one letter to itself (so the next letter is the same as the last) have probability 0.7, and all other transitions have probability 0.1. The output is the string dbbd, corrupted with the same noise as for class one. In the third class only the letters c and d are used; transitions from one letter to itself have probability 0.7. The output is the string aabc, corrupted with the same noise as for class one. For classes one and two any starting letter is equally likely; for the third class only c and d are (equally probable) starting letters.
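The toy generator described above can be sketched as follows. This is our own reconstruction of the stated transition and noise probabilities, not the authors' code, and the function names are hypothetical:

```python
import random

def make_string(cls, rng):
    """Generate one input string for class cls in {0, 1, 2}."""
    letters = 'abcd' if cls < 2 else 'cd'
    n = rng.randint(10, 15)                 # random length between 10 and 15
    s = rng.choice(letters)                 # uniform starting letter
    for _ in range(n - 1):
        if cls == 0:
            s += rng.choice(letters)        # class 1: uniform transitions
        else:
            # classes 2 and 3: repeat the last letter with probability 0.7,
            # otherwise move uniformly to one of the remaining letters
            if rng.random() < 0.7:
                s += s[-1]
            else:
                s += rng.choice([c for c in letters if c != s[-1]])
    return s

def corrupt(out, rng):
    """Output noise: P=0.3 one insertion, P=0.15 two; then the same for deletions."""
    out = list(out)
    r = rng.random()
    for _ in range(1 if r < 0.3 else (2 if r < 0.45 else 0)):
        out.insert(rng.randrange(len(out) + 1), rng.choice('abcd'))
    r = rng.random()
    for _ in range(min(1 if r < 0.3 else (2 if r < 0.45 else 0), len(out))):
        out.pop(rng.randrange(len(out)))
    return ''.join(out)
```

For example, a class-one training pair would be `(make_string(0, rng), corrupt('abad', rng))` for some `rng = random.Random(seed)`.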
Figure 1: Five examples from our artificial task (mapping strings to strings): ccdddddddd → aabc; dccccdddcd → abc; adddccccccccc → bb; bbcdcdadbad → aebad; cdaaccadcbccdd → abad.

The task is to predict the output string given the input string. Note that this is almost like a classification problem with three classes, apart from the noise on the outputs. This construction was employed so we can also calculate classification error as a sanity check. We use the string subsequence kernel (4) from [7] for both inputs and outputs, normalized such that k̂(x, x′) = k(x, x′)/√(k(x, x) k(x′, x′)). We chose the parameters r = 3 and λ = 0.01. In the space induced by the input kernel k we then chose a further nonlinear map using an RBF kernel: exp(−(k(x, x) + k(x′, x′) − 2k(x, x′))/2σ²). We generated 200 such strings and measured the success by calculating the mean and standard error of the loss (computed via the output kernel) over 4-fold cross validation. We chose σ (the width of the RBF kernel) and γ (the ridge parameter) on each trial via a further level of 5-fold cross validation. We compare our method to an adaptation of k-nearest neighbors for general outputs: if k = 1 it returns the output of the nearest neighbor; otherwise it returns the linear combination (in the space of outputs) of the k nearest neighbors (in input space). In the case of k > 1, as well as for KDE, we find a pre-image by finding the closest training example output to the given solution. We choose k again via a further level of 5-fold cross validation. The mean results, and their standard errors, are given in Table 1.

Table 1: Performance of KDE and k-NN on the string to string mapping problem.
        string loss      classification loss
KDE     0.676 ± 0.030    0.125 ± 0.012
k-NN    0.985 ± 0.029    0.205 ± 0.026

4.2 Multi-class classification problem
We next tried a multi-class classification problem, a simple special case of the general dependency estimation problem.
We performed 5-fold cross validation on 1000 digits (the first 100 examples of each digit) of the USPS handwritten 16x16 pixel digit database, training with a single fold (200 examples) and testing on the remainder. We used an RBF kernel for the inputs and the zero-one multi-class classification loss for the outputs using kernel (2). We again compared to k-NN and also to 1-vs-rest Support Vector Machines (SVMs) (see, e.g., [11, Section 7.6]). We found k for k-NN, and σ and γ for the other methods (we also employed a ridge for the SVM method, resulting in a squared-error penalization term), by another level of 5-fold cross validation. The results are given in Table 2. SVMs and KDE give similar results (this is not too surprising, since KDE gives a rather similar solution to KFD, whose similarity to SVMs in terms of performance has been shown before [8]). Both SVM and KDE outperform k-NN.

                      KDE               1-vs-rest SVM     k-NN
classification loss   0.0798 ± 0.0067   0.0847 ± 0.0064   0.1250 ± 0.0075

Table 2: Performance of KDE, 1-vs-rest SVMs and k-NN on a classification problem of handwritten digits.

4.3 Image reconstruction

We then considered a problem of image reconstruction: given the top half (the first 8 pixel lines) of a USPS postal digit, it is required to estimate what the bottom half will be (we thus ignored the original labels of the data).2 The loss function we chose for the outputs is induced by an RBF kernel. The reason for this is that a penalty that is only linear in y would encourage the algorithm to choose images that are "in between" clearly readable digits. Hence, the difficulty in this task lies both in choosing a good loss function (to reflect the end user's objectives) and in building an accurate estimator. We chose the width σ' of the output RBF kernel to maximize the kernel alignment [1] with a target kernel generated via k-means clustering. We chose k = 30 clusters, and the target kernel is K_ij = 1 if x_i and x_j are in the same cluster, and 0 otherwise.
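The target-kernel construction and alignment-based width selection just described can be sketched as follows. A minimal sketch: the function names and candidate widths are our own, and we use the standard alignment formula from [1] (a normalized Frobenius inner product between Gram matrices).

```python
import numpy as np

def alignment(K1, K2):
    """Kernel alignment: Frobenius inner product of the two Gram
    matrices, normalized by their Frobenius norms."""
    num = np.sum(K1 * K2)
    return num / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

def target_kernel(cluster_labels):
    """K_ij = 1 if points i and j fall in the same cluster, else 0."""
    c = np.asarray(cluster_labels)
    return (c[:, None] == c[None, :]).astype(float)

def best_rbf_width(X, cluster_labels, widths):
    """Pick the RBF width sigma that maximizes alignment between the
    RBF Gram matrix on X and the cluster-based target kernel."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # pairwise squared distances
    T = target_kernel(cluster_labels)
    scores = {s: alignment(np.exp(-d2 / (2 * s**2)), T) for s in widths}
    return max(scores, key=scores.get)
```

In the experiment, `cluster_labels` would come from a k-means run with k = 30 and `widths` would be a grid of candidate σ' values.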
Kernel alignment is then calculated via

A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F),

where <K, K'>_F = Σ_{i,j} K_ij K'_ij is the Frobenius dot product; this gave σ' = 0.35. For the inputs we use an RBF kernel of width σ. We again performed 5-fold cross validation on the first 1000 digits of the USPS handwritten 16x16 pixel digit database, training with a single fold (200 examples) and testing on the remainder, comparing KDE to k-NN and a Hopfield net.3 The Hopfield network we used was the one of [6], implemented in the Neural Network Toolbox for Matlab. It is a generalization of standard Hopfield nets that has a nonlinear transfer function and can thus deal with scalars between -1 and +1. After building the network based on the (complete) digits of the training set, we present the top half of each test digit, fill the bottom half with zeros, and then find the network's equilibrium point.

2 A similar problem, of higher dimensionality, would be to learn the mapping from top half to complete digit.
3 Note that training a naive regressor on each pixel output independently would not take into account that the combination of pixel outputs should resemble a digit.

Figure 2: Errors in the digit-database image reconstruction problem. Images have to be estimated using only the top half (first 8 rows of pixels) of the original image (top row) by KDE (middle row) and k-NN (bottom row). We show all the test examples on the first fold of cross validation where k-NN makes an error in estimating the correct digit whilst KDE does not (73 mistakes) and vice versa (23 mistakes). The mistakes were judged by viewing the complete results by eye (and are thus somewhat subjective). The complete results can be found at http://www.kyb.tuebingen.mpg.de/bs/people/weston/kde/kde.html.

We then chose as output the pre-image from the training data that is closest to this solution (thus the possible outputs are the
We found (Y and I for KDE and k for k-NN by another level of 5-fold cross validation. The results are given in Table 3. KDE k-NN Hopfield net RBF loss 0.8384 ± 0.0077 0.8960 ± 0.0052 1.2190 ± 0.0072 Table 3: Performance of KDE, k-NN and a Hopfield network on an image reconstruction problem of handwritten digits. KDE outperforms k-NN and Hopfield nets on average, see Figure 2 for comparison with k-NN. Note that we cannot easily compare classification rates on this problem using the pre-images selected since KDE outputs are not correlated well with the labels. For example it will use the bottom stalk of a digit "7" or a digit "9" equally if they are identical, whereas k-NN will not: in the region of the input space which is the top half of "9"s it will only output the bottom half of "9"s. This explains why measuring the class of the pre-images compared to the true class as a classification problem yields a lower loss for k-NN, 0.2345 ± 0.0058, compared to KDE, 0.2985 ± 0.0147 and Hopfield nets, 0.591O±0.0137. Note that if we performed classification as in Section 4.2 but using only the first 8 pixel rows then k-NN yields 0.2345 ± 0.0058, but KDE yields 0.1878 ± 0.0098 and 1-vs-rest SVMs yield 0.1942 ± 0.0097, so k-NN does not adapt well to the given learning task (loss function). Finally, we note that nothing was stopping us from incorporating known invariances into our loss function in KDE via the kernel. For example we could have used a kernel which takes into account local patches of pixels rendering spatial information or jittered kernels which take into account chosen transformations (translations, rotations, and so forth). It may also be useful to add virtual examples to the output matrix 1:- before the decomposition step. For an overview of incorporating invariances see [11, Chapter 11] or [12]. 5 Discussion We have introduced a kernel method of learning general dependencies. We also gave some first experiments indicating the usefulness of the approach. 
There are many applications of KDE to explore: problems with complex outputs (natural language parsing, image interpretation/manipulation, ... ), applications with special cost functions (e.g., ROC scores), and settings where prior knowledge can be encoded in the outputs. In terms of further research, we feel there are also still many possibilities to explore in terms of algorithm development. We admit that in this work we have a very simplified algorithm for the pre-image part (just choosing the closest training example output to the given solution). To make the approach work on more complex problems (where a test output is not so trivially close to a training output), improved pre-image approaches should be applied. Although one can apply techniques such as [10] for vector-based pre-images, efficiently finding pre-images for structured objects such as strings is an open problem. Finally, the algorithm should be extended to deal with non-Euclidean loss functions directly, e.g., for classification with a general cost matrix. One naive way is to use a distance matrix directly, ignoring the PCA step.

References

[1] N. Cristianini, A. Elisseeff, and J. Shawe-Taylor. On optimizing kernel alignment. Technical Report 2001-087, NeuroCOLT, 2001.
[2] I. Frank and J. Friedman. A Statistical View of Some Chemometrics Regression Tools. Technometrics, 35(2):109–147, 1993.
[3] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. NIPS, 11:438–444, 1999.
[4] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, New York, 2001.
[5] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, Computer Science Department, University of California at Santa Cruz, 1999.
[6] J. Li, A. N. Michel, and W. Porod. Analysis and synthesis of a class of neural networks: linear systems operating on a closed hypercube. IEEE Trans. on Circuits and Systems, 36(11):1405–22, 1989.
[7] H. Lodhi, C.
Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444, 2002.
[8] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41–48. IEEE, 1999.
[9] C. Saunders, V. Vovk, and A. Gammerman. Ridge regression learning algorithm in dual variables. In J. Shavlik, editor, Machine Learning: Proceedings of the Fifteenth International Conference (ICML '98), San Francisco, CA, 1998. Morgan Kaufmann.
[10] B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. J. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000–1017, 1999.
[11] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[12] V. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
2002
Circuit Model of Short-Term Synaptic Dynamics Shih-Chii Liu, Malte Boegershausen, and Pascal Suter Institute of Neuroinformatics University of Zurich and ETH Zurich Winterthurerstrasse 190 CH-8057 Zurich, Switzerland shih@ini.phys.ethz.ch

Abstract

We describe a model of short-term synaptic depression that is derived from a silicon circuit implementation. The dynamics of this circuit model are similar to the dynamics of some present theoretical models of short-term depression, except that the recovery dynamics of the variable describing the depression are nonlinear and also depend on the presynaptic frequency. The equations describing the steady-state and transient responses of this synaptic model fit the experimental results obtained from a fabricated silicon network consisting of leaky integrate-and-fire neurons and different types of synapses. We also show experimental data demonstrating the possible computational roles of depression. One possible role of a depressing synapse is that the input can quickly bring the neuron up to threshold when the membrane potential is close to the resting potential.

1 Introduction

Short-term synaptic dynamics have been observed in many parts of the cortical system [Stratford et al., 1998, Varela et al., 1997, Tsodyks et al., 1998]. The functionality of short-term synaptic dynamics has been implicated in various cortical models [Senn et al., 1998, Chance et al., 1998, Matveev and Wang, 2000], along with the processing capabilities of a network with dynamic synapses [Tsodyks et al., 1998, Maass and Zador, 1999]. The introduction of these dynamic synapses into hardware implementations of recurrent neuronal networks allows for a wide range of operating regimes, especially in the case of time-varying inputs. In this work, we describe a model that was derived from a circuit implementation of short-term depression.
The circuit implementation was initially described by [Rasche and Hahnloser, 2001], but the dynamics were not analyzed in their work. We also compare the dynamics of the circuit model of depression with the equations of one of the theoretical models frequently used in network simulations [Abbott et al., 1997, Varela et al., 1997] and show examples of transient and steady-state responses of this synaptic circuit to inputs of different statistical distributions. This circuit has been included in a silicon network of leaky integrate-and-fire neurons together with other short-term dynamic synapses like facilitation synapses. We also show experimental data from the chip that demonstrate the possible computational roles of depression. We postulate that one possible role of depression is to bring the neuron's response quickly up to threshold if the membrane potential of the neuron is close to the resting potential. We also mapped a proposed cortical model of direction selectivity that uses depressing synapses onto this chip. The results are qualitatively similar to the results obtained in the original work [Chance et al., 1998]. The similarity of the circuit responses to the responses from Abbott and colleagues' synaptic model means that we can use these VLSI networks of integrate-and-fire (I/F) neurons as an alternative to computer simulations of dynamical networks composed of large numbers of integrate-and-fire neurons using synapses with different time constants. The outputs of such networks can also be used to interface with neural wetware. An infrastructure for a reprogrammable, reconfigurable, multi-chip neuronal system is being developed, along with a user-defined interface, so that the system is easily accessible to a naive user.

2 Comparisons between Models of Depression

We compare the circuit model with the theoretical model from [Abbott et al., 1997] describing synaptic depression and facilitation.
Similar comparisons with [Tsodyks and Markram, 1997] give the same conclusions. Here, we only describe the circuit model for synaptic depression. The equivalent model for facilitation is described elsewhere [Liu, 2002].

2.1 Theoretical Model for Depression

In the model from [Abbott et al., 1997], the synaptic strength is described by g = g0 D, where D is a variable between 0 and 1 that describes the amount of depression (D = 1 means no depression) and g0 is the maximum synaptic strength. The recovery dynamics of D are

τ_D dD/dt = 1 − D,   (1)

where τ_D is the recovery time constant of the depression. The update of D right after a spike at time t_sp is

D → d D,   (2)

where d (d < 1) is the factor by which D is decreased right after the spike and t_sp is the time of the spike. The average steady-state value of the depression for a regular spike train with a rate R is

D_ss = (1 − e^(−1/(R τ_D))) / (1 − d e^(−1/(R τ_D))).   (3)

2.2 Circuit Model of Depressing Synapse

In this circuit model of synaptic depression, the equation that describes the recovery dynamics of the depressing variable D is nonlinear. This nonlinearity comes about because the exponential dynamics in Eq. 1 are replaced by the dynamics of the current through a single diode-connected transistor. Hence, the equation describing the recovery of D (derived from the circuit in the region where the transistor operates in subthreshold, that is, where its current is exponential in its gate voltage) can be formulated as

τ_d dD/dt = 1 − D^(1/κ),   (4)

where τ_d is the equivalent of τ_D in Eq. 1 and κ (a transistor parameter) is less than 1. The maximum value of D is 1. The update equation remains as before:

D → d D.   (5)

Figure 1: Schematic for a depressing synapse circuit and responses to a regular input spike train. (a) Depressing synapse circuit (transistors M1–M7, voltages V_pre, V_x, V_d, V_a, V_gain, currents I_d, I_r, I_syn, capacitors C and C2). The voltage V_a determines the synaptic conductance, while the synaptic current I_syn is exponential in the voltage V_x. A subcircuit of three transistors controls the dynamics of V_x. The presynaptic input goes to the gate terminal of the first of these, which acts like a switch: when there is a presynaptic spike, a quantity of charge (determined by V_a) is removed from the node V_x. In between spikes, V_x recovers to the voltage V_d through the diode-connected transistor; when there is no spike, V_x sits around V_d. When the presynaptic input comes from a regular spike train, V_x decreases with each spike and recovers in between spikes, reaching a steady-state value as shown in (b). During the spike, a presynaptically gated output transistor turns on and the synaptic weight current charges up the membrane potential of the neuron through the current-mirror circuit and the capacitor C2. We can convert this current source into a synaptic current with some gain and a "time constant" by adjusting the voltage V_gain; the decay dynamics of I_syn then follow the current equation of the current-mirror circuit. In a normal synapse circuit (that is, one without short-term dynamics), V_x is controlled by an external bias voltage. (b) Input spike train at a frequency of 20 Hz (bottom curve) and corresponding response V_x (top curve) of the circuit for V_d = 0.26, 0.28, and 0.3 V. The diode-connected transistor has nonlinear dynamics: the recovery time of the depressing variable depends on the distance of the present value of V_x from V_d, and the recovery rate of V_x increases with a larger difference between V_x and V_d.

2.2.1 Circuit Equations

Equations 4 and 5 are derived from the circuit in Fig. 1. The operation of this circuit is described in the caption; the detailed analysis leading to the differential equations for D is given in [Liu, 2002]. The voltage V_x codes for D. The conductance is set by V_a, while the dynamics of V_x are set by both V_a and V_d. The time taken for the present value of V_x to return to its resting value is determined by the current dynamics of the diode-connected transistor and the capacitor C; the recovery time constant τ_d of D is thus set by V_d. The synaptic weight is described by the current I_w in Fig.
1(a):

I_w = g0 D,   (6)

where g0 is the maximum synaptic strength (set exponentially by the bias voltage V_a); the recovery time constant τ_d of D is likewise an exponential function of its bias voltage. The synaptic current to the neuron is then a current source I_w which lasts for the duration of the pulse width of the presynaptic spike. However, we can set a longer time constant for the synaptic current through V_gain. The equation describing this dependence (that is, the current equation for a current-mirror circuit) is given in the caption of Fig. 1.

Figure 2: Comparison between the outputs of the two models of depression. An optimization algorithm was used to determine the parameters of the models so that the least-square error in the difference between the EPSPs from the two models was at a minimum. (a) Poisson-distributed input with an initial frequency of 40 Hz and an end frequency of 1 Hz. (b) The EPSP responses of both models were identical. (c) The corresponding D values were almost identical except in the region where D is close to 1.

It is difficult to compute a closed-form solution for Eq. 4 for any value of κ (a transistor parameter which is less than 1). This value also changes under different operating conditions and between transistors fabricated in different processes. Hence, we solve for D in the case of κ = 1/2, given that the last spike occurred at t_sp:

D(t) = tanh((t − t_sp)/τ_d + tanh^{-1} D(t_sp)).   (7)

When D is far from its recovered value of 1, we can approximate its recovery dynamics by τ_d dD/dt = 1 (irrespective of κ), and solving for D we get D(t) = D(t_sp) + (t − t_sp)/τ_d. In this regime, D follows a linear trajectory. Note that the same is true of Eq. 1 when D ≪ 1.
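As a consistency check on this special case, and assuming the recovery ODE reduces to τ_d dD/dt = 1 − D² for κ = 1/2 (an assumption of this sketch), the closed-form tanh trajectory can be compared against direct numerical integration:

```python
import math

def D_analytic(t, D0, tau):
    """Closed-form recovery for tau * dD/dt = 1 - D**2 (the kappa = 1/2 case),
    starting from D(0) = D0 with 0 <= D0 < 1."""
    return math.tanh(t / tau + math.atanh(D0))

def D_numeric(t, D0, tau, steps=100000):
    """Forward-Euler integration of the same ODE, for comparison."""
    D, dt = D0, t / steps
    for _ in range(steps):
        D += dt * (1.0 - D * D) / tau
    return D
```

Both trajectories saturate at D = 1, and for D far from 1 the slope is approximately 1/τ, matching the linear regime discussed in the text.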
Figure 3: Transient EPSP responses to a 10 Hz Poisson-distributed train (a) and dependence of steady-state EPSP responses on the input frequency for different values of depression (b). The data were measured from the fabricated circuit, for V_d = 0.2 V (V_a = 1.01 V), V_d = 0.4 V (V_a = 1.03 V), V_d = 0.6 V (V_a = 1.15 V), and a non-depressing synapse. In (a), the amplitude of the EPSP decreases with each incoming input spike, clearly showing the effect of synaptic depression; the EPSP amplitude depends on the occurrence of the previous spike. The asterisks are the fits of the circuit model to the peak value of each EPSP. The fits give a value of 0.79. The input is the bottom curve of the plot. (b) Steady-state EPSP amplitude versus frequency for a Poisson-distributed input. The solid lines are fits from the theoretical equation.

3 Comparison between Models

We compare the two models by looking at how D changes in response to a Poisson-distributed input whose frequency varied from 40 Hz to 1 Hz, as shown in Fig. 2. We used a simple linear differential equation to describe the dynamics of the membrane potential V_m:

τ_m dV_m/dt = −(V_m − V_rest) + R_m I_syn,

where τ_m is the membrane time constant and I_syn is the synaptic current. We ran an optimization algorithm on the parameters in the two models so that the least-square error between the EPSP outputs of both models was at a minimum. In this case, the EPSP responses were identical (Fig. 2(b)) and the corresponding D values (Fig. 2(c)) were almost identical except in the region where D was close to its maximum value. We performed the same comparison with Tsodyks and Markram's model and the results were similar. Hence, the circuit model can be used to describe short-term synaptic depression in a network simulation. However, the nonlinear recovery dynamics of the circuit model lead to a different functional dependence of the average steady-state EPSP on the frequency of a regular input spike train.

4 Circuit Response

The data in the figures in the remainder of this paper are obtained from a fabricated silicon network of aVLSI integrate-and-fire neurons of the type described in [Boahen, 1997, Van Schaik, 2001, Indiveri, 2000, Liu et al., 2001] with different types of synapses.

4.1 Transient Response

We first measured the transient response of the neuron when stimulated by a 10 Hz Poisson-distributed input through the depressing synapse. We tuned the parameters of the synapse and the leak current so that the membrane potential did not build up to threshold. This data is shown in Fig. 3(a). The fit (marked with asterisks in the figure) using Eq. 6, along with D computed from Eq. 7, describes the experimental data well.

4.2 Steady-State Response

The equation describing the dependence of the steady-state value of D on the presynaptic frequency can easily be determined in the case of a regular spiking input of rate R by using Eqs. 5 and 7.
The resulting expression is somewhat complicated, but by using the reduced dynamics expression (τ_d dD/dt = 1) we obtain a simpler expression for the steady-state value:

D_ss = 1 / ((1 − d) τ_d R).   (8)

This equation shows that the steady-state D, and hence the steady-state EPSP amplitude, is inversely dependent on the presynaptic rate R. The form of the curve is similar to the results obtained in the work of [Abbott et al., 1997], where the data can be fitted with Eq. 3. From the chip, we measured the steady-state EPSP amplitudes using a Poisson-distributed train whose frequency varied over a range of 3 Hz to 50 Hz in steps of 1 Hz. Each frequency interval lasted 15 s and the EPSP amplitude was averaged over the last 5 s to obtain the steady-state value. Four separate trials were performed, and the resulting mean and variance of the measurements are shown in Fig. 3(b). The parameters from the fits of the response data to a regular spiking input were used to generate the fitted curve to the data in Fig. 3(b). The fits give recovery time constants τ_d from 1–3 s and values of d varying between 0.02 and 0.04.

5 Role of Synaptic Depression

Different computational roles have been proposed for networks which incorporate synaptic depression. In this section, we describe some measurements which illustrate the postulated roles of depression. The direction-selective model of [Chance et al., 1998], which makes use of the phase-advance property of depressing synapses, has been mapped onto a neuron on our chip, and the direction-selective results were qualitatively similar. Depressing synapses have also been implicated in cortical gain control [Abbott et al., 1997]. A depressing synapse acts like a transient detector for changes in frequency (or a first-derivative filter). A synapse with short-term depression responds equally to equal percentage rate changes in its input at different baseline firing rates.
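A discrete per-spike sketch makes the inverse-rate dependence of Eq. 8 concrete. We assume (as in the Abbott-style model above) a multiplicative update D → d·D at each spike and linear recovery at rate 1/τ_d between spikes; the parameter values are arbitrary choices for illustration:

```python
def steady_state_D(rate, d=0.5, tau=1.0, n_spikes=200):
    """Iterate the per-spike map D <- d*D + 1/(tau*rate): multiplicative
    depression at each spike plus linear recovery over the interspike
    interval of a regular train. D is clipped at its maximum value of 1."""
    D = 1.0
    for _ in range(n_spikes):
        D = min(1.0, d * D + 1.0 / (tau * rate))
    return D
```

For d = 0.5 and τ_d = 1 s, stepping the rate from 10 to 20 to 40 Hz halves the steady-state D each time (0.2, 0.1, 0.05), so equal percentage rate changes produce equal relative changes in the steady-state drive, regardless of the absolute rate.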
We demonstrate the gain-control mechanism of short-term depression by measuring the neuron's response to step changes in input frequency from 10 Hz to 20 Hz to 40 Hz. Each step represents the same relative change in input frequency. These results are shown in Fig. 4(a) for a regular spike train and in Fig. 4(b) for a Poisson-distributed train. Each frequency epoch lasted 3 s, so the synaptic strength should have reached steady state before the next increase in input frequency. In both panels of Fig. 4, the bottom curve shows the input, the top curve the response of the neuron through a depressing synapse, and the middle curve the response through a non-depressing synapse.

Figure 4: Response of the neuron to changes in input frequency (bottom curve) when stimulated through a depressing synapse (top curve) and a non-depressing synapse (middle curve). The neuron was stimulated for three frequency intervals (10 Hz to 20 Hz to 40 Hz) lasting 3 s each. (a) Response of the neuron using a regular spiking input. The steady-state firing rate of the neuron increased almost linearly with the input frequency when stimulated through the non-depressing synapse. In the depressing-synapse curve, there is a transient increase in the neuron's firing rate before the rate adapts to steady state. (b) Response of the neuron using a Poisson-distributed input. The parameters for both types of synapses were tuned so that the steady-state firing rates were about the same at the end of each frequency interval for both synapses. Notice that during the 10 Hz interval, the neuron quickly built up to threshold if it was stimulated through the depressing synapse.

Figure 4(a) clearly shows the transient increase in the firing rate of the neuron when stimulated through a depressing synapse right after each step increase in input frequency, and the subsequent adaptation of its firing rate to a steady-state value. The steady-state firing rate of the neuron with a depressing synapse is less dependent on the absolute input frequency than the firing rate of the neuron stimulated through the non-depressing synapse; in the latter case, the firing rate of the neuron is approximately linear in the input rate. The data in Fig. 4(b), obtained with a Poisson-distributed train, show an obvious difference between the responses through the depressing and non-depressing synapses. In the depressing-synapse case, the neuron quickly reached threshold for a 10 Hz input, while in the non-depressing case it remained subthreshold until the input increased to 20 Hz. This suggests that a potential role of a depressing synapse is to drive a neuron quickly to threshold when its membrane potential is far away from its threshold.

6 Conclusion

We described a model of synaptic depression that was derived from a circuit implementation. This circuit model has nonlinear recovery dynamics, in contrast to current theoretical models of dynamic synapses, yet it gives qualitatively similar results when compared to the model of Abbott and colleagues. Measured data from a chip with aVLSI integrate-and-fire neurons and dynamic synapses show that this network can be used to simulate the responses of dynamic networks with short-term dynamic synapses. Experimental results suggest that depressing synapses can be used to drive a neuron quickly up to threshold when its membrane potential is at the resting potential. The silicon networks provide an alternative to computer simulation of spike-based processing models with different synaptic time constants because they run in real time and the computational time does not scale with the size of the neuronal network.

Acknowledgments

This work was supported in part by the Swiss National Foundation Research SPP grant. We acknowledge Kevan Martin, Pamela Baker, and Ora Ohana for many discussions on dynamic synapses.
References

[Abbott et al., 1997] Abbott, L., Sen, K., Varela, J., and Nelson, S. (1997). Synaptic depression and cortical gain control. Science, 275(5297):220–223.
[Boahen, 1997] Boahen, K. A. (1997). Retinomorphic Vision Systems: Reverse Engineering the Vertebrate Retina. PhD thesis, California Institute of Technology, Pasadena, CA.
[Chance et al., 1998] Chance, F., Nelson, S., and Abbott, L. (1998). Synaptic depression and the temporal response characteristics of V1 cells. Journal of Neuroscience, 18(12):4785–4799.
[Indiveri, 2000] Indiveri, G. (2000). Modeling selective attention using a neuromorphic aVLSI device. Neural Computation, 12(12):2857–2880.
[Liu, 2002] Liu, S.-C. (2002). Dynamic synapses and neuron circuits for mixed-signal processing. EURASIP Journal on Applied Signal Processing: Special Issue. Submitted.
[Liu et al., 2001] Liu, S.-C., Kramer, J., Indiveri, G., Delbrück, T., Burg, T., and Douglas, R. (2001). Orientation-selective aVLSI spiking neurons. Neural Networks: Special Issue on Spiking Neurons in Neuroscience and Technology, 14(6/7):629–643.
[Maass and Zador, 1999] Maass, W. and Zador, A. (1999). Computing and learning with dynamic synapses. In Maass, W. and Bishop, C. M., editors, Pulsed Neural Networks, chapter 6, pages 157–178. MIT Press, Boston, MA. ISBN 0-262-13350-4.
[Matveev and Wang, 2000] Matveev, V. and Wang, X. (2000). Differential short-term synaptic plasticity and transmission of complex spike trains: to depress or to facilitate? Cerebral Cortex, 10(11):1143–1153.
[Rasche and Hahnloser, 2001] Rasche, C. and Hahnloser, R. (2001). Silicon synaptic depression. Biological Cybernetics, 84(1):57–62.
[Senn et al., 1998] Senn, W., Segev, I., and Tsodyks, M. (1998). Reading neuronal synchrony with depressing synapses. Neural Computation, 10(4):815–819.
[Stratford et al., 1998] Stratford, K., Tarczy-Hornoch, K., Martin, K., Bannister, N., and Jack, J. (1998). Excitatory synaptic inputs to spiny stellate cells in cat visual cortex.
Nature, 382:258–261.
[Tsodyks and Markram, 1997] Tsodyks, M. and Markram, H. (1997). The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc. Natl. Acad. Sci. USA, 94(2).
[Tsodyks et al., 1998] Tsodyks, M., Pawelzik, K., and Markram, H. (1998). Neural networks with dynamic synapses. Neural Computation, 10(4):821–835.
[Van Schaik, 2001] Van Schaik, A. (2001). Building blocks for electronic spiking neural networks. Neural Networks, 14(6/7):617–628. Special Issue on Spiking Neurons in Neuroscience and Technology.
[Varela et al., 1997] Varela, J., Sen, K., Gibson, J., Fost, J., Abbott, L., and Nelson, S. (1997). A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. Journal of Neuroscience, 17(20):7926–7940.
Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems Sepp Hochreiter†, Michael C. Mozer∗, and Klaus Obermayer† †Department of Electrical Engineering and Computer Science Technische Universität Berlin, 10587 Berlin, Germany ∗Department of Computer Science University of Colorado, Boulder, CO 80309–0430, USA {hochreit,oby}@cs.tu-berlin.de, mozer@cs.colorado.edu

Abstract

We introduce a family of classifiers based on a physical analogy to an electrostatic system of charged conductors. The family, called Coulomb classifiers, includes the two best-known support-vector machines (SVMs), the ν–SVM and the C–SVM. In the electrostatics analogy, a training example corresponds to a charged conductor at a given location in space, the classification function corresponds to the electrostatic potential function, and the training objective function corresponds to the Coulomb energy. The electrostatic framework provides not only a novel interpretation of existing algorithms and their interrelationships, but it suggests a variety of new methods for SVMs, including kernels that bridge the gap between polynomial and radial-basis functions, objective functions that do not require positive-definite kernels, and regularization techniques that allow for the construction of an optimal classifier in Minkowski space. Based on the framework, we propose novel SVMs and perform simulation studies to show that they are comparable or superior to standard SVMs. The experiments include classification tasks on data which are represented in terms of their pairwise proximities, where a Coulomb classifier outperformed standard SVMs.

1 Introduction

Recently, Support Vector Machines (SVMs) [2, 11, 9] have attracted much interest in the machine-learning community and are considered state of the art for classification and regression problems.
One appealing property of SVMs is that they are based on a convex optimization problem, which means that a single minimum exists and can be computed efficiently. In this paper, we present a new derivation of SVMs by analogy to an electrostatic system of charged conductors. The electrostatic framework not only provides a physical interpretation of SVMs, but it also gives insight into some of the seemingly arbitrary aspects of SVMs (e.g., the diagonal of the quadratic form), and it allows us to derive novel SVM approaches. Although we are the first to make the analogy between SVMs and electrostatic systems, previous researchers have used electrostatic nonlinearities in pattern recognition [1], and a mechanical interpretation of SVMs was introduced in [9]. In this paper, we focus on the classification of an input vector x ∈ X into one of two categories, labeled "+" and "−". We assume a supervised learning paradigm in which N training examples are available, each example i consisting of an input x_i and a label y_i ∈ {−1, +1}. We will introduce three electrostatic models that are directly analogous to existing machine-learning (ML) classifiers, each of which builds on and generalizes the previous. For each model, we describe the physical system upon which it is based and show its correspondence to an ML classifier. 1.1 Electrostatic model 1: Uncoupled point charges Consider an electrostatic system of point charges populating a space X′ homologous to X. Each point charge corresponds to a particular training example; point charge i is fixed at location x_i in X′, and has a charge of sign y_i. We define two sets of fixed charges: S_+ = {x_i | y_i = +1} and S_− = {x_i | y_i = −1}. The charge of point i is Q_i ≡ y_i α_i, where α_i ≥ 0 is the amount of charge, to be discussed below. We briefly review some elementary physics. If a unit positive charge is at x in X′, it will be attracted to all charges in S_− and repelled by all charges in S_+.
To move the charge from x to some other location x̃, the attractive and repelling forces must be overcome at every point along the trajectory; the path integral of the force along the trajectory is called the work and does not depend on the trajectory. The potential at x is the work that must be done to move a unit positive charge from a reference point (usually infinity) to x. The potential at x is ϕ(x) = Σ_{j=1}^N Q_j G(x_j, x), where G is a function of the distance. In electrostatic systems with point charges, G(a, b) = 1/∥a − b∥_2. From this definition, one can see that the potential at x is negative (positive) if x is in a neighborhood of many negative (positive) charges. Thus, the potential indicates the sign and amount of charge in the local neighborhood. Turning back to the ML classifier, one might propose a classification rule for some input x that assigns the label "+" if ϕ(x) > 0 or "−" otherwise. Abstracting from the electrostatic system, if α_i = 1 and G is a function that decreases sufficiently steeply with distance, we obtain a nearest-neighbor classifier. This potential classifier can also be interpreted as a Parzen windows classifier [9]. 1.2 Electrostatic model 2: Coupled point charges Consider now an electrostatic model that extends the previous model in two respects. First, the point charges are replaced by conductors, e.g., metal spheres. Each conductor i has a self-potential coefficient, denoted P_ii, which is a measure of how much charge it can easily hold; for a metal sphere, P_ii is related to the sphere's diameter. Second, the conductors in S_+ are coupled, as are the conductors in S_−. "Coupling" means that charge is free to flow between the conductors. Technically, S_+ and S_− can each be viewed as a single conductor. In this model, we initially place the same charge ν/N on each conductor, and allow charges within S_+ and S_− to flow freely (we assume no resistance in the coupling and no polarization of the conductors).
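Before developing the coupled system, the uncoupled model-1 potential classifier above can be sketched in a few lines of NumPy. This is our own illustration, not code from the paper; the helper names `potential` and `classify` are hypothetical, and the point-charge singularity is clipped for numerical safety:

```python
import numpy as np

def potential(x, X_train, y_train, alpha=None):
    """phi(x) = sum_j Q_j G(x_j, x) with Q_j = y_j * alpha_j and the
    point-charge law G(a, b) = 1 / ||a - b||^2 (clipped at the singularity)."""
    if alpha is None:
        alpha = np.ones(len(X_train))  # uncoupled model: every charge is 1
    d2 = np.maximum(((X_train - x) ** 2).sum(axis=1), 1e-12)
    return np.sum(y_train * alpha / d2)

def classify(x, X_train, y_train, alpha=None):
    """Assign +1 if the potential at x is positive, -1 otherwise."""
    return 1 if potential(x, X_train, y_train, alpha) > 0 else -1
```

With a steeply decreasing G such as this one, the nearest charge dominates the sum, which is the nearest-neighbor behavior described above.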
After the charges redistribute, charge will tend to end up on the periphery of a homogeneous neighborhood of conductors, because like charges repel. Charge will also tend to end up along the S_+–S_− boundary, because opposite charges attract. Figure 1 depicts the redistribution of charges, where the shading is proportional to the magnitude α_i. An ML classifier can be built based on this model, once again using ϕ(x) > 0 as the decision rule for classifying an input x. In this model, however, the α_i are not uniform; the conductors with large α_i will have the greatest influence on the potential function. Consequently, one can think of α_i as the weight or importance of example i. As we will show shortly, the examples with α_i > 0 are exactly the support vectors of an SVM. [Figure 1: Coupled conductor system following charge redistribution. Shading reflects the charge magnitude, and the contour indicates the zero potential.] The redistribution of charges in the electrostatic system is achieved via minimization of the Coulomb energy. Imagine placing the same total charge magnitude, m, on S_+ and S_− by dividing it uniformly among the conductors, i.e., α_i = m/|S_{y_i}|. The free charge flow in S_+ and S_− yields a distribution of charges, the α_i, such that the Coulomb energy is minimized. To introduce the Coulomb energy, we begin with some preliminaries. The potential at conductor i, ϕ(x_i), which we will denote more compactly as ϕ_i, can be described in terms of the coefficients of potential P_ij [10]: ϕ_i = Σ_{j=1}^N P_ij Q_j, where P_ij is the potential induced on conductor i by charge Q_j on conductor j; P_ii ≥ P_ij ≥ 0 and P_ij = P_ji. If each conductor i is a metal sphere centered at x_i with radius r_i (the radii are enforced to be small enough that the spheres do not touch each other), the system can be modeled by a point charge Q_i at x_i, with P_ij = G(x_i, x_j) as in the previous section [10]. The self-potential, P_ii, is defined as a function of r_i.
The Coulomb energy is defined in terms of the potential on the conductors, ϕ_i: E = (1/2) Σ_{i=1}^N ϕ_i Q_i = (1/2) Q^T P Q = (1/2) Σ_{i,j=1}^N P_ij y_i y_j α_i α_j. When the energy minimum is reached, the potential ϕ_i will be the same for all connected i ∈ S_+ (i ∈ S_−); we denote this potential ϕ_{S+} (ϕ_{S−}). Two additional constraints on the system of coupled conductors are necessary in order to interpret the system in terms of existing machine learning models. First, the positive and negative potentials must be balanced, i.e., ϕ_{S+} = −ϕ_{S−}. This constraint is achieved by setting the reference point of the potentials through b, b = −0.5 (ϕ_{S+} + ϕ_{S−}), in the potential function: ϕ(x) = Σ_{i=1}^N Q_i G(x_i, x) + b. Second, the conductors must be prevented from reversing the sign of their charge, i.e., α_i ≥ 0, and from holding more than a quantity C of charge, i.e., α_i ≤ C. These requirements can be satisfied in the electrostatic model by disconnecting a conductor i from the charge flow in S_+ or S_− when α_i reaches a bound, which subsequently freezes its charge. Mathematically, the requirements are satisfied by treating energy minimization as a constrained optimization problem with 0 ≤ α_i ≤ C. The electrostatic system corresponds to a ν–support vector machine (ν–SVM) [9] with kernel G if we set C = 1/N. The electrostatic system ensures that Σ_{i∈S+} α_i = Σ_{i∈S−} α_i = 0.5 ν. The identity holds because the Coulomb energy is exactly the ν–SVM quadratic objective function, and the thresholded electrostatic potential evaluated at a location is exactly the SVM decision rule. The minimization of potential differences in the systems S_+ and S_− corresponds to the minimization of slack variables in the SVM (slack variables express missing potential due to the upper bound on α_i). Mercer's condition [6], the essence of nonlinear SVM theory, is equivalent to the fact that the continuous electrostatic energy is positive, i.e., E = ∫ G(x, z) h(x) h(z) dx dz ≥ 0.
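To make the energy picture concrete, here is a toy sketch of charge redistribution. This is our own illustration, not the authors' algorithm: `redistribute_charges` is a hypothetical name, and the greedy donor/acceptor transfer scheme simply stands in for a proper QP solver while respecting the box bounds and the per-class charge totals:

```python
import numpy as np

def coulomb_energy(alpha, y, P):
    """E = 1/2 Q^T P Q with Q_i = y_i * alpha_i (the nu-SVM quadratic form)."""
    Q = y * alpha
    return 0.5 * Q @ P @ Q

def redistribute_charges(y, P, nu, C, n_sweeps=500):
    """Greedy charge flow: within each class, repeatedly move charge from a
    donor (highest potential) to an acceptor (lowest potential) so the
    Coulomb energy decreases, keeping 0 <= alpha_i <= C and the class
    totals fixed at nu/2, mimicking the free flow inside S+ and S-."""
    counts = {s: np.sum(y == s) for s in (1, -1)}
    alpha = np.array([0.5 * nu / counts[s] for s in y], dtype=float)
    H = np.outer(y, y) * P                    # E = 1/2 alpha^T H alpha
    for _ in range(n_sweeps):
        for s in (1, -1):
            g = H @ alpha                     # gradient dE/dalpha
            idx = np.flatnonzero(y == s)
            i = idx[np.argmax(g[idx])]        # donor
            j = idx[np.argmin(g[idx])]        # acceptor
            if g[i] <= g[j]:
                continue
            curv = H[i, i] + H[j, j] - 2.0 * H[i, j]
            t = (g[i] - g[j]) / curv if curv > 1e-12 else np.inf
            t = min(t, alpha[i], C - alpha[j])  # respect the box bounds
            alpha[i] -= t
            alpha[j] += t
    return alpha                              # alpha_i > 0: support vectors
```

Each transfer moves charge along the direction of steepest energy decrease within one class, so the energy is non-increasing sweep by sweep; conductors frozen at a bound simply stop receiving or donating charge, as in the physical description above.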
The self-potentials of the electrostatic system provide an interpretation of the diagonal elements in the quadratic objective function of the SVM. This interpretation of the diagonal elements allows us to introduce novel kernels and novel SVM methods, as we discuss later. 1.3 Electrostatic model 3: Coupled point charges with battery In electrostatic model 2, we control the magnitude of charge applied to S_+ and S_−. Although we apply the same charge magnitude to each, we do not control the resulting potentials ϕ_{S+} and ϕ_{S−}, which may be imbalanced. We compensate for this imbalance via the potential offset b. In electrostatic model 3, we control the potentials ϕ_{S+} and ϕ_{S−} directly by adding a battery to the system. We connect S_+ to the positive pole of the battery with potential +1 and S_− to the negative pole with potential −1. The battery ensures that ϕ_{S+} = +1 and ϕ_{S−} = −1, because charges flow from the battery into or out of the system until the system takes on the potential of the battery poles. The battery can then be removed. The potential ϕ_i = y_i is thus forced by the battery on conductor i. The total Coulomb energy is the energy from model 2 minus the work done by the battery. The work done by the battery is Σ_{i≤N} y_i Q_i = Σ_{i≤N} α_i. The Coulomb energy is therefore (1/2) Q^T P Q − Σ_{i=1}^N α_i = (1/2) Σ_{i,j=1}^N P_ij y_i y_j α_i α_j − Σ_{i=1}^N α_i. This physical system corresponds to a C–support vector machine (C–SVM) [2, 11]. The C–SVM requires that Σ_i y_i α_i = 0; although this constraint may not be fulfilled in the system described here, it can be enforced by a slightly different system [4]. A more straightforward relation to the C–SVM is given in [9], where the authors show that every ν–SVM has the same class boundaries as a C–SVM with an appropriate C.
2 Comparison of existing and novel models 2.1 Novel kernels The electrostatic perspective makes it easy to understand why SVM algorithms can break down in high-dimensional spaces: kernels with rapid fall-off induce small potentials, and consequently almost every conductor retains charge. Because a charged conductor corresponds to a support vector, the number of support vectors is large, which leads to two disadvantages: (1) the classification procedure is slow, and (2) the expected generalization error increases with the number of support vectors [11]. We therefore should use kernels that do not drop off exponentially. The self-potential permits the use of kernels that would otherwise be invalid, such as a generalization of the electric field: G(x_i, x_j) := ∥x_i − x_j∥_2^{−l} and G(x_i, x_i) := r_i^{−l} = P_ii, where r_i is the radius of the i-th sphere. The r_i are increased to their maximal values, i.e., until they hit other conductors (r_i = 0.5 min_j ∥x_i − x_j∥_2). These kernels, called "Coulomb kernels", are invariant to scaling of the input space in the sense that scaling does not change the minimum of the objective function. Consequently, such kernels are appropriate for input data with varying local densities. Figure 2 depicts a classification task with input regions of varying density. The optimal class boundary is smooth in the low-density regions and has high curvature in regions where the data density is high. The classification boundary was constructed using a C-SVM with a Plummer kernel G(x_i, x_j) := (∥x_i − x_j∥_2^2 + ϵ^2)^{−l/2}, which is an approximation to our novel Coulomb kernel but lacks its weak singularities. Figure 2: Two-class data with a dense region, trained with an SVM using the new kernel. Gray-scales indicate the weights; support vectors are dark.
Boundary curves are given for the novel kernel (solid), for the best RBF-kernel SVM, which overfits in the high-density regions, where its boundary passes through a dark circle (dashed), and for the optimal boundary (dotted). 2.2 Novel SVM models Our electrostatic framework can be used to derive novel SVM approaches [4], two representative examples of which we illustrate here. 2.2.1 κ–Support Vector Machine (κ–SVM): We can exploit the physical interpretation of P_ii as conductor i's self-potential. The P_ii determine the smoothness of the charge distribution at the energy minimum. We can introduce a parameter κ to rescale the self-potentials: P_ii^new = κ P_ii^old. κ controls the complexity of the corresponding SVM. With this modification, and with C = ∞, electrostatic model 3 becomes what we call the κ–SVM. 2.2.2 p–Support Vector Machine (p–SVM): At the Coulomb energy minimum the electrostatic potentials equalize: ϕ_i − y_i = 0 for all i (y is the label vector). This motivates the introduction of the potential difference, (1/2) ∥P Q − y∥_2^2 = (1/2) Q^T P^T P Q − Q^T P^T y + (1/2) y^T y, as the objective. We obtain min_α (1/2) α^T Y P^T P Y α − 1^T Y P Y α subject to 1^T P Y α = 0, |α_i| ≤ C, where 1 is the vector of ones and Y := diag(y). We call this variant of the optimization problem the potential-SVM (p-SVM). Note that the p-SVM is similar to the "empirical kernel map" [9]; however, P appears in the objective's linear term and in the constraints. We classify in a space where P is a dot-product matrix. The constraint 1^T P Y α = 0 ensures that the average potential for each class is equal. By construction, P^T P is positive definite; consequently, this formulation does not require positive-definite kernels. This characteristic is useful for problems in which the properties of the objects to be classified are described by their pairwise proximities.
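The Plummer kernel of Section 2.1 is straightforward to compute; the sketch below is ours (the helper name and the default values of l and ϵ are assumptions, not values from the paper):

```python
import numpy as np

def plummer_gram(X, Z, l=1.0, eps=0.1):
    """G(x, z) = (||x - z||_2^2 + eps^2)^(-l/2): an approximation to the
    Coulomb kernel with the weak singularity smoothed out by eps."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return (d2 + eps ** 2) ** (-l / 2.0)
```

Because this kernel falls off polynomially rather than exponentially, distant conductors still feel each other's potential, which is the mechanism the text credits for keeping the number of support vectors small.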
That is, suppose that instead of representing each input object by an explicit feature vector, the objects are represented by a matrix which contains a real number indicating the similarity of each object to each other object. We can interpret the entries of the matrix as being produced by an unknown kernel operating on unknown feature vectors. In such a matrix, however, positive definiteness cannot be assured, and the optimal hyperplane must be constructed in Minkowski space. 3 Experiments UCI Benchmark Repository. For the representative models we have introduced, we perform simulations and make comparisons to standard SVM variants. All datasets (except "banana" from [7]) are from the UCI Benchmark Repository and were preprocessed in [7]. We did 100-fold validation on each data set, restricting the training set to 200 examples and using the remainder of the examples for testing. We compared two standard architectures, the C–SVM and the ν–SVM, to our novel architectures: the κ–SVM, the p–SVM, and a combination of the two, the κ–p–SVM (a p–SVM regularized like a κ–SVM). We explored the use of radial basis function (RBF), polynomial (POL), and Plummer (PLU) kernels. Hyperparameters were determined by 5-fold cross validation on the first 5 training sets. The search for hyperparameters was not as intensive as in [7]. Table 1 shows the results of our comparisons on the UCI benchmarks. Our two novel architectures, the κ–SVM and the p–SVM, performed well against the two existing architectures (note that the differences between the C– and the ν–SVM are due to model selection). As anticipated, the p–SVM requires far fewer support vectors. Additionally, the Plummer kernel appears to be more robust against hyperparameter and SVM choices than the RBF or polynomial kernels.
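For concreteness, the p-SVM program of Section 2.2.2 can be assembled for a generic QP solver as follows. This is our own sketch: `p_svm_matrices` is a hypothetical name, and the proximity matrix P is assumed symmetric (as a proximity or kernel matrix would be):

```python
import numpy as np

def p_svm_matrices(P, y):
    """Build the p-SVM quadratic program data:
         min_a 1/2 a^T H a - f^T a   s.t.  g^T a = 0,  |a_i| <= C,
    with H = Y P^T P Y (positive semidefinite by construction, so P itself
    need not be a positive-definite kernel), f = Y P y, and g = Y P 1."""
    Y = np.diag(y.astype(float))
    H = Y @ P.T @ P @ Y
    f = Y @ P @ y                     # linear term 1^T Y P Y a
    g = Y @ P @ np.ones(len(y))       # equality constraint 1^T P Y a = 0
    return H, f, g
```

The point of the construction is visible in H: whatever the spectrum of P, the Gram-style product P^T P cannot have negative eigenvalues, which is why indefinite proximity matrices remain admissible.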
thyroid:
         C      ν      κ      p    κ–p
  RBF   6.4    9.4    7.7    5.4   8.6
  POL  22.8   12.6    7.0   13.3   6.9
  PLU   6.1    6.2    6.1    5.7   6.1

heart:
         C      ν      κ      p    κ–p
  RBF  21.4   19.1   17.9   22.4  17.8
  POL  20.4   20.4   19.3   23.0  19.3
  PLU  16.3   16.3   16.3   17.4  16.3

breast-cancer:
         C      ν      κ      p    κ–p
  RBF  33.6   31.6   33.8   32.4  33.7
  POL  36.0   25.7   29.6   27.1  29.1
  PLU  33.4   33.1   33.4   30.6  33.4

banana:
         C      ν      κ      p    κ–p
  RBF  13.2   36.7   13.2   11.6  13.4
  POL  35.3   35.0   11.5   22.4  11.5
  PLU  15.7   15.7   15.7   21.9  15.7

german:
         C      ν      κ      p    κ–p
  RBF  28.7   29.3   29.0   27.8  28.8
  POL  33.7   29.6   26.2   31.8  26.2
  PLU  28.8   28.5   33.3   27.1  33.3

Table 1: Mean % misclassification on 5 UCI Repository data sets. Each cell in the table is obtained via 100 replications splitting the data into training and test sets. The comparison is among five SVMs (the table columns) using three kernel functions (the table rows). Cells in bold face are the best result for a given data set; the second- and third-best results are italicized. Pairwise Proximity Data. We applied our p–SVM and the generalized SVM (G–SVM) [3] to two pairwise-proximity data sets. The first data set, the "cat cortex" data, is a matrix of connection strengths between 65 cat cortical areas, provided by [8], where the available anatomical literature was used to determine proximity values between cortical areas. These areas belong to four different coarse brain regions: auditory (A), visual (V), somatosensory (SS), and frontolimbic (FL). The goal was to classify a given cortical area as belonging to a given region or not. The second data set, the "protein" data, is the evolutionary distance of 226 sequences of amino acids of proteins obtained by a structural comparison [5] (provided by M. Vingron). Most of the proteins are from four classes of globins: hemoglobin-α (H-α), hemoglobin-β (H-β), myoglobin (M), and heterogeneous globins (GH). The goal was to classify a protein as belonging to a given globin class or not.
As Table 2 shows, our novel architecture, the p–SVM, beats the existing G–SVM on 5 of the 8 classification tasks and ties it on 2 of the 8; it loses on only 1 of the 8.

cat cortex:
            Reg.     V      A     SS     FL
  Size       —      18     10     18     19
  G-SVM    0.05    4.6    3.1    3.1    1.5
  G-SVM    0.1     4.6    3.1    6.1    1.5
  G-SVM    0.2     6.1    1.5    3.1    3.1
  p-SVM    0.6     3.1    1.5    6.1    3.1
  p-SVM    0.7     3.1    3.1    4.6    1.5
  p-SVM    0.8     3.1    3.1    4.6    1.5

protein data:
            Reg.   H-α    H-β     M      GH
  Size       —      72     72     39     30
  G-SVM    0.05    1.3    4.0    0.5    0.5
  G-SVM    0.1     1.8    4.5    0.5    0.9
  G-SVM    0.2     2.2    8.9    0.5    0.9
  p-SVM    300     0.4    3.5    0.0    0.4
  p-SVM    400     0.4    3.1    0.0    0.9
  p-SVM    500     0.4    3.5    0.0    1.3

Table 2: Mean % misclassification for the cat-cortex and protein data sets using the p–SVM and the G–SVM over a range of regularization parameters (indicated in the column labeled "Reg."). The result obtained for the cat-cortex data is via leave-one-out cross validation, and for the protein data via ten-fold cross validation. The best result for a given classification problem is printed in bold face. 4 Conclusion The electrostatic framework and its analogy to SVMs has led to several important ideas. First, it suggests SVM methods for kernels that are not positive definite. Second, it suggests novel approaches and kernels that perform as well as standard methods (and will undoubtedly perform better on some problems). Third, we demonstrated a new classification technique that works in Minkowski space and can be used for data given in the form of pairwise proximities. The novel approach treats the proximity matrix as an SVM Gram matrix, which led to excellent experimental results. We argued that the electrostatic framework not only characterizes a family of support-vector machines, but also characterizes other techniques such as nearest-neighbor classification. Perhaps the most important contribution of the electrostatic framework is that, by interrelating and encompassing a variety of methods, it lays out a broad space of possible algorithms.
At present, the space is sparsely populated and has barely been explored. But by making the dimensions of this space explicit, the electrostatic framework allows one to easily explore the space and discover novel algorithms. In the history of machine learning, such general frameworks have led to important advances in the field. Acknowledgments We thank G. Hinton and J. Schmidhuber for stimulating conversations leading to this research, and an anonymous reviewer who provided helpful advice on the paper. References [1] M. A. Aizerman, E. M. Braverman, and L. I. Rozonoér. Theoretical foundations of the potential function method in pattern recognition learning. Automation and Remote Control, 25:821–837, 1964. [2] C. J. C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):1–47, 1998. [3] T. Graepel, R. Herbrich, B. Schölkopf, A. J. Smola, P. L. Bartlett, K.-R. Müller, K. Obermayer, and R. C. Williamson. Classification on proximity data with LP-machines. In Proceedings of the Ninth International Conference on Artificial Neural Networks, pages 304–309, 1999. [4] S. Hochreiter and M. C. Mozer. Coulomb classifiers: Reinterpreting SVMs as electrostatic systems. Technical Report CU-CS-921-01, Department of Computer Science, University of Colorado, Boulder, 2001. [5] T. Hofmann and J. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Trans. Pattern Anal. and Mach. Intelligence, 19(1):1–14, 1997. [6] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London A, 209:415–446, 1909. [7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Technical Report NC-TR-1998-021, Dep. of Comp. Science, Univ. of London, 1998. [8] J. W. Scannell, C. Blakemore, and M. P. Young. Analysis of connectivity in the cat cerebral cortex. The Journal of Neuroscience, 15(2):1463–1483, 1995. [9] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002. [10] M. Schwartz. Principles of Electrodynamics. Dover Publications, NY, 1987. Republication of the 1972 McGraw-Hill book. [11] V. Vapnik. The Nature of Statistical Learning Theory. Springer, NY, 1995.
2002
Robust Novelty Detection with Single-Class MPM Gert R. G. Lanckriet EECS, U.C. Berkeley gert@eecs.berkeley.edu Laurent El Ghaoui EECS, U.C. Berkeley elghaoui@eecs.berkeley.edu Michael I. Jordan Computer Science and Statistics, U.C. Berkeley jordan@cs.berkeley.edu Abstract In this paper we consider the problem of novelty detection, presenting an algorithm that aims to find a minimal region in input space containing a fraction α of the probability mass underlying a data set. This algorithm, the "single-class minimax probability machine (MPM)", is built on a distribution-free methodology that minimizes the worst-case probability of a data point falling outside of a convex set, given only the mean and covariance matrix of the distribution and making no further distributional assumptions. We present a robust approach to estimating the mean and covariance matrix within the general two-class MPM setting, and show how this approach specializes to the single-class problem. We provide empirical results comparing the single-class MPM to the single-class SVM and a two-class SVM method. 1 Introduction Novelty detection is an important unsupervised learning problem in which test data are to be judged as having been generated from the same or a different process as that which generated the training data. In essence, we wish to estimate a quantile of the distribution underlying the training data: for a fixed constant α ∈ (0, 1], we attempt to find a (small) set Q such that Pr{y ∈ Q} = α, where, for novelty detection, α is typically chosen near one (Schölkopf and Smola, 2001; Ben-David and Lindenbaum, 1997). This formulation of novelty detection in terms of quantile estimation is to be compared to the (costly) approach of estimating a density based on the training data and thresholding the estimated density.
Although of reduced complexity when compared to density estimation, multivariate quantile estimation is still a challenging problem, necessitating computationally efficient methods for representing and manipulating sets in high dimensions. A significant step forward in this regard was provided by Schölkopf and Smola (2001), who treated novelty detection as a "single-class" classification problem in which data are separated from the origin in feature space. This allowed them to invoke the computationally efficient technology of support vector machines. In the current paper we adopt the "single-class" perspective of Schölkopf and Smola (2001), but make use of a different kernel-based technique for finding discriminant boundaries: the minimax probability machine (MPM) of Lanckriet et al. (2002). To see why the MPM should be particularly appropriate for quantile estimation, consider the following theorem, which lies at the core of the MPM. Given a random vector y with mean ȳ and covariance matrix Σ_y, and given arbitrary constants a ≠ 0 and b such that a^T ȳ ≤ b, we have (for a proof, see Lanckriet et al., 2002): inf_{y ~ (ȳ, Σ_y)} Pr{a^T y ≤ b} ≥ α ⟺ b − a^T ȳ ≥ κ(α) √(a^T Σ_y a), (1) where κ(α) = √(α/(1 − α)) and α ∈ [0, 1). Note that this is a "distribution-free" result: the infimum is taken over all distributions for y having mean ȳ and covariance matrix Σ_y (assumed to be positive definite for simplicity). While Lanckriet et al. (2002) were able to exploit this theorem to design a binary classification algorithm, it is clear that the theorem provides even more direct leverage on the "single-class" problem: it directly bounds the probability of an observation falling outside of a given set. There is one important aspect of the MPM formulation that needs further consideration, however, if we wish to apply the approach to the novelty detection problem. In particular, ȳ and Σ_y are usually unknown in practice and must be estimated from data.
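A small numerical sketch of Eq. (1) (our own illustration; the helper names are hypothetical): given only the two moments, the worst-case probability that a^T y ≤ b follows by inverting κ(α) = √(α/(1 − α)).

```python
import numpy as np

def kappa(alpha):
    """kappa(alpha) = sqrt(alpha / (1 - alpha)) for alpha in [0, 1)."""
    return np.sqrt(alpha / (1.0 - alpha))

def worst_case_prob(a, b, y_mean, Sigma):
    """Largest alpha with inf Pr{a^T y <= b} >= alpha over all distributions
    with the given mean and covariance: invert the identity
    b - a^T y_mean = kappa(alpha) * sqrt(a^T Sigma a) from Eq. (1)."""
    margin = b - a @ y_mean
    if margin <= 0:
        return 0.0                  # the half-space does not cover the mean
    k = margin / np.sqrt(a @ Sigma @ a)
    return k ** 2 / (1.0 + k ** 2)  # alpha = kappa^2 / (1 + kappa^2)
```

The guarantee holds for every distribution sharing those moments, which is exactly what makes it usable for quantile estimation without density modelling.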
In the classification setting, Lanckriet et al. (2002) successfully made use of plug-in estimates of these quantities; in some sense the bias incurred by the use of plug-in estimates in the two classes appears to "cancel" and has diminished overall impact on the discriminant boundary. In the one-class setting, however, the uncertainty due to estimation of ȳ and Σ_y translates directly into movement of the discriminant boundary and cannot be neglected. We begin in Section 2 by revisiting the MPM and showing how to account for uncertainty in the means and covariance matrices within the framework of robust estimation. Section 3 then applies this robust estimation approach to the single-class MPM problem. We present empirical results in Section 4 and present our conclusions in Section 5. 2 Robust Minimax Probability Machine (R-MPM) Let x, y ∈ ℝ^n denote random vectors in a binary classification problem, modelling data from each of two classes, with means and covariance matrices given by x̄, ȳ ∈ ℝ^n and Σ_x, Σ_y ∈ ℝ^{n×n} (both symmetric and positive semidefinite), respectively. We wish to determine a hyperplane H(a, b) = {z | a^T z = b}, where a ∈ ℝ^n \ {0} and b ∈ ℝ, that maximizes the worst-case probability α that future data points are classified correctly with respect to all distributions having these means and covariance matrices: max_{α, a ≠ 0, b} α s.t. inf_{x ~ (x̄, Σ_x)} Pr{a^T x ≥ b} ≥ α, inf_{y ~ (ȳ, Σ_y)} Pr{a^T y ≤ b} ≥ α, (2) where x ~ (x̄, Σ_x) refers to the class of distributions that have mean x̄ and covariance Σ_x but are otherwise arbitrary; likewise for y. The worst-case probability of misclassification is explicitly obtained and given by 1 − α. Solving this optimization problem involves converting the probabilistic constraints in Eq. (2) into deterministic constraints, a step which is achieved via the theorem referred to earlier in Eq. (1).
This eventually leads to the following convex optimization problem, whose solution determines an optimal hyperplane H(a, b) (Lanckriet et al., 2002): min_{a ≠ 0} √(a^T Σ_x a) + √(a^T Σ_y a) s.t. a^T (x̄ − ȳ) = 1, (3) where b is set to the value b_* = a_*^T x̄ − κ_* √(a_*^T Σ_x a_*), with a_* an optimal solution of Eq. (3) and κ_* the reciprocal of its optimal value. The optimal worst-case misclassification probability is obtained via 1 − α_* = 1/(1 + κ_*²). Once an optimal hyperplane is found, classification of a new data point z_new is done by evaluating sign(a_*^T z_new − b_*): if this is +1, z_new is classified as belonging to class x, otherwise z_new is classified as belonging to class y. While in our earlier work we simply computed sample-based estimates of means and covariance matrices and plugged them into the MPM optimization problem in Eq. (3), we now show how to treat this estimation problem within the framework of robust optimization. Assume the mean and covariance matrix of each class are unknown but lie within specified convex sets: (x̄, Σ_x) ∈ X, with X ⊂ ℝ^n × {M ∈ ℝ^{n×n} | M = M^T, M ⪰ 0}, and (ȳ, Σ_y) ∈ Y, with Y ⊂ ℝ^n × {M ∈ ℝ^{n×n} | M = M^T, M ⪰ 0}. We now want the probabilistic guarantees in Eq. (2) to be robust against variations of the mean and covariance matrix within these sets: max_{α, a ≠ 0, b} α s.t. inf_{x ~ (x̄, Σ_x)} Pr{a^T x ≥ b} ≥ α ∀(x̄, Σ_x) ∈ X, inf_{y ~ (ȳ, Σ_y)} Pr{a^T y ≤ b} ≥ α ∀(ȳ, Σ_y) ∈ Y. (4) In other words, we would like to guarantee a worst-case misclassification probability for all distributions which have unknown-but-bounded mean and covariance matrix, but which are otherwise arbitrary. The complexity of this problem obviously depends on the structure of the uncertainty sets X, Y. We now consider a specific choice for X and Y, motivated both statistically and numerically: X = {(x̄, Σ_x) : (x̄ − x̄⁰)^T Σ_x^{−1} (x̄ − x̄⁰) ≤ ν², ∥Σ_x − Σ_x⁰∥_F ≤ ρ}, Y = {(ȳ, Σ_y) : (ȳ − ȳ⁰)^T Σ_y^{−1} (ȳ − ȳ⁰) ≤ ν², ∥Σ_y − Σ_y⁰∥_F ≤ ρ}, (5) with x̄⁰, Σ_x⁰ the "nominal" mean and covariance estimates and with ν, ρ ≥ 0 fixed and, for simplicity, assumed equal for X and Y. Section 4 discusses how their values can be determined.
The matrix norm is the Frobenius norm: ∥A∥_F² = Tr(A^T A). Our model for the uncertainty in the mean assumes that the mean of class y belongs to an ellipsoid (a convex set) centered around ȳ⁰, with shape determined by the (unknown) Σ_y. This is motivated by the standard statistical approach to estimating a region of confidence based on Laplace approximations to a likelihood function. The covariance matrix belongs to a matrix-norm ball (a convex set) centered around Σ_y⁰. This uncertainty model is perhaps less classical from a statistical viewpoint, but it will lead to a regularization term of a classical form. In order to solve Eq. (4), we apply Eq. (1) and notice that b − a^T ȳ ≥ κ(α) √(a^T Σ_y a) ∀(ȳ, Σ_y) ∈ Y ⟺ b − max_{(ȳ,Σ_y)∈Y} a^T ȳ ≥ κ(α) √(max_{(ȳ,Σ_y)∈Y} a^T Σ_y a), where the right-hand side guarantees the constraint for the worst-case estimate of the mean and covariance matrix within the bounded set Y. For given a and ȳ⁰: max_{ȳ : (ȳ−ȳ⁰)^T Σ_y^{−1} (ȳ−ȳ⁰) ≤ ν²} a^T ȳ = a^T ȳ⁰ + ν √(a^T Σ_y a). (6) Indeed, the Lagrangian is L(ȳ, λ) = −a^T ȳ + λ((ȳ − ȳ⁰)^T Σ_y^{−1} (ȳ − ȳ⁰) − ν²) and is to be maximized with respect to λ ≥ 0 and minimized with respect to ȳ. At the optimum, we have ∂L/∂ȳ = 0 and ∂L/∂λ = 0, leading to ȳ = ȳ⁰ + (1/(2λ)) Σ_y a and λ = √(a^T Σ_y a)/(2ν), which eventually leads to Eq. (6). For given a and Σ_y⁰: max_{Σ_y : ∥Σ_y − Σ_y⁰∥_F ≤ ρ} a^T Σ_y a = a^T (Σ_y⁰ + ρ I_n) a, (7) where I_n is the n × n identity matrix. Indeed, without loss of generality, we can let Σ_y be of the form Σ_y = Σ_y⁰ + ρ ΔΣ_y. We then obtain max_{Σ_y : ∥Σ_y − Σ_y⁰∥_F ≤ ρ} a^T Σ_y a = a^T Σ_y⁰ a + ρ max_{ΔΣ_y : ∥ΔΣ_y∥_F ≤ 1} a^T ΔΣ_y a ≤ a^T Σ_y⁰ a + ρ a^T a, (8) using the Cauchy–Schwarz inequality and the compatibility of the Frobenius matrix norm and the Euclidean vector norm: a^T ΔΣ_y a ≤ ∥a∥₂ ∥ΔΣ_y a∥₂ ≤ ∥a∥₂ ∥ΔΣ_y∥_F ∥a∥₂ ≤ ∥a∥₂², because ∥ΔΣ_y∥_F ≤ 1. For ΔΣ_y = I_n, this upper bound is attained and we get Eq. (7). Combining this with Eq. (6) leads to the robust version of Eq. (1): inf_{y ~ (ȳ, Σ_y)} Pr{a^T y ≤ b} ≥ α ∀(ȳ, Σ_y) ∈ Y ⟺ b − a^T ȳ⁰ ≥ (κ(α) + ν) √(a^T (Σ_y⁰ + ρ I_n) a). (9) Applying this result to Eq.
(4) thus shows that the optimal robust minimax probability classifier for X, Y given by Eq. (5) can be obtained by solving problem Eq. (3), with Σ_x = Σ_x⁰ + ρ I_n and Σ_y = Σ_y⁰ + ρ I_n. If κ_*^{−1} is the optimal value of that problem, the corresponding worst-case misclassification probability is 1 − α_* = 1/(1 + max(0, κ_* − ν)²). With only uncertainty in the mean (ρ = 0), the robust hyperplane is the same as the non-robust one; the only change is the increase in the worst-case misclassification probability. Uncertainty in the covariance matrix adds a term ρ I_n to the covariance matrices, which can be interpreted as a regularization term. This affects the hyperplane and increases the worst-case misclassification probability as well. If there is too much uncertainty in the mean (i.e., κ_* < ν), the robust version is not feasible: no hyperplane can be found that separates the two classes in the robust minimax probabilistic sense, and the worst-case misclassification probability is 1 − α_* = 1. This robust approach can be readily generalized to allow nonlinear decision boundaries via the use of Mercer kernels (Lanckriet et al., 2002). 3 Single-class MPM for robust novelty detection We now turn to the quantile estimation problem. Recall that for α ∈ (0, 1], we wish to find a small region Q such that Pr{x ∈ Q} = α. Let us consider data x ~ (x̄, Σ_x) and let us focus (for now) on the linear case where Q is a half-space not containing the origin. We seek a half-space Q(a, b) = {z | a^T z ≥ b}, with a ∈ ℝ^n \ {0} and b ∈ ℝ, not containing 0, such that with probability at least α the data lies in Q, for every distribution having mean x̄ and covariance matrix Σ_x. We assume again that the true x̄, Σ_x are unknown but bounded in a set X as specified in Eq. (5): inf_{x ~ (x̄, Σ_x)} Pr{a^T x ≥ b} ≥ α ∀(x̄, Σ_x) ∈ X.
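The worst-case moments of Eqs. (6) and (7) are simple to evaluate. The sketch below is ours (hypothetical helper name; following the derivation, the nominal Σ⁰ is used inside the ellipsoid term of Eq. (6)):

```python
import numpy as np

def worst_case_moments(a, y_mean0, Sigma0, nu, rho):
    """Worst-case mean value and covariance quadratic form along direction a,
    over the uncertainty set of Eq. (5):
      Eq. (6):  max a^T ybar      = a^T ybar0 + nu * sqrt(a^T Sigma0 a)
      Eq. (7):  max a^T Sigma a   = a^T (Sigma0 + rho * I) a."""
    s = np.sqrt(a @ Sigma0 @ a)
    worst_mean_val = a @ y_mean0 + nu * s
    worst_cov_quad = a @ (Sigma0 + rho * np.eye(len(a))) @ a
    return worst_mean_val, worst_cov_quad
```

Substituting these into Eq. (1) recovers the robust constraint of Eq. (9): the mean uncertainty inflates κ(α) to κ(α) + ν, and the covariance uncertainty adds the ρ I_n regularizer.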
We want the region Q to be tight, so we maximize its Mahalanobis distance (with respect to Σ_x) to the origin in a robust way, i.e., for the worst-case estimate of Σ_x, the matrix that gives us the smallest Mahalanobis distance:

max_{a≠0, b>0}  min_{(x̄,Σ_x)∈𝒳}  b/√(aᵀΣ_x a)   s.t.   inf_{x∼(x̄,Σ_x)} Pr{aᵀx ≥ b} ≥ α   ∀(x̄, Σ_x) ∈ 𝒳.   (10)

Note that Q(a, b) does not contain 0 if and only if b > 0. Also, the optimization problem in Eq. (10) is positively homogeneous in (a, b). Thus, without loss of generality, we can set b = 1 in problem Eq. (10). Furthermore, we can use Eq. (7) and Eq. (9) and get (where the superscript 0 for the estimates has been omitted):

min_a √(aᵀ(Σ_x + ρI_n)a)   s.t.   aᵀx̄ − 1 ≥ (κ(α) + ν)√(aᵀ(Σ_x + ρI_n)a),   (11)

where the condition a ≠ 0 can be omitted, since for a = 0 the constraint never holds. Again, we obtain a (convex) second-order cone programming problem. The worst-case probability of occurrence outside region Q is given by 1 − α. Notice that the particular choice of α ∈ (0, 1] must be feasible, i.e., ∃a : aᵀx̄ − 1 ≥ (κ(α) + ν)√(aᵀ(Σ_x + ρI_n)a). For ρ ≠ 0, Σ_x + ρI_n is certainly positive definite and the half-space is unique. Furthermore, it can be determined explicitly. To see this, we write Eq. (11) as:

min_a ‖(Σ_x + ρI_n)^{1/2} a‖₂   s.t.   aᵀx̄ ≥ 1 + (κ(α) + ν)‖(Σ_x + ρI_n)^{1/2} a‖₂.   (12)

Decomposing a as λ(Σ_x + ρI_n)⁻¹x̄ + z, where the variable z satisfies zᵀx̄ = 0, we easily obtain that at the optimum z = 0. In other words, the optimal a is of the form a = λ(Σ_x + ρI_n)⁻¹x̄, and the problem reduces to the one-dimensional problem:

min_λ |λ| ‖(Σ_x + ρI_n)^{−1/2} x̄‖₂   s.t.   λ x̄ᵀ(Σ_x + ρI_n)⁻¹x̄ ≥ 1 + (κ(α) + ν)‖(Σ_x + ρI_n)^{−1/2} x̄‖₂ |λ|.

The constraint implies that λ ≥ 0, hence the problem reduces to

min_{λ≥0} λ : λ(ζ² − (κ(α) + ν)ζ) ≥ 1,   (13)

with ζ² = x̄ᵀ(Σ_x + ρI_n)⁻¹x̄ > 0 (because Eq. (12) implies x̄ ≠ 0). Because λ ≥ 0, this can only be satisfied if ζ² − (κ(α) + ν)ζ > 0, which is nothing other than the feasibility condition for α: if this is fulfilled, the optimization in Eq.
(13) is feasible and boils down to:

min_{λ≥0} λ   s.t.   λ ≥ 1/(ζ² − (κ(α) + ν)ζ).

It is easy to see that the optimal λ is given by λ_* = 1/(ζ² − (κ(α) + ν)ζ), yielding:

a_* = (Σ_x + ρI_n)⁻¹x̄ / (ζ² − (κ(α) + ν)ζ),   b_* = 1,   with ζ = √(x̄ᵀ(Σ_x + ρI_n)⁻¹x̄).   (14)

Notice that the uncertainty in the covariance matrix Σ_x leads to the typical, well-known regularization for inverting this matrix. If the choice of α is not feasible or if x̄ = 0 (in this case, no α ∈ (0, 1] will be feasible), Eq. (10) has no solution. Future points z for which a_*ᵀz ≤ b_* can then be considered as outliers with respect to the region Q, with worst-case probability of occurrence outside Q given by 1 − α. One can obtain a nonlinear region Q in ℝⁿ for the single-class case by mapping the data into a feature space ℝᶠ, x ↦ φ(x), with mean φ̄ and covariance Σ_φ, and expressing and solving Eq. (10) in the feature space using φ(x), φ̄ and Σ_φ. This is achieved using a kernel function K(z₁, z₂) = φ(z₁)ᵀφ(z₂) satisfying Mercer's condition, as in the classification setting. Notice that maximizing the Mahalanobis distance of Q to the origin in ℝᶠ makes sense for novelty detection. For example, if we consider a Gaussian kernel K(x, y) = e^{−‖x−y‖²/σ}, all mapped data points have unit length and positive dot products, so they all lie in the same orthant, on the unit ball, and are linearly separable from the origin. Our final result is thus the following: if the choice of α is feasible, i.e., ∃γ : γᵀk − 1 ≥ (κ(α) + ν)√(γᵀ(LᵀL + ρK)γ), then an optimal region Q(γ, b) can be determined by solving the (convex) second-order cone programming problem:

min_γ √(γᵀ(LᵀL + ρK)γ)   s.t.   γᵀk − 1 ≥ (κ(α) + ν)√(γᵀ(LᵀL + ρK)γ),   (15)

where κ(α) = √(α/(1 − α)) and b = 1, with γ, k ∈ ℝᴺ, [k]ᵢ = (1/N)Σ_{j=1}^N K(xⱼ, xᵢ), and {xᵢ}_{i=1}^N the N given data points. L is defined as L = (K − 1_N kᵀ)/√N, where 1_m is a column vector of ones of dimension m. K is the Gram matrix, defined as Kᵢⱼ = φ(zᵢ)ᵀφ(zⱼ) = K(zᵢ, zⱼ).
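To make Eq. (14) concrete, here is a small numerical sketch (ours, not the authors' code; the function and variable names and the toy data are illustrative) that builds the linear single-class region from plug-in estimates of x̄ and Σ_x, checks the feasibility condition κ(α) ≤ ζ − ν, and returns (a_*, b_*):

```python
import numpy as np

def single_class_mpm(X, alpha, rho=0.1, nu=0.0):
    """Linear single-class MPM region Q = {z : a^T z >= 1}, closed form of Eq. (14)."""
    n = X.shape[1]
    xbar = X.mean(axis=0)
    S = np.cov(X.T) + rho * np.eye(n)            # Sigma_x + rho * I_n
    kappa = np.sqrt(alpha / (1.0 - alpha))       # kappa(alpha)
    w = np.linalg.solve(S, xbar)                 # (Sigma_x + rho I_n)^{-1} xbar
    zeta = np.sqrt(xbar @ w)
    if kappa > zeta - nu:                        # feasibility: kappa(alpha) <= zeta - nu
        raise ValueError("choice of alpha is not feasible for this data")
    a = w / (zeta**2 - (kappa + nu) * zeta)      # a_*;  b_* = 1
    return a, 1.0

rng = np.random.default_rng(0)
X = rng.normal([5.0, 5.0], 1.0, size=(300, 2))   # toy data well away from the origin
a, b = single_class_mpm(X, alpha=0.9)
inside = float((X @ a >= b).mean())              # fraction of training points inside Q
```

At the optimum the chance constraint is tight, a_*ᵀx̄ − 1 = (κ(α) + ν)√(a_*ᵀ(Σ_x + ρI_n)a_*), which is easy to verify numerically.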
The worst-case probability of a point lying outside the region Q is given by 1 − α. If LᵀL + ρK is positive definite, the optimal half-space is unique and determined by:

γ_* = (LᵀL + ρK)⁻¹k / (ζ² − (κ(α) + ν)ζ),   with ζ = √(kᵀ(LᵀL + ρK)⁻¹k),   (16)

if the choice of α is such that κ(α) ≤ ζ − ν, or α ≤ (ζ − ν)²/(1 + (ζ − ν)²). If the choice of α is not feasible or if k = 0 (in this case, no α ∈ (0, 1] will be feasible), the problem does not have a solution. To solve the single-class problem, we can solve the second-order cone program Eq. (15) or directly use the result Eq. (16): when numerically regularizing LᵀL + ρK with an extra term εI_N, this unique solution can always be determined. Instead of explicitly inverting the matrix, we can solve a linear system iteratively. All of these approaches have a worst-case complexity of O(N³), comparable to the quadratic program for the single-class SVM (Schölkopf and Smola, 2001). Once an optimal decision region is found, future points z for which a_*ᵀφ(z) = Σ_{i=1}^N [γ_*]ᵢ K(xᵢ, z) ≤ b_* (notice that this can be evaluated only in terms of the kernel function) can then be considered as outliers with respect to the region Q, with the worst-case probability of occurrence outside Q given by 1 − α.

4 Experiments

In this section we report the results of experiments comparing the robust single-class MPM to the single-class SVM of Schölkopf and Smola (2001) and to a two-class SVM approach in which an artificial "negative class" is obtained by generating data points uniformly in T = {z ∈ ℝⁿ | min{[x₁]ᵢ, [x₂]ᵢ, …, [x_N]ᵢ} ≤ [z]ᵢ ≤ max{[x₁]ᵢ, [x₂]ᵢ, …, [x_N]ᵢ}}. For the benchmark binary classification data sets we studied, we converted the data sets into two single-class problems by treating each class in a separate experiment. We chose 80% of the data points as training and the remaining 20% of the data points as test, lumping the latter with the data points of the negative class (the class of the binary classification data not used for training).
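As an aside, Eq. (16) is explicit enough to evaluate directly. The sketch below (our illustration; the kernel width, toy data, and names are assumptions, not from the paper) assembles k, L, and γ_* for a Gaussian kernel and scores new points via Σᵢ [γ_*]ᵢ K(xᵢ, z):

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)                       # K(x, y) = exp(-||x - y||^2 / sigma)

def kernel_single_class_mpm(X, alpha, sigma=1.0, rho=0.1, nu=0.0, eps=1e-8):
    N = X.shape[0]
    K = gauss_kernel(X, X, sigma)                    # Gram matrix K_ij = K(x_i, x_j)
    k = K.mean(axis=0)                               # [k]_i = (1/N) sum_j K(x_j, x_i)
    L = (K - np.outer(np.ones(N), k)) / np.sqrt(N)   # L = (K - 1_N k^T) / sqrt(N)
    M = L.T @ L + rho * K + eps * np.eye(N)          # L^T L + rho K, numerically regularized
    kappa = np.sqrt(alpha / (1.0 - alpha))
    w = np.linalg.solve(M, k)
    zeta = np.sqrt(k @ w)
    if kappa > zeta - nu:                            # feasibility condition on alpha
        raise ValueError("choice of alpha is not feasible")
    gamma = w / (zeta**2 - (kappa + nu) * zeta)      # gamma_*, Eq. (16); b_* = 1
    return gamma, lambda Z: gauss_kernel(Z, X, sigma) @ gamma

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(100, 2))
gamma, score = kernel_single_class_mpm(X, alpha=0.2)
is_outlier = bool(score(np.array([[8.0, 8.0]]))[0] <= 1.0)   # a point far from the data
```

A point z is flagged as an outlier when its score Σᵢ [γ_*]ᵢ K(xᵢ, z) falls at or below b_* = 1.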
We report false positive and false negative rates averaged over 30 random partitions in Table 1.¹ We used a Gaussian kernel, K(x, y) = e^{−‖x−y‖²/σ}, of width σ. The kernel parameter σ was tuned using cross-validation over 20 random partitions, as was the hyperparameter ρ. For simplicity, we set the hyperparameter ν = 0 for the robust single-class MPM. Note that this choice has no impact on the MPM solution; according to Eq. (16) its only effect is to alter the estimated false-negative rate. The parameter α was varied throughout a range of values so as to explore the tradeoff between the false positive (FP) rate and the false negative (FN) rate. A small value of α yields a good FP rate but a poor FN rate, and a large α yields a good FN rate but a poor FP rate. For the single-class SVM and the two-class SVM, we varied the analogous parameters, ν (the fraction of support vectors and outliers) and C (the soft margin weight parameter), to cover a similar range of the FP/FN tradeoff. We envision the end user deciding where he or she wishes to operate along the FP/FN tradeoff, and tuning α, ν or C accordingly. Thus we compare the different algorithms by presenting in Table 1 an overview of the full tradeoff curves (essentially the ROC curves). The specific values of α, ν and C are chosen in each row so as to roughly match corresponding points on the ROC curves. We use italic font to indicate the best performing algorithm on a given row, choosing the algorithm with the best FP rate if FN rates are similar and with the best FN rate if FP rates are similar. The performance of the single-class MPM is clearly competitive with that of the other algorithms, providing joint FP/FN values that equal or improve upon the other algorithms in many cases, and spanning a broad range of the FP/FN tradeoff. Note that the two-class SVM can perform well if a low FP rate is desired and a high FN rate is tolerated.
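The FP/FN tradeoff described above can be reproduced on toy data. In this self-contained sketch (ours; the data, the two α values, and the uniform negative box are illustrative assumptions), the linear single-class region of Eq. (14) is recomputed for two values of α; since the resulting half-spaces are nested, a larger α shrinks the FN rate and inflates the FP rate:

```python
import numpy as np

def mpm_region(X, alpha, rho=0.1, nu=0.0):
    """Linear single-class MPM half-space {z : a^T z >= 1} (Eq. (14))."""
    xbar = X.mean(axis=0)
    S = np.cov(X.T) + rho * np.eye(X.shape[1])
    kappa = np.sqrt(alpha / (1.0 - alpha))
    w = np.linalg.solve(S, xbar)
    zeta = np.sqrt(xbar @ w)
    assert kappa + nu < zeta, "alpha not feasible"
    return w / (zeta**2 - (kappa + nu) * zeta)

rng = np.random.default_rng(2)
train = rng.normal([5.0, 5.0], 1.0, size=(300, 2))     # in-class training data
test = rng.normal([5.0, 5.0], 1.0, size=(300, 2))      # held-out in-class data
neg = rng.uniform(0.0, 8.0, size=(500, 2))             # artificial "negative class" box

rates = {}
for alpha in (0.1, 0.9):
    a = mpm_region(train, alpha)
    fp = float((neg @ a >= 1.0).mean())    # out-of-class points accepted as in-class
    fn = float((test @ a < 1.0).mean())    # in-class points rejected as outliers
    rates[alpha] = (fp, fn)
```

Here `rates[0.1]` sits in the low-FP/high-FN corner of the tradeoff and `rates[0.9]` in the opposite corner.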
However, the two-class SVM sometimes fails to provide an extensive range of the FP/FN tradeoff; in particular, with the twonorm dataset, the algorithm is unable to provide solutions with a small FN rate and a large FP rate. Note that the value 1 − α (the worst-case probability of false negatives for the robust single-class MPM) is indeed an upper bound for the average FN rate in all cases except for the sonar dataset. Thus the simplifying assumption ν = 0 appears to be reasonable in all cases except the sonar case. Finally, it is also worth noting that while the MPM algorithm is insensitive to the choice of ν, it is sensitive to the choice of ρ. When we fixed ρ = 0 (allowing no uncertainty in the covariance estimate) we obtained poor performance, in particular obtaining a small FP rate but a very poor FN rate.

5 Conclusions

We have presented a new algorithm for novelty detection, an important machine learning problem with numerous real-world applications. Our "single-class MPM" joins the "single-class SVM" of Schölkopf and Smola (2001) as a computationally efficient, kernel-based method for solving this problem and the more general quantile estimation problem. We view the single-class MPM as particularly appropriate for these problems, given its formulation directly in terms of a worst-case probability

¹The Wisconsin breast cancer dataset contained 16 missing examples which were not used. Data for the twonorm problem were generated as specified by Breiman (1997).

Table 1: Performance for single-class problems; the best performance in each row is indicated in italic; FP = false positives (out-of-class data detected as in-class data); FN = false negatives (in-class data detected as out-of-class data).
Dataset                 | Single-Class MPM          | Single-Class SVM           | Two-Class SVM
                        | α      FP      FN         | ν       FP      FN         | C      FP      FN
Sonar, class +1         | 0.2    24.7%   64.0%      | 0.6     26.9%   65.4%      | 0.1    23.8%   68.6%
                        | 0.8    44.6%   39.6%      | 0.2     47.3%   42.1%      | 0.2    48.3%   42.3%
                        | 0.95   69.3%   17.3%      | 0.0005  75.4%   16.2%      |        75.2%   16.0%
Sonar, class -1         | 0.6    5.4%    51.7%      | 0.4     8.5%    53.7%      | 0.1    9.7%    70.0%
                        | 0.9    10.0%   37.4%      | 0.001   15.7%   41.3%      | 0.2    34.6%   40.6%
                        | 0.95   19.1%   29.7%      | 0.0006  36.1%   28.4%      | 0.35   47.7%   26.0%
                        | 0.99   56.1%   5.7%       | 0.0003  82.6%   6.3%       | 1      67.9%   6.1%
Breast Cancer, class +1 | 0.6    0.0%    8.8%       | 0.14    0.0%    14.6%      | 0.005  0.4%    8.0%
                        | 0.8    1.8%    5.9%       | 0.001   2.4%    6.1%       | 0.1    0.9%    4.3%
                        | 0.2    10.5%   2.7%       | 0.0003  11.5%   3.1%       | 10     12.3%   3.1%
Breast Cancer, class -1 | 0.01   2.4%    26.5%      | 0.4     2.5%    41.4%      | 0.8    0.9%    47.9%
                        | 0.03   2.9%    13.5%      | 0.2     2.8%    25.0%      | 1      11.0%   45%
                        | 0.05   3.0%    8.3%       | 0.1     3.1%    11.3%      | 2      89.2%   38.2%
                        | 0.14   5.9%    1.9%       | 0.0005  9.2%    3.4%       | 100    98.0%   23.5%
Twonorm, class +1       | 0.01   6.3%    43.2%      | 0.4     6.2%    42.8%      | 0.13   6.8%    37.3%
                        | 0.2    13.9%   22.5%      | 0.2     12.7%   22.8%      | 0.17   12.0%   24.2%
                        | 0.4    22.5%   11.9%      | 0.0008  23.3%   9.6%       | 5      25.9%   10.5%
                        | 0.6    36.9%   4.5%       | 0.0003  33.4%   4.5%       |
Twonorm, class -1       | 0.1    5.6%    43.7%      | 0.4     6.0%    44.1%      | 0.35   6.1%    49.8%
                        | 0.4    11.3%   23.1%      | 0.15    11.8%   24.6%      | 0.5    24.5%   23.7%
                        | 0.6    16.9%   12.1%      | 0.0005  35.9%   12.0%      | 10     30.1%   10.0%
                        | 0.8    30.1%   6.9%       | 0.0003  39.3%   6.9%       |
Heart, class +1         | 0.46   13.4%   46.2%      | 0.4     13.5%   47.8%      | 0.05   11.9%   46.4%
                        | 0.52   24.0%   30.9%      | 0.05    24.8%   36.7%      | 0.07   22.1%   30.3%
                        | 0.54   33.5%   22.6%      | 0.0008  38.8%   27.0%      | 0.1    35.8%   22.9%
Heart, class -1         | 0.0001 15.9%   41.3%      | 0.4     20.8%   50.7%      | 0.08   13.9%   43.8%
                        | 0.0006 21.2%   37.2%      | 0.002   26.3%   43.8%      | 0.09   21.0%   37.5%
                        | 0.003  36.3%   27.2%      | 0.0007  43.7%   29.2%      | 0.11   39.2%   31.8%
                        | 0.01   56.9%   15.9%      | 0.0005  58.4%   18.09%     | 0.2    68.6%   16.7%

of falling outside of a given convex set in feature space.
While our simulation experiments illustrate the application of generic classification techniques to the novelty detection problem, via the generation of data from an artificial "negative class" enclosing the data, we view the single-class methods as the more viable general technology. In particular, in high-dimensional problems it is difficult to specify a "negative class" in a way that yields comparable-size training sets while still yielding a good characterization of a discriminant boundary.

Acknowledgements

We acknowledge support from ONR MURI N00014-00-1-0637 and NSF grant IIS-9988642. Sincere thanks to Alex Smola for helpful conversations and suggestions.

References

S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: A paradigm for learning without a teacher. Journal of Computer and System Sciences, 55:171-182, 1997.
L. Breiman. Arcing classifiers. Technical Report 460, Statistics Department, University of California, 1997.
G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555-582, 2002.
B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2001.
2002
Binary Tuning is Optimal for Neural Rate Coding with High Temporal Resolution

Matthias Bethge*, David Rotermund, and Klaus Pawelzik
Institute of Theoretical Physics, University of Bremen, 28334 Bremen
{mbethge,davrot,pawelzik}@physik.uni-bremen.de

Abstract

Here we derive optimal gain functions for minimum mean square reconstruction from neural rate responses subjected to Poisson noise. The shape of these functions strongly depends on the length T of the time window within which spikes are counted in order to estimate the underlying firing rate. A phase transition towards pure binary encoding occurs if the maximum mean spike count becomes smaller than approximately three, provided the minimum firing rate is zero. For a particular function class, we were able to prove the existence of a second-order phase transition analytically. The critical decoding time window length obtained from the analytical derivation is in precise agreement with the numerical results. We conclude that under most circumstances relevant to information processing in the brain, rate coding can be better ascribed to a binary (low-entropy) code than to the other extreme of rich analog coding.

1 Optimal neuronal gain functions for short decoding time windows

The use of action potentials (spikes) as a means of communication is the striking feature of neurons in the central nervous system. Since the discovery by Adrian [1] that action potentials are generated by sensory neurons with a frequency that is substantially determined by the stimulus, the idea of rate coding has become a prevalent paradigm in neuroscience [2]. In particular, today the coding properties of many neurons from various areas in the cortex have been characterized by tuning curves, which describe the average firing rate response as a function of certain stimulus parameters. This way of description is closely related to the idea of analog coding, which constitutes the basis for many neural network models.
Reliable inference from the observed number of spikes about the underlying firing rate of a neuronal response, however, requires a sufficiently long time interval, while integration times of neurons in vivo [3] as well as reaction times of humans or animals when performing classification tasks [4, 5] are known to be rather short. Therefore, it is important to understand how neural rate coding is affected by a limited time window available for decoding. While rate codes are usually characterized by tuning functions relating the intensity of the neuronal response to a particular stimulus parameter, the question of how relevant the idea of analog coding actually is does not depend on the particular entity represented by a neuron. Instead it suffices to determine the shape of the gain function, which displays the mean firing rate as a function of the actual analog signal to be sent to subsequent neurons. Here we seek optimal gain functions that minimize the minimum average squared reconstruction error for a uniform source signal transmitted through a Poisson channel, as a function of the maximum mean number of spikes. In formal terms, the issue is to optimally encode a real random variable x in the number of pulses emitted by a neuron within a certain time window. Thereby, x stands for the intended analog output of the neuron that shall be signaled to subsequent neurons. The latter, however, can only observe a number of spikes k integrated within a time interval of length T. The statistical dependency between x and k is specified by the assumption of Poisson noise,

p(k | μ(x)) = (μ(x)ᵏ / k!) exp{−μ(x)},   (1)

and the choice of the gain function f(x), which together with T determines the mean spike count μ(x) ≡ T f(x). An important additional constraint is the limited output range of the neuronal firing rate, which can be included by the requirement of a bounded gain function (f_min ≤ f(x) ≤ f_max, ∀x).

*http://www.neuro.uni-bremen.de/~mbethge
Since inhibition can reliably prevent a neuron from firing, we will here consider the case f_min = 0 only. Instead of specifying f_max, we impose a bound directly on the mean spike count (i.e., μ(x) ≤ μ̄), because f_max constitutes a meaningful constraint only in conjunction with a fixed time window length T. As objective function we consider the minimum mean squared error (MMSE) with respect to Lebesgue measure for x ∈ [0, 1],

χ²[μ(·)] = E[x²] − E[x̂²] = 1/3 − Σ_{k=0}^∞ (∫₀¹ x p(k|μ(x)) dx)² / ∫₀¹ p(k|μ(x)) dx,   (2)

where x̂(k) ≡ E[x|k] denotes the mean square estimator, which is the conditional expectation (see e.g. [6]).

1.1 Tunings and errors

As derived in [7] on the basis of Fisher information, the optimal gain function for a single neuron in the asymptotic limit T → ∞ has a parabolic shape:

f_asymp(x) ≡ f_max x².   (3)

For any finite μ̄, however, this gain function is not necessarily optimal, and in the limit T → 0 it is straightforward to show that the optimal tuning curve is a step function

f_step(x|ϑ) ≡ f_max Θ(x − ϑ),   (4)

where Θ(z) denotes the Heaviside function, which equals one if z > 0 and zero if z < 0. The optimal threshold ϑ(μ̄) of the step tuning curve depends on μ̄ and can be determined analytically,

ϑ(μ̄) = 1 − (3 − √(8e^{−μ̄} + 1)) / (4(1 − e^{−μ̄})),   (5)

as well as the corresponding MMSE [8]:

χ²[f_step] = (1/12)(1 − 3ϑ²(μ̄) / ([(1 − ϑ(μ̄))(1 − e^{−μ̄})]⁻¹ − 1)).   (6)

[Figure 1]
Figure 1: The upper panel shows a bifurcation plot for ϑ(μ̄) − w and ϑ(μ̄) + w of the optimal gain function in S₁ as a function of μ̄, illustrating the phase transition from binary to continuous encoding. The dotted line separates the regions before and after the phase transition in all three panels. Left of this line (i.e., for μ̄ < μ̄_c) the step function given by Eqs. (4) and (5) is optimal.
The middle panel shows the MMSE of this step function (dashed) and of the optimal gain function in S₂ (solid), which becomes smaller than the former after the phase transition. The relative deviation between the minimal errors of S₁ and S₂ (i.e., (χ²_{S₁} − χ²_{S₂})/χ²_{S₂}) is displayed in the lower panel and has a maximum below 0.035.

The binary shape for small μ̄ and the continuous parabolic shape for large μ̄ imply that there has to be a transition from discrete to analog encoding with increasing μ̄. Unfortunately, it is not possible to determine the optimal gain function within the set of all bounded functions B := {f | f : [0, 1] → [0, f_max]}, and hence one has to choose a certain parameterized function space S ⊂ B in advance that is feasible for the optimization. In [8], we investigated various such function spaces, and for μ̄ < 2.9 we did not find any gain function with an error smaller than the MMSE of the step function. Furthermore, we always observed a phase transition from binary to analog encoding at a critical μ̄_c that depends only slightly on the function space. As one can see in Fig. 1 (upper), μ̄_c is approximately three. In this paper, we consider two function classes S₁, S₂, which both contain the binary gain function as well as the asymptotically optimal parabolic function as special cases. Furthermore, S₁ is a proper subset of S₂. Our interest in S₁ results from the fact that we can analyze the phase transition in this subset analytically, while S₂ is the most general parameterization for which we have determined the optimal encoding numerically. The latter has six free parameters a ≤ b ≤ c ∈ [0, 1], f_mid ∈ (0, f_max), α, β ∈ [0, ∞), and the parameterization of the gain functions is given by

f_{S₂}(x | a, b, c, f_mid, α, β) =
    0                                              for 0 ≤ x < a,
    f_mid ((x − a)/(b − a))^α                      for a ≤ x < b,
    f_mid + (f_max − f_mid)((x − b)/(c − b))^β     for b ≤ x < c,
    f_max                                          for c ≤ x ≤ 1.   (7)

The integrals entering Eq. (2) for the MMSE in the case of the gain function f_{S₂} then read

∫₀¹ x p(k|x) dx = (a²/2) δ_{0,k}
    + (1/k!) [ a(b−a) Γ_{0,f_mid}(k + 1/α) / (α f_mid^{1/α})
             + (b−a)² Γ_{0,f_mid}(k + 2/α) / (α f_mid^{2/α})
             + (b − (c−b) f_mid^{1/β}/Δ_β)(c−b) Γ_{f_mid,f_max}(k + 1/β) / (β Δ_β)
             + (c−b)² Γ_{f_mid,f_max}(k + 2/β) / (β Δ_β²) ]
    + ((1 − c²)/2) f_maxᵏ e^{−f_max} / k!,   (8)

∫₀¹ p(k|x) dx = a δ_{0,k}
    + (1/k!) [ (b−a) Γ_{0,f_mid}(k + 1/α) / (α f_mid^{1/α})
             + (c−b) Γ_{f_mid,f_max}(k + 1/β) / (β Δ_β) ]
    + (1 − c) f_maxᵏ e^{−f_max} / k!,   (9)

where Δ_β := f_max^{1/β} − f_mid^{1/β} and Γ_{u,v}(z) := ∫_u^v s^{z−1} e^{−s} ds denotes the truncated Gamma function. Numerical optimization leads to the minimal MMSE as a function of μ̄, as displayed in Fig. 1 (middle). The parameterization of the gain functions in S₁ is given by

f_{S₁}(x | w, γ) =
    0                                         for 0 ≤ x < ϑ(μ̄) − w,
    f_max ((x − ϑ(μ̄) + w)/(2w))^γ             for ϑ(μ̄) − w ≤ x < ϑ(μ̄) + w,
    f_max                                     for ϑ(μ̄) + w ≤ x ≤ 1,   (10)

with w ∈ [0, 1] and γ ∈ [0, ∞). The integrals entering Eq. (2) for the MMSE in the case of the gain function f_{S₁} read

∫₀¹ x p(k|x) dx = ((ϑ(μ̄) − w)²/2) δ_{0,k}
    + (1/k!) [ 2w(ϑ(μ̄) − w) Γ_{0,f_max}(k + 1/γ) / (γ f_max^{1/γ})
             + 4w² Γ_{0,f_max}(k + 2/γ) / (γ f_max^{2/γ}) ]
    + ((1 − (ϑ(μ̄) + w)²)/2) f_maxᵏ e^{−f_max} / k!,   (11)

∫₀¹ p(k|x) dx = (ϑ(μ̄) − w) δ_{0,k}
    + (1/k!) 2w Γ_{0,f_max}(k + 1/γ) / (γ f_max^{1/γ})
    + (1 − ϑ(μ̄) − w) f_maxᵏ e^{−f_max} / k!.   (12)

The minimal MMSE for these gain functions is only slightly worse than that for S₂. The relative difference between both is plotted in Fig. 1 (lower), showing a maximum deviation of 3.2%. In particular, the relative deviation is extremely small around the phase transition. This comparison suggests that a restriction to S₁, which is a necessary simplification for the following analytical investigation, does not change the qualitative results.

2 A phase transition

The phase transition from binary to analog encoding corresponds to a structural change of the objective function χ²(w, γ). In particular, the optimality of binary encoding for μ̄ < μ̄_c implies that χ²(w, γ) has a minimum at w = 0. The existence of a phase transition implies that with increasing μ̄ this minimum changes into a local maximum at a certain critical point μ̄ = μ̄_c.
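Eqs. (5) and (6) lend themselves to a quick numerical cross-check (ours, not part of the paper): for the step code, the conditional means under the Poisson model give the MMSE in closed form, and a brute-force grid search over the threshold recovers ϑ(μ̄).

```python
import numpy as np

def step_mmse(theta, mu):
    """MMSE of the binary code f_step(x | theta) with mean count mu for x > theta."""
    q = np.exp(-mu)
    p0 = theta + (1 - theta) * q                      # P(k = 0): silent branch + Poisson zeros
    x0 = (theta**2 + (1 - theta**2) * q) / (2 * p0)   # E[x | k = 0]
    x1 = (1 + theta) / 2                              # E[x | k >= 1]: x uniform on (theta, 1)
    ex2 = p0 * x0**2 + (1 - theta) * (1 - q) * x1**2  # E[xhat^2]
    return 1.0 / 3.0 - ex2                            # chi^2 = E[x^2] - E[xhat^2]

def theta_opt(mu):
    """Optimal threshold of the step tuning curve, Eq. (5)."""
    return 1 - (3 - np.sqrt(8 * np.exp(-mu) + 1)) / (4 * (1 - np.exp(-mu)))

mu = 2.0
grid = np.linspace(1e-3, 1 - 1e-3, 100001)
theta_num = grid[np.argmin(step_mmse(grid, mu))]      # numerical minimizer of the MMSE
```

For μ̄ = 2 the grid minimizer agrees with Eq. (5) (ϑ ≈ 0.55), and the resulting error matches the closed-form MMSE of Eq. (6).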
Therefore, the critical point can be determined by a local expansion

χ²(w, γ, μ̄) − χ²(0, γ, μ̄) = Σ_{k=1}^∞ g_k(γ, μ̄) wᵏ/k!   (13)

around w = 0, because the sign of its leading coefficient A_γ(μ̄) (i.e., the coefficient g_k with minimal k that does not vanish identically) determines whether χ²(w, γ, μ̄) has a local minimum or maximum at w = 0. Accordingly, the critical point is given as the solution of A_γ(μ̄) = 0. With quite a bit of effort one can prove that the first derivative of χ²(w, γ, μ̄) vanishes for all μ̄. The second derivative, however, is a decreasing function of μ̄ and hence constitutes the wanted leading coefficient:

A_γ(μ̄) = −1/(4(e^{μ̄} − 1)²) { 8 − 7e^{μ̄} + 16e^{2μ̄} + e^{3μ̄} − √(1 + 8e^{−μ̄}) (2 + e^{μ̄}(−3 + e^{μ̄}(6 + e^{μ̄})))
    + (16e^{μ̄} − 48e^{2μ̄} − 4e^{3μ̄} + √(1 + 8e^{−μ̄})(4e^{μ̄} − 8(4 + e^{μ̄}))) (μ̄^{−1/γ}/γ) Γ_{0,μ̄}(1/γ)
    + (8e^{2μ̄} + 2(5 − 3√(1 + 8e^{−μ̄})) e^{3μ̄}) (μ̄^{−2/γ}/γ²) Γ²_{0,μ̄}(1/γ)
    − 16e^{μ̄}(e^{μ̄} − 1)(√(1 + 8e^{−μ̄}) − 3) (μ̄^{−2/γ}/γ) Γ_{0,μ̄}(2/γ)
    + 2e^{2μ̄}(e^{μ̄} − 1)(√(1 + 8e^{−μ̄}) − 3) (μ̄^{−2/γ}/γ²) ∫₀^{μ̄} e^{−s} s^{1/γ − 1} Γ_{0,μ̄−s}(1/γ) ds }.   (14)

Obviously, it is not possible to write the zeros of A_γ(μ̄) in closed form. The numerical evaluation of the critical point μ̄_c(γ) as a function of γ is displayed in Fig. 2. Note that we have treated γ as a fixed parameter, which means that we determine the critical point of the phase transition in all subsets S₁(γ) of S₁ that correspond to a fixed γ. It is straightforward to show that the critical point μ̄_c with respect to the entire class S₁ is given by the minimum of μ̄_c(γ). We determined this value up to a precision of ±0.0001 to be μ̄_c = 2.9829.

[Figure 2]
Figure 2: The critical maximum mean spike count μ̄_c is shown as a function of γ (numerical evaluation at γ ∈ {0.5, 0.505, 0.51, …, 3.5}). The minimum μ̄_c = 2.98291 ± 10⁻⁷ at γ = 1.9 determines the phase transition in S₁.

3 Conclusion

Our study reveals that optimal encoding with respect to the minimum mean squared error is binary for maximum mean spike counts smaller than approximately three.
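As a brute-force illustration of this conclusion (our sketch; the truncation kmax, grid size, and the choice μ̄ = 1 are arbitrary assumptions), the MMSE of Eq. (2) can be evaluated for any gain profile by quadrature over x and summation over k; below the transition the optimal step code indeed beats the parabolic code of Eq. (3):

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import gammaln

def mmse(mu_of_x, kmax=60, npts=20001):
    """chi^2 of Eq. (2) for a mean-count profile mu(x), x uniform on [0, 1]."""
    x = np.linspace(0.0, 1.0, npts)
    mu = mu_of_x(x).astype(float)
    k = np.arange(kmax + 1)[:, None]
    logp = k * np.log(np.maximum(mu, 1e-300)) - mu - gammaln(k + 1)
    p = np.exp(logp)                              # Poisson p(k | x), rows indexed by k
    num = trapezoid(x * p, x, axis=1) ** 2        # ( int_0^1 x p(k|x) dx )^2
    den = trapezoid(p, x, axis=1)                 #   int_0^1   p(k|x) dx
    keep = den > 1e-12                            # drop numerically empty spike counts
    return 1.0 / 3.0 - (num[keep] / den[keep]).sum()

mu_bar = 1.0                                      # below the critical count of about 3
theta = 1 - (3 - np.sqrt(8 * np.exp(-mu_bar) + 1)) / (4 * (1 - np.exp(-mu_bar)))
step = mmse(lambda x: mu_bar * (x > theta))       # binary code, Eqs. (4)-(5)
parab = mmse(lambda x: mu_bar * x**2)             # parabolic code, Eq. (3)
```

At μ̄ = 1 the step code's error is clearly below the parabolic code's, in line with the phase-transition result.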
Within the function class S₁ we determined a second-order phase transition from binary to continuous encoding analytically. With respect to mutual information, the advantage of binary encoding holds even up to a maximum mean spike count of about 3.5 (results not shown), and the optimal encoding remains discrete also for larger μ̄. In a related work [9], Softky compared the information capacity of the Poisson channel with the information rate of a (noiseless) binary pulse code. The rate of the latter turned out to exceed the capacity of the former by a factor of at least 72, demonstrating a clear superiority of binary coding over analog rate coding. Our rate-distortion analysis of the Poisson channel differs from that comparison in a twofold way: first, we do not change the noise model, and second, the MMSE is often more appropriate to account for the coding efficiency than the channel capacity [10]. In particular, the assumption of a real random variable to be encoded with minimal mean squared error loss appears to introduce a bias for analog coding rather than for binary coding. Nevertheless, assuming a high temporal precision (i.e., small integration times T), our results hint in a similar direction, namely that binary coding seems to be a more reasonable choice even if one supposes that the only means of neuronal communication were the transmission of Poisson distributed spike counts. Methodologically, our analysis is similar to many theoretical studies of population coding if f(x) = μ(x)/T is interpreted not as the neuron's gain function but as a tuning function with respect to a stimulus parameter x. Though conceptually different, some readers may therefore wish to know whether binary coding is still advantageous if many neurons, say N, together encode a single analog value. While the approach chosen in this paper is not feasible in the case of large N, a partial answer can be given: for the efficiency of population coding, redundancy reduction is most important [7, 8, 11].
Smooth tuning curves, which have a dynamic range of about the same size as the signal range, always lead to a large amount of redundancy, so that the MMSE cannot decrease faster than N⁻¹. In contrast, the MMSE of binary tuning functions scales proportionally to N⁻² or even faster. This also holds true for tuning functions that are not perfectly binary but have a dynamic range that is smaller than the signal range divided by N. Independent of μ̄, this implies that a small dynamic range is always advantageous in the case of population coding. In contrast, most experimental studies do not report binary or steep tuning functions, but show smooth tuning curves only. However, the shape of a tuning function always depends on the stimulus set used. Only recently, experimental studies under natural stimulus conditions provided evidence for the idea that neuronal encoding is essentially binary [12]. Particularly striking is this observation for the H1 neuron of the fly [13], for which the functional role is probably better understood than for most other neurons that have been characterized by tuning functions. While the noise level of the Poisson channel studied in this paper is rather large, the H1 neuron can respond very reliably under optimal stimulus conditions [13]. Another example of a low-noise binary code has been found in the auditory cortex [14]. If we drop the restriction to Poisson noise and instead impose a hard constraint on the maximum number of spikes, optimal encoding is always discrete, with μ(x) taking integer values only [15]. This is easy to grasp, because a non-integer μ cannot serve to increase the entropy of the available symbol set (i.e., the candidate spike counts), but only increases the noise entropy instead. In other words, it is the simple fact that spike counts are discrete by nature which already severely limits the possibility of graded rate coding.
Clearly, this is not so obvious in the case of the Poisson channel, if there is no hard constraint imposed on the maximum spike count. A remarkable aspect of the neuronal response of H1 shown in [13] is that it becomes the more binary the less noisy the stimulus conditions are (the noise level is determined by the different light conditions at midday, half an hour before, and half an hour after sunset). This suggests an interesting hypothesis for why choosing a binary code with very high temporal precision might be advantageous even if the signal of interest by itself does not change at that time scale: the sensory input may sometimes be too noisy, so that repeated, independent samples from the signal of interest may sometimes lead to neuronal firing and sometimes not. In other words, a binary code at the short time scale is useful independent of the correlation time of the signal to be encoded, if uncertainties have to be taken into account, because any surplus amount of temporal precision is maximally used for uncertainty representation in a self-adjusting manner. Furthermore, this Monte-Carlo type of uncertainty representation features several computational advantages [16]. Finally, it is a remarkable fact that this property is unique to a binary code, because the representation of uncertainty is necessary for many information processing tasks solved by the brain. Additional support for the potential relevance of a binary neural code comes from intracellular recordings in vivo revealing that the subthreshold membrane potential of many cortical cells switches between up and down states [17] depending on the stimulus. Furthermore, the dynamics of bursting cells plays an important role for neuronal signal transmission [18] and may also be seen as evidence for binary rate coding. In light of these experimental facts, we conclude from our results that the idea of binary tuning constitutes an important hypothesis for neural coding.
Acknowledgments

This work was supported by the Deutsche Forschungsgemeinschaft, SFB 517.

References

[1] E.D. Adrian. The impulses produced by sensory nerve endings: Part I. J. Physiol. (London), 61:49-72, 1926.
[2] D.H. Perkel and T.H. Bullock. Neural coding: a report based on an NRP work session. Neurosci. Research Prog. Bull., 6:220-349, 1968.
[3] W.R. Softky and C. Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci., 13:334-350, 1993.
[4] C. Keysers, D. Xiao, P. Földiák, and D. Perrett. The speed of sight. J. Cog. Neurosci., 13:90-101, 2001.
[5] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520-522, 1996.
[6] E.L. Lehmann and G. Casella. Theory of Point Estimation. Springer, New York, 1999.
[7] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population coding: when Fisher information fails. Neural Comput., 14(10):2317-2351, 2002.
[8] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal neural rate coding leads to bimodal firing rate distributions. Network: Comput. Neural Syst., 2002. In press.
[9] W.R. Softky. Fine analog coding minimizes information transmission. Neural Networks, 9:15-24, 1996.
[10] D.H. Johnson. Point process models of single-neuron discharges. J. Comput. Neurosci., 3:275-299, 1996.
[11] M. Bethge and K. Pawelzik. Population coding with unreliable spikes. Neurocomputing, 44-46:323-328, 2002.
[12] P. Reinagel. How do visual neurons respond in the real world? Curr. Op. Neurobiol., 11:437-442, 2001.
[13] G.D. Lewen, W. Bialek, and R.R. de Ruyter van Steveninck. Neural coding of natural stimuli. Network: Comput. Neural Syst., 12:317-329, 2001.
[14] M.R. DeWeese and A.M. Zador. Binary coding in auditory cortex. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, 2002.
[15] A. Gersho and R.M. Gray. Vector Quantization and Signal Compression. Kluwer, Boston, 1992.
[16] P.O.
Hoyer and A. Hyvarinen. Interpreting neural response variability as monte carlo sampling of the posterior. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems, volume 15, 2002. [17] J. Anderson, 1. Lampl, 1. Reichova, M. Carandini, and D. Ferster. Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neurosci., 3:617-621,2000. [18] J.E. Lisman. Bursts as a unit of neural information processing: making unreliable synapses reliable. TINS, 20:38-43, 1997.
| 2002 | 185 | 2,197 |
Feature Selection in Mixture-Based Clustering Martin H. Law, Anil K. Jain Dept. of Computer Science and Eng., Michigan State University, East Lansing, MI 48824 U.S.A. Mário A. T. Figueiredo Instituto de Telecomunicações, Instituto Superior Técnico, 1049-001 Lisboa, Portugal Abstract There exist many approaches to clustering, but the important issue of feature selection, i.e., selecting the data attributes that are relevant for clustering, is rarely addressed. Feature selection for clustering is difficult due to the absence of class labels. We propose two approaches to feature selection in the context of Gaussian mixture-based clustering. In the first one, instead of making hard selections, we estimate feature saliencies. An expectation-maximization (EM) algorithm is derived for this task. The second approach extends Koller and Sahami's mutual-information-based feature relevance criterion to the unsupervised case. Feature selection is then carried out by a backward search scheme. This scheme can be classified as a "wrapper", since it wraps mixture estimation in an outer layer that performs feature selection. Experimental results on synthetic and real data show that both methods have promising performance. 1 Introduction In partitional clustering, each pattern is represented by a vector of features. However, not all the features are useful in constructing the partitions: some features may be just noise, thus not contributing to (or even degrading) the clustering process. The task of selecting the "best" feature subset, known as feature selection (FS), is therefore an important task. In addition, FS may lead to more economical clustering algorithms (both in storage and computation) and, in many cases, it may contribute to the interpretability of the models. FS is particularly relevant for data sets with large numbers of features; e.g., on the order of thousands as seen in some molecular biology [22] and text clustering applications [21].
In supervised learning, FS has been widely studied, with most methods falling into two classes: filters, which work independently of the subsequent learning algorithm, and wrappers, which use the learning algorithm to evaluate feature subsets [12]. In contrast, FS has received little attention in clustering, mainly because, without class labels, it is unclear how to assess feature relevance. The problem is even more difficult when the number of clusters is unknown, since the number of clusters and the best feature subset are inter-related [6]. Some approaches to FS in clustering have been proposed. Of course, any method not relying on class labels (e.g., [16]) can be used. Dy and Brodley [6] suggested a heuristic to compare feature subsets, using cluster separability. A Bayesian approach for multinomial mixtures was proposed in [21]; another Bayesian approach using a shrinkage prior was considered in [8]. Dash and Liu [4] assess the clustering tendency of each feature by an entropy index. A genetic algorithm was used in [11] for FS in k-means clustering. Talavera [19] addressed FS for symbolic data. Finally, Devaney and Ram [5] use a notion of "category utility" for FS in conceptual clustering, and Modha and Scott-Spangler [17] assign weights to feature groups with a score similar to Fisher discrimination. In this paper, we introduce two new FS approaches for mixture-based clustering [10, 15]. The first is based on a feature saliency measure which is obtained by an EM algorithm; unlike most FS methods, this does not involve any explicit search. The second approach extends the mutual-information-based criterion of [13] to the unsupervised context; it is a wrapper, since FS is wrapped around a basic mixture estimation algorithm. (Email addresses: lawhiu@cse.msu.edu, jain@cse.msu.edu, mtf@lx.it.pt. This work was supported by the U.S. Office of Naval Research, grant no. 00014-01-1-0266, and by the Portuguese Foundation for Science and Technology, project POSI/33143/SRI/2000.)
2 Finite Mixtures and the EM algorithm

Given i.i.d. samples Y = {y_1, ..., y_n}, the log-likelihood of a k-component mixture is

log p(Y|θ) = Σ_{i=1}^{n} log Σ_{j=1}^{k} α_j p(y_i|θ_j),  (1)

where: α_j ≥ 0; Σ_j α_j = 1; each θ_j is the set of parameters of the j-th component; and θ ≡ {θ_1, ..., θ_k, α_1, ..., α_k} is the full parameter set. Each y_i is a d-dimensional feature vector [y_{i,1}, ..., y_{i,d}]^T and all components have the same form (e.g., Gaussian). Neither maximum likelihood (θ̂_ML = arg max_θ log p(Y|θ)) nor maximum a posteriori (θ̂_MAP = arg max_θ {log p(Y|θ) + log p(θ)}) estimates can be found analytically. The usual choice is the EM algorithm, which finds local maxima of these criteria. Let Z = {z_1, ..., z_n} be a set of missing labels, where z_i = [z_{i,1}, ..., z_{i,k}], with z_{i,j} = 1 and z_{i,l} = 0 for l ≠ j, meaning that y_i is a sample of p(·|θ_j). The complete log-likelihood is

log p(Y, Z|θ) = Σ_{i=1}^{n} Σ_{j=1}^{k} z_{i,j} log [α_j p(y_i|θ_j)].  (2)

EM produces a sequence of estimates {θ̂(t), t = 0, 1, 2, ...} using two alternating steps:

E-step: Computes W ≡ E[Z|Y, θ̂(t)], and plugs it into log p(Y, Z|θ), yielding the Q-function Q(θ; θ̂(t)) ≡ log p(Y, W|θ). Since the elements of Z are binary, we have

w_{i,j} ≡ E[z_{i,j}|Y, θ̂(t)] = Pr[z_{i,j} = 1|y_i, θ̂(t)] ∝ α̂_j(t) p(y_i|θ̂_j(t)),  (3)

followed by normalization so that Σ_j w_{i,j} = 1. Notice that α_j is the a priori probability that z_{i,j} = 1 (i.e., y_i belongs to cluster j) while w_{i,j} is the corresponding a posteriori probability, after observing y_i.

M-step: Updates the parameter estimates, θ̂(t+1) = arg max_θ {Q(θ; θ̂(t)) + log p(θ)}, in the case of MAP estimation, or without log p(θ) in the ML case.

3 A Mixture Model with Feature Saliency

In our first approach to FS, we assume conditionally independent features, given the component label (which in the Gaussian case corresponds to diagonal covariance matrices),

p(y|θ) = Σ_{j=1}^{k} α_j p(y|θ_j) = Σ_{j=1}^{k} α_j Π_{l=1}^{d} p(y_l|θ_{j,l}),  (4)

where p(·|θ_{j,l}) is the pdf of the l-th feature in the j-th component; in general, this could have any form, although we only consider Gaussian densities. In the sequel, we will use the indices i, j and l to run through data points, mixture components, and features, respectively. Assume now that some features are irrelevant, in the following sense: if feature l is irrelevant, then p(y_l|θ_{j,l}) = q(y_l|λ_l), for j = 1, ..., k, where q(y_l|λ_l) is the common (i.e., independent of j) density of feature l. Let Φ = (φ_1, ..., φ_d) be a set of binary parameters, such that φ_l = 1 if feature l is relevant and φ_l = 0 otherwise; then,

p(y|θ, Φ) = Σ_{j=1}^{k} α_j Π_{l=1}^{d} [p(y_l|θ_{j,l})]^{φ_l} [q(y_l|λ_l)]^{1−φ_l}.  (5)

Our approach consists of: (i) treating the φ_l's as missing variables rather than as parameters; (ii) estimating ρ_l = Pr[φ_l = 1] from the data; ρ_l is the probability that the l-th feature is useful, which we call its saliency. The resulting mixture model (see proof in [14]) is

p(y|θ) = Σ_{j=1}^{k} α_j Π_{l=1}^{d} [ρ_l p(y_l|θ_{j,l}) + (1 − ρ_l) q(y_l|λ_l)].  (6)

The form of q(·|λ_l) reflects our prior knowledge about the distribution of the non-salient features. In principle, it can be any 1-D pdf (e.g., Gaussian or student-t); here we only consider q(·|λ_l) to be a Gaussian. Equation (6) has a generative interpretation. As in a standard finite mixture, we first select the component label j by sampling from a multinomial distribution with parameters α_1, ..., α_k. Then, for each feature l = 1, ..., d, we flip a biased coin whose probability of getting a head is ρ_l; if we get a head, we use the mixture component p(·|θ_{j,l}) to generate the l-th feature; otherwise, the common component q(·|λ_l) is used. Given a set of observations Y = {y_1, ..., y_n}, with y_i = [y_{i,1}, ..., y_{i,d}]^T, the parameters {α_j}, {θ_{j,l}}, {λ_l}, {ρ_l} can be estimated by the maximum likelihood criterion,

θ̂ = arg max_θ Σ_{i=1}^{n} log Σ_{j=1}^{k} α_j Π_{l=1}^{d} [ρ_l p(y_{i,l}|θ_{j,l}) + (1 − ρ_l) q(y_{i,l}|λ_l)].  (7)

In the absence of a closed-form solution, an EM algorithm can be derived by treating both the z_i's and the φ_l's as missing data (see [14] for details).

3.1 Model Selection

Standard EM for mixtures exhibits some weaknesses which also affect the EM algorithm just mentioned: it requires knowledge of k, and a good initialization is essential for reaching a good local optimum. To overcome these difficulties, we adopt the approach in [9], which is based on the MML criterion [23, 24]. The MML criterion for the proposed model (see details in [14]) consists of minimizing, with respect to θ, the following cost function

−log p(Y|θ) + ((k + d)/2) log n + (c/2) Σ_{l=1}^{d} Σ_{j=1}^{k} log(n α_j ρ_l) + (c'/2) Σ_{l=1}^{d} log(n (1 − ρ_l)),  (8)

where c and c' are the number of parameters in θ_{j,l} and λ_l, respectively. If q(·|λ_l) and p(·|θ_{j,l}) are univariate Gaussians (arbitrary mean and variance), c = c' = 2. From a parameter estimation viewpoint, this is equivalent to a MAP estimate with conjugate (improper) Dirichlet-type priors on the α_j's and ρ_l's (see details in [14]); thus, the EM algorithm undergoes a minor modification in the M-step, which still has a closed form. The terms in equation (8), in addition to the log-likelihood −log p(Y|θ), have simple interpretations. The term ((k + d)/2) log n is a standard MDL-type parameter code-length corresponding to the k α_j values and the d ρ_l values. For the l-th feature in the j-th component, the "effective" number of data points for estimating θ_{j,l} is n α_j ρ_l. Since there are c parameters in each θ_{j,l}, the corresponding code-length is (c/2) log(n α_j ρ_l). Similarly, for the l-th feature in the common component, the number of effective data points for estimation is n (1 − ρ_l). Thus, there is a term (c'/2) log(n (1 − ρ_l)) in (8) for each feature. One key property of the EM algorithm for minimizing equation (8) is its pruning behavior, forcing some of the α_j to go to zero and some of the ρ_l to go to zero or one. Worries that the message length in (8) may become invalid at these boundary values can be circumvented by the arguments in [9]. When ρ_l goes to zero, the l-th feature is no longer salient and ρ_l and θ_{1,l}, ..., θ_{k,l} are removed. When ρ_l goes to 1, ρ_l and λ_l are dropped. Finally, since the model selection algorithm determines the number of components, it can be initialized with a large value of k, thus alleviating the need for a good initialization [9]. Because of this, as in [9], a component-wise version of EM [2] is adopted (see [14]).

3.2 Experiments and Results

The first data set considered consists of 800 points from a mixture of 4 equiprobable Gaussians with distinct mean vectors and identity covariance matrices. Eight "noisy" features (sampled from a N(0, 1) density) were appended to this data, yielding a set of 800 10-D patterns. The proposed algorithm was run 10 times, each initialized with a large number of components; the common component is initialized to cover all data, and the feature saliencies are initialized at 0.5. In all the 10 runs, the 4 components were always identified. The saliencies of all the ten features, together with their standard deviations (error bars), are shown in Fig. 1. We conclude that, in this case, the algorithm successfully locates the clusters and correctly assigns the feature saliencies. See [14] for more details on this experiment.

Figure 1: Feature saliency for 10-D 4-component Gaussian mixture. Only the first two features are relevant. The error bars show one standard deviation.

Figure 2: Feature saliency for the Trunk data. The smaller the feature number, the more important is the feature.

In the next experiment, we consider Trunk's data [20], which has two 20-dimensional Gaussian classes with means μ and −μ, where μ = (1, 1/√2, ..., 1/√20), and identity covariances. Data is obtained by sampling 5000 points from each of these two Gaussians. Note that these features have a descending order of relevance. As above, the initial number of components is set to 30. In all the 10 runs performed, two components were always detected. The values of the feature saliencies are shown in Fig. 2. We see the general trend that as the feature number increases, the saliency decreases, following the true characteristics of the data. Feature saliency values were also computed for the "wine" data set (available at the UCI repository at www.ics.uci.edu/˜mlearn/MLRepository.html), consisting of 178 13-dimensional points in three classes. After standardizing all features to zero mean and unit variance, we applied the LNKnet supervised feature selection algorithm (available at www.ll.mit.edu/IST/lnknet/). The nine features selected by LNKnet are 7, 13, 1, 5, 10, 2, 12, 6, 9. Our feature saliency algorithm (with no class labels) yielded the values in Table 1.

Table 1: Feature saliency of wine data
Feature:  1    2    3    4    5    6    7    8    9    10   11   12   13
Saliency: 0.94 0.77 0.10 0.59 0.14 0.99 1.00 0.66 0.94 0.85 0.88 1.00 0.83

Ranking the features in descending order of saliency, we get the ordering: 7, 12, 6, 1, 9, 11, 10, 13, 2, 8, 4, 5, 3. The top 5 features (7, 12, 6, 1, 9) are all in the subset selected by LNKnet. If we skip the sixth feature (11), the following three features (10, 13, 2) were also selected by LNKnet. Thus we can see that for this data set, our algorithm, though totally unsupervised, performs comparably with a supervised feature selection algorithm.

4 A Feature Selection Wrapper

Our second approach is more traditional in the sense that it selects a feature subset, instead of estimating feature saliency. The number of mixture components is assumed known a priori, though no restriction on the covariance of the Gaussian components is imposed.
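The generative interpretation of equation (6) is easy to simulate. The sketch below (illustrative parameter values; `sample_saliency_mixture` is our own helper, not code from the paper) draws the component label from a multinomial and then flips the per-feature saliency coin to choose between the component density and the common density:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_saliency_mixture(n, alphas, comp_means, comp_stds,
                            common_mean, common_std, rho):
    """Sample n points from the feature-saliency mixture of equation (6):
    pick component j ~ alphas, then for each feature l flip a coin with
    P(head) = rho[l]; heads -> component density, tails -> common density."""
    k, d = comp_means.shape
    labels = rng.choice(k, size=n, p=alphas)
    relevant = rng.random((n, d)) < rho              # the phi coin flips
    comp_draws = rng.normal(comp_means[labels], comp_stds[labels])
    common_draws = rng.normal(common_mean, common_std, size=(n, d))
    return np.where(relevant, comp_draws, common_draws), labels

alphas = np.array([0.5, 0.5])
comp_means = np.array([[-3.0, 0.0], [3.0, 0.0]])
comp_stds = np.ones((2, 2))
rho = np.array([1.0, 0.0])      # feature 1 fully salient, feature 2 not
X, labels = sample_saliency_mixture(5000, alphas, comp_means, comp_stds,
                                    0.0, 1.0, rho)
# Feature 1 separates the clusters; feature 2 is pure common-density noise.
print(X[labels == 0, 0].mean(), X[labels == 1, 0].mean(), X[:, 1].mean())
```

With rho fixed at (1, 0) this reproduces the ideal case the saliency EM should discover: feature 1 carries all the cluster structure, feature 2 carries none.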
4.1 Irrelevant Features and Conditional Independence

Assume that the class labels, c, and the full feature vector, y, follow some joint probability function p(c, y). In supervised learning [13], a feature subset y_N is considered irrelevant if it is conditionally independent of the label c, given the remaining features y_U, that is, if p(y_N|c, y_U) = p(y_N|y_U), where y is split into two subsets: "useful" features y_U and "non-useful" features y_N (here, N ⊆ {1, ..., d} is the index set of the non-useful features). It is easy to show that this implies

p(c|y) = p(c|y_U, y_N) = p(c|y_U).  (9)

To generalize this notion to unsupervised learning, we propose to let the expectations w_{i,j} (a byproduct of the EM algorithm) play the role of the missing class labels. Recall that the w_{i,j} (see (3)) are posterior class probabilities, Prob[class j | y_i]. Consider the posterior probabilities based on all the features, and only on the useful features, respectively

w_{i,j} ∝ α̂_j p(y_i|θ̂_j),   u_{i,j} ∝ α̂_j p(y_{i,U}|θ̂_{j,U}),  (10)

where y_{i,U} is the subset of relevant features of sample y_i (of course, the w_{i,j} and u_{i,j} have to be normalized such that Σ_j w_{i,j} = 1 and Σ_j u_{i,j} = 1). If y_N is a completely irrelevant feature subset, then u_{i,j} equals w_{i,j} exactly, because of the conditional independence in (9), applied to (3). In practice, such features rarely exist, though they do exhibit different degrees of irrelevance. So we follow the suggestion in [13], and find the subset N that gives u_i as close to w_i as possible. As both w_{i,j} and u_{i,j} are probabilities, a natural criterion for assessing their closeness is the expected value of the Kullback-Leibler divergence (KLD, [3]). This criterion is computed as a sample mean,

D_N = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{k} w_{i,j} log(w_{i,j}/u_{i,j}),  (11)

in our case. A low value of D_N indicates that the features in y_N are "almost" conditionally independent of the expected class labels, given the features in y_U. In practice, we start by obtaining reasonable initial estimates of the w_{i,j} by running EM using all the features, and set N = ∅. At each stage, we find the feature l such that D_{N∪{l}} is smallest and add it to N. EM is then run again, using the features not in N, to update the posterior probabilities w_{i,j}. The process is then repeated until only one feature remains, in what can be considered as a backward search algorithm that yields a sorting of the features by decreasing order of irrelevance.

4.2 The assignment entropy

Given a method to sort the features in the order of relevance, we now require a method to measure how good each subset is. Unlike in supervised learning, we can not resort to classification accuracy. We adopt the criterion that a clustering is good if the clusters are "crisp", i.e., if, for every i, w_{i,j} ≈ 1 for some j. A natural way to formalize this is to consider the mean entropy of the w_{i,j}; that is, the clustering is considered to be good if

H = −(1/n) Σ_{i=1}^{n} Σ_{j=1}^{k} w_{i,j} log w_{i,j}

is small. In the sequel, we call H "the entropy of the assignment". An important characteristic of the entropy is that it cannot increase when more features are used (because, for any random variables A, B, and C, H(A|B, C) ≤ H(A|B), a fundamental inequality of information theory [3]; note that H is a conditional entropy of the class assignment given the observed features). Moreover, H exhibits a diminishing returns behavior (decreasing abruptly as the most relevant features are included, but changing little when less relevant features are used). Our empirical results show that H indeed has a strong relationship with the quality of the clusters. Of course, during the backward search, one can also consider picking the next feature whose removal least increases H, rather than the one yielding the smallest KLD; both options are explored in the experiments. Finally, we mention that other minimum-entropy-type criteria have been recently used for clustering [7], [18], but not for feature selection.

4.3 Experiments

We have conducted experiments on data sets commonly used for supervised learning tasks. Since we are doing unsupervised learning, the class labels are, of course, withheld and only used for evaluation. The two heuristics for selecting the next feature to be removed (based on minimum KLD and minimum entropy) are considered in different runs. To assess clustering quality, we assign each data point to the Gaussian component that most likely generated it and then compare this labelling with the ground-truth. Table 2 summarizes the characteristics of the data sets for which results are reported here (all available from the UCI repository); we have also performed tests on other data sets achieving similar results. The experimental results shown in Fig. 3 reveal that the general trend of the error rate agrees well with H. The error rates either have a minimum close to the "knee" of the H curve, or the curve becomes flat.
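The two criteria can be made concrete with a hedged toy sketch (made-up data and known component parameters, not the authors' implementation): removing an irrelevant feature leaves the posteriors w_{i,j} almost unchanged (small KLD as in equation (11)), while removing the informative feature collapses them to uniform (large KLD, maximal assignment entropy).

```python
import numpy as np

rng = np.random.default_rng(2)

def posteriors(X, means, alphas):
    """w_{i,j} proportional to alpha_j * prod_l N(x_{i,l}; mean_{j,l}, 1)."""
    logp = -0.5 * ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    logw = np.log(alphas) + logp
    logw -= logw.max(axis=1, keepdims=True)          # numerical stability
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)

def kld_criterion(w, u):
    """Mean KL divergence between two soft assignments, as in eq. (11)."""
    return np.mean(np.sum(w * (np.log(w + 1e-12) - np.log(u + 1e-12)), axis=1))

def assignment_entropy(w):
    """Mean entropy of the soft assignments w_{i,j}."""
    return -np.mean(np.sum(w * np.log(w + 1e-12), axis=1))

# Toy data: clusters differ only in feature 0; feature 1 is irrelevant noise.
n = 2000
labels = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2))
X[:, 0] += np.where(labels == 0, -3.0, 3.0)

means = np.array([[-3.0, 0.0], [3.0, 0.0]])   # assumed known for the sketch
alphas = np.array([0.5, 0.5])

w_all = posteriors(X, means, alphas)                       # all features
u_drop_noise = posteriors(X[:, :1], means[:, :1], alphas)  # drop feature 1
u_drop_info = posteriors(X[:, 1:], means[:, 1:], alphas)   # drop feature 0

print(kld_criterion(w_all, u_drop_noise), kld_criterion(w_all, u_drop_info))
print(assignment_entropy(w_all), assignment_entropy(u_drop_info))
```

Dropping the noise feature gives a KLD near zero and leaves the assignment crisp; dropping the informative feature gives a KLD near log 2 and an assignment entropy at its maximum, which is exactly the behavior the backward search exploits.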
The two heuristics for selecting the feature to be removed perform comparably. For the cover type data set, the KLD heuristic yields lower error rates than the one based on H, while the contrary happens for the image segmentation and WBC datasets.

5 Concluding Remarks and Future Work

The two approaches for unsupervised feature selection herein proposed have different advantages and drawbacks. The first approach avoids explicit feature search and does not require a pre-specified number of clusters; however, it assumes that the features are conditionally independent, given the components. The second approach places no restriction on the covariances, but it does assume knowledge of the number of components. We believe that both approaches can be useful in different scenarios, depending on which set of assumptions fits the given data better. Several issues require further work: weakly relevant features (in the sense of [12]) are not removed by the first algorithm, while the second approach relies on a good initial clustering. Overcoming these problems will make the methods more generally applicable. We also need to investigate the scalability of the proposed algorithms; ideas such as those in [1] can be exploited.

Table 2: Some details of the data sets (WBC stands for Wisconsin breast cancer).
Name:            cover type   image segmentation   WBC   wine
No. points used: 2000         1000                 569   178
No. of features: 10           18                   30    13
No. of classes:  4            7                    2     3

Figure 3: (a) and (b): cover type; (c) and (d): image segmentation; (e) and (f): WBC; (g) and (h): wine. Feature removal by minimum KLD (left column) and minimum H (right column). Solid lines: error rates; dotted lines: H. Error bars correspond to one standard deviation over 10 runs.

References
[1] P. Bradley, U. Fayyad, and C. Reina. Clustering very large databases using EM mixture models. In Proc. 15th Intern. Conf. on Pattern Recognition, pp. 76-80, 2000.
[2] G. Celeux, S. Chrétien, F. Forbes, and A. Mkhadri. A component-wise EM algorithm for mixtures. Journal of Computational and Graphical Statistics, 10:699-712, 2001.
[3] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[4] M. Dash and H. Liu. Feature selection for clustering. In Proc. of Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 110-121, 2000.
[5] M. Devaney and A. Ram. Efficient feature selection in conceptual clustering. In Proc. ICML'1997, pp. 92-97, 1997.
[6] J. Dy and C. Brodley. Feature subset selection and order identification for unsupervised learning. In Proc. ICML'2000, pp. 247-254, 2000.
[7] E. Gokcay and J. Principe. Information theoretic clustering. IEEE Trans. on PAMI, 24(2):158-171, 2002.
[8] P. Gustafson, P. Carbonetto, N. Thompson, and N. de Freitas. Bayesian feature weighting for unsupervised learning, with application to object recognition. In Proc. of the 9th Intern. Workshop on Artificial Intelligence and Statistics, 2003.
[9] M. Figueiredo and A. Jain. Unsupervised learning of finite mixture models. IEEE Trans. on PAMI, 24(3):381-396, 2002.
[10] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[11] Y. Kim, W. Street, and F. Menczer. Feature selection in unsupervised learning via evolutionary search. In Proc. ACM SIGKDD, pp. 365-369, 2000.
[12] R. Kohavi and G. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[13] D. Koller and M. Sahami. Toward optimal feature selection. In Proc. ICML'1996, pp. 284-292, 1996.
[14] M. Law, M. Figueiredo, and A. Jain. Feature saliency in unsupervised learning. Tech. Rep., Dept. Computer Science and Eng., Michigan State Univ., 2002. Available at http://www.cse.msu.edu/˜lawhiu/papers/TR02.ps.gz.
[15] G. McLachlan and K. Basford. Mixture Models: Inference and Application to Clustering. Marcel Dekker, New York, 1988.
[16] P. Mitra and C. A. Murthy. Unsupervised feature selection using feature similarity. IEEE Trans. on PAMI, 24(3):301-312, 2002.
[17] D. Modha and W. Scott-Spangler. Feature weighting in k-means clustering. Machine Learning, 2002. To appear.
[18] S. Roberts, C. Holmes, and D. Denison. Minimum-entropy data partitioning using RJ-MCMC. IEEE Trans. on PAMI, 23(8):909-914, 2001.
[19] L. Talavera. Dependency-based feature selection for clustering symbolic data. Intelligent Data Analysis, 4:19-28, 2000.
[20] G. Trunk. A problem of dimensionality: A simple example. IEEE Trans. on PAMI, 1(3):306-307, 1979.
[21] S. Vaithyanathan and B. Dom. Generalized model selection for unsupervised learning in high dimensions. In S. Solla, T. Leen, and K. Muller, eds., Proc. of NIPS'12. MIT Press, 2000.
[22] E. Xing, M. Jordan, and R. Karp. Feature selection for high-dimensional genomic microarray data. In Proc. ICML'2001, pp. 601-608, 2001.
[23] C. Wallace and P. Freeman. Estimation and inference via compact coding. Journal of the Royal Statistical Society (B), 49(3):241-252, 1987.
[24] C.S. Wallace and D.L. Dowe. MML clustering of multi-state, Poisson, von Mises circular and Gaussian distributions. Statistics and Computing, 10:73-83, 2000.
| 2002 | 186 | 2,198 |
Minimax Differential Dynamic Programming: An Application to Robust Biped Walking Jun Morimoto Human Information Science Labs, Department 3, ATR International, Keihanna Science City, Kyoto, JAPAN, 619-0288 xmorimo@atr.co.jp Christopher G. Atkeson ∗ The Robotics Institute and HCII, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, USA, 15213 cga@cs.cmu.edu Abstract We developed a robust control policy design method in high-dimensional state space by using differential dynamic programming with a minimax criterion. As an example, we applied our method to a simulated five link biped robot. The results show lower joint torques from the optimal control policy compared to a hand-tuned PD servo controller. Results also show that the simulated biped robot can successfully walk with unknown disturbances that cause controllers generated by standard differential dynamic programming and the hand-tuned PD servo to fail. Learning to compensate for modeling error and previously unknown disturbances in conjunction with robust control design is also demonstrated. 1 Introduction Reinforcement learning [8] is widely studied because of its promise to automatically generate controllers for difficult tasks from attempts to do the task. However, reinforcement learning requires a great deal of training data and computational resources, and sometimes fails to learn high dimensional tasks. To improve reinforcement learning, we propose using differential dynamic programming (DDP), a second order local trajectory optimization method, to generate locally optimal plans and local models of the value function [2, 4]. Dynamic programming requires task models to learn tasks. However, when we apply dynamic programming to a real environment, handling inevitable modeling errors is crucial. In this study, we develop minimax differential dynamic programming, which provides robust nonlinear controller designs based on the idea of H∞ control [9, 5] or risk-sensitive control [6, 1].
We apply the proposed method to a simulated five link biped robot (Fig. 1). Our strategy is to use minimax DDP to find both a low torque biped walk and a policy or control law to handle deviations from the optimized trajectory. We show that both standard DDP and minimax DDP can find a local policy for a lower torque biped walk than a hand-tuned PD servo controller. We show that minimax DDP can cope with larger modeling error than standard DDP or the hand-tuned PD controller. Thus, the robust controller allows us to collect useful training data. In addition, we can use learning to correct modeling errors and model previously unknown disturbances, and design a new, more optimal robust controller using additional iterations of minimax DDP. (∗ also affiliated with Human Information Science Laboratories, Department 3, ATR International)

2 Minimax DDP

2.1 Differential dynamic programming (DDP)

A value function is defined as the sum of the accumulated future penalty r(x_i, u_i, i) from the current state and the terminal penalty Φ(x_N),

V(x_i, i) = Φ(x_N) + Σ_{j=i}^{N−1} r(x_j, u_j, j),  (1)

where x_i is the input state, u_i is the control output at the i-th time step, and N is the number of time steps. Differential dynamic programming maintains a second order local model of a Q function (Q(i), Q_x(i), Q_u(i), Q_xx(i), Q_xu(i), Q_uu(i)), where Q(i) = r(x_i, u_i, i) + V(x_{i+1}, i+1), and the subscripts indicate partial derivatives. Then, we can derive the new control output u_i^new = u_i + δu_i from arg min_{δu_i} Q(x_i + δx_i, u_i + δu_i, i). Finally, by using the new control output u_i^new, a second order local model of the value function (V(i), V_x(i), V_xx(i)) can be derived [2, 4].

2.2 Finding a local policy

DDP finds a locally optimal trajectory x_i^opt and the corresponding control trajectory u_i^opt. When we apply our control algorithm to a real environment, we usually need a feedback controller to cope with unknown disturbances or modeling errors.
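For intuition, the backward recursion is easy to verify on a one-dimensional linear-quadratic problem, where the second order Q model is exact and DDP reduces to the discrete-time Riccati recursion. This is a toy sketch with arbitrary coefficients, not the biped model:

```python
# Scalar linear-quadratic problem: dynamics x_{i+1} = a x_i + b u_i,
# running penalty q x^2 + r_u u^2, terminal penalty q x^2 (toy values).
a, b, q, r_u, N = 1.2, 0.5, 1.0, 0.1, 50

# Backward pass: for an LQ problem the second order Q model is exact,
# so the DDP recursion reduces to the discrete-time Riccati recursion.
Vxx = 2.0 * q                     # curvature of the terminal value function
gains = []
for _ in range(N):
    Qxx = 2.0 * q + a * Vxx * a   # second derivatives of the Q model
    Quu = 2.0 * r_u + b * Vxx * b
    Qux = b * Vxx * a
    K = -Qux / Quu                # feedback gain: delta_u = K * delta_x
    gains.append(K)
    Vxx = Qxx - Qux * Qux / Quu   # value curvature update
gains.reverse()                   # gains[i] now applies at time step i

# Forward pass from x0 = 1: the feedback law drives the state to zero.
x = 1.0
for K in gains:
    x = a * x + b * (K * x)
print(gains[0], x)
```

The long-horizon gains converge to the steady-state LQR gain, and the closed-loop factor a + b K has magnitude below one, so the state decays geometrically.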
Fortunately, DDP provides us with a local policy along the optimized trajectory:

u^opt(x_i, i) = u_i^opt + K_i (x_i − x_i^opt),  (2)

where K_i is a time-dependent gain matrix given by taking the derivative of the optimal policy with respect to the state [2, 4].

2.3 Minimax DDP

Minimax DDP can be derived as an extension of standard DDP [2, 4]. The difference is that the proposed method has an additional disturbance variable w to explicitly represent the existence of disturbances. This representation of the disturbance provides the robustness for optimized trajectories and policies [5]. Then, we expand the Q function Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i) to second order in terms of δu, δw and δx about the nominal solution:

Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i) = Q(i) + Q_x(i) δx_i + Q_u(i) δu_i + Q_w(i) δw_i
  + (1/2) [δx_i^T δu_i^T δw_i^T] [ Q_xx(i) Q_xu(i) Q_xw(i) ; Q_ux(i) Q_uu(i) Q_uw(i) ; Q_wx(i) Q_wu(i) Q_ww(i) ] [δx_i ; δu_i ; δw_i].  (3)

The second order local model of the Q function can be propagated backward in time using:

Q_x(i) = V_x(i+1) F_x + r_x(i)  (4)
Q_u(i) = V_x(i+1) F_u + r_u(i)  (5)
Q_w(i) = V_x(i+1) F_w + r_w(i)  (6)
Q_xx(i) = F_x^T V_xx(i+1) F_x + V_x(i+1) F_xx + r_xx(i)  (7)
Q_xu(i) = F_x^T V_xx(i+1) F_u + V_x(i+1) F_xu + r_xu(i)  (8)
Q_xw(i) = F_x^T V_xx(i+1) F_w + V_x(i+1) F_xw + r_xw(i)  (9)
Q_uu(i) = F_u^T V_xx(i+1) F_u + V_x(i+1) F_uu + r_uu(i)  (10)
Q_ww(i) = F_w^T V_xx(i+1) F_w + V_x(i+1) F_ww + r_ww(i)  (11)
Q_uw(i) = F_u^T V_xx(i+1) F_w + V_x(i+1) F_uw + r_uw(i),  (12)

where x_{i+1} = F(x_i, u_i, w_i) is a model of the task dynamics. Here, δu_i and δw_i must be chosen to minimize and maximize the second order expansion of the Q function Q(x_i + δx_i, u_i + δu_i, w_i + δw_i, i) in (3), respectively, i.e.,

δu_i = −Q_uu^{−1}(i) [Q_ux(i) δx_i + Q_uw(i) δw_i + Q_u(i)]
δw_i = −Q_ww^{−1}(i) [Q_wx(i) δx_i + Q_wu(i) δu_i + Q_w(i)].  (13)

By solving (13), we can derive both δu_i and δw_i.
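The coupled stationarity conditions in (13) can be solved jointly as one linear system in (δu_i, δw_i). A minimal numerical sketch (random illustrative coefficients with Q_uu positive definite and Q_ww negative definite, as the min-max structure requires; not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical second-order Q-model coefficients for one time step
# (small sizes chosen arbitrarily for the sketch).
nu, nw, nx = 2, 2, 3
Quu = np.array([[3.0, 0.5], [0.5, 2.0]])    # positive definite (min over u)
Qww = -np.array([[4.0, 0.2], [0.2, 3.0]])   # negative definite (max over w)
Quw = 0.1 * rng.normal(size=(nu, nw))
Qux = rng.normal(size=(nu, nx))
Qwx = rng.normal(size=(nw, nx))
Qu = rng.normal(size=nu)
Qw = rng.normal(size=nw)
dx = rng.normal(size=nx)

# Stacking the two conditions of (13) gives the symmetric saddle system
# [Quu Quw; Qwu Qww] [du; dw] = -[Qux dx + Qu; Qwx dx + Qw].
A = np.block([[Quu, Quw], [Quw.T, Qww]])
rhs = -np.concatenate([Qux @ dx + Qu, Qwx @ dx + Qw])
sol = np.linalg.solve(A, rhs)
du, dw = sol[:nu], sol[nu:]

# Check: du and dw satisfy each fixed-point equation of (13) separately.
du_check = -np.linalg.solve(Quu, Qux @ dx + Quw @ dw + Qu)
dw_check = -np.linalg.solve(Qww, Qwx @ dx + Quw.T @ du + Qw)
print(np.allclose(du, du_check), np.allclose(dw, dw_check))
```

The block solve and the pair of fixed-point equations agree, confirming that the simultaneous saddle point is what (13) defines.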
After updating the control output u_i and the disturbance w_i with the derived δu_i and δw_i, the second order local model of the value function is given as

V(i) = V(i+1) − Q_u(i) Q_uu^{−1}(i) Q_u(i)^T − Q_w(i) Q_ww^{−1}(i) Q_w(i)^T
V_x(i) = Q_x(i) − Q_u(i) Q_uu^{−1}(i) Q_ux(i) − Q_w(i) Q_ww^{−1}(i) Q_wx(i)
V_xx(i) = Q_xx(i) − Q_xu(i) Q_uu^{−1}(i) Q_ux(i) − Q_xw(i) Q_ww^{−1}(i) Q_wx(i).  (14)

3 Experiment

3.1 Biped robot model

In this paper, we use a simulated five link biped robot (Fig. 1: Left) to explore our approach. Kinematic and dynamic parameters of the simulated robot are chosen to match those of a biped robot we are currently developing (Fig. 1: Right) and which we will use to further explore our approach. The height and total weight of the robot are about 0.4 [m] and 2.0 [kg], respectively. Table 1 shows the parameters of the robot model.

Figure 1: Left: Five link robot model, Right: Real robot

Table 1: Physical parameters of the robot model
                           link1   link2   link3   link4   link5
mass [kg]                  0.05    0.43    1.0     0.43    0.05
length [m]                 0.2     0.2     0.01    0.2     0.2
inertia [kg·m^2 × 10^−4]   1.75    4.29    4.33    4.29    1.75

We can represent the forward dynamics of the biped robot as

x_{i+1} = f(x_i) + b(x_i) u_i,  (15)

where x = {θ_1, ..., θ_5, θ̇_1, ..., θ̇_5} denotes the input state vector, and u = {τ_1, ..., τ_4} denotes the control command (each torque τ_j is applied to joint j (Fig. 1: Left)). In the minimax optimization case, we explicitly represent the existence of the disturbance as

x_{i+1} = f(x_i) + b(x_i) u_i + b_w(x_i) w_i,  (16)

where w = {w_0, w_1, w_2, w_3, w_4} denotes the disturbance (w_0 is applied to the ankle, and w_j (j = 1, ..., 4) is applied to joint j (Fig. 1: Left)).

3.2 Optimization criterion and method

We use the following objective function, which is designed to reward energy efficiency and enforce periodicity of the trajectory:

J = Φ(x_0, x_N) + Σ_{i=0}^{N−1} r(x_i, u_i, i),  (17)

which is applied for half the walking cycle, from one heel strike to the next heel strike.
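The control-affine form of (15)-(16) can be illustrated on a much simpler system. The sketch below (a single-link toy, not the five link model; all parameters arbitrary) steps dynamics of the form x_{i+1} = f(x_i) + b(x_i) u_i + b_w(x_i) w_i under a stabilizing feedback law:

```python
import numpy as np

# Toy single-link stand-in for the control-affine form of (15)-(16):
# state x = (theta, theta_dot), explicit Euler with step dt.
dt, g, length = 0.01, 9.81, 0.2

def f(x):
    """Drift term: gravity pulls the link away from upright."""
    th, thd = x
    return np.array([th + dt * thd, thd + dt * (g / length) * np.sin(th)])

def b(x):
    """Control (and, in this sketch, disturbance) channel: torque enters
    the velocity equation only."""
    return np.array([0.0, dt])

def step(x, u, w=0.0):
    return f(x) + b(x) * u + b(x) * w

# Gravity-cancelling PD feedback; with w = 0 the closed loop is a damped
# oscillator, so the link settles toward upright.
x = np.array([0.1, 0.0])
for _ in range(100):
    u = -(g / length) * np.sin(x[0]) - 5.0 * x[0] - 1.0 * x[1]
    x = step(x, u)
print(x)
```

A nonzero w injected through the same channel would act exactly like the joint disturbances of (16), which is what the minimax term of the criterion is designed to tolerate.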
This criterion sums the squared deviations from a nominal trajectory, the squared control magnitudes, and the squared deviations from a desired velocity of the center of mass:

r(xi, ui, i) = (xi − xd_i)^T Q(xi − xd_i) + ui^T R ui + (v(xi) − vd)^T S(v(xi) − vd),   (18)

where xi is the state vector at the i-th time step, xd_i is the nominal state vector at the i-th time step (taken from a trajectory generated by a hand-designed walking controller), v(xi) denotes the velocity of the center of mass at the i-th time step, and vd denotes the desired velocity of the center of mass. The term (xi − xd_i)^T Q(xi − xd_i) encourages the robot to follow the nominal trajectory, the term ui^T R ui discourages using large control outputs, and the term (v(xi) − vd)^T S(v(xi) − vd) encourages the robot to achieve the desired velocity. In addition, penalties on the initial (x0) and final (xN) states are applied:

Φ(x0, xN) = F(x0) + ΦN(x0, xN).   (19)

The term F(x0) penalizes an initial state where the foot is not on the ground:

F(x0) = Fh(x0)^T P0 Fh(x0),   (20)

where Fh(x0) denotes the height of the swing foot at the initial state x0. The term ΦN(x0, xN) is used to generate periodic trajectories:

ΦN(x0, xN) = (xN − H(x0))^T PN (xN − H(x0)),   (21)

where xN denotes the terminal state, x0 denotes the initial state, and the term (xN − H(x0))^T PN (xN − H(x0)) is a measure of terminal control accuracy. The function H() represents the coordinate change caused by the exchange of the support leg and the swing leg, and the velocity change caused by the swing foot touching the ground (Appendix A). We implement minimax DDP by adding a minimax term to the criterion, using the modified objective function

Jminimax = J − Σ_{i=0}^{N−1} wi^T G wi,   (22)

where wi denotes the disturbance vector at the i-th time step, and the term wi^T G wi rewards coping with large disturbances. This explicit representation of the disturbance w provides robustness for the controller [5].
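As a concrete reading of (17)–(22), the sketch below evaluates the running cost and the minimax term for arbitrary trajectories. The dimensions and trajectories are toy stand-ins; the weight values echo the diagonal weights reported in the experiments, and the function names are ours:

```python
import numpy as np

def running_cost(x, u, x_nom, v, v_d, Q, R, S):
    """One term r(x_i, u_i, i) of (18): tracking + control effort + velocity."""
    dx = x - x_nom
    dv = v - v_d
    return dx @ Q @ dx + u @ R @ u + dv @ S @ dv

def minimax_objective(xs, us, ws, x_noms, vs, v_d, Q, R, S, G, terminal):
    """J_minimax of (22): summed running cost plus the terminal penalty,
    minus the disturbance reward w_i^T G w_i at every step."""
    J = terminal
    for x, u, w, x_nom, v in zip(xs, us, ws, x_noms, vs):
        J += running_cost(x, u, x_nom, v, v_d, Q, R, S) - w @ G @ w
    return J

# Tiny example: 2-dim state, 1-dim control/disturbance, 3 time steps.
rng = np.random.default_rng(0)
Q, R, S, G = 0.25 * np.eye(2), 3.0 * np.eye(1), 0.3 * np.eye(1), 5.0 * np.eye(1)
xs = rng.normal(size=(3, 2)); us = rng.normal(size=(3, 1))
x_noms = np.zeros((3, 2)); vs = rng.normal(size=(3, 1)); v_d = np.array([0.4])
J0 = minimax_objective(xs, us, np.zeros((3, 1)), x_noms, vs, v_d, Q, R, S, G, 0.0)
Jw = minimax_objective(xs, us, np.ones((3, 1)), x_noms, vs, v_d, Q, R, S, G, 0.0)
# Nonzero disturbances reduce J_minimax: the -w^T G w term rewards the
# controller for coping with disturbances the adversary injects.
assert Jw < J0
```

The sign of the G term is what turns the trajectory optimization into a min-max game: the controller minimizes J_minimax while the disturbance maximizes it.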
4 Results

We compare the optimized controller with a hand-tuned PD servo controller, which is also the source of the initial and nominal trajectories in the optimization process. We set the parameters for the optimization process as Q = 0.25I10, R = 3.0I4, S = 0.3I1, and desired velocity vd = 0.4 [m/s] in equation (18), P0 = 1000000.0I1 in equation (20), and PN = diag{10000.0, 10000.0, 10000.0, 10000.0, 10000.0, 10.0, 10.0, 10.0, 5.0, 5.0} in equation (21), where IN denotes the N-dimensional identity matrix. For minimax DDP, we set the parameter for the disturbance reward in equation (22) as G = diag{5.0, 20.0, 20.0, 20.0, 20.0} (a G with smaller elements generates more conservative but robust trajectories). Each parameter is set to acquire the best results in terms of both robustness and energy efficiency. When we apply the controllers acquired by standard DDP and minimax DDP to the biped walk, we adopt the local policy introduced in section 2.2.

Results in Table 2 show that the controllers generated by standard DDP and minimax DDP almost halved the cost of the trajectory, as compared to that of the original hand-tuned PD servo controller. However, because minimax DDP is more conservative in taking advantage of the plant dynamics, it has a slightly higher control cost than standard DDP. Note that we define the control cost as (1/N) Σ_{i=0}^{N−1} ||ui||², where ui is the control output (torque) vector at the i-th time step, and N denotes the total number of time steps in a one-step trajectory.

Table 2: One step control cost (average over 100 steps)

                                  PD servo   standard DDP   minimax DDP
control cost [(N·m)² × 10−2]      7.50       3.54           3.86

To test robustness, we assume that there is unknown viscous friction at each joint:

τ_j^dist = −μj θ̇j (j = 1, ..., 4),   (23)

where μj denotes the viscous friction coefficient at joint j. We used two levels of disturbances in the simulation, with the higher level being 3 times larger than the base level (Table 3).
Table 3: Parameters of the disturbance

         μ2, μ3 (hip joints)   μ1, μ4 (knee joints)
base     0.01                  0.05
large    0.03                  0.15

All methods could handle the base level disturbances. Both the standard and the minimax DDP generated much lower control cost than the hand-tuned PD servo controller (Table 4). However, only the minimax DDP control design could cope with the higher level of disturbances. Figure 2 shows trajectories for the three different methods. Both the simulated robot with the standard DDP and the one with the hand-tuned PD servo controller fell down before achieving 100 steps. The bottom of Figure 2 shows part of a successful biped walking trajectory of the robot with the minimax DDP. Figure 3 shows ankle joint trajectories for the three different methods. Only the minimax DDP successfully kept the ankle joint θ1 around 90 degrees for more than 20 seconds. Table 5 shows the number of steps before the robot fell down. We terminated a trial when the robot achieved 1000 steps.

Table 4: One step control cost with the base setting (averaged over 100 steps)

                                  PD servo   standard DDP   minimax DDP
control cost [(N·m)² × 10−2]      8.97       5.23           5.87

Figure 2: Biped walk trajectories with the three different methods (top: hand-tuned PD servo; middle: standard DDP; bottom: minimax DDP)

5 Learning the unmodeled dynamics

In section 4, we verified that minimax DDP could generate robust biped trajectories and policies. The minimax DDP coped with larger disturbances than the standard DDP and the hand-tuned PD servo controller. However, if there are modeling errors, using a robust controller which does not learn is not particularly energy efficient. Fortunately, with minimax DDP, we can collect sufficient data to improve our dynamics model. Here, we propose using Receptive Field Weighted Regression (RFWR) [7] to learn the error dynamics of the biped robot. In this section we present results on learning a simulated modeling error (the disturbances discussed in section 4). We are currently applying this approach to an actual robot.
We can represent the full dynamics as the sum of the known dynamics F(xi, ui) and the error dynamics ∆F(xi, ui, i):

xi+1 = F(xi, ui) + ∆F(xi, ui, i).   (24)

We estimate the error dynamics ∆F by using RFWR:

∆F̂(xi, ui, i) = Σ_{k=1}^{Nb} α_k^i φk(xi, ui, i) / Σ_{k=1}^{Nb} α_k^i,   (25)
φk(xi, ui, i) = βk^T x̃_k^i,   (26)
α_k^i = exp(−(1/2)(i − ck)^T Dk (i − ck)),   (27)

where Nb denotes the number of basis functions, ck denotes the center of the k-th basis function, Dk denotes the distance metric of the k-th basis function, βk denotes the parameter vector of the k-th basis function used to approximate the error dynamics, and x̃_k^i = (xi, ui, 1, i − ck) denotes the augmented state vector for the k-th basis function. We align 20 basis functions (Nb = 20) at even intervals along the biped trajectories. The learning strategy uses the following sequence: 1) Design the initial controller using minimax DDP applied to the nominal model. 2) Apply that controller. 3) Learn the actual dynamics using RFWR. 4) Redesign the biped controller using minimax DDP with the learned model.

Figure 3: Ankle joint trajectories with the three different methods (top: PD servo; middle: standard DDP; bottom: minimax DDP)

Table 5: Number of steps with the large disturbances

                   PD servo   standard DDP   minimax DDP
Number of steps    49         24             > 1000

We compare the efficiency of the controller with the learned model to the controller without the learned model. Results in Table 6 show that the controller after learning the error dynamics used lower torque to produce stable biped walking trajectories.
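The normalized-weighted prediction in (25)–(27) can be sketched as follows. This is a toy 1-D illustration with hand-picked centers and parameters, not the learned values from the robot experiments:

```python
import numpy as np

def rfwr_predict(i, x, u, centers, D, betas):
    """Prediction of the error dynamics, following (25)-(27):
    a Gaussian-weighted average of local linear models in time."""
    # Receptive field activations alpha_k^i, eq. (27) (scalar time index i).
    alpha = np.exp(-0.5 * D * (i - centers) ** 2)
    # Local linear predictions phi_k = beta_k^T [x, u, 1, i - c_k], eq. (26).
    phi = np.array([
        beta @ np.concatenate([x, u, [1.0], [i - c]])
        for beta, c in zip(betas, centers)
    ])
    # Normalized blend, eq. (25).
    return alpha @ phi / alpha.sum()

# Toy setup: 1-dim state and control, 5 receptive fields along the trajectory.
centers = np.linspace(0, 100, 5)          # c_k
D = 0.01                                  # shared distance metric
betas = [np.array([0.1, -0.2, 0.05 * k, 0.0]) for k in range(5)]  # beta_k
x, u = np.array([0.3]), np.array([-0.1])
pred = rfwr_predict(50.0, x, u, centers, D, betas)
# Near the middle of the trajectory, the middle receptive field dominates,
# so the prediction approaches that local model's output.
mid = betas[2] @ np.concatenate([x, u, [1.0], [50.0 - centers[2]]])
assert abs(pred - mid) < 0.05
```

Because the receptive fields are local in the time index, each βk only has to fit the error dynamics in its own segment of the walking cycle, which is what makes the incremental update in RFWR cheap.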
Table 6: One step control cost with the large disturbances (averaged over 100 steps)

                                  without learned model   with learned model
control cost [(N·m)² × 10−2]      17.1                    11.3

6 Discussion

In this study, we developed an optimization method to generate biped walking trajectories by using differential dynamic programming (DDP). We showed that 1) DDP and minimax DDP can be applied to high dimensional problems, 2) minimax DDP can design more robust controllers, and 3) learning can be used to reduce modeling error and unknown disturbances in the context of minimax DDP control design. Both standard DDP and minimax DDP generated low torque biped trajectories. We showed that the minimax DDP control design was more robust than the controllers designed by standard DDP and the hand-tuned PD servo. Given a robust controller, we could collect sufficient data to learn the error dynamics using RFWR [7] without the robot falling down all the time. We also showed that after learning the error dynamics, the biped robot could find a lower torque trajectory. DDP provides a feedback controller, which is important in coping with unknown disturbances and modeling errors. However, as shown in equation (2), the feedback controller is indexed by time; developing a time-independent feedback controller is a future goal.

Appendix A: Ground contact model

The function H() in equation (21) includes the mapping (velocity change) caused by ground contact. To derive the first derivative of the value function Vx(xN) and the second derivative Vxx(xN), where xN denotes the terminal state, the function H() should be analytical.
Then, we used an analytical ground contact model [3]:

θ̇+ − θ̇− = M⁻¹(θ)D(θ)f∆t,   (28)

where θ denotes the joint angles of the robot, θ̇− denotes the angular velocities before ground contact, θ̇+ denotes the angular velocities after ground contact, M denotes the inertia matrix, D denotes the Jacobian matrix which converts the ground contact force f to the torque at each joint, and ∆t denotes the time step of the simulation.

References

[1] S. P. Coraluppi and S. I. Marcus. Risk-Sensitive and Minmax Control of Discrete-Time Finite-State Markov Decision Processes. Automatica, 35:301–309, 1999.
[2] P. Dyer and S. R. McReynolds. The Computation and Theory of Optimal Control. Academic Press, New York, NY, 1970.
[3] Y. Hurmuzlu and D. B. Marghitu. Rigid body collisions of planar kinematic chains with multiple contact points. International Journal of Robotics Research, 13(1):82–92, 1994.
[4] D. H. Jacobson and D. Q. Mayne. Differential Dynamic Programming. Elsevier, New York, NY, 1970.
[5] J. Morimoto and K. Doya. Robust Reinforcement Learning. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 1061–1067. MIT Press, Cambridge, MA, 2001.
[6] R. Neuneier and O. Mihatsch. Risk Sensitive Reinforcement Learning. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 1031–1037. MIT Press, Cambridge, MA, 1998.
[7] S. Schaal and C. G. Atkeson. Constructive incremental learning from only local information. Neural Computation, 10(8):2047–2084, 1998.
[8] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[9] K. Zhou, J. C. Doyle, and K. Glover. Robust Optimal Control. Prentice Hall, New Jersey, 1996.
Theory-Based Causal Inference

Joshua B. Tenenbaum & Thomas L. Griffiths
Department of Brain and Cognitive Sciences
MIT, Cambridge, MA 02139
{jbt, gruffydd}@mit.edu

Abstract

People routinely make sophisticated causal inferences unconsciously, effortlessly, and from very little data – often from just one or a few observations. We argue that these inferences can be explained as Bayesian computations over a hypothesis space of causal graphical models, shaped by strong top-down prior knowledge in the form of intuitive theories. We present two case studies of our approach, including quantitative models of human causal judgments and brief comparisons with traditional bottom-up models of inference.

1 Introduction

People are remarkably good at inferring the causal structure of a system from observations of its behavior. Like any inductive task, causal inference is an ill-posed problem: the data we see typically underdetermine the true causal structure. This problem is worse than the usual statistician's dilemma that "correlation does not imply causation". Many cases of everyday causal inference follow from just one or a few observations, where there isn't even enough data to reliably infer correlations! This fact notwithstanding, most conventional accounts of causal inference attempt to generate hypotheses in a bottom-up fashion based on empirical correlations. These include associationist models [12], as well as more recent rational models that embody an explicit concept of causation [1,3], and most algorithms for learning causal Bayes nets [10,14,7]. Here we argue for an alternative top-down approach, within the causal Bayes net framework. In contrast to standard bottom-up approaches to structure learning [10,14,7], which aim to optimize or integrate over all possible causal models (structures and parameters), we propose that people consider only a relatively constrained set of hypotheses determined by their prior knowledge of how the world works.
The allowed causal hypotheses not only form a small subset of all possible causal graphs, but also instantiate specific causal mechanisms with constrained conditional probability tables, rather than much more general conditional dependence and independence relations. The prior knowledge that generates this hypothesis space of possible causal models can be thought of as an intuitive theory, analogous to the scientific theories of classical mechanics or electrodynamics that generate constrained spaces of possible causal models in their domains. Following the suggestions of recent work in cognitive development (reviewed in [4]), we take the existence of strong intuitive theories to be the foundation for human causal inference. However, our view contrasts with some recent suggestions [4,11] that an intuitive theory may be represented as a causal Bayes net model. Rather, we consider a theory to be the underlying principles that generate the range of causal network models potentially applicable in a given domain – the abstractions that allow a learner to construct and reason with appropriate causal network hypotheses about novel systems in the presence of minimal perceptual input. Given the hypothesis space generated by an intuitive theory, causal inference then follows the standard Bayesian paradigm: weighing each hypothesis according to its posterior probability and averaging their predictions about the system according to those weights. The combination of Bayesian causal inference with strong top-down knowledge is quite powerful, allowing us to explain people's very rapid inferences about model complexity in both static and temporally extended domains. Here we present two case studies of our approach, including quantitative models of human causal judgments and brief comparisons with more bottom-up accounts.

2 Inferring hidden causal powers

We begin with a paradigm introduced by Gopnik and Sobel for studying causal inference in children [5].
Subjects are shown a number of blocks, along with a machine – the "blicket detector". The blicket detector "activates" – lights up and makes noise – whenever a "blicket" is placed on it. Some of the blocks are "blickets", others are not, but their outward appearance is no guide. Subjects observe a series of trials, on each of which one or more blocks are placed on the detector and the detector activates or not. They are then asked which blocks have the hidden causal power to activate the machine. Gopnik and Sobel have demonstrated various conditions under which children successfully infer the causal status of blocks from just one or a few observations. Of particular interest is their "backwards blocking" condition [13]: on trial 1 (the "1-2" trial), children observe two blocks (1 and 2) placed on the detector and the detector activates. Most children now say that both block 1 and block 2 are blickets. On trial 2 (the "1 alone" trial), block 1 is placed on the detector alone and the detector activates. Now all children say that block 1 is a blicket, and most say that block 2 is not a blicket. Intuitively, this is a kind of "explaining away": seeing that block 1 is sufficient to activate the detector alone explains away the previously observed association of block 2 with detector activation. Gopnik et al. [6] suggest that children's causal reasoning here may be thought of in terms of learning the structure of a causal Bayes net. Figure 1a shows a Bayes net, h10, that is consistent with children's judgments after trial 2. Variables X1 and X2 represent whether blocks 1 and 2 are on the detector; E represents whether the detector activates; the existence of an edge X1 → E but no edge X2 → E represents the hypothesis that block 1 but not block 2 is a blicket – that block 1 but not block 2 has the power to turn on the detector. We encode the two observations as vectors d = (x1, x2, e), where x1 = 1 if block 1 is on the detector (else x1 = 0), likewise for x2 and block 2, and e = 1 if the detector is active (else e = 0). Given only the data d(1) = (1, 1, 1) and d(2) = (1, 0, 1), standard Bayes net learning algorithms have no way to converge on subjects' choice h10. The data are not sufficient to compute the conditional independence relations required by constraint-based methods [9,13],1 nor to strongly influence the Bayesian structural score using arbitrary conditional probability tables [7]. Standard psychological models of causal strength judgment [12,3], equivalent to maximum-likelihood parameter estimates for the family of Bayes nets in Figure 1a [15], either predict no explaining away here or make no prediction due to insufficient data.

1 Gopnik et al. [6] argue that constraint-based learning could be applied here, if we supplement the observed data with large numbers of fictional observations. However, this account does not explain why subjects make the inferences that they do from the very limited data actually observed, nor why they are justified in doing so. Nor does it generalize to the three experiments we present here.

Alternatively, reasoning on this task could be explained in terms of a simple logical deduction. We require as a premise the activation law: a blicket detector activates if and only if one or more blickets are placed on it. Based on the activation law and the data d, we can deduce that block 1 is a blicket but the status of block 2 remains undetermined. If we further assume a form of Occam's razor, positing the minimal number of hidden causal powers, then we can infer that block 2 is not a blicket, as most children do. Other cases studied by Gopnik et al. can be explained similarly.
However, this deductive model cannot explain many plausible but nondemonstrative causal inferences that people make, or people's degrees of confidence in their judgments, or their ability to infer probabilistic causal relationships from noisy data [3,12,15]. It also leaves mysterious the origin and form of Occam's razor. In sum, neither deductive logic nor standard Bayes net learning provides a satisfying account of people's rapid causal inferences. We now show how a Bayesian structural inference based on strong top-down knowledge can explain the blicket detector judgments, as well as several probabilistic variants that clearly exceed the capacity of deductive accounts. Most generally, the top-down knowledge takes the form of a causal theory with at least two components: an ontology of object, attribute and event types, and a set of causal principles relating these elements. Here we treat theories only informally; we are currently developing a formal treatment using the tools of probabilistic relational logic (e.g., [9]). In the basic blicket detector domain, we have two kinds of objects, blocks and machines; two relevant attributes, being a blicket and being a blicket detector; and two kinds of events, a block being placed on a machine and a machine activating. The causal principle relating these events and attributes is just the activation law introduced above. Instead of serving as a premise for deductive inference, the causal law now generates a hypothesis space of causal Bayes nets for statistical inference. This space is quite restricted: with two objects and one detector, there are only 4 consistent hypotheses, h00, h01, h10, h11, where the indices specify whether blocks 1 and 2 are blickets (Figure 1a). The conditional probabilities for each hypothesis are also determined by the theory. Based on the activation law, P(e = 1 | x1, x2; h) = 1 if x1 = 1 and block 1 is a blicket under h, or x2 = 1 and block 2 is a blicket under h; otherwise it equals 0. Causal inference then follows by Bayesian updating of probabilities over the hypotheses h in light of the observed data d. We assume independent observations so that the total likelihood factors into separate terms for individual trials. For all hypotheses, the individual-trial likelihoods also factor into P(e | x1, x2; h) P(x1) P(x2), and we can ignore the last two terms assuming that block positions are independent of the causal structure. The remaining term P(e | x1, x2; h) is 1 for any hypothesis consistent with the data and 0 otherwise, because of the deterministic activation law. The posterior for any data set d is then simply the restriction and renormalization of the prior P(h) to the set of hypotheses consistent with d.2 Backwards blocking proceeds as follows. After the "1-2" trial (d(1) = (1, 1, 1)), at least one block must be a blicket: the consistent hypotheses are h01, h10, and h11. After the "1 alone" trial (d(2) = (1, 0, 1)), only h10 and h11 remain consistent. The prior over causal structures can be written as P(hij) = q^(i+j)(1 − q)^(2−i−j), assuming that each block has some independent probability q of being a blicket. The nonzero posterior probabilities are then given as follows (all others are zero): after the "1-2" trial, P(h01 | d(1)) = P(h10 | d(1)) = (1 − q)/(2 − q) and P(h11 | d(1)) = q/(2 − q); after both trials, P(h10 | d) = 1 − q and P(h11 | d) = q. Finally, the probability that block i is a blicket, P(Xi → E | d), may be computed by averaging the predictions of all hypotheses weighted by their posterior probabilities: P(Xi → E | d) = Σh P(Xi → E | h) P(h | d).

2 More generally, we could allow for some noise in the detector, by letting the likelihood P(e | x1, x2; h) be probabilistic rather than deterministic. For simplicity we consider only the noiseless case here; a low level of noise would give similar results.

In comparing with human judgments in the backwards blocking paradigm, the relevant probabilities are P(Xi → E), the baseline judgments before either block is placed on the detector; P(Xi → E | d(1)), judgments after the "1-2" trial; and P(Xi → E | d(1), d(2)), judgments after the "1 alone" trial. These probabilities depend only on the prior probability q of blickets. Setting q < 1/2 qualitatively matches children's backwards blocking behavior: after the "1-2" trial, both blocks are more likely than not to be blickets (P(Xi → E | d(1)) = 1/(2 − q) > 1/2); then, after the "1 alone" trial, block 1 is definitely a blicket while block 2 is probably not (P(X2 → E | d(1), d(2)) = q < 1/2). Thus there is no need to posit a special "Occam's razor" just to explain why block 2 becomes less likely to be a blicket after the "1 alone" trial – this adjustment follows naturally as a rational statistical inference. However, we do have to assume that blickets are somewhat rare (q < 1/2). Following the "1 alone" trial the probability of block 2 being a blicket returns to the baseline q, because the unambiguous second trial explains away all the evidence for block 2 from the first trial. Thus for q > 1/2, block 2 would remain likely to be a blicket even after the "1 alone" trial.
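The deterministic-likelihood updating described above is easy to sketch numerically: enumerate hypotheses (one Boolean assignment of "blicket" per block), keep those consistent with the observed trials under the activation law, and renormalize the prior. The function and variable names below are ours, and q is the assumed prior probability of being a blicket:

```python
from itertools import product

def blicket_posterior(n_blocks, trials, q):
    """Posterior P(block i is a blicket | data) under the activation law:
    the detector fires iff at least one blicket is on it. Each trial is
    (blocks_on_detector, detector_activated)."""
    post = {}
    for h in product([0, 1], repeat=n_blocks):  # h[i] = 1 iff block i is a blicket
        prior = 1.0
        for b in h:
            prior *= q if b else (1 - q)
        consistent = all(
            any(h[i] for i in on) == fired for on, fired in trials
        )
        post[h] = prior if consistent else 0.0  # deterministic likelihood
    z = sum(post.values())                      # renormalize over consistent hypotheses
    return [sum(p for h, p in post.items() if h[i]) / z for i in range(n_blocks)]

q = 1 / 3  # blickets assumed rare (q < 1/2)
# Backwards blocking: "1-2" trial, then "1 alone" trial.
after_12 = blicket_posterior(2, [((0, 1), True)], q)
after_1alone = blicket_posterior(2, [((0, 1), True), ((0,), True)], q)
assert after_12[0] == after_12[1] > 0.5          # both blocks likely: 1/(2-q) = 0.6
assert abs(after_1alone[0] - 1.0) < 1e-9         # block 1 certainly a blicket
assert abs(after_1alone[1] - q) < 1e-9           # block 2 falls back to baseline q

# Three-object study: "1-2" trial then "1-3" trial gives only partial
# explaining away - block 1 is probable but not certain.
three = blicket_posterior(3, [((0, 1), True), ((0, 2), True)], q)
assert 0.5 < three[0] < 1.0
assert q < three[2] < 0.5
```

The same enumeration covers any number of blocks and trial sequences, since the theory only ever contributes the restricted hypothesis space and the 0/1 likelihoods.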
, and thus, if guided by the Bayesian Occam’s razor, to show strong or weak blocking respectively. We gave adult subjects a different cover story, involving “super pencils” and a “superlead detector”, but here we translate the results into blicket detector terms. Following the “rare” or “common” training, two new objects and were picked at random from the same pile and subjects were asked three times to judge the probability that each one could activate the detector: first, before seeing it on the detector, as a baseline; second, after a “1-2” trial; third, after a “1 alone” trial. Probabilities were judged on a 1-7 scale and then rescaled to the range 0-1. The mean adult probability judgments and the model predictions are shown in Figures 2a (rare) and 2b (common). Wherever two objects have the same pattern of observed contingencies (e.g., and at baseline and after the “1-2” trial), subjects’ mean judgments were found not to be significantly different and were averaged together for this analysis. In fitting the model, we adjusted to match subjects’ baseline judgments; the best-fitting values were very close to the true base rates. More interestingly, subjects’ judgments tracked the Bayesian model over both trials and conditions. Following the “1-2” trial, mean ratings of both objects increased above baseline, but more so in the rare condition where the activation of the detector was more surprising. Following the “1 alone” trial, all subjects in both conditions were 100% sure that had the power to activate the detector, and the mean rating of returned to baseline: low in the rare condition, but high in the common condition. Four-year-old children made “yes”/”no” judgments that were qualitatively similar, across both rare and common conditions [13]. Human causal inference thus appears to follow rational statistical principles, obeying the Bayesian version of Occam’s razor rather than the classical logical version. 
However, an alternative explanation of our data is that subjects are simply employing a combination of logical reasoning and simple heuristics. Following the "1 alone" trial, people could logically deduce that they have no information about the status of block 2 and then fall back on the base rate of blickets as a default, without the need for any genuinely Bayesian computations. To rule out this possibility, our third study tested causal explaining away in the absence of unambiguous data that could be used to support deductive reasoning. Subjects again saw the "rare" pretraining, but now the critical trials involved three objects, 1, 2, and 3. After judging the baseline probability that each object could activate the detector, subjects saw two trials: a "1-2" trial, followed by a "1-3" trial, in which objects 1 and 3 activated the detector together. The Bayesian hypothesis space is analogous to Figure 1a, but now includes eight (2³) hypotheses representing all possible assignments of causal powers to the three objects. As before, the prior over causal structures can be written as P(hijk) = q^(i+j+k)(1 − q)^(3−i−j−k), the likelihood reduces to 1 for any hypothesis consistent with the data (under the activation law) and 0 otherwise, and the probability that block i is a blicket, P(Xi → E | d), may be computed by summing the posterior probabilities of all consistent hypotheses in which block i is a blicket. Figure 2c shows that the Bayesian model's predictions and subjects' mean judgments match well except for a slight overshoot in the model. Following the "1-3" trial, people judge that object 1 probably activates the detector, but now with less than 100% confidence. Correspondingly, the probability that object 2 activates the detector decreases, and the probability that object 3 activates the detector increases, to a level above baseline but below 0.5. All of these predicted effects are statistically significant (one-tailed paired t-tests). These results provide strong support for our claim that rapid human inferences about causal structure can be explained as theory-guided Bayesian computations. Particularly striking is the contrast between the effects of the "1 alone" trial and the "1-3" trial. In the former case, subjects observe unambiguously that object 1 is a cause and their judgment about object 2 falls completely to baseline; in the latter, they observe only a suspicious coincidence and so explaining away is not complete. A logical deductive mechanism might generate the all-or-none explaining-away observed in the former case, while a bottom-up associative learning mechanism might generate the incomplete effect seen in the latter case, but only our top-down Bayesian approach naturally explains the full spectrum of one-shot causal inferences, from uncertainty to certainty.

3 Causal inference in perception

Our second case study argues for the importance of causal theories in a very different domain: perceiving the mechanics of collisions and vibrations. Michotte's [8] studies of causal perception showed that a moving ball coming to rest next to a stationary ball would be perceived as the cause of the latter's subsequent motion only if there was essentially no gap in space or time between the end of the first ball's motion and the beginning of the second ball's.
The standard explanation is that people have automatic perceptual mechanisms for detecting certain kinds of physical causal relations, such as transfer of force, and these mechanisms are driven by simple bottom-up cues such as spatial and temporal proximity. Figure 3a shows data from an experiment described in [2] which might appear to support this view. Subjects viewed a computer screen depicting a long horizontal beam. At one end of the beam was a trap door, closed at the beginning of each trial. On each trial, a heavy block was dropped onto the beam at some position x, and after some time t, the trap door opened and a ball flew out. Subjects were told that the block dropping on the beam might have jarred loose a latch that opens the door, and they were asked to judge (on a numerical scale) how likely it was that the block dropping was the cause of the door opening. The distance x and time t separating these two events were varied across trials. Figure 3a shows that as either x or t increases, the judged probability of a causal link decreases. Anderson [1] proposed that this judgment could be formalized as a Bayesian inference with two alternative hypotheses: h1, that a causal link exists, and h0, that no causal link exists. He suggested that the likelihood P(x, t | h1) should be a product of decreasing exponentials in space and time, exp(−αx) exp(−βt), while P(x, t | h0) would presumably be constant. This model has three free parameters – the decay constants α and β, and the prior probability P(h1) – plus multiplicative and additive scaling parameters to bring the model outputs onto the same range as the data. Figure 3c shows that this model can be adjusted to fit the broad outlines of the data, but it misses the crossover interaction: in the data, but not the model, the typical advantage of small distances over large distances disappears and even reverses as t increases. This crossover may reflect the presence of a much more sophisticated theory of force transfer than is captured by the spatiotemporal decay model.

Figure 1b shows a causal graphical structure representing a simplified physical model of this situation. The graph is a dynamic Bayes net (DBN), enabling inferences about the system's behavior over time. There are four basic event types, each indexed by time. The door state can be either open or closed, and once open it stays open. There is an intrinsic source of noise in the door mechanism, which we take to be i.i.d., zero-mean gaussian. At each time step, the door opens if and only if the noise amplitude exceeds some threshold (which we take to be 1 without loss of generality). The block hits the beam at position x (at time 0), setting up a vibration in the door mechanism with energy E(0). We assume this energy decreases according to an inverse power law with the distance x between the block and the door, E(0) ∝ x^(−p). (We can always set the constant of proportionality to 1, absorbing it into the coupling parameter κ introduced below.) For simplicity, we assume that energy propagates instantaneously from the block to the door (plausible given the speed of sound relative to the distances and times used here), and that there is no vibrational damping over time (E(t) = E(0)). Anderson [2] also sketches an account along these lines, although he provides no formal model. At time t, the door pops open; we denote this event as e. The likelihood of e depends strictly on the variance of the noise – the bigger the variance, the sooner the door should pop open. At issue is whether there exists a causal link between the vibration – caused by the block dropping – and the noise – which causes the door to open. More precisely, we propose that causal inference is based on the probabilities of e under the two hypotheses h1 (causal link) and h0 (no causal link). The noise variance has some low intrinsic level σ0², which under h1 – but not h0 – is increased by some fraction κ of the vibrational energy E(t). That is, σ²(t) = σ0² + κE(t) under h1, while σ²(t) = σ0² under h0.
We can then solve for the likelihoods $P(T \mid h_i, X)$ analytically or through simulation. We take the limit as the intrinsic noise level $\sigma_0^2 \rightarrow 0$, leaving three free parameters, $\beta$, $\gamma$, and $P(h_1)$, plus multiplicative and additive scaling parameters, just as in the spatiotemporal decay model. Figure 3b plots the (scaled) posterior probabilities $P(h_1 \mid T, X)$ for the best fitting parameter values. In contrast to the spatiotemporal decay model, the DBN model captures the crossover interaction between space and time. This difference between the two models is fundamental, not just an accident of the parameter values chosen. The spatiotemporal decay model can never produce a crossover effect due to its functional form – separable in $T$ and $X$. A crossover of some form is generic in the DBN model, because its predictions essentially follow an exponential decay function on $T$ with a decay rate that is a nonlinear function of $X$. Other mathematical models with a nonseparable form could surely be devised to fit this data as well. The strength of our model lies in its combination of rational statistical inference and realistic physical motivation. These results suggest that whatever schema of force transfer is in people's brains, it must embody a more complex interaction between spatial and temporal factors than is assumed in traditional bottom-up models of causal inference, and its functional form may be a rational consequence of a rich but implicit physical theory that underlies people's instantaneous percepts of causality. It is an interesting open question whether human observers can use this knowledge only by carrying out an online simulation in parallel with their observations, or can access it in a “compiled” form to interpret bottom-up spatiotemporal cues without the need to conduct any explicit internal simulations.

4 Conclusion

In two case studies, we have explored how people make rapid inferences about the causal texture of their environment.
We have argued that these inferences can be explained best as Bayesian computations, working over hypothesis spaces strongly constrained by top-down causal theories. This framework allowed us to construct quantitative models of causal judgment – the most accurate models to date in both domains, and in the blicket detector domain, the only quantitatively predictive model to date. Our models make a number of substantive and mechanistic assumptions about aspects of the environment that are not directly accessible to human observers. From a scientific standpoint this might seem undesirable; we would like to work towards models that require the fewest number of a priori assumptions. Yet we feel there is no escaping the need for powerful top-down constraints on causal inference, in the form of intuitive theories. In ongoing work, we are beginning to study the origins of these theories themselves. We expect that Bayesian learning mechanisms similar to those considered here will also be useful in understanding how we acquire the ingredients of theories: abstract causal principles and ontological types.

References

[1] J. R. Anderson. The Adaptive Character of Thought. Erlbaum, 1990.
[2] J. R. Anderson. Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471–484, 1991.
[3] P. W. Cheng. From covariation to causation: A causal power theory. Psychological Review, 104, 367–405, 1997.
[4] A. Gopnik & C. Glymour. Causal maps and Bayes nets: a cognitive and computational account of theory-formation. In Carruthers et al. (eds.), The Cognitive Basis of Science. Cambridge, 2002.
[5] A. Gopnik & D. M. Sobel. Detecting blickets: How young children use information about causal properties in categorization and induction. Child Development, 71, 1205–1222, 2000.
[6] A. Gopnik, C. Glymour, D. M. Sobel, L. E. Schulz, T. Kushnir, D. Danks. A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, in press.
[7] D. Heckerman.
A Bayesian approach to learning causal networks. In Proc. Eleventh Conf. on Uncertainty in Artificial Intelligence, Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[8] A. E. Michotte. The Perception of Causality. Basic Books, 1963.
[9] H. Pasula & S. Russell. Approximate inference for first-order probabilistic languages. In Proc. International Joint Conference on Artificial Intelligence, Seattle, 2001.
[10] J. Pearl. Causality. New York: Oxford University Press, 2000.
[11] B. Rehder. A causal-model theory of conceptual representation and categorization. Submitted for publication, 2001.
[12] D. R. Shanks. Is human learning rational? Quarterly Journal of Experimental Psychology, 48a, 257–279, 1995.
[13] D. Sobel, J. B. Tenenbaum & A. Gopnik. The development of causal learning based on indirect evidence: More than associations. Submitted for publication, 2002.
[14] P. Spirtes, C. Glymour, & R. Scheines. Causation, prediction, and search (2nd edition, revised). Cambridge, MA: MIT Press, 2001.
[15] J. B. Tenenbaum & T. L. Griffiths. Structure learning in human causal induction. In T. Leen, T. Dietterich, and V. Tresp (eds.), Advances in Neural Information Processing Systems 13. Cambridge, MA: MIT Press, 2001.

Figure 1: Hypothesis spaces of causal Bayes nets for (a) the blicket detector and (b) the mechanical vibration domains.
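The DBN mechanism for the vibration domain can be made concrete in a few lines. Reading the noise-threshold dynamics as a geometric waiting time, the door first opens at step $T$ with probability $(1-p)^{T-1} p$, where $p$ is the per-step chance that the zero-mean gaussian noise exceeds the threshold of 1. The sketch below is illustrative, not the authors' code: the parameter names `beta`, `gamma`, `sigma0` and the specific values used are assumptions, and it keeps the intrinsic noise level small but nonzero rather than taking the limit described in the text.

```python
import math

def p_open(sigma):
    # Per-step probability that zero-mean gaussian noise with s.d. sigma
    # exceeds the threshold 1 in absolute value: P(|N(0, sigma^2)| > 1).
    return math.erfc(1.0 / (sigma * math.sqrt(2.0)))

def likelihood(T, X, beta, gamma, sigma0, causal=True):
    # P(door first opens at step T | hypothesis, block position X).
    # Under h1 the vibrational energy E = X**-gamma raises the noise
    # variance; under h0 it stays at the intrinsic level sigma0**2.
    var = sigma0 ** 2 + (beta * X ** (-gamma) if causal else 0.0)
    p = p_open(math.sqrt(var))
    return (1.0 - p) ** (T - 1) * p

def posterior_h1(T, X, beta, gamma, sigma0, prior=0.5):
    # P(h1 | T, X) by Bayes' rule over the two hypotheses.
    l1 = likelihood(T, X, beta, gamma, sigma0, causal=True)
    l0 = likelihood(T, X, beta, gamma, sigma0, causal=False)
    return prior * l1 / (prior * l1 + (1.0 - prior) * l0)
```

With illustrative values (beta = 1, gamma = 1, sigma0 = 0.5), the posterior curves over $T$ for a near block (X = 1) and a far block (X = 15) cross: the near block is the more convincing cause at short delays, the far block at long delays – the qualitative signature of Figure 3a.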
Figure 2: Human judgments and model predictions (based on Figure 1a) for one-shot backwards blocking with blickets, when blickets are (a) rare or (b) common, or (c) rare and only observed in ambiguous combinations. Bar height represents the mean judged probability that an object has the causal power to activate the detector.

Figure 3: Probability of a causal connection between two events: a block dropping onto a beam and a trap door opening. Each curve corresponds to a different spatial gap $X$ between these events; each x-axis value to a different temporal gap $T$. (a) Human judgments. (b) Predictions of the dynamic Bayes net model (Figure 1b). (c) Predictions of the spatiotemporal decay model.