Multiscale Random Fields with Application to Contour Grouping

Longin Jan Latecki, Dept. of Computer and Info. Sciences, Temple University, Philadelphia, USA, latecki@temple.edu
ChengEn Lu, Dept. of Electronics and Info. Eng., Huazhong Univ. of Sci. and Tech., China, luchengen@gmail.com
Marc Sobel, Statistics Dept., Temple University, Philadelphia, USA, marc.sobel@temple.edu
Xiang Bai, Dept. of Electronics and Info. Eng., Huazhong Univ. of Sci. and Tech., China, xiang.bai@gmail.com

Abstract

We introduce a new interpretation of multiscale random fields (MSRFs) that admits efficient optimization in the framework of regular (single level) random fields (RFs). It is based on a new operator, called append, that combines sets of random variables (RVs) into single RVs. We assume that an MSRF can be decomposed into disjoint trees that link RVs at different pyramid levels. The append operator is then applied to map the RVs in each tree structure to a single RV. We demonstrate the usefulness of the proposed approach on a challenging task involving grouping contours of target shapes in images. It provides a natural representation of multiscale contour models, which is needed in order to cope with unstable contour decompositions. The append operator allows us to find optimal image segment labels using the classical framework of relaxation labeling. Alternative methods like Markov Chain Monte Carlo (MCMC) could also be used.

1 Introduction

Random fields (RFs) have played an increasingly important role in image denoising, texture discrimination, image segmentation and many other important problems in computer vision. The images analyzed for these purposes typically have significant fractal properties which preclude the use of models operating at a single resolution level. Such models, which aim to minimize mean-squared estimation error, use only second-order image statistics, which fail to accurately characterize the images of interest.
Multiscale random fields (MSRFs) resolve this problem by using information at many different resolution levels [2, 15, 5]. In [6], a probabilistic model of multiscale conditional random fields (mCRF) was proposed to segment images by labeling pixels with a predefined set of class labels. The main difference between the MSRF and mCRF models known in the literature, e.g., [2, 15, 6, 5], and the proposed MSRF is the interpretation of the connections between different scales (levels). In the proposed approach, the random variables (RVs) linked by a tree substructure across different levels compete for their label assignments, while in the existing approaches the goal is to cooperate in the label assignments, which is usually achieved by averaging. In other words, the label assignment of a parent node is usually enforced to be compatible with the label assignments of its children by averaging. In contrast, in the proposed approach the parent node and all its children compete for the best possible label assignment.

Contour grouping is one of the key approaches to object detection and recognition, which is a fundamental goal of computer vision. We introduce a novel MSRF interpretation and show its benefits in solving the contour grouping problem. The MSRF allows us to cast contour grouping as contour matching. Detection and grouping by shape has been investigated in earlier work. The basic idea common to all methods is to define distance measures between shapes, and then accurately label and/or classify shapes using these measures. Classical methods of this type, such as shape contexts [1] and chamfer matching [13], cannot cope well with clutter and shape deformations. Some researchers described the shape of the entire object using deformable contour fragments and their relative positions [10, 12], but their detection results are always grassy contour edges.
Deformable template matching techniques often require either good initial positions or clean images (or both) to avoid (false) local minima [14, 9]. Recently, Ferrari et al. [4] have used the sophisticated edge detection methods of [8]; the resulting edges are linked into a network of connected contour segments by closing small gaps. Wu et al. [16] proposed an active basis model that provides a deformable template consisting of a small number of Gabor wavelet elements allowed to slightly perturb their locations and orientations. Our grouping is also based on the edge detection of [8], but we do not perform edge linking directly for purposes of grouping. Instead, we match a given contour model to edge segments in images. This allows us to perform grouping and detection at the same time. Our method differs from former sampled-points-based matching methods [14, 3]; we match the contour segments from the given contour to segments in edge images directly. We decompose a given closed contour of a model shape into a group of contour segments, and match the resulting contour segments to edge segments in a given image. Our model contour decomposition is flexible and admits a hierarchical structure, e.g., a parent contour segment is decomposed into two or more child segments. In this way, our model can adapt to different configurations of contour parts in edge images.

The proposed MSRF interpretation allows us to formulate the problem of contour grouping as a soft label assignment problem. Since in our approach a parent node and all its children compete for the best possible label assignment, allowing us to examine multiple composite hypotheses of model segments in the image, a successful contour grouping of edge segments is possible even if significant contour parts are missing or distorted. The competition is made possible by the proposed append operator, which appends the random variables (RVs) representing the parent and all its children to a single new RV.
Since the connectivity relation between each pair of model segments is known, the soft label assignment and the competition for the best labels make accurate grouping results in real images possible. We also want to stress that our grouping approach is based on matching of contour segments. The advantages of segment matching over alternative techniques based on point matching are at least twofold: 1) it permits deformable matching (i.e., the global shape will not be changed even when some segments shift or rotate a little); 2) it is more stable than point matching, since contour segments are more informative than points as shape cues.

2 Multiscale Random Fields

Given a set of data points $X = \{x_1, \ldots, x_n\}$, the goal of random fields is to find a label assignment $f$ that maximizes the posterior probability $p(f|X)$ (of that assignment):

$$\hat{f} = \arg\max_f p(f|X) \quad (1)$$

Thus, we want to select the label assignment with the largest possible probability given the observed data. Although the proposed method is quite general, for clarity of presentation we focus on an application of interest to us: contour grouping based on contour part correspondence. We take the contour of an example shape to be our shape model $S$. We assume that the model is composed of several contour segments $s_1, \ldots, s_m$. In our application, the data points $X = \{x_1, \ldots, x_n\}$ are contour segments extracted by some low level process in a given image. The random field is defined by a sequence of random variables $F = (F_1, \ldots, F_m)$ associated with the nodes $s_i$ of the model graph. $F$ represents the mapping of the nodes (model segments) $S = \{s_1, \ldots, s_m\}$ to the data points $X = \{x_1, \ldots, x_n\}$ (i.e., $F: S \to X$). We write $F_i = x_j$ to denote the event that the model segment $s_i$ is assigned the image segment $x_j$ by the map $F$. (Observe that usually the assignment is defined in the reverse direction, i.e., from an image to the model.) Our goal is to find a label assignment $f = (f_1, \ldots, f_m) \in X^m$ that maximizes the probability $p(f|X) = p(F_1 = f_1, \ldots, F_m = f_m | X)$, i.e.,

$$(\hat{f}_1, \ldots, \hat{f}_m) = \arg\max_{(f_1, \ldots, f_m)} p(F_1 = f_1, \ldots, F_m = f_m | X) \quad (2)$$

However, the object contour in the given image (which is composed of some subset of segments in $X = \{x_1, \ldots, x_n\}$) may have a different decomposition into contour segments than is the case for the model $s_1, \ldots, s_m$. This is the case, for example, if some parts of the true contour are missing, i.e., some $s_i$ may not correspond to parts in $X$. Therefore, a shape model is needed that can provide robust detection and recognition under these conditions. We introduce such a model by imposing a multiscale structure on the contour segments of the model shape.

Let the lowest level zero represent the finest subdivision of a given model contour $S$ into the segments $S^0 = \{s^0_1, \ldots, s^0_{m_0}\}$. The level-$\alpha$ partition subdivides the contour into the segments $S^\alpha = \{s^\alpha_1, \ldots, s^\alpha_{m_\alpha}\}$ for $\alpha = 1, \ldots, \beta$, where $\beta$ denotes the highest (i.e., coarsest) pyramid level. For each pyramid level $\alpha$, the segments $S^\alpha$ partition the model contour $S$, i.e., $S = s^\alpha_1 \cup \cdots \cup s^\alpha_{m_\alpha}$. The segments $S^\alpha$ in level $\alpha$ refine the segments $S^{\alpha+1}$ in level $\alpha+1$, i.e., segments in level $\alpha+1$ are unions of one or more consecutive segments in level $\alpha$. On each level $\alpha$ we have a graph structure $G^\alpha = (S^\alpha, E^\alpha)$, where $E^\alpha$ is the set of edges governing the relations between segments in $S^\alpha$, and we have a forest composed of trees that link nodes at different levels. The number of trees corresponds to the number of nodes on the highest level, $s^\beta_1, \ldots, s^\beta_{m_\beta}$, since each of these nodes is the root of one tree. We denote these trees by $T_1, \ldots, T_{m_\beta}$. For example, in Fig. 1 we have eight segments on level zero, $s^0_1, \ldots, s^0_8$, and four segments on level one, $s^1_1 = s^0_1 \cup s^0_2$, $s^1_2 = s^0_3 \cup s^0_4$, $s^1_3 = s^0_5 \cup s^0_6$, $s^1_4 = s^0_7 \cup s^0_8$. This construction leads to a tree structure relation among segments at different levels.
For example, $T_1$ is a tree with $s^1_1$ (segment 1) as a parent node and with two children $s^0_1, s^0_2$ (segments 5 and 6).

[Figure 1: An example of a multiscale random field structure.]

We associate a random variable $F^\alpha_i$ with each segment $s^\alpha_i$. The range of each random variable $F^\alpha_i$ is the set of contour segments $X = \{x_1, \ldots, x_n\}$ extracted in a given image. The random variables inherit the tree structure from the corresponding model segments. Thus, we obtain a multiscale random field with random variables (RVs)

$$F = (F^0_1, \ldots, F^0_{m_0}, \ldots, F^\alpha_1, \ldots, F^\alpha_{m_\alpha}, \ldots, F^\beta_1, \ldots, F^\beta_{m_\beta}), \quad (3)$$

the relational structure (RS) $G^\alpha = (S^\alpha, E^\alpha)$, and trees $T_1, \ldots, T_{m_\beta}$. Our goal remains the same as stated in (2), but the graph structure of the underlying RF is made significantly more complicated by the introduction of the multiscale tree relations. Therefore, the maximization in (2) is significantly more complicated as well. Usually, the computation in multiscale random fields is based on modeling the dependencies between the random variables related by the (aforementioned) tree structures. In the proposed approach, we do not explicitly model these tree structure dependencies. Instead, we build relations between them using the construction of a new random variable that explicitly relates all random variables in each given tree.

We introduce a new operator acting on random variables, called the append operator. The operator combines a given set of random variables $Y = \{Y_1, \ldots, Y_k\}$ into a single random variable denoted

$$\oplus Y = Y_1 \oplus \cdots \oplus Y_k. \quad (4)$$

For simplicity, we assume, in the definition below, that $Y_1, \ldots, Y_k$ are discrete random variables taking values in the set $X = \{x_1, \ldots, x_n\}$. Our definition can be easily generalized to continuous random variables. The append random variable $\oplus Y$, with distribution defined below, takes values in the set of pairs $\{1, \ldots, k\} \times X$.
The distribution of the random variable $\oplus Y$ is given by

$$p(\oplus Y = (i, x_j)) = \frac{1}{k} \cdot p(Y_i = x_j), \quad (5)$$

where index $i$ ranges over the RVs and index $j$ over the labels. The intuition behind this construction can be explained by the following simple example. Let $Y_1, Y_2$ be two discrete random variables with distributions

$$(p(Y_1 = 1), p(Y_1 = 2), p(Y_1 = 3)) \ \text{ and } \ (p(Y_2 = 1), p(Y_2 = 2), p(Y_2 = 3)); \quad (6)$$

then the distribution of $Y_1 \oplus Y_2$ is simply given by the vector

$$\tfrac{1}{2} \cdot (p(Y_1 = 1), p(Y_1 = 2), p(Y_1 = 3), p(Y_2 = 1), p(Y_2 = 2), p(Y_2 = 3)). \quad (7)$$

Armed with this construction, we return to our multiscale RF with RVs in (3). Recall that the RVs representing the nodes on the highest level, $F^\beta_1, \ldots, F^\beta_{m_\beta}$, are the roots of the trees $T_1, \ldots, T_{m_\beta}$. Slightly abusing our notation, we define $\oplus T_i$ as the append of all random variables that are nodes of tree $T_i$. This construction allows us to reduce the multiscale RF with RVs in (3) to a RF with RVs

$$T = (\oplus T_1, \ldots, \oplus T_{m_\beta}). \quad (8)$$

The graph structure of this new RF is defined by the graph $G = (T, E)$ such that

$$(\oplus T_i, \oplus T_j) \in E \iff \exists \alpha\ \exists a, b:\ (F^\alpha_a, F^\alpha_b) \in E^\alpha \text{ and } F^\alpha_a \in \oplus T_i \text{ and } F^\alpha_b \in \oplus T_j. \quad (9)$$

In simple words, $\oplus T_i$ and $\oplus T_j$ are related in $G$ iff on some level $\alpha$ both trees have related random variables. The construction in (8) and (9) maps a multiscale RF to a single level RF, i.e., to a random field with a simple graph structure $G$. The intuition is that we collapse all graphs $G^\alpha = (S^\alpha, E^\alpha)$ for $\alpha = 1, \ldots, \beta$ to a single graph $G = (T, E)$ by gluing all RVs in each tree $T_i$ into a single RV $\oplus T_i$. Consequently, any existing RF optimization method can be applied to compute

$$(\hat{t}_1, \ldots, \hat{t}_{m_\beta}) = \arg\max_{(t_1, \ldots, t_{m_\beta})} p(\oplus T_1 = t_1, \ldots, \oplus T_{m_\beta} = t_{m_\beta} | X). \quad (10)$$

We observe that when optimizing the new RF in (10), we can simply perform separate optimizations on each level, i.e., on each level $\alpha$ we optimize (8) with respect to the graph structure $G^\alpha$. Hence at each level $\alpha$ we choose the maximum a posteriori estimate associated with the random field at that level.
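The construction in Eqs. (5)-(7) amounts to concatenating the $k$ label distributions and rescaling each by $1/k$. A minimal sketch in Python (the numeric distributions are made-up illustrative values, not from the paper):

```python
import numpy as np

def append(*dists):
    """Append operator on discrete RVs over a common label set (Eq. 5):
    concatenate the k distributions and scale each block by 1/k, so the
    result is again a distribution, now over (RV index, label) pairs."""
    k = len(dists)
    return np.concatenate([np.asarray(d, dtype=float) / k for d in dists])

p_y1 = [0.2, 0.5, 0.3]   # p(Y1 = 1), p(Y1 = 2), p(Y1 = 3)
p_y2 = [0.1, 0.6, 0.3]   # p(Y2 = 1), p(Y2 = 2), p(Y2 = 3)
p_app = append(p_y1, p_y2)
# p_app = [0.1, 0.25, 0.15, 0.05, 0.3, 0.15], which sums to one
```

The first half of `p_app` indexes the labels of $Y_1$ and the second half those of $Y_2$, mirroring Eq. (7).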
Our key contribution is the fact that these optimizing estimators are linked by the internal structure of the RVs $\oplus T_i$. After optimizing a regular RF in (10) that contains append RVs, we obtain as the solution updated distributions of the append RVs. From them, we can easily reconstruct the updated distributions of the original RVs from the multiscale RF in (2) by the construction of the append RVs. For example, if we obtain $(\tfrac{1}{10}, \tfrac{3}{5}, \tfrac{1}{10}, 0, \tfrac{1}{10}, \tfrac{1}{10})$ as the updated distribution of some RV $Y_1 \oplus Y_2$, then we can easily derive the updated distributions of $Y_1, Y_2$ as

$$(p(Y_1 = 1) = \tfrac{1}{8},\ p(Y_1 = 2) = \tfrac{3}{4},\ p(Y_1 = 3) = \tfrac{1}{8}) \ \text{ and } \ (p(Y_2 = 1) = 0,\ p(Y_2 = 2) = \tfrac{1}{2},\ p(Y_2 = 3) = \tfrac{1}{2}).$$

To obtain the distributions of the compound RVs $Y_1, Y_2$, we only need to ensure that the distributions of both $Y_1$ and $Y_2$ sum to one. Since we are usually interested in selecting a variable assignment with maximum posterior probability (10), we do not need to derive these distributions. Consequently, in this example, it is sufficient for us to determine that the assignment of $Y_1$ to label 2 maximizes $Y_1 \oplus Y_2$.

Going back to our application in contour grouping, the RV $\oplus T_2$ is an append of three RVs representing segments 2, 7, 8 in Fig. 1. We observe that the RVs appended to $\oplus T_2$ compete in the label assignment. For example, if a given assignment of RV $\oplus T_2$ to an image segment, say $x_5$, maximizes $\oplus T_2$, then, by the position in the discrete distribution of $\oplus T_2$, we can clearly identify which RV is the winner, i.e., which of the model segments 2, 7, 8 is assigned to image segment $x_5$. We can also make this competition soft (with more than one winner) if we select local maxima of the discrete distribution of $\oplus T_2$, which may lead to assigning more than one of the model segments 2, 7, 8 to image segments. In the computation model presented in the next section, we focus on finding a global maximum for each RV $\oplus T_i$.
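Recovering the per-RV distributions from an updated append distribution only requires renormalizing each block of the vector, as in the worked example in the text. A small sketch using the numbers from that example:

```python
import numpy as np

def split_append(p_app, k):
    """Split the distribution of an append RV over its k compound RVs
    and renormalize each block so that it sums to one."""
    blocks = np.split(np.asarray(p_app, dtype=float), k)
    return [b / b.sum() for b in blocks]

# Updated distribution of Y1 (+) Y2 from the text: (1/10, 3/5, 1/10, 0, 1/10, 1/10)
p = [0.1, 0.6, 0.1, 0.0, 0.1, 0.1]
y1, y2 = split_append(p, k=2)
# y1 = (1/8, 3/4, 1/8) and y2 = (0, 1/2, 1/2), matching the text

# For the MAP assignment, no renormalization is needed: the argmax over
# the whole vector identifies both the winning RV and its label.
winner = int(np.argmax(p))   # index 1, i.e. Y1 with label 2
```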
3 Computing the label assignment with relaxation labeling

There exist several approaches to compute the assignment $f$ that optimizes the relational structure of a given RF [7], i.e., approaches that solve Eq. (10), which is our formulation of the general RF Eq. (2). In our implementation, we use the particularly simple approach of relaxation labeling introduced by Rosenfeld et al. in [11]. However, a more powerful class of MCMC methods could also be used [7]. In this section, we briefly describe the relaxation labeling (RL) method and how it fits into our framework.

We recall that our goal is to find a label assignment $t = (t_1, \ldots, t_m)$ that maximizes the probability $p(t|X) = p(\oplus T_1 = t_1, \ldots, \oplus T_m = t_m | X)$ in Eq. (10), where we have shortened $m = m_\beta$. One of the key ideas of RL is to decompose $p(t|X)$ into individual probabilities $p(\oplus T_a = (i_a, x_j))$, where index $a = 1, \ldots, m$ ranges over the RVs of the RF, index $j = 1, \ldots, n$ ranges over the possible labels, which in our case are the contour segments $X = \{x_1, \ldots, x_n\}$ extracted from a given image, and index $i_a$ ranges over the RVs that are appended to $\oplus T_a$, which we denote by $i_a \in a$. For brevity, we use the notation $p_a(i_a, x_j) = p(\oplus T_a = (i_a, x_j))$. Going back to our example in Fig. 1, $p_2(7, x_5)$ denotes the probability that contour segment 7 is assigned to image segment $x_5$, and 2 is the index of RV $\oplus T_2$. We recall that $\oplus T_2$ is an append of three RVs representing segments 2, 7, 8 in Fig. 1. In Section 5, $p_2(7, x_5)$ is modeled as a Gaussian of the shape dissimilarity between model contour segment 7 and image contour segment 5.

As is usually the case for RFs, we also consider binary relations between RVs that are adjacent in the underlying graph structure $G = (T, E)$, which represent conditional probabilities $p(\oplus T_a = (i_a, x_j) \mid \oplus T_b = (i_b, x_k))$. They express the compatibility of these label assignments. Again for brevity, we use the notation $C_{a,b}((i_a, x_j), (i_b, x_k)) = p(\oplus T_a = (i_a, x_j) \mid \oplus T_b = (i_b, x_k))$.
For example, $C_{2,3}((7, x_5), (9, x_8))$ models the compatibility of the assignment of model segment 7 (part of model tree 2) to image segment $x_5$ with the assignment of model segment 9 (part of model tree 3) to image segment $x_8$. This compatibility is a function of geometric relations between the segments. Since segment 9 is above segment 7 in the model contour, it is reasonable to assign high compatibility only if the same holds for the image segments, i.e., $x_8$ is above $x_5$.

The RL algorithm iteratively estimates the change in the probability $p_a(i_a, x_j)$ by

$$\delta p_a(i_a, x_j) = \sum_{\substack{b = 1, \ldots, m \\ b \neq a}} \ \sum_{i_b \in b} \ \sum_{\substack{x_k \in X \\ x_k \neq x_j}} C_{a,b}((i_a, x_j), (i_b, x_k)) \cdot p_b(i_b, x_k), \quad (11)$$

where $b$ varies over all append random variables $\oplus T_b$ different from $\oplus T_a$ and $i_b$ varies over all compound RVs that are combined by append into $\oplus T_b$. Then the probability is updated by

$$p_a(i_a, x_j) \leftarrow \frac{p_a(i_a, x_j)\,[1 + \delta p_a(i_a, x_j)]}{\sum_{i_a \in a} \sum_{x_k \in X} p_a(i_a, x_k)\,[1 + \delta p_a(i_a, x_k)]}. \quad (12)$$

The double sum in the denominator simply normalizes the distribution of $\oplus T_a$ so that it sums to one. The RL algorithm in our framework iterates steps (11) and (12) for all $a = 1, \ldots, m$ (append RVs), all $i_a \in a$, and all labels $x_j \in X$. It can be shown that the RL algorithm is guaranteed to converge, but not necessarily to a global maximum [7].

4 A contour grouping example

We provide a simple but real example to illustrate how our multiscale RF framework solves a concrete contour grouping instance. We use the contour model presented in Fig. 1. Let $F_i$ be a RV corresponding to model contour segment $s_i$ for $i = 1, \ldots, 12$. We have two levels, $S^0 = \{F_5, \ldots, F_{12}\}$ and $S^1 = \{F_1, \ldots, F_4\}$. Both graph structures $G^0$ and $G^1$ are complete graphs. As described in Section 2, we have an MSRF with four trees. The append RVs determined by these trees are:

$$\oplus T_1 = F_1 \oplus F_5 \oplus F_6,\quad \oplus T_2 = F_2 \oplus F_7 \oplus F_8,\quad \oplus T_3 = F_3 \oplus F_9 \oplus F_{10},\quad \oplus T_4 = F_4 \oplus F_{11} \oplus F_{12}.$$

We obtain a regular (single level) RF with the four append RVs, $T = (\oplus T_1, \oplus T_2, \oplus T_3, \oplus T_4)$, and with the graph structure $G = (T, E)$ determined by Eq.
(9).

Given an image as in Fig. 2(a), we first compute its edge map, shown in Fig. 2(b), and use low level edge linking to obtain the edge segments in Fig. 2(c). The 16 edge segments in Fig. 2(c) form our label set $X = \{x_1, x_2, \ldots, x_{16}\}$. Our goal is to find the label assignment to the RVs $\oplus T_a$ for $a = 1, 2, 3, 4$ with maximum posterior probability (10). However, the label set of each append RV is different, e.g., the label set of $\oplus T_1$ is equal to $\{1, 5, 6\} \times X$, where $\oplus T_1 = (1, x_5)$ denotes the assignment $F_1 = x_5$, representing the mapping of model segment 1 to image segment 5. Hence $p_1(i_a, x_j) = p(\oplus T_1 = (i_a, x_j))$ for $i_a = 1, j = 5$ denotes the probability of mapping model segment $i_a = 1$ to image segment $j = 5$. As described in Section 3, we use relaxation labeling to compute the maximum posterior probability (10). Initially, all probabilities $p_a(i_a, x_j)$ are set based on shape similarity between the involved model and image segments. The assignment compatibilities are determined using the geometric relations described in Section 5. After 200 iterations, RL finds the best assignment for each RV $\oplus T_a$, as Fig. 2(d) illustrates. They are presented in the format RV: model segment → edge segment:

$$\oplus T_1: 1 \to x_{12};\quad \oplus T_2: 5 \to x_{10};\quad \oplus T_3: 8 \to x_7;\quad \oplus T_4: 4 \to x_5.$$

Observe that many model segments remained unmatched, since they do not have any corresponding segments in the image in Fig. 2(c). This very desirable property results from the label assignment competition within each append RV $\oplus T_a$ for $a = 1, 2, 3, 4$. This fact demonstrates one of the main benefits of the proposed approach. We stress that we do not use any penalties for non-matching, which are usually used in classical RFs (e.g., nil variables in [7]), but are very hard to set in real applications.

[Figure 2: (c) The 16 edge segments form our label set $X = \{x_1, x_2, \ldots, x_{16}\}$. (d) The numbers and colors indicate the assignment of the model segments from Fig. 1.]
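The RL iteration of Eqs. (11)-(12) can be sketched as follows. This is a simplified, hypothetical setup (random initial probabilities and compatibilities, and for brevity the inner sum runs over all $x_k$ rather than excluding $x_k = x_j$ as in Eq. (11)):

```python
import numpy as np

# m append RVs, each appending k RVs over n labels; p[a] is the
# distribution of (+)T_a flattened over (i_a, x_j) pairs, and C[a, b]
# holds the compatibilities C_{a,b} as a (k*n) x (k*n) matrix.
rng = np.random.default_rng(0)
m, k, n = 3, 2, 4
p = rng.random((m, k * n))
p /= p.sum(axis=1, keepdims=True)          # start with valid distributions
C = rng.random((m, m, k * n, k * n))        # placeholder compatibilities

for _ in range(50):
    new_p = np.empty_like(p)
    for a in range(m):
        # Eq. (11): support for each (i_a, x_j), accumulated from all b != a
        delta = sum(C[a, b] @ p[b] for b in range(m) if b != a)
        q = p[a] * (1.0 + delta)
        new_p[a] = q / q.sum()              # Eq. (12): renormalize (+)T_a
    p = new_p
```

Each row of `p` stays a proper distribution throughout, so after convergence the winning (RV, label) pair of every append RV can be read off with an argmax, as in the example above.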
5 Geometric contour relations

In this section, we provide a brief description of the contour segment relations used to assign labels for contour grouping. Two kinds of relations are defined. First, the probability $p_a(i_a, x_j)$ is set to be a Gaussian of the shape dissimilarity between model segment $i_a$ and image segment $x_j$. The shape dissimilarity is computed by matching sequences of tangent directions at their sample points. To make our matching scale invariant, we sample each model and image segment with the same number of sample points. We also consider four binary relations to measure the compatibility between a pair of model segments and a pair of image segments: $d^{(1)}(i, i')$ – the maximum distance between the end-points of two contour segments $i$ and $i'$; $d^{(2)}(i, i')$ – the minimum distance between the end-points of two contour segments $i$ and $i'$; $d^{(3)}(i, i')$ – the direction from the mid-point of $i$ to the mid-point of $i'$; $d^{(4)}(i, i')$ – the distance between the mid-points of $i$ and $i'$. To make our relations scale invariant, all distances are normalized by the sum of the lengths of segments $i$ and $i'$. Then the compatibility between a pair of model segments $i_a, i_b$ and a pair of image segments $x_j, x_k$ is given by a mixture of Gaussians:

$$C_{a,b}((i_a, x_j), (i_b, x_k)) = \sum_{r=1}^{4} \frac{1}{4}\, N\big(d^{(r)}(i_a, i_b) - d^{(r)}(x_j, x_k),\ \sigma^{(r)}\big). \quad (13)$$

6 Experimental results

We begin with a comparison between the proposed append MSRF and a single level RF. Given the edge map in Fig. 3(b), extracted by the edge detector of [8], we employ a low level edge linking method to obtain the edge segments shown in Fig. 3(c), where the 27 edge segments form our label set $X = \{x_1, \ldots, x_{27}\}$. Fig. 3(d) illustrates our shape contour model and its two level multiscale structure of 10 contour segments. Fig. 3(e) shows the result of contour grouping obtained in the framework of the proposed append MSRF. The numbers and colors indicate the assignment of the model segments. The benefits of the flexible multiscale model structure are clearly visible.
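The compatibility of Eq. (13) is an equal-weight mixture of four Gaussians over the differences of the geometric relations. A short sketch (the $\sigma^{(r)}$ values and the relation vectors below are hypothetical placeholders, not from the paper):

```python
import math

def gaussian(x, sigma):
    """Zero-mean Gaussian density N(x, sigma)."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def compatibility(d_model, d_image, sigmas):
    """Eq. (13): equal-weight mixture over the four geometric relations.
    d_model[r] and d_image[r] are d^(r) measured on the model pair and on
    the image pair; sigmas holds the bandwidths sigma^(r)."""
    return sum(gaussian(dm - di, s)
               for dm, di, s in zip(d_model, d_image, sigmas)) / 4.0

# Identical relation vectors give the maximal compatibility:
c_same = compatibility([0.5, 0.1, 1.2, 0.8], [0.5, 0.1, 1.2, 0.8], [0.2] * 4)
c_diff = compatibility([0.5, 0.1, 1.2, 0.8], [1.5, 0.9, 0.1, 2.0], [0.2] * 4)
```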
Out of the 10 model segments, only 4 have corresponding edge segments in the image, and our approach correctly determined label assignments reflecting this fact. In contrast, this is not the case for a single level RF. Fig. 3(f) shows a model with a fixed single level structure, and its contour grouping result computed with classical RL can be found in Fig. 3(g). We observe that model segment 2 on the giraffe's head has no matching contour in the image, but is nevertheless incorrectly assigned. This wrong assignment influences model contour 4, and leads to another wrong assignment. In the proposed approach, model contours 2 and 3 in Fig. 3(d) compete for label assignments. Since contour 3 finds a good match in the image, we correctly obtain (through our append RV structure) that there is no match for segment 2.

[Figure 3: (d-g) Comparison of results obtained by the proposed MSRF to a single level RF.]

By mapping the model segments to the image segments, we enforce the existence of a solution. Even if no target shape is present in a given image, our approach will "hallucinate" a matching configuration of edge segments in the image. A standard alternative in the framework of random fields is to use a penalty for non-matching (dummy or null nodes). However, this requires several constants, and it is a highly nontrivial problem to determine their values. In our approach, we can easily distinguish hallucinated contours from true contours, since when the RF optimization is completed, we obtain the assignment of contour segments, i.e., we know a global correspondence between model segments and image segments. Based on this correspondence, we compute global shape similarity, and discard solutions with low global similarity to the model contour.
This requires only one threshold on global shape similarity, which is relatively easy to set, and our experimental results verify this fact. In Figs. 4 and 5, we show several examples of contour grouping obtained by the proposed MSRF method on the ETHZ data set [4]. We only use two contour models, the swan model (Fig. 1) and the giraffe model (Fig. 3(d)). Their original images are included as shape models in the ETHZ data set. Model contours are decomposed into segments by introducing break points at high curvature points. Edge contour segments in the test images have been automatically computed by a low level edge linking process. Noise and shape variations cause the edge segments to vary a lot from image to image. We also observe that the grouped contours contain internal edge structures.

7 Conclusions

Since edges, and consequently contour parts, vary significantly in real images, it is necessary to make the decomposition of model contours into segments flexible. The proposed multiscale construction permits a very flexible decomposition that can adapt to different configurations of contour parts in the image. We introduce a novel multiscale random field interpretation based on the append operator that leads to efficient optimization. We applied the new algorithm to the ETHZ data set to illustrate the application potential of the proposed method.

[Figure 4: ETHZ data set grouping results for the giraffe model.]
[Figure 5: ETHZ data set grouping results for the swan model.]

Acknowledgments

This work was supported in part by the NSF Grants IIS-0534929, IIS-0812118 in the Robust Intelligence Cluster and by the DOE Grant DE-FG52-06NA27508.

References

[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Analysis and Machine Intelligence, 24:509–522, 2002.
[2] C. A. Bouman and M. Shapiro. A multiscale random field model for Bayesian image segmentation. IEEE Trans. on Image Processing, 3(2):162–177, 1994.
[3] H. Chui and A. Rangarajan. A new algorithm for non-rigid point matching. In CVPR, 2000.
[4] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid. Groups of adjacent contour segments for object detection. IEEE Trans. PAMI, 2008.
[5] A. R. Ferreira and H. K. H. Lee. Multiscale Modeling: A Bayesian Perspective. Springer-Verlag, Springer Series in Statistics, 2007.
[6] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In CVPR, volume 2, pages 695–702, 2004.
[7] S. Z. Li. Markov Random Field Modeling in Image Analysis. Springer-Verlag, Tokyo, 2001.
[8] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color and texture cues. IEEE Trans. PAMI, 26:530–549, 2004.
[9] G. McNeill and S. Vijayakumar. Part-based probabilistic point matching using equivalence constraints. In NIPS, 2006.
[10] A. Opelt, A. Pinz, and A. Zisserman. A boundary-fragment-model for object detection. In ECCV, 2006.
[11] A. Rosenfeld, R. Hummel, and S. Zucker. Scene labeling by relaxation operations. IEEE Trans. on Systems, Man and Cybernetics, 6:420–433, 1976.
[12] J. Shotton, A. Blake, and R. Cipolla. Contour-based learning for object detection. In ICCV, 2005.
[13] A. Thayananthan, B. Stenger, P. H. S. Torr, and R. Cipolla. Shape context and chamfer matching in cluttered scenes. In CVPR, 2003.
[14] Z. Tu and A. L. Yuille. Shape matching and recognition using generative models and informative features. In ECCV, 2004.
[15] A. S. Willsky. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE, 90:1396–1458, 2002.
[16] Y. N. Wu, Z. Si, C. Fleming, and S.-C. Zhu. Deformable template as active basis. In ICCV, 2007.
Modeling Short-term Noise Dependence of Spike Counts in Macaque Prefrontal Cortex

Arno Onken, Technische Universität Berlin / BCCN Berlin, aonken@cs.tu-berlin.de
Steffen Grünewälder, Technische Universität Berlin, Franklinstr. 28/29, 10587 Berlin, Germany, gruenew@cs.tu-berlin.de
Matthias Munk, MPI for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany, matthias.munk@tuebingen.mpg.de
Klaus Obermayer, Technische Universität Berlin / BCCN Berlin, oby@cs.tu-berlin.de

Abstract

Correlations between spike counts are often used to analyze neural coding. The noise is typically assumed to be Gaussian. Yet, this assumption is often inappropriate, especially for low spike counts. In this study, we present copulas as an alternative approach. With copulas it is possible to use arbitrary marginal distributions such as Poisson or negative binomial that are better suited for modeling noise distributions of spike counts. Furthermore, copulas place a wide range of dependence structures at our disposal and can be used to analyze higher order interactions. We develop a framework to analyze spike count data by means of copulas. Methods for parameter inference based on maximum likelihood estimates and for computation of mutual information are provided. We apply the method to our data recorded from macaque prefrontal cortex. The data analysis leads to three findings: (1) copula-based distributions provide significantly better fits than discretized multivariate normal distributions; (2) negative binomial margins fit the data significantly better than Poisson margins; and (3) the dependence structure carries 12% of the mutual information between stimuli and responses.

1 Introduction

Understanding neural coding is at the heart of theoretical neuroscience. Analyzing the spike counts of a population is one way to gain insight into neural coding properties. Even when the same stimulus is presented repeatedly, the responses of the neurons vary, i.e.
from trial to trial the responses of neurons are subject to noise. The noise variations of neighboring neurons are typically correlated (noise correlations). Due to their relevance for neural coding, noise correlations have been the subject of a considerable number of studies (see [1] for a review). However, these studies always assumed Gaussian noise. Thus, correlated spike rates were generally modeled by multivariate normal distributions with a specific covariance matrix that describes all pairwise linear correlations. For long time intervals or high firing rates, the average number of spikes is sufficiently large for the central limit theorem to apply, and thus the normal distribution is a good approximation for the spike count distributions. However, several experimental findings suggest that noise correlations as well as sensory information processing predominantly take place on a shorter time scale, on the order of tens to hundreds of milliseconds [2, 3]. It is therefore questionable whether the normal distribution is still an appropriate approximation and whether the results of studies based on Gaussian noise apply to short time intervals and low firing rates.

Figure 1: (a): Recording of correlated spike trains from two neurons and conversion to spike counts. (b): The distributions of the spike counts of a neuron pair from the data described in Section 4 for 100 ms time bins. Dark squares represent a high number of occurrences of corresponding pairs of spike counts. One can see that the spike counts are correlated since the ratios are high near the diagonal. The distributions of the individual spike counts are plotted below and left of the axes. (c): Density of a fit with a bivariate normal distribution.
(d): Distribution of a fit with negative binomial margins coupled with the Clayton copula. This is due to several major drawbacks of the multivariate normal distribution: (1) Its margins are continuous with a symmetric shape, whereas empirical distributions of real spike counts tend to have a positive skew, i.e. the mass of the distribution is concentrated at the left of its mode. Moreover, the normal distribution allows negative values which are not meaningful for spike counts. Especially for low rates, this can become a major issue, since the probability of negative values will be high. (2) The dependence structure of a multivariate normal distribution is always elliptical, whereas spike counts of short time bins can have a bulb-shaped dependence structure (see Fig. 1b). (3) The multivariate normal distribution does not allow higher order correlations of its elements. Instead, only pairwise correlations can be modeled. It was shown that pairwise interactions are sufficient for retinal ganglion cells and cortex cells in vitro [4]. However, there is evidence that they are insufficient for subsequent cortex areas in vivo [5]. We will show that our data recorded in prefrontal cortex suggest that higher order interactions (which involve more than two neurons) do play an important role in the prefrontal cortex as well. In this paper, we present a method that addresses the above shortcomings of the multivariate normal distribution. We apply copulas [6] to form multivariate distributions with a rich set of dependence structures and discrete marginal distributions, including the Poisson distribution. Copulas were previously applied to model the distribution of continuous first-spike-latencies [7]. Here we apply this concept to spike counts. 2 Copulas We give an informal introduction to copulas and apply the concept to a pair of neurons from our data which are described and fully analyzed in Section 4. Formal details of copulas follow in Section 3.2. 
A copula is a cumulative distribution function that can couple arbitrary marginal distributions. There are many families of copulas, each with a different dependence structure. Some families have an elliptical dependence structure, similar to the multivariate normal distribution. However, it is also possible to use completely different dependence structures which are more appropriate for the data at hand. As an example, consider the modeling of spike count dependencies of two neurons (Fig. 1). Spike trains are recorded from the neurons and transformed to spike counts (Fig. 1a). Counting leads to a bivariate empirical distribution (Fig. 1b). The distribution of the counts depends on the length of the time bin that is used to count the spikes, here 100 ms. In the case considered, the correlation at low counts is higher than at high counts. This is called lower tail dependence. The density of a typical population model based on the multivariate normal (MVN) distribution is shown in Fig. 1c. Here, we did not discretize the distribution since the standard approach to investigate noise correlations also uses the continuous distribution [1]. The mean and covariance matrix of the MVN distribution correspond to the sample mean and the sample covariances of the empirical distribution. Yet, the dependence structure does not reflect the true dependence structure of the counts. But the spike count probabilities for a copula-based distribution (Fig. 1d) correspond well to the empirical distribution in Fig. 1b. The modeling of spike count data with the help of a copula is done in three steps: (1) A marginal distribution, e.g. a Poisson or a negative binomial distribution is chosen, based on the spike count distribution of the individual neurons. (2) The counts are transformed to probabilities using the cumulative distribution function of the marginal distribution. 
(3) The probabilities and thereby the cumulative marginal distributions are coupled with the help of a so-called copula function. As an example, consider the Clayton copula family [6]. For two variables the copula is given by

$C(p_1, p_2; \alpha) = \left( \max\left\{ p_1^{-\alpha} + p_2^{-\alpha} - 1,\; 0 \right\} \right)^{-1/\alpha},$

where $p_i$ denotes the probability of the spike count $X_i$ of the $i$th neuron being lower than or equal to $r_i$ (i.e. $p_i = P(X_i \le r_i)$). Note that there are generalizations to more than two margins (see Section 3.2). The function $C(p_1, p_2; \alpha)$ generates a joint cumulative distribution function by coupling the margins and thereby introduces correlations of second and higher order between the spike count variables. The ratio of the joint probability that corresponds to statistically independent spike counts, $P(X_1 \le r_1, X_2 \le r_2) = p_1 p_2$, to the dependence introduced by the Clayton copula (for $p_1^{-\alpha} + p_2^{-\alpha} - 1 \ge 0$) is given by

$\frac{p_1 p_2}{C(p_1, p_2; \alpha)} = p_1 p_2 \sqrt[\alpha]{p_1^{-\alpha} + p_2^{-\alpha} - 1} = \sqrt[\alpha]{p_1^{\alpha} + p_2^{\alpha} - p_1^{\alpha} p_2^{\alpha}}.$

Suppose that $\alpha$ is positive. Since $p_i \in [0, 1]$, the deviation of the ratio from 1 will be larger for small probabilities. Thus, the copula generates correlations whose strengths depend on the magnitude of the probabilities. The probability mass function (Fig. 1d) can then be calculated from the cumulative probability using the difference scheme as described in Section 3.4. Care must be taken whenever copulas are applied to form discrete distributions: while for continuous distributions typical measures of dependence are determined by the copula function $C$ only, these measures are affected by the shape of the marginal distributions in the discrete case [8].

3 Parametric spike count models and model selection procedure

We will now describe the formal aspects of the multivariate normal distribution on the one hand and copula-based models as the proposed alternative on the other hand, both in terms of their application to spike counts.
3.1 The discretized multivariate normal distribution

The MVN distribution is continuous and needs to be discretized (and rectified) before it can be applied to spike count data (which are discrete and non-negative). The cumulative distribution function (cdf) of the spike count vector $\vec{X}$ is then given by

$F_{\vec{X}}(r_1, \ldots, r_d) = \begin{cases} \Phi_{\mu,\Sigma}(\lfloor r_1 \rfloor, \ldots, \lfloor r_d \rfloor), & \text{if } \forall i \in \{1, \ldots, d\}: r_i \ge 0, \\ 0, & \text{otherwise,} \end{cases}$

where $\lfloor \cdot \rfloor$ denotes the floor operation for the discretization, $\Phi_{\mu,\Sigma}$ denotes the cdf of the MVN distribution with mean $\mu$ and correlation matrix $\Sigma$, and $d$ denotes the dimension of the multivariate distribution and corresponds to the number of neurons that are modeled. Note that $\mu$ is no longer the mean of $\vec{X}$. The mean is shifted to greater values as $\Phi_{\mu,\Sigma}$ is rectified (negative values are cut off). This deviation grows with the dimension $d$. According to the central limit theorem, the distribution of spike counts approaches the MVN distribution only for large counts.

3.2 Copula-based models

Formally, a copula $C$ is a cdf with uniform margins. It can be used to couple marginal cdf's $F_{X_1}, \ldots, F_{X_d}$ to form a joint cdf $F_{\vec{X}}$, such that $F_{\vec{X}}(r_1, \ldots, r_d) = C(F_{X_1}(r_1), \ldots, F_{X_d}(r_d))$ holds [6]. There are many families of copulas with different dependence shapes and different numbers of parameters, e.g. the multivariate Clayton copula family with a scalar parameter $\alpha$:

$C_\alpha(\vec{u}) = \left( \max\left\{ 1 - d + \sum_{i=1}^{d} u_i^{-\alpha},\; 0 \right\} \right)^{-1/\alpha}.$

Thus, for a given realization $\vec{r}$, which can represent the counts of two neurons, we can set $u_i = F_{X_i}(r_i)$ and $F_{\vec{X}}(\vec{r}) = C_\alpha(\vec{u})$, where the $F_{X_i}$ can be arbitrary univariate cdf's. Thereby, we can generate a multivariate distribution with specific margins $F_{X_i}$ and a dependence structure determined by $C$. In the case of discrete marginal distributions, however, typical measures of dependence, such as the linear correlation coefficient or Kendall's $\tau$, are affected by the shape of these margins [8].
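As a concrete sketch (a minimal implementation of our own, not code from the paper), the multivariate Clayton copula above can be written directly; for $\alpha > 0$ it lies above the independence copula, which is the positive lower tail dependence discussed in Section 2:

```python
def clayton_cdf(u, alpha):
    """Multivariate Clayton copula C_alpha(u) = max(1 - d + sum_i u_i^-alpha, 0)^(-1/alpha).

    u     -- sequence of marginal cdf values u_i = F_{X_i}(r_i) in (0, 1]
    alpha -- scalar dependence parameter (alpha > 0 gives lower tail dependence)
    """
    d = len(u)
    s = 1.0 - d + sum(ui ** (-alpha) for ui in u)
    return max(s, 0.0) ** (-1.0 / alpha) if s > 0.0 else 0.0
```

Setting all but one coordinate to 1 recovers that margin (a copula has uniform margins), and for $d = 2$ the expression reduces to the bivariate formula of Section 2.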
Note that $\alpha$ does not only control the strength of pairwise interactions but also the degree of higher order interactions. Another copula family is the Farlie-Gumbel-Morgenstern (FGM) copula [6]. It is special in that it has $2^d - d - 1$ parameters that individually determine the pairwise and higher order interactions. Its cdf takes the form

$C_{\vec{\alpha}}(\vec{u}) = \left( 1 + \sum_{k=2}^{d} \sum_{1 \le j_1 < \cdots < j_k \le d} \alpha_{j_1 j_2 \ldots j_k} \prod_{i=1}^{k} (1 - u_{j_i}) \right) \prod_{i=1}^{d} u_i,$

subject to the constraints

$1 + \sum_{k=2}^{d} \sum_{1 \le j_1 < \cdots < j_k \le d} \alpha_{j_1 j_2 \ldots j_k} \prod_{i=1}^{k} \varepsilon_{j_i} \ge 0, \qquad \varepsilon_1, \varepsilon_2, \ldots, \varepsilon_d \in \{-1, 1\}.$

We only have pairwise interactions if we set all but the first $\binom{d}{2}$ parameters to zero. Hence, we can easily investigate the impact of higher order interactions on the model fit. Due to the constraints on $\vec{\alpha}$, the correlations that the FGM copula can model are small in terms of their absolute value. Nevertheless, this is not an issue for modeling noise dependencies of spike counts of a small number of neurons, since the noise correlations that are found experimentally are typically small (see e.g. [2]).

3.3 Marginal distributions

Copulas allow us to have different marginal distributions. Typically, the Poisson distribution is a good approximation to spike count variations of single neurons [9]. For this distribution the cdf's of the margins take the form

$F_{X_i}(r; \lambda_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k!} e^{-\lambda_i},$

where $\lambda_i$ is the mean spike count of neuron $i$ for a given bin size. We will also use the negative binomial distribution as a generalization of the Poisson distribution:

$F_{X_i}(r; \lambda_i, \upsilon_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k!} \left(1 + \frac{\lambda_i}{\upsilon_i}\right)^{-\upsilon_i} \frac{\Gamma(\upsilon_i + k)}{\Gamma(\upsilon_i)\,(\upsilon_i + \lambda_i)^k},$

where $\Gamma$ is the gamma function. The additional parameter $\upsilon_i$ controls the degree of overdispersion: the smaller the value of $\upsilon_i$, the greater the Fano factor. As $\upsilon_i$ approaches infinity, the negative binomial distribution converges to the Poisson distribution.
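The negative binomial margin can be sketched in a few lines of stdlib Python (the function names are ours, not the paper's); working with log-gamma terms avoids overflow for large counts. In this parameterization the mean is $\lambda_i$ and the Fano factor is $1 + \lambda_i/\upsilon_i$, so small $\upsilon_i$ means strong overdispersion:

```python
import math

def nbinom_pmf(k, lam, v):
    """Negative binomial pmf with mean lam and dispersion v (Fano factor 1 + lam/v).

    Equal, term by term, to the summand of the cdf above:
    (lam^k / k!) * (1 + lam/v)^(-v) * Gamma(v + k) / (Gamma(v) * (v + lam)^k).
    """
    log_p = (math.lgamma(v + k) - math.lgamma(v) - math.lgamma(k + 1)
             + v * math.log(v / (v + lam)) + k * math.log(lam / (v + lam)))
    return math.exp(log_p)

def nbinom_cdf(r, lam, v):
    """Marginal cdf F_{X_i}(r; lam, v): the pmf summed up to floor(r)."""
    if r < 0:
        return 0.0
    return sum(nbinom_pmf(k, lam, v) for k in range(int(math.floor(r)) + 1))
```

A quick numerical check confirms that the pmf sums to one, the mean equals $\lambda$, and the variance equals $\lambda(1 + \lambda/\upsilon)$, i.e. the claimed overdispersion.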
3.4 Inference for copulas and discrete margins

Likelihoods of discrete vectors can be computed by applying the inclusion-exclusion principle of Poincaré and Sylvester. For this purpose we define the sets $A = \{X_1 \le r_1, \ldots, X_d \le r_d\}$ and $A_i = \{X_1 \le r_1, \ldots, X_d \le r_d, X_i \le r_i - 1\}$, $i \in \{1, \ldots, d\}$. The probability of a realization $\vec{r}$ is given by

$P_{\vec{X}}(\vec{r}) = P\!\left(A \setminus \bigcup_{i=1}^{d} A_i\right) = P(A) - \sum_{k=1}^{d} (-1)^{k-1} \sum_{I \subseteq \{1, \ldots, d\},\, |I| = k} P\!\left(\bigcap_{i \in I} A_i\right) = F_{\vec{X}}(\vec{r}) - \sum_{k=1}^{d} (-1)^{k-1} \sum_{\vec{m} \in \{0,1\}^d,\, \sum m_i = k} F_{\vec{X}}(r_1 - m_1, \ldots, r_d - m_d). \quad (1)$

Thus, we can compute the probability mass of a realization $\vec{r}$ using only the cdf of $\vec{X}$. Since copulas separate the margins from the dependence structure, an efficient inference procedure is feasible. Let

$l_i(\theta_i) = \sum_{t=1}^{T} \log P_{X_i}(r_{i,t}; \theta_i), \qquad i = 1, \ldots, d,$

denote the log likelihoods of the univariate margins. Note that we assume independent time bins. Further, let

$l(\vec{\alpha}, \theta_1, \ldots, \theta_d) = \sum_{t=1}^{T} \log P_{\vec{X}}(\vec{r}_t; \vec{\alpha}, \theta_1, \ldots, \theta_d)$

be the log likelihood of the joint distribution, where $\vec{\alpha}$ denotes the parameters of the copula. The so-called inference for margins (IFM) method proceeds in two steps [10]. First, the marginal likelihoods are maximized separately:

$\hat{\theta}_i = \operatorname{argmax}_{\theta_i} \{ l_i(\theta_i) \}.$

Then, the full likelihood is maximized given the estimated margin parameters:

$\hat{\vec{\alpha}} = \operatorname{argmax}_{\vec{\alpha}} \{ l(\vec{\alpha}, \hat{\theta}_1, \ldots, \hat{\theta}_d) \}.$

The estimator is asymptotically efficient and close to the maximum likelihood estimator [10].

3.5 Estimation of mutual information

The mutual information [11] of dependent spike counts $\vec{X}$ is a measure of the information that knowing the neural response $\vec{r}$ provides about the stimulus. It can be written as

$I(\vec{X}; S) = \sum_{s \in M_S} P_S(s) \sum_{\vec{r} \in \mathbb{N}^d} P_{\vec{X}}(\vec{r}|s) \left( \log_2 P_{\vec{X}}(\vec{r}|s) - \log_2 \sum_{s' \in M_S} P_S(s') P_{\vec{X}}(\vec{r}|s') \right),$

where $S$ is the stimulus random variable, $M_S$ is the set of stimuli, and $P_S$ is the probability mass function of the stimuli. The likelihood $P_{\vec{X}}(\vec{r}|s)$ of $\vec{r}$ given $s$ can be calculated using Equation 1. Thereby, $I(\vec{X}; S)$ can be estimated by the Monte Carlo method.
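The difference scheme of Equation 1 reduces to an alternating sum of the joint cdf over the corners of a unit hypercube. A minimal sketch (our own naming, not the authors' code), checked here against independent Poisson margins, looks as follows:

```python
import math
from itertools import product

def pmf_from_cdf(cdf, r):
    """Probability mass at the integer vector r, computed from the joint cdf alone.

    Equation 1 is equivalent to the alternating sum
    P(r) = sum_{m in {0,1}^d} (-1)^{sum(m)} F(r_1 - m_1, ..., r_d - m_d).
    """
    total = 0.0
    for m in product((0, 1), repeat=len(r)):
        total += (-1) ** sum(m) * cdf([ri - mi for ri, mi in zip(r, m)])
    return total

def poisson_cdf(r, lam):
    """Univariate Poisson margin, used here only for a sanity check."""
    if r < 0:
        return 0.0
    return math.exp(-lam) * sum(lam ** k / math.factorial(k) for k in range(int(r) + 1))
```

With the product of the margins as the joint cdf (the independence case) the recovered mass factorizes into the marginal Poisson probabilities; a copula-based joint cdf such as the Clayton family can be plugged in the same way.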
4 Application to multi-electrode recordings

We now apply our parametric count models to the analysis of spike data, which we recorded from the prefrontal cortex of an awake behaving macaque using a 4 × 4 tetrode array.

Experimental setup. Activity was recorded while the monkey performed a visual match-to-sample task. The task involved matching of 20 visual stimuli (fruits and vegetables) that were presented for approximately 650 ms each. After an initial presentation ("sample"), a test stimulus ("test") was presented with a delay of 3 seconds, and the monkey had to decide by differential button press whether both stimuli were the same or not. Correct responses were rewarded. Match and non-match trials were randomly presented with equal probability. We recorded from the lateral prefrontal cortex in a 2 × 2 mm² area around the ventral bank of the principal sulcus. Recordings were performed simultaneously from up to 16 adjacent sites with an array of individually movable fiber micro-tetrodes (manufactured by Thomas Recording). Data were sampled at 32 kHz and bandpass filtered between 0.5 kHz and 10 kHz. Recording positions of individual electrodes were chosen to maximize the recorded activity and the signal quality. The recorded data were processed by a PCA-based spike sorting method. The method provides automatic cluster cutting, which was manually corrected by subsequent cluster merging if indicated by quantitative criteria such as the ISI histograms or amplitude stability.

Data set. To select neurons with stimulus-specific responses, we calculated spike counts from their spike trains. Only neurons that shifted their mean firing rate, averaged over the time interval of the sample stimulus presentation, by at least 6.5 Hz compared to the pre-stimulus interval were accepted for the dependence analysis. A total of six neurons fulfilled this criterion (each recorded from a different tetrode).
With this criterion we can assume that the selected neurons are indeed related to processing of the stimulus information. Spike trains were separated into 80 groups, one for each of the 20 different stimuli and the four trial intervals: pre-stimulus, sample stimulus presentation, delay, and test stimulus presentation. Afterwards, the trains were binned into successive 100 ms intervals and converted to six-dimensional spike counts for each bin. Due to the different interval lengths, total sample sizes of the groups were between 224 and 1793 count vectors. A representative example of the empirical distribution of a pair of these counts from the stimulus presentation interval is presented in Fig. 1b.

Model fitting. The discretized MVN distribution as well as several copula-based distributions were fitted to the data. For each of the 80 groups we randomly selected 50 count vectors (test set) to obtain an unbiased estimate of the likelihoods. We trained the model on the remainder of each group (training set). A commonly applied criterion for model selection is maximum entropy [4]. This criterion selects a certain model with minimal complexity subject to given constraints. It thereby performs regularization, which is supposed to prevent overfitting. Copulas, on the other hand, typically increase the complexity of the model and thus decrease the entropy. However, our evaluation takes place on a separate test set and hence takes overfitting into account. Parameter inference for the discretized MVN distribution (see Section 3.1) was performed by computing the sample mean and sample covariance matrix of the spike counts, which is the standard procedure for analyzing noise correlations [1]. Note that this estimator is biased, since it is not the maximum likelihood solution for the discretized distribution. The following copula families were used to construct noise distributions of the spike counts.
The Clayton (see Section 3.2), Gumbel-Hougaard, Frank, and Ali-Mikhail-Haq copula families served as examples of families with one parameter [6], together with the FGM copula with a variable number of parameters (see Section 3.2). We applied the IFM method for copula inference (see Section 3.4). The sample mean is the maximum likelihood estimator of $\lambda_i$ for both the Poisson and the negative binomial margins. The maximum likelihood estimates for $\upsilon_i$ were computed iteratively by Newton's method. Depending on whether the copula parameters were constrained, either the Nelder-Mead simplex method for unconstrained nonlinear optimization or the line-search algorithm for constrained nonlinear optimization was applied to estimate the copula parameters.

Figure 2: Evaluation of the IFM estimates on the test set and estimated mutual information. (a): Log likelihoods for the discrete multivariate normal distribution, the best fitting copula-based model with Poisson margins, and the best fitting copula-based model with negative binomial margins averaged over the 20 different stimuli. (b): Difference between the log likelihood of the model with independent counts and negative binomial margins ("ind. model") and the log likelihoods of different copula-based models with negative binomial margins averaged over the 20 different stimuli. (c): Mutual information between stimuli and responses for the Clayton-based model with negative binomial margins. (d): Normalized difference between the mutual information for the Clayton-based model with negative binomial margins and the corresponding "ind. model".

Results for different distributions. Fig. 2 shows the evaluation of the IFM estimates on the test set. The likelihood for the copula-based models is significantly larger than for the discrete MVN model ($p = 2 \cdot 10^{-14}$, paired-sample Student's t test over stimuli). Moreover, the likelihood for the negative binomial margins is even larger than that for the Poisson margins ($p = 0.0003$).
We estimated the impact of neglecting higher order interactions on the fit by using different numbers of parameters for the FGM copula. For the 2nd order model we set all but the first $\binom{d}{2}$ parameters to zero, therefore leaving only parameters for pairwise interactions. In contrast, for the 3rd order model we set all but the first $\binom{d}{2} + \binom{d}{3}$ parameters to zero. We computed the difference between the likelihood of the model with dependence and the corresponding model with independence between its counts. Fig. 2b shows this difference for several copulas and negative binomial margins evaluated on the test set. The model based on the Clayton copula family provides the best fit. The fit is significantly better than for the second best fitting copula family ($p = 0.0014$). In spite of having more parameters, the FGM copulas perform worse. However, the FGM model with third order interactions fits the data significantly better than the model that includes only pairwise interactions ($p = 0.0437$).

Copula coding analysis. Fig. 2c shows the Monte Carlo estimate of the mutual information based on the Clayton-based model with negative binomial margins and IFM parameters determined on the training set for each of the intervals. For the test stimulus interval, the estimation was performed twice: for the previously presented sample stimulus and for the test stimulus. The Monte Carlo method was terminated when the standard error was below $5 \cdot 10^{-4}$. The mutual information is higher during the stimulus presentation intervals than during the delay interval. We estimated the information increase due to the dependence structure by computing the mutual information for the Clayton-based model with negative binomial margins and subtracting the (smaller) mutual information for the corresponding distribution with independent elements. Fig. 2d shows this information estimate $\Delta I_{\text{shuffled}}$, normalized to the mutual information for the Clayton-based model.
The dependence structure carries up to 12% of the mutual information. During the test stimulus interval it carries almost twice as much information about the test stimulus as about the previously presented sample stimulus. Another important measure related to stimulus decoding, which is currently under debate, is $\Delta I/I$ [12]. The measure provides an upper bound on the information loss for stimulus decoding based on the distribution that assumes independence. We find that one loses at most 19.82% of the information for the Clayton-based model.

5 Conclusion

We developed a framework for analyzing the noise dependence of spike counts. Applying this to our data from the macaque prefrontal cortex, we found that: (1) Gaussian noise is inadequate to model spike count data for short time intervals; (2) negative binomial distributed margins describe the individual spike counts better than Poisson distributed margins; and (3) higher order interactions are present and play a substantial role in terms of model fit and information content. The substantial role of higher order interactions poses a challenge for theoreticians as well as experimentalists. The complexity of taking all higher order interactions into account grows exponentially with the number of neurons, known as the curse of dimensionality. Based on our findings, we conclude that one needs to deal with this problem to analyze short-term coding in higher cortical areas. In summary, one can say that the copula-based approach provides a convenient way to study spike count dependencies for small population sizes (< 20). At present, the approach is computationally too demanding for higher numbers of neurons. Approximate inference methods might provide a solution to the computational problem and seem worthwhile to investigate. Directions for future research are the exploration of other copula families and the validation of population coding principles that were obtained under the assumption of Gaussian noise.

Acknowledgments.
This work was supported by BMBF grant 01GQ0410.

References

[1] B. B. Averbeck, P. E. Latham, and A. Pouget. Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7:358–366, 2006.
[2] W. Bair, E. Zohary, and W. T. Newsome. Correlated firing in macaque visual area MT: time scales and relationship to behavior. Journal of Neuroscience, 21(5):1676–1697, 2001.
[3] A. Kohn and M. A. Smith. Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25(14):3661–3673, 2005.
[4] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007–1012, 2006.
[5] M. M. Michel and R. A. Jacobs. The costs of ignoring high-order correlations in populations of model neurons. Neural Computation, 18:660–682, 2006.
[6] R. B. Nelsen. An Introduction to Copulas. Springer, New York, second edition, 2006.
[7] R. L. Jenison and R. A. Reale. The shape of neural dependence. Neural Computation, 16:665–672, 2004.
[8] C. Genest and J. Neslehova. A primer on discrete copulas. ASTIN Bulletin, 37:475–515, 2007.
[9] D. J. Tolhurst, J. A. Movshon, and A. F. Dean. The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23:775–785, 1982.
[10] H. Joe and J. J. Xu. The estimation method of inference functions for margins for multivariate models. Technical Report 166, Department of Statistics, University of British Columbia, 1996.
[11] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication. University of Illinois Press, Urbana, 1949.
[12] P. E. Latham and S. Nirenberg. Synergy, redundancy, and independence in population codes, revisited. Journal of Neuroscience, 25(21):5195–5206, 2005.
Predictive Indexing for Fast Search

Sharad Goel, Yahoo! Research, New York, NY 10018, goel@yahoo-inc.com
John Langford, Yahoo! Research, New York, NY 10018, jl@yahoo-inc.com
Alex Strehl, Yahoo! Research, New York, NY 10018, strehl@yahoo-inc.com

Abstract

We tackle the computational problem of query-conditioned search. Given a machine-learned scoring rule and a query distribution, we build a predictive index by precomputing lists of potential results sorted based on an expected score of the result over future queries. The predictive index datastructure supports an anytime algorithm for approximate retrieval of the top elements. The general approach is applicable to webpage ranking, internet advertisement, and approximate nearest neighbor search. It is particularly effective in settings where standard techniques (e.g., inverted indices) are intractable. We experimentally find substantial improvement over existing methods for internet advertisement and approximate nearest neighbors.

1 Introduction

The Problem. The objective of web search is to quickly return the set of most relevant web pages given a particular query string. Accomplishing this task for a fixed query involves both determining the relevance of potential pages and then searching over the myriad set of all pages for the most relevant ones. Here we consider only the second problem. More formally, let $Q \subseteq \mathbb{R}^n$ be an input space, $W \subseteq \mathbb{R}^m$ a finite output space of size $N$, and $f : Q \times W \to \mathbb{R}$ a known scoring function. Given an input (search query) $q \in Q$, the goal is to find, or closely approximate, the top-$k$ output objects (web pages) $p_1, \ldots, p_k$ in $W$ (i.e., the top $k$ objects as ranked by $f(q, \cdot)$). The extreme speed constraint, often 100 ms or less, and the large number of web pages ($N \approx 10^{10}$) make web search a computationally challenging problem. Even with perfect 1000-way parallelization on modern machines, there is far too little time to directly evaluate against every page when a particular query is submitted.
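To make the intractability concrete, the naive baseline is to score every page against the query. A minimal sketch (the toy word-overlap scoring rule below is our own illustration, not the learned rule the paper uses):

```python
import heapq

def brute_force_top_k(query, pages, f, k):
    """Exhaustive top-k retrieval: evaluate f(query, p) for every page p.
    This is exactly the O(N) scan that is infeasible at N ~ 10^10."""
    return heapq.nlargest(k, pages, key=lambda p: f(query, p))

# Toy word-overlap scoring rule for illustration only.
overlap = lambda q, p: len(set(q.split()) & set(p.split()))
```

Everything that follows in the paper is about avoiding this full scan while approximately preserving its output.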
This observation limits the applicability of machine-learning methods for building ranking functions. The question addressed here is: "Can we quickly return the highest scoring pages as ranked by complex scoring rules typical of learning algorithms?"

Predictive Indexing. We describe a method for rapidly retrieving the top elements over a large set as determined by general scoring functions. The standard method for mitigating the computational difficulties of search is to pre-process the data so that far less computation is necessary at runtime. Taking the empirical probability distribution of queries into account, we pre-compute collections of web pages that have a large expected score conditioned on the query falling into particular sets of related queries $\{Q_i\}$. For example, we may pre-compute and store the list of web pages that have the highest average score when the query contains the phrase "machine learning". To yield a practical algorithm, these sets should form meaningful groups of pages with respect to the scoring function and query distribution. At runtime, we then optimize only over those collections of top-scoring web pages for sets $Q_i$ containing the submitted query. Our main contribution is optimizing the search index with respect to the query distribution. The empirical evidence presented shows that predictive indexing is an effective technique, making general machine learning style prediction methods viable for quickly ranking over large numbers of objects. The general methodology applies to other optimization problems as well, including approximate nearest neighbor search. In the remainder of Section 1 we describe existing solutions to large-scale search and their applicability to general scoring functions. Section 2 describes the predictive indexing algorithm and covers an example and lemma suggesting that predictive indexing has significant advantages over existing techniques.
We present empirical evaluation of the method in Section 3, using both proprietary web advertising data and public data for nearest neighbor search.

1.1 Feature Representation

One concrete way to map web search into the general predictive index framework is to represent both queries and pages as sparse binary feature vectors in a high-dimensional Euclidean space. Specifically, we associate each word with a coordinate: a query (page) has a value of 1 for that coordinate if it contains the word, and a value of 0 otherwise. We call this the word-based feature representation, because each query and page can be summarized by the list of features (i.e., words) that it contains. The general predictive framework supports many other possible representations, including those that incorporate the difference between words in the title and words in the body of the web page, the number of times a word occurs, or the IP address of the user entering the query.

1.2 Related Work

Given the substantial importance of large-scale search, a variety of techniques have been developed to address the rapid ranking problem. Past work that has referenced the query distribution includes (Cheng et al., 2006; Chierichetti et al., 2008). Here we describe two commonly applied methods related to the predictive index approach.

Fagin's Threshold Algorithm. Fagin's threshold algorithm (Fagin et al., 2003) supports the top-$k$ problem for linear scoring functions of the form $f(q, p) = \sum_{i=1}^{n} q_i g_i(p)$, where $q_i \in \{0, 1\}$ is the $i$th coordinate of the query $q$, and $g_i : W \to \mathbb{R}$ are partial scores for pages as determined by the $i$th feature.¹ For each query feature $i$, construct an ordered list $L_i$ containing every web page, sorted in descending order by their partial scores $g_i(p)$. We refer to this as the projective order, since it is attained by projecting the scoring rule onto individual coordinates. Given a query $q$, we evaluate web pages in the lists $L_i$ that correspond to features of $q$.
The algorithm maintains two statistics, upper and lower bounds on the score of the top-$k$th page, halting when these bounds cross. The lower bound is the score of the $k$th best page seen so far; the upper bound is the sum of the partial scores (i.e., $g_i(p)$) for the next-to-be-scored page in each list. Since the lists are ordered by the partial scores, the upper threshold does in fact bound the score of any page yet to be seen. The threshold algorithm is particularly effective when a query contains a small number of features, facilitating fast convergence of the upper bound. In our experiments, we find that the halting condition is rarely satisfied within the imposed computational restrictions. One can, of course, simply halt the algorithm when it has expended the computational budget (Fagin, 2002), which we refer to as the Halted Threshold Algorithm.

Inverted Indices. An inverted index is a datastructure that maps every page feature $x$ to a list of pages $p$ that contain $x$. When a new query arrives, a subset of page features relevant to the query is first determined. For instance, when the query contains "dog", the page feature set might be {"dog", "canine", "collar", ...}. Note that a distinction is made between query features and page features, and in particular, the relevant page features may include many more words than the query itself. Once a set of page features is determined, their respective lists (i.e., inverted indices) are searched, and from them the final list of output pages is chosen. One method for searching over these lists is to execute Fagin's threshold algorithm. Other methods, such as the "Weighted-And" algorithm (Broder et al., 2003), use one global order for pages in the lists and walk down the lists synchronously to compute page scores. See (Zobel & Moffat, 2006) for an overview of inverted indices applied to web search. Standard approaches based on inverted indices suffer from a shortcoming.
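The (halted) threshold algorithm described above can be sketched as follows; the data layout (per-feature lists of (page, partial score) pairs in projective order) and all names are our own illustrative choices, not code from the paper:

```python
import heapq

def threshold_top_k(query_features, partial_lists, k):
    """Sketch of Fagin's threshold algorithm for f(q, p) = sum_i g_i(p) over the
    features i of q. partial_lists[i] is a list of (page, g_i(page)) pairs in
    descending order of the partial score (the projective order)."""
    lists = [partial_lists[i] for i in query_features if i in partial_lists]
    if not lists:
        return []
    lookups = [dict(L) for L in lists]                 # page -> g_i(page)
    score = lambda p: sum(d.get(p, 0.0) for d in lookups)
    top, seen, depth = [], set(), 0                    # top: min-heap of (score, page)
    while any(depth < len(L) for L in lists):
        threshold = 0.0                                # upper bound for unseen pages
        for L in lists:
            if depth < len(L):
                page, g = L[depth]
                threshold += g
                if page not in seen:
                    seen.add(page)
                    entry = (score(page), page)
                    if len(top) < k:
                        heapq.heappush(top, entry)
                    elif entry > top[0]:
                        heapq.heapreplace(top, entry)
        depth += 1
        # Halt when the kth-best score seen so far meets the upper bound.
        if len(top) == k and top[0][0] >= threshold:
            break
    return sorted(top, reverse=True)
```

Capping `depth` at a fixed budget instead of waiting for the bounds to cross would turn this into the Halted Threshold Algorithm used as a baseline in Section 3.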
The resulting algorithms are efficient only when it is sufficient to search over a relatively small set of inverted indices for each query.² They require, for each query $q$, that there exists a small set $X_q$ of page features such that the score of any page against $q$ depends only on its intersection with $X_q$. In other words, the scoring rule must be extremely sparse, with most words or features in the page having zero contribution to the score for $q$. In Section 3.1, we consider a machine-learned scoring rule, derived from internet advertising data, with the property that almost all page features have substantial influence on the score for every query, making any straightforward approach based on inverted indices intractable. Furthermore, algorithms that use inverted indices do not typically optimize the datastructure against the query distribution, and our experiments suggest that doing so may be beneficial.

¹ More general monotone scoring functions (e.g., coordinate-wise product and max) are in fact supported; for clarity, however, we restrict to the linear case.

2 An Algorithm for Rapid Approximate Ranking

Suppose we are provided with a categorization of possible queries into related, potentially overlapping, sets. For example, these sets might be defined as "queries containing the word 'France'," or "queries with the phrase 'car rental'." For each query set, the associated predictive index is an ordered list of web pages sorted by their expected score for random queries drawn from that set. In particular, we expect web pages at the top of the 'France' list to be good, on average, for queries containing the word 'France.' In contrast to an inverted index, the pages in the 'France' list need not themselves contain the word 'France'. To retrieve results for a particular query (e.g., "France car rental"), we optimize only over web pages in the relevant, pre-computed lists.
Note that the predictive index is built on top of an already existing categorization of queries, a critical, and potentially difficult, initial step. In the applications we consider, however, we find that predictive indexing works well even when applied to naively defined query sets. Furthermore, in our application to approximate nearest neighbor search, we found predictive indexing to be robust to cover sets generated via random projections whose size and shape were varied across experiments. We represent queries and web pages as points in, respectively, Q ⊆ R^n and W ⊆ R^m. This setting is general, but for the experimental application we consider n, m ≈ 10^6, with any given page or query having about 10^2 non-zero entries (see Section 3.1 for details). Thus, pages and points are typically sparse vectors in very high dimensional spaces. A coordinate may indicate, for example, whether a particular word is present in the page/query, or more generally, the number of times that word appears. Given a scoring function f : Q × W → R and a query q, we attempt to rapidly find the top-k pages p1, ..., pk. Typically, we find an approximate solution, a set of pages p̂1, ..., p̂k that are among the top l for l ≈ k. We assume queries are generated from a probability distribution D that may be sampled.

2.1 Predictive Indexing for General Scoring Functions

Consider a finite collection Q of sets Qi ⊆ Q that cover the query space (i.e., Q ⊆ ∪i Qi). For each Qi, define the conditional probability distribution Di over queries in Qi by Di(·) = D(·|Qi), and define fi : W → R as fi(p) = E_{q∼Di}[f(q, p)]. The function fi(p) is the expected score of the web page p for the (related) queries in Qi. The hope is that any page p has approximately the same score for any query q ∈ Qi. If, for example, Qi is the set of queries that contain the word "dog", we may expect every query in Qi to score high against pages about dogs and to score low against those pages not about dogs.
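A sampling-based sketch of this construction (function names are illustrative): estimate each fi(p) by averaging f over sampled queries, keep one list per cover set sorted by the estimate, and at query time walk down the lists for the cover sets containing q. This mirrors Algorithms 1 and 2 of the paper.

```python
from collections import defaultdict

def build_predictive_index(objects, f, sample_query, cover_of, t=1000):
    """Estimate fi(p) = E[f(q, p) | q in Qi] from t sampled queries, then
    sort each cover set's list by the estimate (descending)."""
    acc = defaultdict(lambda: defaultdict(float))
    for _ in range(t):
        q = sample_query()
        for i in cover_of(q):          # cover sets containing this query
            for p in objects:
                acc[i][p] += f(q, p)
    return {i: sorted(objects, key=lambda p: -scores[p])
            for i, scores in acc.items()}

def find_top(q, k, index, f, cover_of, budget):
    """Walk down the lists for the cover sets containing q, fully scoring
    objects with f until the evaluation budget is spent."""
    seen, scored, depth, evals = set(), [], 0, 0
    while evals < budget:
        advanced = False
        for i in cover_of(q):
            lst = index.get(i, [])
            if depth < len(lst):
                advanced = True
                p = lst[depth]
                if p not in seen:
                    seen.add(p)
                    evals += 1
                    scored.append((f(q, p), p))
        if not advanced:
            break
        depth += 1
    return sorted(scored, reverse=True)[:k]
```

The single-cover-set special case of `build_predictive_index` reduces to one global, query-independent ordering, as noted in the text.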
For each set of queries Qi we pre-compute a sorted list Li of pages pi1, pi2, ..., piN ordered in descending order of fi(p). At runtime, given a query q, we identify the query sets Qi containing q, and compute the scoring function f only on the restricted set of pages at the beginning of their associated lists Li. We search down these lists for as long as the computational budget allows. In general, it is difficult to compute exactly the conditional expected scores of pages fi(p). One can, however, approximate these scores by sampling from the query distribution D. Algorithm 1 outlines the construction of the sampling-based predictive indexing datastructure. Algorithm 2 shows how the method operates at run time. Note that in the special case where we cover Q with a single set, we end up with a global ordering of web pages, independent of the query, which is optimized for the underlying query distribution. While this global ordering may not be effective in isolation, it could perhaps be used to order pages in traditional inverted indices.

[2] The size of these sets is typically on the order of 100 or smaller.

Algorithm 1 Construct-Predictive-Index(Cover Q, Dataset S)
  Lj[s] = 0 for all objects s and query sets Qj
  for t random queries q ∼ D do
    for all objects s in the data set do
      for all query sets Qj containing q do
        Lj[s] ← Lj[s] + f(q, s)
      end for
    end for
  end for
  for all lists Lj do
    sort Lj
  end for
  return {L}

Algorithm 2 Find-Top(query q, count k)
  i = 0
  top-k list V = ∅
  while time remains do
    for each query set Qj containing q do
      s ← Lj[i]
      if f(q, s) > kth best seen so far then
        insert s into ordered top-k list V
      end if
    end for
    i ← i + 1
  end while
  return V

2.2 Discussion

We present an elementary example to help develop intuition for why we can sometimes expect predictive indexing to improve upon projective datastructures such as those used in Fagin's threshold algorithm.
Suppose we have: two query features t1 and t2; three possible queries q1 = {t1}, q2 = {t2} and q3 = {t1, t2}; and three web pages p1, p2 and p3. Further suppose we have a simple linear scoring function defined by

f(q, p1) = I_{t1∈q} − I_{t2∈q}
f(q, p2) = I_{t2∈q} − I_{t1∈q}
f(q, p3) = 0.5 · I_{t2∈q} + 0.5 · I_{t1∈q}

where I is the indicator function. That is, pi is the best match for query qi, but p3 does not score highly for either query feature alone. Thus, an ordered, projective datastructure would have

t1 → {p1, p3, p2}
t2 → {p2, p3, p1}.

Suppose, however, that we typically only see query q3. In this case, if we know t1 is in the query, we infer that t2 is likely to be in the query (and vice versa), and construct the predictive index

t1 → {p3, p1, p2}
t2 → {p3, p2, p1}.

On the high probability event, namely query q3, we see that the predictive index outperforms the projective, query-independent index. We expect predictive indices to generally improve on datastructures that are agnostic to the query distribution. In the simple case of a single cover set (i.e., a global web page ordering), and when we wish to optimize the probability of returning the highest-scoring object, Lemma 2.1 shows that a predictive ordering is the best ordering relative to any particular query distribution.

Lemma 2.1. Suppose we have a set of points S, a query distribution D, and a function f that scores queries against points in S. Further assume that for each query q, there is a unique highest scoring point Hq. For s ∈ S, let h(s) = Pr_{q∼D}(s = Hq), and let s1, s2, ..., sN be ordered so that h(s1) ≥ h(s2) ≥ ... ≥ h(sN). For any fixed k,

Pr_{q∼D}(Hq ∈ {s1, ..., sk}) = max over permutations π of Pr_{q∼D}(Hq ∈ {sπ(1), ..., sπ(k)}).

Proof. For any ordering of points sπ(1), ..., sπ(k), the probability of the highest scoring point appearing in the top k entries equals Σ_{j=1}^k h(sπ(j)). This sum is clearly maximized by ordering the list according to h(·).
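The orderings in this example can be checked numerically; the following small script is our own encoding of the example, with the indicator function realized as Python booleans:

```python
def f(q, page):
    """The example's linear scoring rule over query features t1, t2."""
    t1, t2 = 't1' in q, 't2' in q
    return {'p1': t1 - t2, 'p2': t2 - t1, 'p3': 0.5 * t1 + 0.5 * t2}[page]

pages = ['p1', 'p2', 'p3']

# Projective order for feature t1: rank pages by the partial score that
# t1 alone contributes (+1 for p1, -1 for p2, +0.5 for p3).
partial_t1 = {p: f({'t1'}, p) for p in pages}
projective_t1 = sorted(pages, key=lambda p: -partial_t1[p])

# Predictive order when queries are almost always q3 = {t1, t2}:
q3 = {'t1', 't2'}
predictive_t1 = sorted(pages, key=lambda p: -f(q3, p))
```

Running this reproduces the lists in the text: the projective order for t1 is p1, p3, p2, while the predictive order puts p3 first.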
3 Empirical Evaluation

We evaluate predictive indexing for two applications: Internet advertising and approximate nearest neighbor search.

3.1 Internet Advertising

We present results on Internet advertising, a problem closely related to web search. We have obtained proprietary data, both testing and training, from an online advertising company. The data are comprised of logs of events, where each event represents a visit by a user to a particular web page p, from a set of web pages Q ⊆ R^n. From a large set of advertisements W ⊆ R^m, the commercial system chooses a smaller, ordered set of ads to display on the page (generally around 4). The set of ads seen and clicked by users is logged. Note that the role played by web pages has switched, from result to query. The total number of ads in the data set is |W| ≈ 6.5 × 10^5. Each ad contains, on average, 30 ad features, and a total of m ≈ 10^6 ad features are observed. The training data consist of 5 million events (web page × ad displays). The total number of distinct web pages is 5 × 10^5. Each page consists of approximately 50 page features, and a total of n ≈ 9 × 10^5 page features are observed. We used a sparse feature representation (see Section 1.1) and trained a linear scoring rule f of the form f(p, a) = Σ_{i,j} w_{i,j} p_i a_j to approximately rank the ads by their probability of click. Here, the w_{i,j} are the learned weights (parameters) of the linear model. The search algorithms we compare were given the scoring rule f, the training pages, and the ads W for the necessary pre-computations. They were then evaluated by their serving of k = 10 ads, under a time constraint, for each page in the test set. There was a clear separation of test and training data. We measured computation time in terms of the number of full evaluations by the algorithm (i.e., the number of ads scored against a given page).
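With sparse vectors stored as dictionaries and the weights keyed by feature pairs, one "full evaluation" of an ad against a page can be sketched as follows (the representation is our assumption, not the paper's implementation):

```python
def sparse_score(w, page, ad):
    """f(p, a) = sum over i, j of w[i, j] * p_i * a_j.

    w: dict (page_feature, ad_feature) -> weight; page, ad: sparse
    dicts feature -> value. Cost is |page| x |ad| pairs per evaluation
    (about 50 x 30 for the data described above).
    """
    return sum(w.get((i, j), 0.0) * pv * av
               for i, pv in page.items() for j, av in ad.items())
```

The per-pair cost is what makes the number of full evaluations a natural unit for the time budget.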
Thus, the true test of an algorithm was to quickly select the most promising T ads to fully score against the page, where T ∈ {100, 200, 300, 400, 500} was externally imposed and varied over the experiments. These numbers were chosen to be in line with real-world computational constraints. We tested four methods: the halted threshold algorithm (TA), as described in Section 1.2; two variants of predictive indexing (PI-AVG and PI-DCG); and a fourth method, called best global ordering (BO), which is a degenerate form of PI discussed in Section 2.1. An inverted index approach is prohibitively expensive, since almost all ad features have substantial influence on the score for every web page (see Section 1.2). PI-AVG and PI-DCG require a covering of the web page space. We used the natural covering suggested by the binary features: each page feature i corresponds to a cover set consisting of precisely those pages p that contain i. The resulting datastructure is therefore similar to that maintained by the TA algorithm: lists, for each page feature, containing all the ads. However, while TA orders ads by the partial score Σ_j w_{i,j} p_i a_j for each fixed page feature i, the predictive methods order by expected score. PI-AVG sorts ad lists by the expected score of f, E_{p∼Di}[f(p, a)] = E_{p∼D}[f(p, a) | i ∈ p], conditioned on the page containing feature i. PI-DCG and BO optimize the expected value of a modified scoring rule, DCG_f(p, a) = I_{r(p,a)≤16} / log2(r(p, a) + 1), where r is the rank function and I is the indicator function. Here, r(p, a) = j indicates that ad a has rank j according to f(p, a) over all ads in W. BO stores a single list of all ads, sorted by expected DCG_f(p, a), while PI-DCG stores a list for each page feature i sorted by E_{p∼Di}[DCG_f(p, a)]. We chose this measure because:

1. Compared with using the average score of f, we empirically observe that expected DCG_f greatly improves the performance of BO on these data.
2. It is related to "discounted cumulative gain", a common measure for evaluating search results in the information retrieval literature (Järvelin & Kekäläinen, 2002).
3. Expected DCG_f is zero for many ads, enabling significant compression of the predictive index.
4. Lemma 2.1 suggests ordering by the probability that an ad is in the top 10; the DCG_f score is a softer version of this top-10 indicator.

All three predictive methods were trained by sampling from the training set, as described in Section 2.1. Figure 1 plots the results of testing the four algorithms on the web advertising data. Each point in the figure corresponds to one experiment, which consisted of executing each algorithm on 1000 test pages. Along the x-axis we vary the time constraint imposed on the algorithm. The y-axis plots the frequency, over the test pages, with which the algorithm succeeded in serving the top scoring ad for position 1 (Figure 1(a)) and for position 10 (Figure 1(b)). Thus, vertical slices through each plot show the difference in performance between the algorithms when they are given the same amount of serving time per page. The probabilities were computed by off-line scoring of all 6.5 × 10^5 ads for each test page and computing the true top-10 ads. Serving correctly for position 10 is more difficult than for position 1, because it also requires correctly serving ads for positions 1 through 9. We see that all three methods of predictive indexing are superior to Fagin's halted threshold algorithm. In addition, the use of a richer covering, for PI-DCG and PI-AVG, provides a large boost in performance. These latter two predictive indexing methods attain relatively high accuracy even when fully evaluating only 0.05% of the potential results.
[Figure 1, panels (a) and (b): probability of exact retrieval of the 1st and 10th result, respectively, versus the number of full evaluations (100–500), with curves for PI-AVG, PI-DCG, Fixed Ordering, and Halted TA.]

Figure 1: Comparison of the first and tenth results returned from the four serving algorithms on the web advertisement dataset.

Our implementation of the predictive index, and also the halted threshold algorithm, required about 50 ms per display event when 500 ad evaluations are allowed. The RAM use for the predictive index is also reasonable, requiring about a factor of 2 more RAM than the ads themselves.

3.2 Approximate Nearest Neighbor Search

A special case application of predictive indexing is approximate nearest neighbor search. Given a set of points W in n-dimensional Euclidean space, and a query point x in that same space, the nearest neighbor problem is to quickly return the top-k neighbors of x. This problem is of considerable interest for a variety of applications, including data compression, information retrieval, and pattern recognition. In the predictive indexing framework, the nearest neighbor problem corresponds to minimizing a scoring function, f(x, y) = ||x − y||_2, defined by Euclidean distance. We assume query points are generated from a distribution D that can be sampled. To start, we define a covering Q of the input space R^n, which we borrow from locality-sensitive hashing (LSH) (Gionis et al., 1999; Datar et al., 2004), a commonly suggested scheme for the approximate nearest neighbor problem. Fix positive integer parameters α, β. First, we form α random partitions of the input space. Geometrically, each partition splits the n-dimensional space on β random hyperplanes.
Formally, for all 1 ≤ i ≤ α and 1 ≤ j ≤ β, generate a random unit-norm n-vector Y^{ij} = (Y^{ij}_1, ..., Y^{ij}_n) ∈ R^n from the Gaussian (normal) distribution. For fixed i ∈ {1, ..., α} and subset J ⊆ {1, ..., β}, define the cover set Q_{i,J} = {x ∈ R^n : x · Y^{ij} ≥ 0 if and only if j ∈ J}. Note that for fixed i, the collection {Q_{i,J} | J ⊆ {1, ..., β}} partitions the space by random planes. Given a query point x, consider the union U_x = ∪ {Q_{i,J} ∈ Q : x ∈ Q_{i,J}} of all cover sets containing x. Standard LSH approaches to the nearest neighbor problem work by scoring points in the set Q_x = W ∩ U_x. That is, LSH considers only those points in W that are covered by at least one of the same α sets as x. Predictive indexing, in contrast, maps each cover set Q_{i,J} to an ordered list of points sorted by their probability of being among the 10 nearest points to points in Q_{i,J}. That is, the lists are sorted by h_{Q_{i,J}}(p) = Pr_{q∼D|Q_{i,J}}(p is one of the nearest 10 points to q). For the query x, we then consider those points in W with large probability h_{Q_{i,J}} for at least one of the sets Q_{i,J} that cover x. We compare LSH and predictive indexing over four data sets: (1) MNIST, 60,000 training and 10,000 test points in 784 dimensions; (2) Corel, 37,749 points in 32 dimensions, split randomly into 95% training and 5% test subsets; (3) Pendigits, 7,494 training and 3,498 test points in 17 dimensions; and (4) Optdigits, 3,823 training and 1,797 test points in 65 dimensions. The MNIST data is available at http://yann.lecun.com/exdb/mnist/ and the remaining three data sets are available at the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/). Random projections were generated for each experiment, inducing a covering of the space that was provided to both LSH and predictive indexing. The predictive index was generated by sampling over the training set, as discussed in Section 2.1.
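The random-hyperplane covering just described can be sketched as follows (an illustrative, unoptimized version; pure-Python dot products stand in for real linear algebra). Each of the α partitions hashes a point to a bitmask encoding the set J of hyperplanes with non-negative projection:

```python
import random

def make_projections(alpha, beta, dim, seed=0):
    """Generate alpha sets of beta random hyperplane normals with
    Gaussian entries (normalization is irrelevant for the sign test)."""
    rng = random.Random(seed)
    return [[[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(beta)]
            for _ in range(alpha)]

def cover_ids(x, projections):
    """For each partition i, return (i, mask) where bit j of mask records
    whether x . Y^{ij} >= 0, i.e. which cell Q_{i,J} the point x lies in."""
    ids = []
    for i, planes in enumerate(projections):
        mask = 0
        for j, y in enumerate(planes):
            if sum(a * b for a, b in zip(x, y)) >= 0:
                mask |= 1 << j
        ids.append((i, mask))
    return ids
```

For fixed i the masks partition the space, so every point lands in exactly α cover sets; LSH scores the points sharing a cell with the query, while predictive indexing pre-sorts each cell's list by the estimated probability of being a near neighbor.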
The number of projections β per partition was set to 24 for the larger sets (Corel and MNIST) and 63 for the smaller sets (Pendigits and Optdigits), while the number of partitions α was varied as an experimental parameter. Larger α corresponds to more full evaluations per query, resulting in improved accuracy at the expense of increased computation time. Both algorithms were restricted to the same average number of full evaluations per query. Predictive indexing offers substantial improvements over LSH for all four data sets. Figure 2(a) displays the true rank of the first point returned by LSH and predictive indexing on the MNIST data set as a function of α, averaged over all points in the test set and over multiple trials. Predictive indexing outperforms LSH at each parameter setting, with the difference particularly noticeable when fewer full evaluations are permitted (i.e., small α). Figure 2(b) displays the performance of LSH and predictive indexing for the tenth point returned, over all four data sets, with values of α varying from 5 to 70, averaged over the test sets, and replicated by multiple runs. In over 300 trials, we did not observe a single instance of LSH outperforming predictive indexing. Recent work has proposed more sophisticated partitionings for LSH (Andoni & Indyk, 2006). Approaches based on metric trees (Liu et al., 2004), which take advantage of the distance metric structure, have also been shown to perform well for approximate nearest neighbor search. Presumably, taking advantage of the query distribution could further improve these algorithms as well, although that is not studied here.

4 Conclusion

Predictive indexing is the first datastructure capable of supporting scalable, rapid ranking based on general purpose machine-learned scoring rules.
In contrast, existing alternatives such as the Threshold Algorithm (Fagin, 2002) and Inverted Index approaches (Broder et al., 2003) are either substantially slower, inadequately expressive, or both, for common machine-learned scoring rules. In the special case of approximate nearest neighbors, predictive indexing offers substantial and consistent improvements over the Locality-Sensitive Hashing algorithm.

[Figure 2: panel (a) plots "Rank of 1st Result" against the number of partitions α (20–70) for LSH and Predictive Indexing on the MNIST data; panel (b) is a scatter plot of LSH versus predictive indexing rank of the 10th result across all four data sets.]

(a) The y-axis, "Rank of 1st Result", measures the true rank of the first result returned by each method. As the number of partitions α is increased, improved accuracy is achieved at the expense of longer computation time.

(b) Each point represents the outcome of a single experiment for one of the four data sets at various parameter settings.

Figure 2: Comparison of the first and tenth results returned from LSH and predictive indexing.

References

Andoni, A., & Indyk, P. (2006). Near-optimal hashing algorithms for near neighbor problem in high dimensions. Proceedings of the Symposium on Foundations of Computer Science (FOCS'06).

Broder, A. Z., Carmel, D., Herscovici, M., Soffer, A., & Zien, J. (2003).
Efficient query evaluation using a two-level retrieval process. CIKM '03: Proceedings of the twelfth international conference on Information and knowledge management (pp. 426–434).

Cheng, C.-S., Chung, C.-P., & Shann, J. J.-J. (2006). Fast query evaluation through document identifier assignment for inverted file-based information retrieval systems. Inf. Process. Manage., 42, 729–750.

Chierichetti, F., Lattanzi, S., Mari, F., & Panconesi, A. (2008). On placing skips optimally in expectation. WSDM '08: Proceedings of the international conference on Web search and web data mining (pp. 15–24). New York, NY, USA: ACM.

Datar, M., Immorlica, N., Indyk, P., & Mirrokni, V. S. (2004). Locality-sensitive hashing scheme based on p-stable distributions. SCG '04: Proceedings of the twentieth annual symposium on Computational geometry (pp. 253–262). New York, NY, USA: ACM.

Fagin, R. (2002). Combining fuzzy information: an overview. SIGMOD Rec., 31, 109–118.

Fagin, R., Lotem, A., & Naor, M. (2003). Optimal aggregation algorithms for middleware. J. Comput. Syst. Sci., 66, 614–656.

Gionis, A., Indyk, P., & Motwani, R. (1999). Similarity search in high dimensions via hashing. The VLDB Journal (pp. 518–529).

Järvelin, K., & Kekäläinen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20, 422–446.

Liu, T., Moore, A., Gray, A., & Yang, K. (2004). An investigation of practical approximate nearest neighbor algorithms. Neural Information Processing Systems.

Zobel, J., & Moffat, A. (2006). Inverted files for text search engines. ACM Comput. Surv., 38, 6.
Human Active Learning

Rui Castro1, Charles Kalish2, Robert Nowak3, Ruichen Qian4, Timothy Rogers2, Xiaojin Zhu4*
1 Department of Electrical Engineering, Columbia University, New York, NY 10027
2 Department of Psychology, 3 Department of Electrical and Computer Engineering, 4 Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI 53706

Abstract

We investigate a topic at the interface of machine learning and cognitive science. Human active learning, where learners can actively query the world for information, is contrasted with passive learning from random examples. Furthermore, we compare human active learning performance with predictions from statistical learning theory. We conduct a series of human category learning experiments inspired by a machine learning task for which active and passive learning error bounds are well understood, and dramatically distinct. Our results indicate that humans are capable of actively selecting informative queries, and in doing so learn better and faster than if they are given random training data, as predicted by learning theory. However, the improvement over passive learning is not as dramatic as that achieved by machine active learning algorithms. To the best of our knowledge, this is the first quantitative study comparing human category learning in active versus passive settings.

1 Introduction

Active learning is a paradigm in which the learner has the ability to sequentially select examples for labeling. The selection process can take advantage of information gained from previously observed labeled examples in order to accelerate the learning process. In contrast, passive learning is a paradigm in which the learner has no control over the labeled examples it is given. In machine learning, active learning has been a topic of intense interest.
In certain machine learning problems it has been shown that active learning algorithms perform much better than passive learning, with superior convergence bounds (see [1, 4] and references therein) and/or superior empirical performance [5, 19]. In this paper we focus on the application of active learning to classification, in both machines and humans. To our knowledge, no previous work has attempted to quantify human active learning performance in probabilistic category learning (i.e., classification), contrast human active and passive learning, and compare both against theoretically optimal bounds. Theories of human category learning often cast the learner as a passive learner, who observes some object (typically represented as a feature vector), is presented with the object's category label, and does some statistical processing to determine how the label should generalize. Anyone who has ever interacted with a three-year-old will recognize that this scenario is exceedingly unrealistic in at least one respect. Certainly toddlers observe their environment, and certainly they pay attention when adults label objects for them – but they also ask a lot of questions. Active querying provides children with information that they would otherwise be less likely to encounter through passive observation; and so, presumably, such active querying has important implications for category learning. Early research in human concept attainment suggested that learners do benefit from the opportunity to actively select examples during learning [11]. However, it proved very difficult to establish criteria for assessing the magnitude of the active learning benefit (e.g., compared to theoretical ideals, or to passive learning).

* Correspondence concerning this article should be sent to jerryzhu@cs.wisc.edu.

Figure 1: The two-category learning task with boundary θ and noise level ϵ.
Figure 2: Probabilistic bisection strategy. Shaded areas have 1/2 probability mass.
Partly as a result, nearly all contemporary research in classification and categorization has ignored active learning. Furthermore, a rich literature on decision-making and scientific inference has produced conflicting claims regarding people's capacities to select optimal learning examples [7, 10, 12, 13, 14, 15, 16, 17, 20]. Most famously, people make inappropriate queries to assess simple logical hypotheses such as "if p then q" (frequently examining q instances to see if they are p, and failing to explore not-q instances [20]). Several authors have argued that pessimistic views of the human ability to choose relevant queries are based on faulty task analyses; and that, when the learning task is properly construed, humans do an excellent, even optimal, job of selection [7, 14]. As much of the debate in the psychological literature turns on task analysis and the proper metric for assessing performance, there is significant opportunity to benefit from the formal descriptions characteristic of machine learning research. The current study exploits one such analysis of a relatively simple binary classification task with fixed error rate in feedback. Specification of the theoretical benefits of active learning in this context allows us to address the following questions regarding human performance:

[Q1] Do humans perform better when they can select their own examples for labeling, compared to passive observation of labeled examples?
[Q2] If so, do they achieve the full benefit of active learning suggested by statistical learning theory?
[Q3] If they do not, can machine learning be used to enhance human performance?
[Q4] Do the answers to these questions vary depending upon the difficulty of the learning problem?

The goal of this paper is to answer these questions in a quantitative way by studying human and machine performance in one well-understood classification task.
Answers to these questions have important theoretical and practical implications for our understanding of human learning and cognition. As previously noted, most theories of human category learning assume passive sampling of the environment. Some researchers have argued that the environment provides little information regarding the category structure of the world, and so conclude that human category learning must be subject to strong initial constraints [6, 3, 9]. If, however, human learning benefits from active querying of the environment, it is not clear that such conclusions are justified. From an applied perspective, if machines can be shown to aid human learning in certain predictable circumstances, this has clear implications for the design of intelligent tutoring systems and other machine-human hybrid applications.

2 A Two-Category Learning Task

For the study in this paper we consider learning in a relatively simple setting, where there is a good theoretical understanding of both active and passive machine learning, offering an ideal test-bed for assessing active learning in humans. The task is essentially a two-category learning problem (binary classification) on the interval [0, 1]. Let θ ∈ [0, 1] be the unknown but fixed decision boundary. To the left of θ the category is "zero" and to the right of θ the category is "one." The goal of the learning task is to infer θ as accurately as possible from a set of examples. The training data (set of examples) consist of n sample-and-label pairs {(Xi, Yi)}_{i=1}^n, where Xi ∈ [0, 1] and Yi ∈ {0, 1}. The label Yi is related to the sample Xi in the following noisy way: Yi is equal to the category of Xi with probability 1 − ϵ and equal to the other category with probability ϵ, where 0 ≤ ϵ < 1/2. In other words, each label is more probably correct than incorrect, and ϵ is the probability of an incorrect label.[1]
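This label model can be sketched as a small generator (illustrative; assigning the boundary point itself to category one is our assumption):

```python
import random

def make_noisy_oracle(theta, eps, seed=0):
    """Label oracle for the two-category task: category 0 left of theta,
    category 1 otherwise; the label is flipped with probability eps
    (0 <= eps < 1/2)."""
    rng = random.Random(seed)

    def label(x):
        true_label = 0 if x < theta else 1
        return true_label if rng.random() >= eps else 1 - true_label

    return label
```

With eps = 0 the oracle is deterministic; larger eps makes individual labels less reliable while keeping each label more probably correct than not.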
Note that the label Yi is simply a noisy answer to the question "is Xi larger than θ?" Figure 1 illustrates this model. Furthermore assume that, given Xi, Yi is statistically independent of {Yj}_{j≠i}. At this point we have not specified how the sample locations Xi are generated, and in this lies the major difference between passive and active learning. In the passive learning setting the sample locations are randomly distributed, independent of the labels. On the other hand, in the active learning setting the learner can choose the sample locations in a sequential way depending on the past, that is, Xi = h(X1, ..., Xi−1, Y1, ..., Yi−1), where h is a (possibly random) function that takes into account past experiences and proposes a new query Xi. If ϵ = 0, that is, when there is no label noise, the optimal methodologies for passive and active learning are quite obvious. In passive learning, the optimal inference is that θ lies somewhere between the rightmost location where a label of zero was observed and the leftmost location where a label of one was observed. If the n sample locations are (approximately) evenly distributed between 0 and 1, then the error of the inference is on the order of 1/n. On the other hand, in active learning the optimal strategy is a deterministic binary bisection: begin by taking X1 = 1/2. If Y1 = 0, then θ > 1/2; otherwise θ ≤ 1/2. Suppose Y1 = 1; then the next sample point is X2 = 1/4, and if Y2 = 1, then θ < 1/4, otherwise θ ≥ 1/4. Proceeding in this fashion we see that the length of the interval of possible values of θ is halved at every observation. Therefore after n samples the error of the active learning inference is at most 2^−(n+1). Clearly active learning, where the error decays exponentially with the number of samples, is much better than passive learning, where the error can decay only polynomially. If ϵ > 0 there is uncertainty in our label observation process and estimating θ becomes more delicate.
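The noiseless bisection strategy above can be sketched in a few lines (the labeling convention follows the text: Y = 1 means the query lies at or to the right of θ, so θ ≤ x):

```python
def bisect_boundary(label, n):
    """Deterministic binary bisection for the noiseless case.

    label(x) returns 0 if x < theta, else 1 (no noise assumed). After n
    queries the bracketing interval has length 2**-n; returning its
    midpoint gives error at most 2**-(n + 1).
    """
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = (lo + hi) / 2
        if label(mid) == 1:   # theta <= mid: keep the left half
            hi = mid
        else:                 # theta > mid: keep the right half
            lo = mid
    return (lo + hi) / 2
```

This exponential shrinkage is exactly the active-learning advantage the text contrasts with the 1/n passive rate.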
Under passive learning, the maximum likelihood estimator yields the optimal rate of error convergence. Furthermore it is possible to show a performance lower bound that clarifies the best possible performance of any passive learning algorithm. In particular we have the following result:

inf_{θ̂n} sup_{θ∈[0,1]} E[|θ̂n − θ|] ≥ (1/4) ((1 + 2ϵ)/(1 − 2ϵ))^{2ϵ} · 1/(n + 1),   (1)

where θ̂n is the estimate of θ obtained after n observations, and the infimum is taken over all possible passive learning procedures. This is a so-called minimax lower bound, and gives an indication of the best achievable performance of any passive learning algorithm. That is, no passive algorithm can learn more rapidly. This bound can be easily shown using Theorem 2.2 of [18], and the performance of the maximum likelihood estimator is within a constant factor of (1). For active learning, deterministic bisection cannot be used due to the label noise. Nevertheless active learning is still extremely beneficial in this setting. Horstein [8] proposed a method that is suitable for our purposes. The key idea stems from Bayesian estimation. Suppose that we have a prior probability density function p0(·) on the unknown parameter θ, namely that θ is uniformly distributed over the interval [0, 1]. To make the exposition clear let us assume θ = 1/4. Like before, we start by making a query at X1 = 1/2. With probability 1 − ϵ we observe the correct label Y1 = 1, and with probability ϵ we observe the incorrect label Y1 = 0. Suppose Y1 = 1 was observed. Given these facts we can update the posterior density by applying Bayes' rule. In this case we obtain p1(t | X1, Y1) = 2(1 − ϵ) if t ≤ 1/2, and 2ϵ if t > 1/2. The next step is to choose the sample location X2. We choose X2 so that it bisects the posterior probability mass, that is, we take X2 such that Pr_{t∼p1(·)}(t > X2 | X1, Y1) = Pr_{t∼p1(·)}(t < X2 | X1, Y1). In other words, X2 is just the median of the posterior distribution.
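Iterating this median-query update yields probabilistic bisection. A discretized sketch (our own illustrative pure-Python version on a grid, not the exact method of [8] or [2]):

```python
def probabilistic_bisection(label, eps, n, grid=10000):
    """Horstein-style probabilistic bisection on a grid over [0, 1].

    label(x) is a noisy answer to "is x >= theta?", wrong with
    probability eps. We maintain a posterior over grid cells for theta,
    always query the posterior median, and return the final median.
    Grid size and parameters are illustrative choices.
    """
    post = [1.0 / grid] * grid            # uniform prior p0

    def median():
        acc = 0.0
        for i, p in enumerate(post):
            acc += p
            if acc >= 0.5:
                return (i + 0.5) / grid
        return 1.0

    for _ in range(n):
        x = median()                      # bisect the posterior mass
        y = label(x)
        for i in range(grid):
            t = (i + 0.5) / grid
            # y == 1 asserts theta <= x, so cells with t <= x agree with
            # the observation; weight them by 1 - eps, the rest by eps.
            agrees = (t <= x) if y == 1 else (t > x)
            post[i] *= (1 - eps) if agrees else eps
        z = sum(post)                     # renormalize (Bayes' rule)
        post = [p / z for p in post]
    return median()                       # estimate: posterior median
```

With eps = 0 the posterior collapses onto the cells consistent with every answer, recovering binary bisection up to grid resolution.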
We continue iterating this procedure until we have collected n samples. The estimate θ̂_n is then defined as the median of the final posterior distribution. Figure 2 illustrates the procedure. Note that if ϵ = 0 then this probabilistic bisection is simply the binary bisection described above. The above algorithm works extremely well in practice, but it is hard to analyze. In [2] a slightly modified method was introduced, which is more amenable to analysis; the major difference involves a discretization of the possible query locations. For this method it can be shown [2] that

sup_{θ∈[0,1]} E[|θ̂_n − θ|] ≤ 2 (√(1/2 + √(ϵ(1 − ϵ))))^n . (2)

Note that the expected estimation error decays exponentially with the number of observations, as opposed to the polynomial decay achievable using passive learning (1). This shows that the accuracy of active learning is significantly better than that of passive learning, even in the presence of uncertainty. Furthermore, no active (or passive) learning algorithm can have its expected error decay faster than exponentially with the number of samples, as in (2).

Footnote 1: We use a constant noise level ϵ because the theoretical distinction between active and passive learning is dramatic in this case. Other (perhaps more natural) noise models are possible; for example, ϵ can decrease away from the true class boundary. Noise models like this are well understood theoretically [4]; we will investigate them in future work.

Figure 3: A few 3D visual stimuli and their X values used in our experiment.

3 Human Passive and Active Learning Experiments

Equipped with the theoretical performance of passive learning (1) and active learning (2), we now describe a behavioral study designed to answer Q1–Q4 posed earlier.
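The probabilistic bisection procedure of Section 2 can be sketched with a discretized posterior, in the spirit of the modified method of [2]. This is our illustrative reconstruction, not the authors' code; the grid size, seed, and function names are arbitrary choices, and ϵ is assumed to lie strictly between 0 and 1:

```python
import random

def probabilistic_bisection(theta, eps, n, grid=512, seed=0):
    """Noisy binary search: query the posterior median, update by Bayes' rule.

    The posterior over theta is kept on a uniform grid of cell centers; each
    label is the answer to "is x larger than theta?", flipped with
    probability eps (assumed 0 < eps < 1 in this sketch).
    """
    rng = random.Random(seed)
    post = [1.0 / grid] * grid                     # uniform prior on [0, 1]
    for _ in range(n):
        # query at the posterior median
        acc, x_idx = 0.0, 0
        for i, p in enumerate(post):
            acc += p
            if acc >= 0.5:
                x_idx = i
                break
        x = (x_idx + 0.5) / grid
        y = x > theta                              # true label
        if rng.random() < eps:
            y = not y                              # label noise
        # Bayes update: candidates consistent with y are weighted by (1-eps)
        for i in range(grid):
            t = (i + 0.5) / grid
            post[i] *= (1.0 - eps) if ((x > t) == y) else eps
        z = sum(post)
        post = [p / z for p in post]
    # final estimate: the posterior median
    acc = 0.0
    for i, p in enumerate(post):
        acc += p
        if acc >= 0.5:
            return (i + 0.5) / grid
    return (grid - 0.5) / grid

est = probabilistic_bisection(theta=0.3, eps=0.1, n=100)
```

Each query sits at the posterior median, so roughly half the posterior mass is reweighted by (1 − ϵ) and the other half by ϵ at every step, which is what drives the exponential error decay of (2).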
The experiment is essentially a human analog of the abstract learning problem described in the previous section, in which the learner tries to find the boundary between two classes defined along a single dimension, a setting used to demonstrate semi-supervised learning behavior in humans in our previous work [21]. We are particularly interested in comparing three distinct conditions:

Condition “Random”. This is the passive learning condition where the human subject cannot select the queries, and is instead presented sequentially with examples {Xi}_{i=1}^n sampled uniformly at random from [0, 1], and their noisy labels {Yi}_{i=1}^n. The subject is regularly asked to guess the boundary from these observations (without feedback). As in (1), the expected estimation error |θ̂_n − θ| of an optimal machine learning algorithm decreases at the rate 1/n. If humans are capable of learning from passive observation of random samples, their boundary estimates should approach the true boundary with this polynomial rate too.

Condition “Human-Active”. This is the active learning condition where the human subject, at iteration i, selects a query Xi based on her previous queries and their noisy labels {(Xj, Yj)}_{j=1}^{i−1}. She then receives a subsequent noisy label Yi. If humans make good use of previously collected examples by selecting informative queries, then the rate of error decrease should be exponential, following (2).

Condition “Machine-Yoked”. This is a hybrid human-machine-learning condition in which the human passively observes samples selected by the active learning algorithm in [2], observes the noisy label generated in response to each query, and is regularly asked to guess, without feedback, where the boundary is – as though the machine is teaching the human. It is motivated by question Q3: Can machine learning assist human category learning?

Materials. Each sample X is a novel artificial 3D shape displayed to the subject on a computer screen.
The shapes change with X smoothly in several aspects simultaneously. Figure 3 shows a few shapes and their X values. A difference of 0.06 in X value corresponds roughly to the psychological “Just Noticeable Difference” determined by a pilot study. For implementation reasons our shapes are discretized to a resolution of about 0.003 in X value, beyond which the visual difference is too small to be of interest.

Participants. Participants were 33 university students, participating voluntarily or for partial course credit. They were told that the 3D shapes are alien eggs. Spiky eggs (X close to 0) most likely hatch alien snakes (category zero), and smooth eggs (X close to 1) most likely hatch alien birds (category one), but there could be exceptions (label noise). Their task was to identify as precisely as possible the egg shape (decision boundary) at which it switches from most likely snakes to most likely birds.

Procedure. Each participant was assigned to one of the three conditions: Random (13 subjects), Human-Active (14 subjects), Machine-Yoked (6 subjects). The Machine-Yoked condition received approximately half as many subjects as the other groups, as pilot studies indicated that performance was much less variable in this condition. In all conditions, subjects were explicitly informed of the one-dimensional nature of the task. Each participant first completed a short practice session to familiarize her with the computer interface and basic task, followed by 5 longer sessions of 45 iterations each. The noise level ϵ, which determines the difficulty of the learning task, varied across sessions, taking the values 0, 0.05, 0.1, 0.2, 0.4 with order determined randomly for each participant. For each session and participant the true decision boundary θ was set randomly in [1/16, 15/16] to avoid dependencies on the location of the true boundary. The experiment thus involved one between-subjects factor (learning condition) and one within-subjects factor (noise level ϵ).
At iteration i of the learning task, a single shape at Xi was displayed on a CRT monitor at a normal viewing distance. In the Human-Active condition, the participant then used a computer mouse wheel to scroll through the range of shapes. Once the participant found the shape she wished to query (Xi+1), she clicked a “hatch” button and observed the outcome (bird or snake, corresponding to the noisy label), followed by a “Continue” button to move on to the next query. In the Random and Machine-Yoked conditions, each sample Xi+1 was generated by the computer with no user intervention, and a short animation was displayed showing shapes smoothly transitioning from Xi to Xi+1 in order to match the visual experience in the Human-Active condition. Once the transition was completed, the outcome (label) for Xi+1 was observed, and participants clicked a “Continue” button to observe the next sample and outcome. In all conditions, the computer generated the noisy label Yi+1 according to the true boundary θ and noise level ϵ, and displayed it to the participant with either a snake picture (Yi+1 = 0) or a bird picture (Yi+1 = 1). The display was reset to the initial shape after every 3 queries to ensure that participants paid attention to the precise shape corresponding to their estimate of the boundary location rather than simply searching locally around the current shape (15 re-starts in total over 45 queries; 45 re-starts would have been too tedious for the subjects). The participant was asked to guess the decision boundary (θ̂) after every three iterations. In these “boundary queries,” the computer began by displaying the shape at X = 1/2, and the participant used the mouse wheel to change the shape until it matched her current best guess about the boundary shape. Once satisfied, she clicked a “submit boundary” button. We thus collect θ̂_3, θ̂_6, θ̂_9, . . . , θ̂_45 for each session.
These boundary estimates allowed us to compute mean (across subjects) human estimation errors |θ̂_n − θ| for different n, under different conditions and different noise levels. We compare these means (i) across the different experimental conditions and (ii) to the theoretical predictions in (1) and (2).

4 Experimental Results

Figure 4 shows, for each condition and noise level, how every participant's boundary guesses approach the true boundary θ. Qualitatively, human active learning (Human-Active) appears better than passive learning (Random) because the curves are more concentrated around zero. Machine-assisted human learning (Machine-Yoked) seems even better. As the task becomes harder (larger noise ϵ), performance suffers in all conditions, though less so for the Machine-Yoked learners. These conclusions are further supported by our quantitative analysis below. It is worth noting that the behavior of a few participants stands out in Figure 4. For example, one subject's boundary guesses shift considerably within a session, resulting in a rather zigzagged curve in (Human-Active, ϵ = 0.1). All participants, however, perform relatively well in at least some noise settings, suggesting that they took the experiment seriously. Any strange-looking behavior likely reflects genuine difficulties in the task, and for this reason we have not removed any apparent outliers in the following analyses. We now answer questions Q1–Q4 raised in Section 1.

[Q1] Do humans perform better when they can actively select samples for labeling compared to passive observation of randomly-selected samples?
[A1] Yes – at least for low noise levels. For higher noise the two are similar.

To support our answer, we show that the human estimation error |θ̂_n − θ| is smaller in the Human-Active condition than in the Random condition. This is plotted in Figure 5, with ±1 standard error bars. When noise is low, the Human-Active curve is well below the Random curve throughout the session.
Figure 4: Overview of experiment results (panels: rows Random / Human-Active / Machine-Yoked; columns noise ϵ = 0, 0.05, 0.1, 0.2, 0.4). The x-axis is iteration n; the y-axis is the (signed) difference between the human boundary guess and the true boundary, θ̂_n − θ. Each curve shows performance of one human subject (though they overlap, it is sufficient to note the trends). Overall, human active learning (Human-Active) is better than passive learning (Random), and machine-assisted human learning (Machine-Yoked) is even better. As the task becomes harder (larger noise ϵ), all performances suffer.

Figure 5: Human estimation error |θ̂_n − θ| under different conditions and noise levels (ϵ = 0.10, 0.20, 0.40). The x-axis is iteration n. The error bars are ±1 standard error. Human-Active is better than Random when noise is low; Machine-Yoked is better than Human-Active when noise is high.

That is, with active learning the subjects quickly come up with better guesses and maintain this advantage until the end. Human-Active performance deteriorates with higher noise levels, however, and at the highest noise levels it appears indistinguishable from performance in the Random condition.

[Q2] Can humans achieve the full benefit of active learning suggested by learning theory?
[A2] Human active learning does have exponential convergence, but with slower decay constants than the upper bound in (2).
Human passive learning, on the other hand, sometimes does not even achieve the polynomial convergence predicted in (1), and in no condition does the rate approach optimal performance. To support these conclusions, consider that, for active learning, the theoretical estimation error bound in (2) has the form 2e^{−λn} and decays exponentially with n. The decay constant

λ = −(1/2) log(1/2 + √(ϵ(1 − ϵ)))

is determined by the noise level ϵ. The larger the decay constant, the faster the error approaches zero. If one plots the log of the bound vs. n, it is a line with slope −λ. To determine whether human error decays exponentially as predicted, and with a comparable slope, one can similarly plot the logarithm of the human active learning estimation error vs. n. If human active learning decreases error exponentially (which is desirable), this relationship is linear, as Figure 6 (Upper) shows it to be. This exponential decay of error offers further evidence that human active learning exceeds passive learning performance, where error can only decay polynomially (Figure 6, Lower). The speed (decay constant) of the exponential decay in human active learning is, however, slower than the theoretical upper bound (2). To see this, we fit one line per noise level in

Figure 6: (Upper) Human active learning decreases error exponentially, as indicated by the linear distribution of log(|θ̂_n − θ|) (the y-axis) versus n (the x-axis), for noise levels ϵ = 0, 0.05, 0.1, 0.2, 0.4. (Lower) Human passive learning in the Random condition is slower than O(1/n), since the slopes are shallower than −1 on log(|θ̂_n − θ|) (the y-axis) versus log(n) (the x-axis).
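The theoretical decay constants reported in the “bound (2)” row of Table 1 can be recomputed directly from the expression for λ above. A quick check (using natural logarithms; the function name is ours):

```python
import math

def decay_constant(eps):
    """Decay constant of the active learning bound (2):
    2*(sqrt(1/2 + sqrt(eps*(1-eps))))**n = 2*exp(-lam*n)
    with lam = -(1/2) * log(1/2 + sqrt(eps*(1-eps)))."""
    return -0.5 * math.log(0.5 + math.sqrt(eps * (1.0 - eps)))

for eps in (0.0, 0.05, 0.1, 0.2, 0.4):
    # reproduces the "bound (2)" row of Table 1:
    # 0.347, 0.166, 0.112, 0.053, 0.005
    print(f"eps = {eps:4.2f}  lambda = {decay_constant(eps):.3f}")
```

Note that at ϵ = 0 this gives λ = (1/2) log 2 ≈ 0.347, and the constant shrinks toward zero as the noise approaches ϵ = 1/2.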
ϵ            | 0     | 0.05  | 0.1   | 0.2   | 0.4
Human-Active | 0.031 | 0.042 | 0.037 | 0.030 | 0.005
bound (2)    | 0.347 | 0.166 | 0.112 | 0.053 | 0.005

Table 1: The exponential decay constants of human active learning are smaller than predicted by statistical learning theory, especially at lower noise levels.

Figure 6 and use the negative slope of the fitted lines as the estimate of the decay constant in human active learning. For comparison, we computed the decay constant in the theoretical bound. Table 1 compares these decay constants under different noise levels. It is clear that human active learning's error decays at a slower rate, especially when the noise is low. For passive learning, the minimax lower bound (1) has a polynomial decay of O(1/n), which is a line with slope −1 on a plot of log(|θ̂_n − θ|) vs. log(n). As shown in Figure 6 (Lower), the analogous log-log plot from human passive learning in the Random condition does seem to fit a line, but the slope is much shallower than −1. Indeed, for 2 of the 5 noise levels (0.1 and 0.2), the estimated slope is not significantly different from zero! These results suggest that humans either fail to learn or learn at a much lower rate than formal analysis suggests is possible.

[Q3] Can machine learning be used to enhance human learning?
[A3] Apparently at high noise levels – but what really happened?

As shown in Figure 5, the Machine-Yoked curve is no different from Human-Active at low noise levels, but substantially better at high noise levels. It is important to remember that Machine-Yoked is human performance, not that of the machine learning algorithm. The results seem to indicate that humans can utilize the training data chosen by a machine active learning algorithm to enhance their performance in settings where humans are not generally performing well. Upon closer inspection, however, we noticed that almost all subjects in the Machine-Yoked condition used the following strategy.
They quickly learned that the computer was generating training examples that soon converge to the true boundary. They then simply placed their boundary guess at (or near) the latest training example generated by the machine. This “memorizing” strategy worked very well in our setting, but it is difficult to believe that the subjects were really “learning” the decision boundary. Instead, they likely learned to trust and depend upon the computer. In view of this, we consider Q3 inconclusive, but hope these observations provoke thoughts on how to actually improve human learning.

[Q4] Do answers to the above questions depend upon the difficulty of the learning task?
[A4] One form of difficulty, the label noise level ϵ, has profound effects on human learning.

Specifically, the advantage of active learning diminishes with noise; at high noise levels active learning arguably has no advantage over passive learning for humans in this setting. Formal analysis suggests that the advantage of active over passive sampling should diminish with increasing noise, but it also suggests that some benefit of active sampling should always be obtained. An important goal for future research, then, is to understand why human performance is so adversely affected by noise.

5 Conclusions and Future Work

We have conducted behavioral experiments to compare active versus passive learning by humans in a simple classification task, and compared human performance to that predicted by statistical learning theory. In short, humans are able to actively select queries and use them to achieve faster category learning; but the advantages of active learning diminish under higher noise conditions and do not approach theoretical bounds. One important conclusion from this work is that passive learning may not be a very good model for how human beings learn to categorize.
Our research also raises several interesting further questions, including how the current conclusions extend to more realistic learning scenarios. The benefit of the current work is that it capitalizes on a simple learning task for which passive and active performance has been formally characterized. The drawback is that the task is not especially natural. In future work we plan to extend the current approach to learning situations more similar to those faced by people in their day-to-day lives.

Acknowledgments: This work is supported in part by the Wisconsin Alumni Research Foundation, and NSF Grant 0745423 from Developmental Learning Sciences.

References

[1] N. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. To appear in COLT 2008, Helsinki, Finland, 2008.
[2] M. V. Burnashev and K. Sh. Zigangirov. An interval estimation problem for controlled observations. Problems in Information Transmission, 10:223–231, 1974.
[3] S. Carey. Conceptual change in childhood. MIT Press, 1985.
[4] R. Castro and R. Nowak. Minimax bounds for active learning. IEEE Transactions on Information Theory, 54(5):2339–2353, 2008.
[5] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[6] R. Gelman and E. M. Williams. Handbook of child psychology, chapter Enabling constraints for cognitive development and learning: A domain-specific epigenetic theory. John Wiley and Sons, 1998.
[7] G. Gigerenzer and R. Selten. Bounded rationality: The adaptive toolbox. The MIT Press, 2001.
[8] M. Horstein. Sequential decoding using noiseless feedback. IEEE Trans. Info. Theory, 9(3):136–143, 1963.
[9] F. Keil. Concepts, kinds, and cognitive development. MIT Press, 1989.
[10] J. K. Kruschke. Bayesian approaches to associative learning: From passive to active learning. Learning & Behavior, 36(3):210–226, 2008.
[11] P. A. Laughlin.
Focusing strategy in concept attainment as a function of instructions and task complexity. Journal of Experimental Psychology, 98(2):320–327, May 1973.
[12] C. R. Mynatt, M. E. Doherty, and R. D. Tweney. Confirmation bias in a simulated research environment: An experimental study of scientific inference. The Quarterly Journal of Experimental Psychology, 29(1):85–95, Feb 1977.
[13] J. Nelson. Finding useful questions: On Bayesian diagnosticity, probability, impact, and information gain. Psychological Review, 112(4):979–999, 2005.
[14] M. Oaksford and N. Chater. Bayesian rationality: the probabilistic approach to human reasoning. Oxford University Press, 2007.
[15] L. E. Schulz, T. Kushnir, and A. Gopnik. Causal Learning: Psychology, Philosophy and Computation, chapter Learning from doing: Interventions and causal inference. Oxford University Press, 2007.
[16] D. Sobel and T. Kushnir. Interventions do not solely benefit causal learning: Being told what to do results in worse learning than doing it yourself. In Proceedings of the 25th Annual Meeting of the Cognitive Science Society, 2003.
[17] M. Steyvers, J. Tenenbaum, E. Wagenmakers, and B. Blum. Inferring causal networks from observations and interventions. Cognitive Science, 27:453–489, 2003.
[18] A. B. Tsybakov. Introduction à l'estimation non-paramétrique. Mathématiques et Applications, 41. Springer, 2004.
[19] G. Tur, D. Hakkani-Tür, and R. E. Schapire. Combining active and semi-supervised learning for spoken language understanding. Speech Communication, 45:171–186, 2005.
[20] P. C. Wason and P. N. Johnson-Laird. Psychology of reasoning: Structure and content. Harvard U. Press, 1972.
[21] X. Zhu, T. Rogers, R. Qian, and C. Kalish. Humans perform semi-supervised classification too. In Twenty-Second AAAI Conference on Artificial Intelligence, 2007.
Clustered Multi-Task Learning: a Convex Formulation

Laurent Jacob
Mines ParisTech – CBIO, INSERM U900, Institut Curie
35, rue Saint Honoré, 77300 Fontainebleau, France
laurent.jacob@mines-paristech.fr

Francis Bach
INRIA – Willow Project, Ecole Normale Supérieure
45, rue d'Ulm, 75230 Paris, France
francis.bach@mines.org

Jean-Philippe Vert
Mines ParisTech – CBIO, INSERM U900, Institut Curie
35, rue Saint Honoré, 77300 Fontainebleau, France
jean-philippe.vert@mines-paristech.fr

Abstract

In multi-task learning several related tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each task may benefit from the others. In the context of learning linear functions for supervised classification or regression, this can be achieved by including a priori information about the weight vectors associated with the tasks, and about how they are expected to be related to each other. In this paper, we assume that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors. We design a new spectral norm that encodes this a priori assumption, without the prior knowledge of the partition of tasks into groups, resulting in a new convex optimization formulation for multi-task learning. We show in simulations on synthetic examples and on the IEDB MHC-I binding dataset that our approach outperforms well-known convex methods for multi-task learning, as well as related non-convex methods dedicated to the same problem.

1 Introduction

Regularization has emerged as a dominant theme in machine learning and statistics, providing an intuitive and principled tool for learning from high-dimensional data. In particular, regularization by squared Euclidean norms or squared Hilbert norms has been thoroughly studied in various settings, leading to efficient practical algorithms based on linear algebra, and to very good theoretical understanding (see, e.g., [1, 2]).
In recent years, regularization by non-Hilbert norms, such as ℓp norms with p ≠ 2, has also generated considerable interest for the inference of linear functions in supervised classification or regression. Indeed, such norms can sometimes make the problem both statistically and numerically better-behaved, and impose various kinds of prior knowledge on the problem. For example, the ℓ1-norm (the sum of absolute values) forces some of the components to be equal to zero and is widely used to estimate sparse functions [3], while various combinations of ℓp norms can be defined to impose various sparsity patterns. While most recent work has focused on studying the properties of simple well-known norms, we take the opposite approach in this paper. That is, assuming a given prior knowledge, how can we design a norm that will enforce it? More precisely, we consider the problem of multi-task learning, which has recently emerged as a very promising research direction for various applications [4]. In multi-task learning several related inference tasks are considered simultaneously, with the hope that by an appropriate sharing of information across tasks, each one may benefit from the others. When linear functions are estimated, each task is associated with a weight vector, and a common strategy to design multi-task learning algorithms is to translate some prior hypothesis about how the tasks are related to each other into constraints on the different weight vectors. For example, such constraints are typically that the weight vectors of the different tasks belong (a) to a Euclidean ball centered at the origin [5], which implies no sharing of information between tasks apart from the size of the different vectors, i.e., the amount of regularization, (b) to a ball of unknown center [5], which enforces a similarity between the different weight vectors, or (c) to an unknown low-dimensional subspace [6, 7].
In this paper, we consider a different prior hypothesis that we believe could be more relevant in some applications: the hypothesis that the different tasks are in fact clustered into different groups, and that the weight vectors of tasks within a group are similar to each other. A key difference with [5], where a similar hypothesis is studied, is that we don’t assume that the groups are known a priori, and in a sense our goal is both to identify the clusters and to use them for multi-task learning. An important situation that motivates this hypothesis is the case where most of the tasks are indeed related to each other, but a few “outlier” tasks are very different, in which case it may be better to impose similarity or low-dimensional constraints only to a subset of the tasks (thus forming a cluster) rather than to all tasks. Another situation of interest is when one can expect a natural organization of the tasks into clusters, such as when one wants to model the preferences of customers and believes that there are a few general types of customers with similar preferences within each type, although one does not know beforehand which customers belong to which types. Besides an improved performance if the hypothesis turns out to be correct, we also expect this approach to be able to identify the cluster structure among the tasks as a by-product of the inference step, e.g., to identify outliers or groups of customers, which can be of interest for further understanding of the structure of the problem. In order to translate this hypothesis into a working algorithm, we follow the general strategy mentioned above which is to design a norm or a penalty over the set of weights which can be used as regularization in classical inference algorithms. We construct such a penalty by first assuming that the partition of the tasks into clusters is known, similarly to [5]. 
We then attempt to optimize the objective function of the inference algorithm over the set of partitions, a strategy that has proved useful in other contexts such as multiple kernel learning [8]. This optimization problem over the set of partitions being computationally challenging, we propose a convex relaxation of the problem, which results in an efficient algorithm.

2 Multi-task learning with clustered tasks

We consider m related inference tasks that attempt to learn linear functions over X = R^d from a training set of input/output pairs (x_i, y_i), i = 1, . . . , n, where x_i ∈ X and y_i ∈ Y. In the case of binary classification we usually take Y = {−1, +1}, while in the case of regression we take Y = R. Each training example (x_i, y_i) is associated with a particular task t ∈ [1, m], and we denote by I(t) ⊂ [1, n] the set of indices of training examples associated with task t. Our goal is to infer m linear functions f_t(x) = w_t^⊤x, for t = 1, . . . , m, associated with the different tasks. We denote by W = (w_1 . . . w_m) the d × m matrix whose columns are the successive vectors we want to estimate. We fix a loss function l : R × Y → R that quantifies by l(f(x), y) the cost of predicting f(x) for the input x when the correct output is y. Typical loss functions include the squared error in regression, l(u, y) = (1/2)(u − y)², and the hinge loss in binary classification, l(u, y) = max(0, 1 − uy) with y ∈ {−1, 1}. The empirical risk of the set of linear classifiers given by the matrix W is then defined as the average loss over the training set:

ℓ(W) = (1/n) Σ_{t=1}^{m} Σ_{i∈I(t)} l(w_t^⊤x_i, y_i) . (1)

In the sequel, we will often use the m × 1 vector 1 composed of ones, the m × m projection matrix U = 11^⊤/m whose entries are all equal to 1/m, as well as the projection matrix Π = I − U.
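For concreteness, the empirical risk (1) and the projection matrices U and Π can be set up as follows. This is a toy sketch with arbitrary dimensions; all names are ours, not the paper's:

```python
import numpy as np

# Toy multi-task regression problem (illustrative sizes)
d, m, n_per_task = 3, 4, 5
rng = np.random.default_rng(0)

X = rng.normal(size=(m * n_per_task, d))      # inputs, stacked over tasks
task = np.repeat(np.arange(m), n_per_task)    # task index of each example (the sets I(t))
W = rng.normal(size=(d, m))                   # one weight vector w_t per column
y = np.einsum('ij,ji->i', X, W[:, task])      # noiseless outputs for the demo

def empirical_risk(W, X, y, task, loss=lambda u, y: 0.5 * (u - y) ** 2):
    """Eq. (1): (1/n) * sum_t sum_{i in I(t)} l(w_t^T x_i, y_i)."""
    preds = np.einsum('ij,ji->i', X, W[:, task])   # w_{task(i)}^T x_i per example
    return loss(preds, y).mean()

# The m x m projection matrices used throughout Section 2
U = np.ones((m, m)) / m       # projects onto the span of the all-ones vector
Pi = np.eye(m) - U            # Pi = I - U, centers the columns of W

assert np.allclose(U @ U, U) and np.allclose(Pi @ Pi, Pi)  # both are projections
```

Since y was generated exactly from W here, the empirical risk of W itself is zero; U and Π are idempotent, as projections must be.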
In order to learn the m tasks simultaneously, we follow the now well-established approach which looks for a set of weight vectors W that minimizes the empirical risk regularized by a penalty functional, i.e., we consider the problem:

min_{W∈R^{d×m}} ℓ(W) + λΩ(W) , (2)

where Ω(W) can be designed from prior knowledge to constrain some sharing of information between tasks. For example, [5] suggests penalizing both the norms of the w_i's and their variance, i.e., considering a function of the form:

Ω_variance(W) = ‖w̄‖² + (β/m) Σ_{i=1}^{m} ‖w_i − w̄‖² , (3)

where w̄ = (Σ_{i=1}^{m} w_i)/m is the mean weight vector. This penalty enforces a clustering of the w_i's towards their mean when β increases. Alternatively, [7] propose to penalize the trace norm of W:

Ω_trace(W) = Σ_{i=1}^{min(d,m)} σ_i(W) , (4)

where σ_1(W), . . . , σ_{min(d,m)}(W) are the successive singular values of W. This enforces a low-rank solution in W, i.e., constrains the different w_i's to live in a low-dimensional subspace. Here we would like to define a penalty function Ω(W) that encodes as prior knowledge that the tasks are clustered into r < m groups. To do so, let us first assume that we know the clusters beforehand, i.e., we have a partition of the set of tasks into r groups. In that case we can follow an approach proposed by [5], which for clarity we rephrase with our notations and slightly generalize now. For a given cluster c ∈ [1, r], let us denote by J(c) ⊂ [1, m] the set of tasks in c, by m_c = |J(c)| the number of tasks in cluster c, and by E the m × r binary matrix which describes the cluster assignment for the m tasks, i.e., E_ij = 1 if task i is in cluster j, and 0 otherwise. Let us further denote by w̄_c = (Σ_{i∈J(c)} w_i)/m_c the average weight vector for the tasks in c, and recall that w̄ = (Σ_{i=1}^{m} w_i)/m denotes the average weight vector over all tasks. Finally, it will be convenient to introduce the matrix M = E(E^⊤E)^{-1}E^⊤.
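The two penalties (3) and (4) are straightforward to compute; a small sketch (dimensions, β, and the random weights are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, beta = 3, 5, 2.0
W = rng.normal(size=(d, m))          # columns w_1, ..., w_m

# Mean-plus-variance penalty, eq. (3)
w_bar = W.mean(axis=1)
omega_variance = (np.dot(w_bar, w_bar)
                  + (beta / m) * ((W - w_bar[:, None]) ** 2).sum())

# Trace-norm penalty, eq. (4): sum of singular values of W
omega_trace = np.linalg.svd(W, compute_uv=False).sum()

# Sanity check: the trace norm is the nuclear norm computed by NumPy
assert np.isclose(omega_trace, np.linalg.norm(W, 'nuc'))
```

Increasing β makes (3) pull the columns of W toward their mean, while (4) shrinks the singular values of W, encouraging the columns to lie in a low-dimensional subspace.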
M can also be written as I − L, where L is the normalized Laplacian of the graph G whose nodes are the tasks, connected by an edge if and only if they are in the same cluster. Then we can define three semi-norms of interest on W that quantify different orthogonal aspects:

• A global penalty, which measures on average how large the weight vectors are: Ω_mean(W) = m‖w̄‖² = tr WUW^⊤.

• A measure of between-cluster variance, which quantifies how close to each other the different clusters are: Ω_between(W) = Σ_{c=1}^{r} m_c‖w̄_c − w̄‖² = tr W(M − U)W^⊤.

• A measure of within-cluster variance, which quantifies the compactness of the clusters: Ω_within(W) = Σ_{c=1}^{r} { Σ_{i∈J(c)} ‖w_i − w̄_c‖² } = tr W(I − M)W^⊤.

We note that both Ω_between(W) and Ω_within(W) depend on the particular choice of clusters E, or equivalently of M. We now propose to consider the following general penalty function:

Ω(W) = ε_M Ω_mean(W) + ε_B Ω_between(W) + ε_W Ω_within(W) , (5)

where ε_M, ε_B and ε_W are non-negative parameters that balance the importance of the components of the penalty. Plugging this quadratic penalty into (2) leads to the general problem:

min_{W∈R^{d×m}} ℓ(W) + λ tr WΣ(M)^{-1}W^⊤ , (6)

where

Σ(M)^{-1} = ε_M U + ε_B(M − U) + ε_W(I − M) . (7)

Here we use the notation Σ(M) to insist on the fact that this quadratic penalty depends on the cluster structure through the matrix M. Observing that the matrices U, M − U and I − M are orthogonal projections onto orthogonal supplementary subspaces, we easily get from (7):

Σ(M) = ε_M^{-1}U + ε_B^{-1}(M − U) + ε_W^{-1}(I − M) = ε_W^{-1}I + (ε_M^{-1} − ε_B^{-1})U + (ε_B^{-1} − ε_W^{-1})M . (8)

By choosing particular values for ε_M, ε_B and ε_W we can recover several situations. In particular:

• For ε_W = ε_B = ε_M = ε, we simply recover the Frobenius norm of W, which does not put any constraint on the relationship between the different tasks: Ω(W) = ε tr WW^⊤ = ε Σ_{i=1}^{m} ‖w_i‖².

• For ε_W = ε_B > ε_M, we recover the penalty of [5] without clusters: Ω(W) = tr W(ε_M U + ε_B(I − U))W^⊤ = ε_M m‖w̄‖² + ε_B Σ_{i=1}^{m} ‖w_i − w̄‖².
In that case, a global similarity between tasks is enforced, in addition to the general constraint on their mean. The structure in clusters plays no role since the sum of the between- and within-cluster variances is independent of the particular choice of clusters.

• For ε_W > ε_B = ε_M we recover the penalty of [5] with clusters:

\Omega(W) = \mathrm{tr}\, W(\varepsilon_M M + \varepsilon_W(I-M))W^\top = \varepsilon_M \sum_{c=1}^{r} \Big\{ m_c \|\bar{w}_c\|^2 + \frac{\varepsilon_W}{\varepsilon_M} \sum_{i \in J(c)} \|w_i - \bar{w}_c\|^2 \Big\}.

In order to enforce a cluster hypothesis on the tasks, a natural choice is therefore to take ε_W > ε_B > ε_M in (5). This has the effect of penalizing the within-cluster variance more than the between-cluster variance, hence promoting compact clusters. Of course, a major limitation at this point is that we assumed the cluster structure to be known a priori (through the matrix E, or equivalently M). In many cases of interest, we would like instead to learn the cluster structure itself from the data. We propose to learn the cluster structure in our framework by optimizing our objective function (6) both in W and M, i.e., to consider the problem:

\min_{W \in \mathbb{R}^{d \times m},\, M \in \mathcal{M}_r} \ell(W) + \lambda\, \mathrm{tr}\, W\Sigma(M)^{-1}W^\top, (9)

where \mathcal{M}_r denotes the set of matrices M = E(E^\top E)^{-1}E^\top defined by a clustering of the m tasks into r clusters, and Σ(M) is defined in (8). Denoting by \mathcal{S}_r = \{\Sigma(M) : M \in \mathcal{M}_r\} the corresponding set of positive semidefinite matrices, we can equivalently rewrite the problem as:

\min_{W \in \mathbb{R}^{d \times m},\, \Sigma \in \mathcal{S}_r} \ell(W) + \lambda\, \mathrm{tr}\, W\Sigma^{-1}W^\top. (10)

The objective function in (10) is jointly convex in W ∈ R^{d×m} and Σ ∈ S^m_+, the set of m × m positive semidefinite matrices; however, the (finite) set \mathcal{S}_r is not convex, making this problem intractable. We now propose a convex relaxation of (10) by optimizing over a convex set of positive semidefinite matrices that contains \mathcal{S}_r.
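The Frobenius special case is easy to verify numerically: with ε_W = ε_B = ε_M = ε, the inverse covariance of Eq. (7) collapses to εI regardless of the clustering. A small sanity-check sketch (the clustering below is an arbitrary illustrative choice):

```python
import numpy as np

# Special case check for Eq. (7): with eps_W = eps_B = eps_M = eps, the penalty
# tr(W Sigma(M)^{-1} W^T) reduces to eps * ||W||_F^2, whatever the clustering M.
rng = np.random.default_rng(2)
m, d, eps = 5, 3, 0.3
W = rng.standard_normal((d, m))
E = np.zeros((m, 2)); E[[0, 1], 0] = E[[2, 3, 4], 1] = 1.0  # arbitrary 2-cluster split
M = E @ np.linalg.inv(E.T @ E) @ E.T
U = np.full((m, m), 1.0 / m)
Sigma_inv = eps * U + eps * (M - U) + eps * (np.eye(m) - M)  # Eq. (7), equal eps
penalty = np.trace(W @ Sigma_inv @ W.T)
```

Since U + (M − U) + (I − M) = I, the three projection terms telescope and the penalty is exactly ε times the squared Frobenius norm.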
3 Convex relaxation

In order to formulate a convex relaxation of (10), we observe that in the penalty term (5) the cluster structure only contributes to the second and third terms Ω_between(W) and Ω_within(W), and that these penalties depend only on the centered version of W. In terms of matrices, only the last two terms of Σ(M)^{-1} in (7) depend on M, i.e., on the clustering, and these terms can be rewritten as:

\varepsilon_B(M-U) + \varepsilon_W(I-M) = \Pi(\varepsilon_B M + \varepsilon_W(I-M))\Pi. (11)

Indeed, it is easy to check that M − U = MΠ = ΠMΠ, and that I − M = I − U − (M − U) = Π − ΠMΠ = Π(I − M)Π. Intuitively, multiplying by Π on the right (resp. on the left) centers the rows (resp. the columns) of a matrix, and both M − U and I − M are row- and column-centered. To simplify notation, let us introduce \widetilde{M} = \Pi M \Pi. Plugging (11) into (7) and (9), we get the penalty

\mathrm{tr}\, W\Sigma(M)^{-1}W^\top = \varepsilon_M\, \mathrm{tr}\, W^\top W U + \mathrm{tr}\, (W\Pi)(\varepsilon_B \widetilde{M} + \varepsilon_W(I - \widetilde{M}))(W\Pi)^\top, (12)

in which, again, only the second part needs to be optimized with respect to the clustering M. Denoting \Sigma_c^{-1}(M) = \varepsilon_B \widetilde{M} + \varepsilon_W(I - \widetilde{M}), one can express Σ_c(M), using the fact that \widetilde{M} is a projection:

\Sigma_c(M) = (\varepsilon_B^{-1} - \varepsilon_W^{-1})\widetilde{M} + \varepsilon_W^{-1} I. (13)

Σ_c is characterized by \widetilde{M} = ΠMΠ, which is discrete by construction, hence the non-convexity of \mathcal{S}_r. We have the natural constraints M ≥ 0 (i.e., \widetilde{M} ≥ −U), 0 ⪯ M ⪯ I (i.e., 0 ⪯ \widetilde{M} ⪯ Π) and tr M = r (i.e., tr \widetilde{M} = r − 1). A possible convex relaxation of the discrete set of matrices \widetilde{M} is therefore \{\widetilde{M} : 0 \preceq \widetilde{M} \preceq I,\ \mathrm{tr}\,\widetilde{M} = r-1\}. This gives an equivalent convex set \mathcal{S}_c for Σ_c, namely:

\mathcal{S}_c = \{\Sigma_c \in S^m_+ : \alpha I \preceq \Sigma_c \preceq \beta I,\ \mathrm{tr}\,\Sigma_c = \gamma\}, (14)

with α = ε_W^{-1}, β = ε_B^{-1} and γ = (m − r + 1)ε_W^{-1} + (r − 1)ε_B^{-1}. Incorporating the first part of the penalty (12) into the empirical risk term by defining \ell_c(W) = \lambda\ell(W) + \varepsilon_M\, \mathrm{tr}\, W^\top W U, we are now ready to state our relaxation of (10):

\min_{W \in \mathbb{R}^{d \times m},\, \Sigma_c \in \mathcal{S}_c} \ell_c(W) + \lambda\, \mathrm{tr}\, W\Pi\Sigma_c^{-1}(W\Pi)^\top. (15)

3.1 Reinterpretation in terms of norms

We denote by \|W\|_c^2 = \min_{\Sigma_c \in \mathcal{S}_c} \mathrm{tr}\, W\Sigma_c^{-1}W^\top the cluster norm (CN).
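The orthogonal-projection structure invoked in Eq. (11) and in the spectral inversion (8) can be verified numerically. An illustrative sanity check (the 3-cluster split of 6 tasks is an arbitrary example):

```python
import numpy as np

# Sanity check: for a clustering matrix M and U = 11^T/m, the matrices
# U, M - U and I - M are orthogonal projections onto supplementary subspaces,
# which is what makes the componentwise inversion in Eq. (8) valid.
m = 6
E = np.zeros((m, 3)); E[[0, 1], 0] = E[[2, 3], 1] = E[[4, 5], 2] = 1.0
M = E @ np.linalg.inv(E.T @ E) @ E.T
U = np.full((m, m), 1.0 / m)
P1, P2, P3 = U, M - U, np.eye(m) - M
for P in (P1, P2, P3):
    assert np.allclose(P @ P, P)          # each is idempotent (a projection)
assert np.allclose(P1 @ P2, 0)            # mutually orthogonal ranges
assert np.allclose(P2 @ P3, 0)
assert np.allclose(P1 + P2 + P3, np.eye(m))  # they sum to the identity
```

The trace identity tr M = r (and hence tr M̃ = r − 1) also follows directly, since a projection's trace equals its rank.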
For any convex set \mathcal{S}_c, we obtain a norm on W (which we apply here to its centered version). By putting different constraints on the set \mathcal{S}_c, we obtain different norms on W, and in fact all previous multi-task formulations can be cast in this way, i.e., by choosing a specific set of positive matrices \mathcal{S}_c (e.g., a trace constraint for the trace norm, and simply a singleton for the Frobenius norm). Thus, designing norms for multi-task learning is equivalent to designing a set of positive matrices. In this paper we have investigated a specific set adapted to clustered tasks, but other sets could be designed for other situations. Note that we have selected a simple spectral convex set \mathcal{S}_c in order to make the optimization simpler in Section 3.3, but we could also add constraints that encode the point-wise positivity of the matrix M. Finally, for r = 1 (one cluster) and r = m (one cluster per task), we recover the formulation of [5].

3.2 Reinterpretation as a convex relaxation of K-means

In this section we show that the semi-norm \|W\Pi\|_c^2 designed above can be interpreted as a convex relaxation of K-means on the tasks [9]. Indeed, given W ∈ R^{d×m}, K-means aims to decompose it as W = \mu E^\top, where \mu \in \mathbb{R}^{d \times r} are the cluster centers and E represents a partition. Given E, μ is found by minimizing \min_\mu \|W^\top - E\mu^\top\|_F^2. Thus, a natural strategy, outlined by [9], is to alternate between optimizing μ, the partition E and the weight vectors W. We now show that our convex norm is obtained by minimizing in closed form with respect to μ and relaxing. By translation invariance, this minimization is equivalent to \min_\mu \|\Pi W^\top - \Pi E\mu^\top\|_F^2. If we add a penalization on μ of the form \lambda\, \mathrm{tr}\, E^\top E\mu\mu^\top, then a short calculation shows that the minimum with respect to μ (i.e., after optimization of the cluster centers) is equal to

\mathrm{tr}\, \Pi W^\top W\Pi(\Pi E(E^\top E)^{-1}E^\top\Pi/\lambda + I)^{-1} = \mathrm{tr}\, \Pi W^\top W\Pi(\Pi M\Pi/\lambda + I)^{-1}.

By comparing with Eq.
(13), we see that our formulation is indeed a convex relaxation of K-means.

3.3 Primal optimization

Let us now show in more detail how (15) can be solved efficiently. Whereas a dual formulation could easily be derived following [8], a direct approach is to rewrite (15) as

\min_{W \in \mathbb{R}^{d \times m}} \Big( \ell_c(W) + \min_{\Sigma_c \in \mathcal{S}_c} \mathrm{tr}\, W\Pi\Sigma_c^{-1}(W\Pi)^\top \Big), (16)

which, if ℓ_c is differentiable, can be directly optimized by gradient-based methods on W, since \|W\Pi\|_c^2 = \min_{\Sigma_c \in \mathcal{S}_c} \mathrm{tr}\, W\Pi\Sigma_c^{-1}(W\Pi)^\top is a quadratic semi-norm of WΠ. This regularization term can be computed efficiently in semi-closed form. Indeed, since \mathcal{S}_c as defined in (14) is a spectral set (i.e., it depends only on eigenvalues of covariance matrices), we obtain a function of the singular values of WΠ (or equivalently of the eigenvalues of WΠW^⊤):

\min_{\Sigma_c \in \mathcal{S}_c} \mathrm{tr}\, W\Pi\Sigma_c^{-1}(W\Pi)^\top = \min_{\lambda \in \mathbb{R}^m,\ \alpha \le \lambda_i \le \beta,\ \sum_i \lambda_i = \gamma,\ V \in \mathcal{O}^m} \mathrm{tr}\, W\Pi V \mathrm{diag}(\lambda)^{-1} V^\top (W\Pi)^\top,

where \mathcal{O}^m is the set of orthogonal matrices in \mathbb{R}^{m \times m}. The optimal V is the matrix of eigenvectors of WΠW^⊤, and we obtain the value of the objective function at the optimum:

\min_{\Sigma \in \mathcal{S}_c} \mathrm{tr}\, W\Pi\Sigma^{-1}(W\Pi)^\top = \min_{\lambda \in \mathbb{R}^m,\ \alpha \le \lambda_i \le \beta,\ \sum_i \lambda_i = \gamma} \sum_{i=1}^{m} \frac{\sigma_i^2}{\lambda_i},

where σ and λ are the vectors containing the singular values of WΠ and the eigenvalues of Σ, respectively. Now we simply need to be able to compute this function of the singular values. The only coupling in this formulation comes from the trace constraint. The Lagrangian corresponding to this constraint is:

L(\lambda, \nu) = \sum_{i=1}^{m} \frac{\sigma_i^2}{\lambda_i} + \nu\Big(\sum_{i=1}^{m} \lambda_i - \gamma\Big). (17)

For ν ≤ 0, this is a decreasing function of λ_i, so the minimum over λ_i ∈ [α, β] is reached at λ_i = β. The dual function is then a linear non-decreasing function of ν (since α ≤ γ/m ≤ β from the definition of α, β, γ in (14)), which reaches its maximum value (over ν ≤ 0) at ν = 0. Let us therefore now consider the dual for ν ≥ 0. Then (17) is a convex function of λ_i. Canceling its derivative with respect to λ_i shows that the minimum over λ_i ∈ \mathbb{R} is reached at λ_i = \sigma_i/\sqrt{\nu}.
Now this value may not lie in the constraint set [α, β]: if \sigma_i < \alpha\sqrt{\nu}, then the minimum over λ_i ∈ [α, β] of (17) is reached at λ_i = α, and if \sigma_i > \beta\sqrt{\nu}, it is reached at λ_i = β. Otherwise, it is reached at \lambda_i = \sigma_i/\sqrt{\nu}. Reporting this in (17), the dual problem is therefore

\max_{\nu \ge 0}\ \sum_{i:\, \alpha\sqrt{\nu} \le \sigma_i \le \beta\sqrt{\nu}} 2\sigma_i\sqrt{\nu} + \sum_{i:\, \sigma_i < \alpha\sqrt{\nu}} \Big(\frac{\sigma_i^2}{\alpha} + \nu\alpha\Big) + \sum_{i:\, \beta\sqrt{\nu} < \sigma_i} \Big(\frac{\sigma_i^2}{\beta} + \nu\beta\Big) - \nu\gamma. (18)

Since a closed form of this expression is known for each fixed value of ν, one can obtain \|W\Pi\|_c^2 (and the eigenvalues of Σ*) by Algorithm 1.

Algorithm 1 Computing \|A\|_c^2
Require: A, α, β, γ.
Ensure: \|A\|_c^2, λ*.
  Compute the singular values σ_i of A.
  Order the values \sigma_i^2/\alpha^2, \sigma_i^2/\beta^2 in a vector I (with an additional 0 at the beginning).
  for each interval (a, b) of I do
    if \partial L(\lambda^*, \nu)/\partial\nu is canceled on ν ∈ (a, b) then
      Substitute ν* into the dual function L(λ*, ν) to get \|A\|_c^2, and compute λ* on (a, b).
      return \|A\|_c^2, λ*.
    end if
  end for

The cancellation condition in Algorithm 1 is that the value canceling the derivative belongs to (a, b), i.e.,

\nu = \Big( \frac{\sum_{i:\, \alpha\sqrt{\nu} \le \sigma_i \le \beta\sqrt{\nu}} \sigma_i}{\gamma - (\alpha n_- + \beta n_+)} \Big)^2 \in (a, b),

where n_- and n_+ are the numbers of σ_i < \alpha\sqrt{\nu} and σ_i > \beta\sqrt{\nu}, respectively. Denoting \|A\|_c^2 = F(A, \Sigma^*(A)), the full gradient \nabla_A F = \partial_A F + \partial_\Sigma F\, \partial_A\Sigma cannot be computed because of the non-differentiable constraints on Σ for F. We followed an alternative direction, using only the \partial_A F part.

4 Experiments

4.1 Artificial data

We generated synthetic data consisting of two clusters of two tasks each. The tasks are vectors of \mathbb{R}^d, with d = 30. For each cluster, a center \bar{w}_c was generated in \mathbb{R}^{d-2}, so that the two clusters are orthogonal. More precisely, each \bar{w}_c had (d − 2)/2 random features drawn from \mathcal{N}(0, \sigma_r^2), with \sigma_r^2 = 900, and (d − 2)/2 zero features. Each task t was then computed as w_t + \bar{w}_{c(t)}, where c(t) is the cluster of t. The vector w_t had the same zero features as its cluster center, and its other features were drawn from \mathcal{N}(0, \sigma_c^2), with \sigma_c^2 = 16. The last two features were non-zero for all tasks and drawn from \mathcal{N}(0, \sigma_c^2).
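The singular-value minimization underlying \|A\|_c^2 can also be sketched by direct numerical optimization of \sum_i \sigma_i^2/\lambda_i over the box and trace constraints of Eq. (14). This is an illustrative sketch using a generic constrained solver, not the paper's closed-form dual (Algorithm 1); it assumes the feasibility condition α ≤ γ/m ≤ β:

```python
import numpy as np
from scipy.optimize import minimize

def cluster_norm_sq(A, alpha, beta, gamma):
    """||A||_c^2 = min over lambda in [alpha, beta]^m with sum(lambda) = gamma
    of sum_i sigma_i^2 / lambda_i, where sigma_i are the singular values of A
    (padded with zeros up to m). Numerical sketch, not the semi-closed form."""
    m = A.shape[1]
    s = np.zeros(m)
    sv = np.linalg.svd(A, compute_uv=False)
    s[:len(sv)] = sv
    obj = lambda lam: np.sum(s ** 2 / lam)
    x0 = np.full(m, gamma / m)  # feasible start, since alpha <= gamma/m <= beta
    res = minimize(obj, x0, bounds=[(alpha, beta)] * m,
                   constraints={"type": "eq",
                                "fun": lambda lam: lam.sum() - gamma})
    return float(res.fun)
```

For A = I with m = 2, α = 0.5, β = 2 and γ = 2, symmetry forces λ = (1, 1) and the value 2, which the solver recovers.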
For each task, 2000 points were generated and normal noise of variance \sigma_n^2 = 150 was added. In a first experiment, we compared our cluster norm \|\cdot\|_c^2 with single-task learning given by the Frobenius norm, and with the trace norm, which corresponds to the assumption that the tasks live in a low-dimensional space. The multi-task kernel approach being a special case of CN, its performance always lies between that of the single-task approach and that of CN. In a second setting, we compare CN to alternative methods that differ in the way they learn Σ:

• The True metric approach, which simply plugs the actual clustering into E and optimizes W using this fixed metric. This requires knowing the true clustering a priori, and can be regarded as a gold standard.
• The k-means approach, which alternates between optimizing the tasks in W given the metric Σ and re-learning Σ by clustering the tasks w_i [9]. The clustering is done by k-means run 3 times. This is a non-convex approach, and different initializations of k-means may result in different local minima.

We also tried one run of CN followed by a run of True metric using the learned Σ reprojected onto \mathcal{S}_r by rounding, i.e., by performing k-means on the eigenvectors of the learned Σ (Reprojected approach), and a run of k-means starting from the relaxed solution (CNinit approach). Only the first method requires knowing the true clustering a priori; all the other methods can be run without any knowledge of the clustering structure of the tasks. Each method was run with different numbers of training points. The training points were split equally between the two clusters, and within each cluster 5/6 of the points were used for the first task and 1/6 for the second, in order to simulate a natural setting where some tasks have fewer data. We used the 2000 points of each task to build 3 training folds, and the remaining points were used for testing.
We used the mean RMSE across tasks as the evaluation criterion, and a quadratic loss for ℓ(W). The results of the first experiment are shown in Figure 1 (left). As expected, both multi-task approaches perform better than the approach that learns each task independently. The CN penalization, on the other hand, always gives a better test error than the trace norm penalization, with a stronger advantage when very few training points are available. As more training points become available, all the methods give increasingly similar performance; in particular, with large samples it is no longer useful to use a multi-task approach.

Figure 1: RMSE versus number of training points (log scale) for the tested methods (left: Frobenius norm, trace norm and CN; right: CN, k-means, True metric and Reprojected).

Figure 2: Recovered Σ with CN (upper line) and k-means (lower line) for 28, 50 and 100 points.

Figure 1 (right) shows the results of the second experiment. Using the true metric always gives the best results. With 28 training points, no method recovers the correct clustering structure, as displayed in Figure 2, although CN performs slightly better than the k-means approach since the metric it learns is more diffuse. With 50 training points, CN performs much better than the k-means approach, which completely fails to recover the clustering structure, as illustrated by the Σ learned for 28 and 50 training points in Figure 2. In the latter setting, CN partially recovers the clusters. When more training points become available, the k-means approach perfectly recovers the clustering structure and outperforms the relaxed approach. The Reprojected approach, on the other hand, always performs as well as the better of the two other methods. The results of the CNinit approach are not displayed since they are the same as for the Reprojected method.
4.2 MHC-I binding data

We also applied our method to the IEDB MHC-I peptide binding benchmark proposed in [10]. This database contains binding affinities of various peptides, i.e., short amino-acid sequences, to different MHC-I molecules. This binding process is central to the immune system, and predicting it is crucial, for example for vaccine design. The affinities are thresholded to give a prediction problem. Each MHC-I molecule is considered as a task, and the goal is to predict whether a peptide binds a molecule. We used an orthogonal coding of the amino acids to represent the peptides, and balanced the data by keeping only one negative example for each positive point, resulting in 15236 points involving 35 different molecules. We chose a logistic loss for ℓ(W).

Table 1: Prediction error for the 10 molecules with less than 200 training peptides in IEDB.

Method:      Pooling        Frobenius norm   Multi-task kernel   Trace norm     Cluster norm
Test error:  26.53% ± 2.0   11.62% ± 1.4     10.10% ± 1.4        9.20% ± 1.3    8.71% ± 1.5

Multi-task learning approaches have already proved useful for this problem; see for example [11, 12]. Besides, it is well known in the vaccine design community that some molecules can be grouped into empirically defined supertypes known to have similar binding behaviors. [12] showed in particular that multi-task approaches are very useful for molecules with few known binders. Following this observation, we consider the mean error on the 10 molecules with fewer than 200 known ligands, and report the results in Table 1. We did not select the parameters by internal cross-validation, but chose them among a small set of values in order to avoid overfitting. More accurate results could arise from such a cross-validation, in particular concerning the number of clusters (here we limited the choice to 2 or 10 clusters). The pooling approach simply considers one global prediction problem by pooling together the data available for all molecules.
The results illustrate that it is better to consider individual models than one unique pooled model. On the other hand, all the multi-task approaches improve the accuracy, with the cluster norm giving the best performance. The learned Σ, however, did not recover the known supertypes, although it may contain some relevant information on the binding behavior of the molecules.

5 Conclusion

We have presented a convex approach to clustered multi-task learning, based on the design of a dedicated norm. Promising results were presented on synthetic examples and on the IEDB dataset. We are currently investigating more refined convex relaxations and the natural extension to non-linear multi-task learning, as well as the inclusion of specific features on the tasks, which has been shown to improve performance in other settings [6].

References

[1] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[2] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Comput., 7(2):219–269, 1995.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc. B, 58:267–288, 1996.
[4] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. J. Mach. Learn. Res., 4:83–99, 2003.
[5] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. J. Mach. Learn. Res., 6:615–637, 2005.
[6] J. Abernethy, F. Bach, T. Evgeniou, and J.-P. Vert. Low-rank matrix factorization with attributes. Technical Report cs/0611124, arXiv, 2006.
[7] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Adv. NIPS 19, pages 41–48, Cambridge, MA, 2007. MIT Press.
[8] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27–72, 2004.
[9] M.
Deodhar and J. Ghosh. A framework for simultaneous co-clustering and learning from complex data. In KDD '07, pages 250–259, New York, NY, USA, 2007. ACM.
[10] B. Peters, H.-H. Bui, S. Frankild, M. Nielson, C. Lundegaard, E. Kostem, D. Basch, K. Lamberth, M. Harndahl, W. Fleri, S. S. Wilson, J. Sidney, O. Lund, S. Buus, and A. Sette. A community resource benchmarking predictions of peptide binding to MHC-I molecules. PLoS Comput. Biol., 2(6):e65, 2006.
[11] D. Heckerman, D. Kadie, and J. Listgarten. Leveraging information across HLA alleles/supertypes improves epitope prediction. J. Comput. Biol., 14(6):736–746, 2007.
[12] L. Jacob and J.-P. Vert. Efficient peptide-MHC-I binding prediction for alleles with few known binders. Bioinformatics, 24(3):358–366, Feb 2008.
Temporal Difference Based Actor Critic Learning Convergence and Neural Implementation Dotan Di Castro, Dmitry Volkinshtein and Ron Meir Department of Electrical Engineering Technion, Haifa 32000, Israel {dot@tx},{dmitryv@tx},{rmeir@ee}.technion.ac.il Abstract Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches often fail (e.g., when function approximation is involved). Interestingly, there is growing evidence that actor-critic approaches based on phasic dopamine signals play a key role in biological learning through cortical and basal ganglia loops. We derive a temporal difference based actor critic learning algorithm, for which convergence can be proved without assuming widely separated time scales for the actor and the critic. The approach is demonstrated by applying it to networks of spiking neurons. The established relation between phasic dopamine and the temporal difference signal lends support to the biological relevance of such algorithms. 1 Introduction Actor-critic (AC) algorithms [22] were probably among the first algorithmic approaches to reinforcement learning (RL). In recent years much work focused on state, or state-action, value functions as a basis for learning. These methods, while possessing desirable convergence attributes in the context of table lookup representation, led to convergence problems when function approximation was involved. A more recent line of research is based on directly (and usually parametrically) representing the policy, and performing stochastic gradient ascent on the expected reward, estimated through trying out various actions and sampling trajectories [3, 15, 23]. However, such direct policy methods often lead to very slow convergence due to large estimation variance. 
One approach suggested in recent years to remedy this problem is the utilization of AC approaches, where the value function is estimated by a critic and passed to an actor, which selects an appropriate action based on the approximated value function. The first convergence result for a policy gradient AC algorithm based on function approximation was established in [13], and extended recently in [5, 6]. At this stage it seems that AC-based algorithms provide a solid foundation for provably effective approaches to RL based on function approximation. Whether these methods will yield useful solutions to practical problems remains to be seen. RL has also been playing an increasingly important role in neuroscience, and experimentalists have directly recorded the activities of neurons while animals perform learning tasks [20], and used imaging techniques to characterize human brain activities [17, 24] during learning. It was suggested long ago that the basal ganglia, a set of ancient sub-cortical brain nuclei, are implicated in RL. Moreover, these nuclei are naturally divided into two components, based on the separation of the striatum (the main input channel to the basal ganglia) into the ventral and dorsal components. Several imaging studies [17, 24] have suggested that the ventral stream is associated with value estimation by a so-called critic, while the dorsal stream has been implicated in motor output, action selection, and learning by a so-called actor. Two further experimental findings support the view taken in this work. First, it has been observed [20] that the short-latency phasic response of the dopamine neurons in the midbrain strongly resembles the temporal difference (TD) signal introduced in the theory of TD learning [22], which can be used by AC algorithms for both the actor and the critic.
Since mid-brain dopaminergic neurons project diffusively to both the ventral and dorsal components of the striatum, these results are consistent with a TD-based AC learning interpretation of the basal ganglia. Second, recent results suggest that synaptic plasticity occurring at the cortico-striatal synapses is strongly modulated by dopamine [18]. Based on these observations it has been suggested that the basal ganglia take part in TD-based RL, with the (global) phasic dopamine signal serving as the TD signal [16] modulating synaptic plasticity. Some recent work has been devoted to implementing RL in networks of spiking neurons (e.g., [1, 9, 12]). Such an approach may lead to specific and experimentally verifiable hypotheses regarding the interaction of known synaptic plasticity rules and RL. In fact, one tantalizing possibility is to test the derived rules in the context of ex-vivo cultured neural networks (e.g., [19]), which are connected to the environment through input (sensory) and output (motor) channels. We then envision dopamine serving as a biological substrate for implementing the TD signal in such a system. The work cited above is mostly based on direct policy gradient algorithms (e.g., [3]), leading to non-AC approaches. Moreover, these algorithms were based directly on the reward, rather than on the biologically better-motivated TD signal, which provides more information than the reward itself and is expected to lead to improved convergence.

2 A Temporal Difference Based Actor-Critic Algorithm

The TD-based AC algorithm developed in this section is related to the one presented in [5, 6]. While the derivation of the present algorithm differs from the latter work (which also stressed the issue of the natural gradient), the essential novel theoretical feature here is the establishment of convergence¹ without the restriction to two time scales which was used in [5, 6, 13].
This result is also important in a biological context, where, as far as we are aware, there is no evidence for such a time scale separation.

2.1 Problem Formulation

We consider a finite Markov Decision Process (MDP) in discrete time with a finite state set X of size |X| and a finite action set U. The MDP models the environment in which the agent acts. Each selected action u ∈ U determines a stochastic matrix P(u) = [P(y|x,u)]_{x,y∈X}, where P(y|x,u) is the transition probability from a state x ∈ X to a state y ∈ X given the control u. A parameterized policy is described by a conditional probability function, denoted by µ(u|x,θ), which maps an observation x ∈ X into a control u ∈ U given a parameter θ ∈ R^K. For each state x ∈ X the agent receives a corresponding reward r(x). The agent's goal is to adjust the parameter θ in order to attain maximum average reward over time. For each θ ∈ R^K, we have a Markov chain (MC) induced by P(y|x,u) and µ(u|x,θ). The state transitions of the MC are obtained by first generating an action u according to µ(u|x,θ), and then generating the next state according to P(y|x,u). Thus, the MC has a transition matrix P(θ) = [P(y|x,θ)]_{x,y∈X} which is given by

P(y|x,\theta) = \int_U P(y|x,u)\, d\mu(u|x,\theta).

We denote the set of these transition probabilities by \mathcal{P} = \{P(\theta)\,|\,\theta \in \mathbb{R}^K\}, and its closure by \bar{\mathcal{P}}. We denote by P(x,u,y) the stationary probability of being in state x, choosing action u, and going to state y. Several technical assumptions are required in the proofs below.

Assumption 2.1. (i) Each MC P(θ), P(θ) ∈ \bar{\mathcal{P}}, is aperiodic, recurrent, and contains a single equivalence class. (ii) The function µ(u|x,θ) is twice differentiable. Moreover, there exist positive constants B_r and B_µ such that for all x ∈ X, u ∈ U, θ ∈ R^K and 1 ≤ k_1, k_2 ≤ K, we have

|r(x)| \le B_r, \quad |\partial\mu(u|x,\theta)/\partial\theta_k| \le B_\mu, \quad |\partial^2\mu(u|x,\theta)/\partial\theta_{k_1}\partial\theta_{k_2}| \le B_\mu.

As a result of Assumption 2.1(i), we have the following lemma regarding the stationary distribution (Theorem 3.1 in [8]).
¹ Throughout this paper, convergence refers to convergence to a small ball around a stationary point; see Theorem 2.6 for a precise definition.

Lemma 2.1. Under Assumption 2.1(i), each MC P(θ) ∈ \bar{\mathcal{P}} has a unique stationary distribution, denoted by π(θ), satisfying π(θ)′P(θ) = π(θ)′, where x′ denotes the transpose of a vector x.

Next, we define a measure of the performance of an agent in an environment. The average reward per stage of a MC starting from an initial state x_0 ∈ X is defined by

J(x|\theta) \triangleq \lim_{T\to\infty} E_\theta\Big[ \frac{1}{T} \sum_{n=0}^{T} r(x_n) \,\Big|\, x_0 = x \Big],

where E_θ[·] denotes the expectation under the probability measure P(θ), and x_n is the state at time n. The agent's goal is to find θ ∈ R^K which maximizes J(x|θ). The following lemma shows that under Assumption 2.1 the average reward per stage does not depend on the initial state (see Theorem 4.7 in [10]).

Lemma 2.2. Under Assumption 2.1 and Lemma 2.1, the average reward per stage J(x|θ) is independent of the starting state, is denoted by η(θ), and satisfies η(θ) = π(θ)′r.

Based on Lemma 2.2, the agent's goal is to find a parameter vector θ which maximizes the average reward per stage η(θ). Performing the maximization directly on η(θ) is hard. In the sequel we show how this maximization can be performed by optimizing η(θ) using ∇η(θ). A consequence of Assumption 2.1 and the definition of η(θ) is the following lemma (see Lemma 1 in [15]).

Lemma 2.3. For each x, y ∈ X and each θ ∈ R^K, the functions P(y|x,θ), π(x|θ), and η(θ) are bounded, twice differentiable, and have bounded first and second derivatives.

Next, we define the differential value function of a state x ∈ X, which represents the average reward the agent receives upon starting from a state x_0 = x until reaching a recurrent state x* for the first time. Mathematically,

h(x|\theta) \triangleq E_\theta\Big[ \sum_{n=0}^{T} (r(x_n) - \eta(\theta)) \,\Big|\, x_0 = x \Big], (1)

where T \triangleq \min\{k > 0 \,|\, x_k = x^*\}. We define h(\theta) \triangleq (h(x_1|\theta), \ldots, h(x_{|X|}|\theta)) \in \mathbb{R}^{|X|}.
For each θ ∈ R^K and x ∈ X, the quantities h(x|θ), r(x), and η(θ) satisfy Poisson's equation (see Theorem 7.4.1 in [4]):

h(x|\theta) = r(x) - \eta(\theta) + \sum_{y\in X} P(y|x,\theta)\, h(y|\theta). (2)

Based on the differential value definition, we define the temporal difference (TD) between states x ∈ X and y ∈ X. Formally,

d(x, y) \triangleq r(x) - \eta(\theta) + h(y|\theta) - h(x|\theta). (3)

The TD measures the difference between the differential value estimate following the receipt of reward r(x) and a move to a new state y, and the estimate of the current differential state value at state x.

2.2 Algorithmic details and single time scale convergence

We start with a definition of the likelihood ratio derivative, \psi(x, u|\theta) \triangleq \nabla\mu(u|x,\theta)/\mu(u|x,\theta), which we assume to be bounded.

Assumption 2.2. For all x ∈ X, u ∈ U, and θ ∈ R^K, there exists a positive constant B_ψ such that |ψ(x, u|θ)| ≤ B_ψ < ∞.

In order to improve the agent's performance, we need to follow the gradient direction. The following theorem shows how the gradient of the average reward per stage can be calculated from the TD signal. Similar variants of the theorem were proved using the Q-value [23] or the state value [15] instead of the TD signal.

Theorem 2.4. The gradient of the average reward per stage for θ ∈ R^K can be expressed as

\nabla\eta(\theta) = \sum_{x,y\in X,\, u\in U} P(x, u, y)\, \psi(x, u|\theta)\, (d(x, y) + f(x)) \quad (f(x)\ \text{arbitrary}). (4)

The theorem was proved using an advantage function argument in [6]. We provide a direct proof in Section A of the supplementary material. The flexibility resulting from the function f(x) allows us to encode the TD signal using biologically realistic positive values only, without influencing the convergence proof. In this paper, for simplicity, we use f(x) = 0. Based on Theorem 2.4, we suggest a TD-based AC algorithm. This algorithm is motivated by [15], where an actor-only algorithm was proposed. In [15] the differential value function was re-estimated afresh for each regenerative cycle, leading to a large estimation variance.
Using the continuity of the actor's policy function in θ, the difference between the estimates in successive regenerative cycles is small. Thus, the critic has a good initial estimate at the beginning of each cycle, which is used here in order to reduce the variance. A related AC algorithm was proposed in [5, 6], where two time scales were assumed in order to use Borkar's two-time-scale convergence theorem [7]. In our proposed algorithm, and associated convergence theorem, we do not assume different time scales for the actor and the critic. We present batch-mode update equations² in Algorithm 1 for the actor and the critic. The algorithm is based on some recurrent state x*; the visit times to this state are denoted by t_0, t_1, .... Updates occur only at these times (batch mode). We define a cycle of the algorithm by the time indices satisfying t_m ≤ n < t_{m+1}. The variables \tilde{d}, \tilde{h}(x), and \tilde{\eta} are the critic's estimates of d, h(x|θ), and η(θ), respectively.

Algorithm 1 Temporal Difference Based Actor Critic Algorithm
1: Given:
   • An MDP with a finite set X of states and a recurrent state x*, satisfying Assumption 2.1(i).
   • Hitting times t_0 < t_1 < t_2 < · · · of the state x*.
   • Step coefficients γ_m such that \sum_{m=1}^{\infty} \gamma_m = \infty and \sum_{m=1}^{\infty} \gamma_m^2 < \infty.
   • A parameterized policy µ(u|x,θ), θ ∈ R^K, which satisfies Assumption 2.1(ii).
   • A set H, constants B_{\tilde h} and B_θ, and an operator Π_H according to Assumption B.1.
   • Step parameters Γ_η and Γ_h satisfying Theorem 2.6.
2: Initialize the critic's variables:
   • \tilde{\eta}_0 = 0 (the estimate of the average reward per stage)
   • \tilde{h}_0(x) = 0, ∀x ∈ X (the estimate of the differential value function)
3: Initialize the actor: θ_0 = 0, and choose f(x) (see (4)).
4: for each state x_{t_{m+1}} visited do
5:   Critic: for all x ∈ X, with N_m(x) \triangleq \min\{t_m < k < t_{m+1} \,|\, x_k = x\} (where \min(\emptyset) = \infty):
       \tilde{d}(x_n, x_{n+1}) = r(x_n) - \tilde{\eta}_m + \tilde{h}_m(x_{n+1}) - \tilde{h}_m(x_n),
       \tilde{h}_{m+1}(x) = \tilde{h}_m(x) + \gamma_m \Gamma_h \sum_{n=N_m(x)}^{t_{m+1}-1} \tilde{d}(x_n, x_{n+1}), \quad \forall x \in X,
       \tilde{\eta}_{m+1} = \tilde{\eta}_m + \gamma_m \Gamma_\eta \sum_{n=t_m}^{t_{m+1}-1} (r(x_n) - \tilde{\eta}_m).
6:   Actor: \theta_{m+1} = \theta_m + \gamma_m \sum_{n=t_m}^{t_{m+1}-1} \psi(x_n, u_n|\theta_m)\, (\tilde{d}(x_n, x_{n+1}) + f(x_n)).
7:   Project each component of \tilde{h}_{m+1} and θ_{m+1} onto H (see Assumption B.1).
8: end for

² In order to prove convergence, certain boundedness conditions need to be imposed, which appear as step 7 of the algorithm. For lack of space, the precise definition of the set H is given in Assumption B.1 of the supplementary material.

In order to prove the convergence of Algorithm 1, we establish two basic results. The first shows that the algorithm converges to the set of ordinary differential equations (5), and the second establishes conditions under which the differential equations converge locally.

Theorem 2.5. Under Assumptions 2.1 and B.1, Algorithm 1 converges with probability 1 to the following set of ODEs:

\dot{\theta} = T(\theta)\nabla\eta(\theta) + C(\theta)(\eta(\theta) - \tilde{\eta}) + \sum_{x\in X} D^{(x)}(\theta)\,\big(h(x|\theta) - \tilde{h}(x)\big),
\dot{\tilde{h}}(x) = \Gamma_h\big(h(x|\theta) - \tilde{h}(x)\big) + \Gamma_h T(\theta)(\eta(\theta) - \tilde{\eta}), \quad x \in X,
\dot{\tilde{\eta}} = \Gamma_\eta T(\theta)(\eta(\theta) - \tilde{\eta}), (5)

where T = \min\{k > 0 \,|\, x_0 = x^*, x_k = x^*\}, T(\theta) = E_\theta[T],

C(\theta) = E_\theta\Big[\sum_{n=0}^{T-1} \psi(x_n, u_n|\theta) \,\Big|\, x_0 = x^*\Big],

D^{(x)}(\theta) = E_\theta\Big[\sum_{n=0}^{T-1} 1\{x_{n+1} = x\}\,\psi(x_n, u_n|\theta) \,\Big|\, x_0 = x^*\Big] + E_\theta\Big[\sum_{n=0}^{T-1} 1\{x_n = x\}\,\psi(x_n, u_n|\theta) \,\Big|\, x_0 = x^*\Big],

and where T(θ), C(θ), and D^{(x)}(θ) are continuous with respect to θ.

Theorem 2.5 is proved in Section B of the supplementary material, based on the theory of stochastic approximation, and more specifically on Theorem 5.2.1 in [14]. An advantage of the proof technique is that it does not need to assume two time scales. The second theorem, proved in Section C of the supplementary material, states the conditions under which η(θ_t) converges to a ball around the local optimum.

Theorem 2.6. If we choose \Gamma_\eta \ge B_{\dot\eta}^2/\epsilon_\eta and \Gamma_h \ge B_{\dot h}^2/\epsilon_h for some positive constants ε_h and ε_η, then \limsup_{t\to\infty} \|\nabla\eta(\theta(t))\| \le \epsilon, where \epsilon \triangleq B_C\epsilon_\eta + |X| B_D\epsilon_h. The constants B_{\dot\eta} and B_{\dot h} are defined in Section C of the supplementary material.
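The critic update of Algorithm 1 (step 5) can be sketched for a single regenerative cycle as follows. This is an illustrative sketch with tabular states indexed by integers; the actor update and the projection step are omitted:

```python
import numpy as np

def critic_cycle(states, rewards, eta, h, gamma, Gamma_eta=1.0, Gamma_h=1.0):
    """One batch critic update of Algorithm 1 over a regenerative cycle
    states[0..T] with states[0] == states[T] == x*, and rewards[n] = r(states[n]).
    Returns the updated average-reward estimate eta and value estimates h."""
    T = len(states) - 1
    # TD signals d(x_n, x_{n+1}) along the cycle, Eq. (3) with current estimates
    d = np.array([rewards[n] - eta + h[states[n + 1]] - h[states[n]]
                  for n in range(T)])
    h_new = h.copy()
    for x in range(len(h)):
        # N_m(x): first visit to x strictly inside the cycle (empty sum if none)
        visits = [k for k in range(1, T) if states[k] == x]
        if visits:
            h_new[x] += gamma * Gamma_h * d[visits[0]:T].sum()
    # Average-reward update over the whole cycle
    eta_new = eta + gamma * Gamma_eta * sum(rewards[n] - eta for n in range(T))
    return eta_new, h_new
```

On the short cycle 0 → 1 → 0 with r(0) = 1, r(1) = 0 and zero initial estimates, the TD from state 1 back to the recurrent state is zero, so only the average-reward estimate moves.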
3 A Neural Algorithm for the Actor Using McCulloch-Pitts Neurons

In this section we apply the previously developed algorithm to the case of neural networks. We start with the classic binary valued McCulloch-Pitts neuron, and then consider a more realistic spiking neuron model. While the algorithm presented in Section 2 was derived and proved to converge in batch mode, we apply it here in an online fashion. The derivation of an online learning algorithm from the batch version is immediate (e.g., [15]), and a proof of convergence in this setting is currently underway.

A McCulloch-Pitts actor network The dynamics of the binary valued neurons, given at time n by {u_i(n)}_{i=1}^N, u_i(n) ∈ {0, 1}, is assumed to be based on stochastic discrete time parallel updates, given by

    Pr(u_i(n) = 1) = σ(v_i(n)),   where   v_i(n) = Σ_{j=1}^N w_ij u_j(n−1)   (i = 1, 2, ..., N).

Here σ(v) = 1/(1 + exp(−v)), and the parameters θ in Algorithm 1 are given by {w_ij}, where w_ij(n) is the j → i synaptic weight at time n. Each neuron's stochastic output u_i is viewed as an action. Applying the actor update from Algorithm 1, we obtain the following online learning rule:

    w_ij(n+1) = w_ij(n) + γ d(x(n), x(n+1)) (u_i(n) − σ(v_i(n))) u_j(n−1),          (6)

where d(x(n), x(n+1)) is the TD signal. The update (6) can be interpreted as an error-driven Hebbian-like learning rule modulated by the TD signal. It resembles the direct policy update rule presented in [2], except that in (6) the reward signal is replaced by the TD signal (computed by the critic). Moreover, the eligibility trace formalism in [2] differs from our formulation. We describe a simulation experiment conducted using a single-layer feed-forward artificial neural network which functions as an actor, combined with a non-biologically motivated critic. The purpose of the experiment is to examine a simple neuronal model, using different actor and critic architectures.
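A minimal sketch of the update (6), assuming a small layer and an externally supplied TD signal; the layer sizes and the placeholder TD value are our choices, not from the experiment above.

```python
import numpy as np

# Minimal sketch of rule (6). The TD signal d would come from the critic;
# here a placeholder value is fed in just to exercise the update.

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def forward(W, u_prev, rng):
    """One stochastic parallel update: Pr(u_i = 1) = sigmoid(v_i)."""
    v = W @ u_prev
    u = (rng.random(v.shape) < sigmoid(v)).astype(float)
    return u, v

def actor_update(W, u, v, u_prev, td, gamma=0.1):
    """Rule (6): w_ij += gamma * d * (u_i - sigmoid(v_i)) * u_j(n-1)."""
    return W + gamma * td * np.outer(u - sigmoid(v), u_prev)

rng = np.random.default_rng(1)
W = np.zeros((4, 3))                     # 4 output neurons, 3 input neurons
u_prev = np.array([1.0, 0.0, 1.0])
u, v = forward(W, u_prev, rng)
W_new = actor_update(W, u, v, u_prev, td=0.5)
print(W_new.shape)   # (4, 3)
```

Note the Hebbian structure: a synapse changes only when its presynaptic neuron was active (u_j(n−1) ≠ 0), with sign and magnitude set by the TD-modulated error u_i − σ(v_i).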
The actor network consists of a single-layer feed-forward network of McCulloch-Pitts neurons, with TD-modulated synapses as described above, where the TD signal is calculated by a critic. The environment is a maze with barriers consisting of 36 states, see Figure 1(b), where a reward of value 1 is provided at the top right corner, and the reward is zero elsewhere. Every time the agent receives a reward, it is transferred randomly to a different location in the maze. At each time step, the agent is given an input vector which represents the state. The output layer consists of 4 output neurons, where each neuron represents an action from the action set U = {up, down, left, right}. We used two different input representations for the actor, consisting of either 12 or 36 neurons (note that the minimum number of input neurons needed to represent 36 states is 6, and the maximum is 36). The architecture with 36 input neurons represents each maze state with one exclusive neuron; thus, there is no overlap between input vectors. The architecture with 12 input neurons uses a representation where each state is represented by two neurons, leading to overlaps between the input vectors. We tested two types of critic: a table-based critic which iterates according to Algorithm 1, and an exact critic which provides the TD signal of the optimal policy. The results are shown in Figure 1(c), averaged over 25 runs, and demonstrate the importance of good input representations and precise value estimates.

[Figure 1 appears here; panel (c) plots the average reward per stage against the number of steps.]

Figure 1: (a) An illustration of the McCulloch-Pitts network. (b) A diagram of the maze where the agent needs to reach the reward at the upper right corner.
(c) The average reward per stage in four different cases: an actor consisting of 12 input neurons and a table-based critic (blue crosses), an actor consisting of 36 input neurons and a table-based critic (green stars), an actor consisting of 12 input neurons and an exact critic (red circles), and an actor consisting of 36 input neurons and an exact critic (black crosses). The optimal average reward per stage is denoted by the dotted line, while a random agent achieves a reward of 0.005.

A spiking neuron actor Actual neurons function in continuous time, producing action potentials. Extending [1, 9], we developed an update rule based on the Spike Response Model (SRM) [11]. For each neuron we define a state variable v_i(t) which represents the membrane potential. The dynamics of v_i(t) is given by

    v_i(t) = ϑ_i(t − t̂_i) + Σ_{j=1}^N w_ij(t) Σ_{t_j^f} ϵ_ij(t − t̂_i, t − t_j^f),          (7)

where w_ij(t) is the synaptic efficacy, t̂_i is the last spike time of neuron i prior to t, ϑ_i(t) is the refractory response, t_j^f are the times of the presynaptic spikes emitted prior to time t, and ϵ_ij(t − t̂_i, t − t_j^f) is the response induced by neuron j at neuron i. The second summation in (7) is over all spike times of neuron j emitted prior to time t. The neuron model is assumed to have a noisy threshold, which we model by an escape noise model [11]. According to this model, the neuron fires in the time interval [t, t + δt) with probability u_i(t)δt = ρ_i(v_i(t) − v_th)δt, where v_th is the firing threshold and ρ_i(·) is a monotonically increasing function. When the neuron reaches the threshold it is assumed to fire, and the membrane potential is reset to v_r. We consider a network of continuous time neurons and synapses. Based on Algorithm 1, using a small time step δt, we find

    w_ij(t + δt) = w_ij(t) + γ d(t) ψ_ij(t).          (8)

We define the output of the neuron (interpreted as an action) at time t by u_i(t).
We note that the neuron's output is discrete and that at each time t, a neuron can fire, u_i(t) = 1, or be quiescent, u_i(t) = 0. Using the definition of ψ from Section 2.2 yields (similarly to [9])

    ψ_ij(t) = (ρ'_i(t)/ρ_i(t)) Σ_{t_j^f ∈ H_j^t} ϵ_ij(t − t̂_i, t − t_j^f),                       if u_i(t) = 1,
    ψ_ij(t) = −(δt ρ'_i(t)/(1 − δt ρ_i(t))) Σ_{t_j^f ∈ H_j^t} ϵ_ij(t − t̂_i, t − t_j^f),          if u_i(t) = 0.

Taking the limit δt → 0 yields the following continuous time update rule:

    dw_ij(t)/dt = γ d(t) [ (1/ρ_i(t)) Σ_{t_i^f ∈ H_i} δ(t − t_i^f) − 1 ] ρ'_i(t) Σ_{t_j^f ∈ H_j^t} ϵ_ij(t − t̂_i, t − t_j^f),          (9)

where the bracketed term is a postsynaptic factor F_post({t_i^f}) and the final sum is a presynaptic factor F_pre({t_j^f}). Similarly to [1, 9], we interpret the update rule (9) as a TD-modulated spike time dependent plasticity rule. A detailed discussion and interpretation of this update in a more biological context is left to the full paper.

We applied the update rule (9) to an actor network consisting of spiking neurons based on (7). The network's goal was to reach a circle at the center of a 2D plane, in which the agent can move, using Newtonian dynamics, in the four principal directions. The actor is composed of an input layer and a single layer of modifiable weights. The input layer consists of 'sensory' neurons which fire according to the agent's location in the environment. The synaptic dynamics of the actor is determined by (9). The critic receives the same inputs as the actor, but uses a linear function approximation architecture rather than the table lookup used in Algorithm 1. A standard parameter update rule appropriate for this architecture (e.g., ch. 8 in [22]) was used to update the critic's parameters³. The output layer of the actor consists of four neuronal groups, representing the directions in which the agent can move, coded based on a firing rate model using Gaussian tuning curves. The TD signal is calculated according to (3). Whenever the agent reaches the central circle, it receives a reward and is transferred randomly to a new position in the environment. Results of such a simulation are presented in Figure 2.
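The discrete-time update (8) can be sketched as follows. The exponential escape rate ρ and the exponential synaptic trace are our own modeling choices (the paper defers these to the SRM literature), and the TD signal is a fixed placeholder rather than the output of a critic.

```python
import numpy as np

# Discrete-time sketch of update (8): w_ij(t+dt) = w_ij(t) + gamma*d(t)*psi_ij(t),
# with an (assumed) exponential escape rate rho and exponential synaptic kernel.

def rho(v, v_th=1.0, beta=2.0, r0=5.0):
    return r0 * np.exp(beta * (v - v_th))        # escape rate (1/s)

def drho(v, v_th=1.0, beta=2.0, r0=5.0):
    return beta * rho(v, v_th, beta, r0)         # its derivative in v

def psi(fired, v, eps_trace, dt):
    """psi_ij for one postsynaptic neuron, given presynaptic kernel traces."""
    if fired:                                    # u_i(t) = 1 branch
        return (drho(v) / rho(v)) * eps_trace
    return -(dt * drho(v) / (1.0 - dt * rho(v))) * eps_trace

rng = np.random.default_rng(0)
dt, tau = 1e-3, 0.02
w = np.zeros(3)
eps_trace = np.zeros(3)          # summed kernel values per presynaptic neuron
for _ in range(1000):
    pre_spikes = rng.random(3) < 0.02            # Poisson-like presynaptic input
    eps_trace = eps_trace * np.exp(-dt / tau) + pre_spikes
    v = w @ eps_trace
    fired = rng.random() < rho(v) * dt           # escape-noise firing
    td = 0.1                                      # placeholder TD signal
    w = w + 0.01 * td * psi(fired, v, eps_trace, dt)
print(np.isfinite(w).all())
```

The two branches of `psi` mirror the piecewise definition above: a positive, rate-normalized term at spike times and a small negative term during quiescence, which is what produces the spike-time-dependent flavor of rule (9) in the δt → 0 limit.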
Figure 2(a) displays the agent's typical random-walk-like behavior prior to learning. Figure 2(b) depicts four typical trajectories representing the agent's actions after a learning phase. Finally, Figure 2(c) demonstrates the increase of the average reward per stage, η, over time.

[Figure 2 appears here; panel (c) plots η against time in seconds.]

Figure 2: (a) Typical agent tracks prior to learning. (b) Agent trajectories following learning. (c) Average reward per stage plotted against time.

4 Discussion

We have presented a temporal difference based actor critic learning algorithm for reinforcement learning. The algorithm was derived from first principles, based on following a noisy gradient of the average reward, and a convergence proof was presented without relying on the widely used two-time-scale separation between the actor and the critic. The derived algorithm was applied to neural networks, demonstrating their effective operation in maze problems. The motivation for the proposed algorithm was biological, providing a coherent computational explanation for several recently observed phenomena: actor critic architectures in the basal ganglia, the relation of phasic dopaminergic neuromodulators to the TD signal, and the modulation of spike time dependent plasticity rules by dopamine. While a great deal of further work needs to be done on both the theoretical and biological components of the framework, we hope that these results provide a tentative step in the (noisy!) direction of explaining biological RL.

³Algorithm 1 relies on a table lookup critic, while in this example we used a function approximation based critic, due to the large (continuous) state space.

References

[1] D. Baras and R. Meir. Reinforcement learning, spike time dependent plasticity and the BCM rule. Neural Comput., 19(8):2245–2279, 2007.
[2] J. Baxter and P.L. Bartlett. Hebbian synaptic modifications in spiking neurons that learn. Technical report,
Canberra: Research School of Information Sciences and Engineering, Australian National University, 1999.
[3] J. Baxter and P.L. Bartlett. Infinite-horizon policy-gradient estimation. J. of Artificial Intelligence Research, 15:319–350, 2001.
[4] D.P. Bertsekas. Dynamic Programming and Optimal Control, Vol. I, 3rd Ed. Athena Scientific, 2006.
[5] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 105–112. MIT Press, Cambridge, MA, 2008.
[6] S. Bhatnagar, R.S. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. Automatica, to appear, 2008.
[7] V.S. Borkar. Stochastic approximation with two time scales. Syst. Control Lett., 29(5):291–294, 1997.
[8] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer, 1999.
[9] R.V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation, 19:1468–1502, 2007.
[10] R.G. Gallager. Discrete Stochastic Processes. Kluwer Academic Publishers, 1995.
[11] W. Gerstner and W.M. Kistler. Spiking Neuron Models. Cambridge University Press, Cambridge, 2002.
[12] E.M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17(10):2443–2452, 2007.
[13] V.R. Konda and J. Tsitsiklis. On actor-critic algorithms. SIAM J. Control Optim., 42(4):1143–1166, 2003.
[14] H.J. Kushner and G.G. Yin. Stochastic Approximation Algorithms and Applications. Springer, 1997.
[15] P. Marbach and J. Tsitsiklis. Simulation-based optimization of Markov reward processes. IEEE Trans. Auto. Cont., 46:191–209, 1998.
[16] P.R. Montague, P. Dayan, and T.J. Sejnowski. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. Journal of Neuroscience, 16:1936–1947, 1996.
[17] J. O'Doherty, P. Dayan, J. Schultz, R. Deichmann, K.
Friston, and R.J. Dolan. Dissociable roles of ventral and dorsal striatum in instrumental conditioning. Science, 304:452–454, 2004.
[18] J.N.J. Reynolds and J.R. Wickens. Dopamine-dependent plasticity of corticostriatal synapses. Neural Networks, 15(4-6):507–521, 2002.
[19] S. Marom and G. Shahaf. Development, learning and memory in large random networks of cortical neurons: lessons beyond anatomy. Quarterly Reviews of Biophysics, 35:63–87, 2002.
[20] W. Schultz. Multiple reward signals in the brain. Nature Reviews Neuroscience, 1:199–207, Dec. 2000.
[21] S. Singh and P. Dayan. Analytical mean squared error curves for temporal difference learning. Machine Learning, 32:5–40, 1998.
[22] R.S. Sutton and A.G. Barto. Reinforcement Learning. MIT Press, 1998.
[23] R. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy-gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems, 12:1057–1063, 2000.
[24] E.M. Tricomi, M.R. Delgado, and J.A. Fiez. Modulation of caudate activity by action contingency. Neuron, 41(2):281–292, 2004.
One Sketch For All: Theory and Application of Conditional Random Sampling

Ping Li, Dept. of Statistical Science, Cornell University, pingli@cornell.edu
Kenneth W. Church, Microsoft Research, Microsoft Corporation, church@microsoft.com
Trevor J. Hastie, Dept. of Statistics, Stanford University, hastie@stanford.edu

Abstract

Conditional Random Sampling (CRS) was originally proposed for efficiently computing pairwise (l2, l1) distances in static, large-scale, and sparse data. This study modifies the original CRS and extends it to handle dynamic or streaming data, which reflect the real-world situation much better than the static assumption. Compared with many other sketching algorithms for dimension reduction, such as stable random projections, CRS exhibits a significant advantage in that it is "one-sketch-for-all." In particular, we demonstrate the effectiveness of CRS in efficiently computing the Hamming norm, the Hamming distance, the lp distance, and the χ2 distance. A generic estimator and an approximate variance formula are also provided, for approximating any type of distance. We recommend CRS as a promising tool for building highly scalable systems in machine learning, data mining, recommender systems, and information retrieval.

1 Introduction

Learning algorithms often assume a data matrix A ∈ R^{n×D} with n observations and D attributes, and operate on the data matrix A through pairwise distances. The task of computing and maintaining distances becomes non-trivial when the data (both n and D) are large and possibly dynamic. For example, if A denotes a term-doc matrix at Web scale with each row representing one Web page, then n ≈ O(10^10) (which may be verified by querying "A" or "The" in a search engine). Assuming 10^5 English words, the simplest uni-gram model requires the dimension D ≈ O(10^5); and a bi-gram model can boost the dimension to D ≈ O(10^10). The Google book search program currently provides data sets on indexed digital books up to five-grams.
Note that the term-doc matrix is "transposable," meaning that one can treat either documents or terms as features, depending on the application. Another example is image data. The Caltech 256 benchmark contains n = 30,608 images, provided by two commercial firms. Using pixels as features, a 1024 × 1024 color image can be represented by a vector of dimension D = 1024² × 3 = 3,145,728. Using histogram-based features (e.g., [3]), D = 256³ = 16,777,216 is possible if one discretizes the RGB space into 256³ scales. Text data are large and sparse, as most terms appear only in a small fraction of documents. For example, a search engine reports 10^7 pagehits for the query "NIPS," which is not common to the general audience. Out of 10^10 pages, 10^7 pagehits indicate a sparsity of 99.9%. (We define sparsity as the percentage of zero elements.) In absolute magnitude, however, 10^7 is actually very large. Not all large-scale data are sparse. Image data are usually sparse when features are represented by histograms; they are, however, dense when pixel-based features are used.

1.1 Pairwise Distances Used in Machine Learning

The lp distance and the χ2 distance are both popular. Denote by u1 and u2 the leading two rows in A ∈ R^{n×D}. The lp distance (raised to the pth power) and the χ2 distance are, respectively,

    d_p(u1, u2) = Σ_{i=1}^D |u_{1,i} − u_{2,i}|^p,    d_χ²(u1, u2) = Σ_{i=1}^D (u_{1,i} − u_{2,i})² / (u_{1,i} + u_{2,i}),    (0/0 = 0).

The χ2 distance is only a special case of the Hilbertian metrics, defined as

    d_{H,α,β}(u1, u2) = Σ_{i=1}^D [ 2^{1/β} (u_{1,i}^α + u_{2,i}^α)^{1/α} − 2^{1/α} (u_{1,i}^β + u_{2,i}^β)^{1/β} ] / (2^{1/α} − 2^{1/β}),

with α ∈ [1, ∞), β ∈ [1/2, α] or β ∈ [−∞, −1]. Hilbertian metrics are defined over probability spaces [7] and hence are suitable for data generated from histograms, e.g., the "bag-of-words" model. For applications in text and images using SVMs, empirical studies have demonstrated the superiority of Hilbertian metrics over lp distances [3, 7, 9].
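The two distances above can be written directly from their definitions; the short sketch below also handles the 0/0 = 0 convention of the χ2 distance explicitly (the example vectors are arbitrary).

```python
import numpy as np

# The l_p distance (raised to the p-th power) and the chi-square distance,
# exactly as defined above, with the 0/0 = 0 convention made explicit.

def lp_distance(u1, u2, p):
    """d_p(u1, u2) = sum_i |u1_i - u2_i|^p."""
    return np.sum(np.abs(u1 - u2) ** p)

def chi2_distance(u1, u2):
    """d_chi2(u1, u2) = sum_i (u1_i - u2_i)^2 / (u1_i + u2_i), with 0/0 = 0."""
    num = (u1 - u2) ** 2
    den = u1 + u2
    return np.sum(np.where(den > 0, num / np.maximum(den, 1e-300), 0.0))

u1 = np.array([5.0, 0.0, 0.0, 1.0, 0.0, 7.0])
u2 = np.array([0.0, 9.0, 2.0, 0.0, 6.0, 0.0])
print(lp_distance(u1, u2, 1))    # 30.0
print(chi2_distance(u1, u2))     # 30.0 (disjoint supports: (a-0)^2/(a+0) = a)
```

On vectors with disjoint supports the two distances coincide, as the example shows; they differ as soon as the supports overlap (e.g., d_χ²((1,1),(1,3)) = 1 while the l1 distance is 2).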
More generally, we are interested in any linear summary statistic which can be written in the form

    d_g(u1, u2) = Σ_{i=1}^D g(u_{1,i}, u_{2,i}),          (1)

for any generic function g. An efficient method for computing (1) for any g would be desirable.

1.2 Bottleneck in Distance/Kernel-based Learning Algorithms

A ubiquitous task in learning is to compute, store, update, and retrieve various types of distances [17]. For popular kernel SVM solvers, including the SMO algorithm [16], storing and computing kernels is the major bottleneck [2], because computing kernels is expensive and, more seriously, storing the full kernel matrix in memory is infeasible when the number of observations n > 10^5. One popular strategy is to evaluate kernels on the fly [2]. This works well for low-dimensional data (i.e., relatively small D). With high-dimensional data, however, either computing distances on demand becomes too slow, or the data matrix A ∈ R^{n×D} itself may not fit in memory. We should emphasize that this challenge is a universal issue in distance-based methods, not limited to SVMs. For example, popular clustering algorithms and multi-dimensional scaling algorithms require frequent access to a (dis)similarity matrix, which is usually distance-based. In addition to computing and storing distances, another general issue is that, for many real-world applications, entries of the data matrix may be frequently updated, as in data streams [15]. There have been considerable studies on learning from dynamic data, e.g., [5, 1]. Since streaming data are often not stored (even on disk), computing and updating distances becomes challenging.

1.3 Contributions and Paper Organization

Conditional Random Sampling (CRS) [12, 13] was originally proposed for efficiently computing pairwise (l2 and l1) distances in large-scale static data. The contributions of this paper are:

1. We extend CRS to handle dynamic data. For example, entries of a matrix may vary over time, or the data matrix may not be stored at all.
We illustrate that CRS has the one-sketch-for-all property, meaning that the same set of samples/sketches can be used for computing any linear summary statistic (1). This is a significant advantage over many other dimension reduction or data stream algorithms. For example, the method of stable random projections (SRP) [8, 10, 14] was designed for estimating the lp norms/distances for a fixed p with 0 < p ≤ 2. Recently, a new method named Compressed Counting [11] is able to very efficiently approximate the lp moments of data streams when p ≈ 1.

2. We introduce a modification to the original CRS and theoretically justify that this modification makes CRS rigorous, at least for computing the Hamming norm, an important application in databases. We point out that the original CRS was based on a heuristic argument.

3. We apply CRS to computing Hilbertian metrics [7], a popular family of distances for constructing kernels in SVMs. We focus on a special case, demonstrating that CRS is effective in approximating the χ2 distance.

Section 2 reviews the original CRS. Section 3 extends CRS to dynamic/streaming data. Section 4 focuses on using CRS to estimate the Hamming norm of a single vector, based on which Section 5 provides a generic estimation procedure for CRS, for estimating any linear summary statistic, with a focus on the Hamming distance and the χ2 distance. Finally, Section 6 concludes the paper.

2 Conditional Random Sampling (CRS), the Original Version

Conditional Random Sampling (CRS) [12, 13] is a local sampling strategy. Since distances are local (i.e., one pair at a time), there is no need to consider the whole matrix at one time. As the first step, CRS applies a random permutation to the columns of A ∈ R^{n×D}. Figure 1(a) provides an example of a column-permuted data matrix. The next step of CRS is to construct a sketch for each row of the data matrix.
A sketch can be viewed as a linked list which stores a small fraction of the non-zero entries from the front of each row. Figure 1(b) shows the three sketches corresponding to the three rows of the (column-)permuted data matrix in Figure 1(a).

    ID:  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
    u1:  5  0  0  1  0  7  0  0  0  8  0  1  0  8  0  2
    u2:  0  9  2  0  6  0  0  7  0  5  0  0  4  0  0 13
    u3:  0  4  0  0  2  0  0  0  8  0  0  3  0  0 12  0

    (a) Permuted data matrix

    K1: 1{5}, 4{1}, 6{7}, 10{8}
    K2: 2{9}, 3{2}, 5{6}, 8{7}
    K3: 2{4}, 5{2}, 9{8}, 12{3}

    (b) Sketches

Figure 1: (a) A data matrix with three rows and D = 16 columns. We assume the columns are already permuted. (b) Sketches are the first k_i non-zero entries, ascending by IDs (here k_i = 4).

In Figure 1, the sketch for row u_i is denoted by K_i. Each element of K_i is a tuple "ID{val}," where "ID" is the column ID after the permutation and "{val}" is the value of that entry. Consider two rows u1 and u2. The last (largest) IDs of sketches K1 and K2 are max(ID(K1)) = 10 and max(ID(K2)) = 8, respectively. Here, "ID(K)" stands for the vector of IDs in the sketch K. It is clear that K1 and K2 contain all information about u1 and u2 from columns 1 to min(10, 8) = 8. Had we directly taken the first D_s = 8 columns from the permuted data matrix, we would obtain the same non-zero entries as in K1 and K2, provided we exclude elements in K1 and K2 whose IDs exceed D_s = 8. In this example, the element 10{8} in sketch K1 is excluded. On the other hand, since the columns are already permuted, any D_s columns constitute a random sample of size D_s. This means that, by looking only at sketches K1 and K2, one can obtain a "random" sample of size D_s. By statistical theory, one can easily obtain an unbiased estimate of any linear summary statistic from a random sample. Since D_s is unknown until we look at K1 and K2 together, [13] viewed this as a random sample conditioned on D_s. Note that D_s varies from pair to pair.
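The sketch construction just described can be reproduced directly. IDs are 1-based, and the permutation is the identity here because Figure 1's columns are taken as already permuted.

```python
# Sketch construction for static data, following Figure 1: permute the columns
# once (shared across rows), then keep the first k non-zero entries of each
# row, ascending by permuted column ID.

def build_sketch(u, perm, k):
    """Return the sketch as a sorted list of (ID, value) tuples (IDs 1-based)."""
    entries = sorted((perm[i], u[i]) for i in range(len(u)) if u[i] != 0)
    return entries[:k]

u1 = [5,0,0,1,0,7,0,0,0,8,0,1,0,8,0,2]
u2 = [0,9,2,0,6,0,0,7,0,5,0,0,4,0,0,13]
u3 = [0,4,0,0,2,0,0,0,8,0,0,3,0,0,12,0]
perm = {i: i + 1 for i in range(16)}   # identity permutation, 1-based IDs

K1 = build_sketch(u1, perm, 4)
K2 = build_sketch(u2, perm, 4)
K3 = build_sketch(u3, perm, 4)
print(K1)                                              # [(1, 5), (4, 1), (6, 7), (10, 8)]
print(max(i for i, _ in K1), max(i for i, _ in K2))    # 10 8
```

In a real deployment `perm` would be a random permutation of the column indices, generated once and shared across all rows.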
When considering the rows u1 and u3, the sketches K1 and K3 give D_s = min(max(ID(K1)), max(ID(K3))) = min(10, 12) = 10. In this study, we point out that, although the "conditioning" argument appears intuitive, it is only a (good) heuristic. There are two ways to understand why this argument is not strictly correct. Consider a true random sample of size D_s, directly obtained from the first D_s columns of the permuted data matrix. Assuming sparse data, the element in the D_s-th column would most likely be zero. However, in the "conditional random sample" obtained from CRS, at least one element in the D_s-th column is non-zero. Thus the estimates of the original CRS are, strictly speaking, biased. For a more obvious example, consider two rows with exactly one non-zero entry in each row, located in the same column. The original CRS cannot obtain an unbiased estimate unless D_s = D.

3 CRS for Dynamic Data and Introduction to Stable Random Projections

The original CRS was proposed for static data. In reality, the "data matrix" may be frequently updated. When data arrive in a streaming fashion, they often will not be stored (even on disk) [15]. Thus, a one-pass algorithm is needed to compute and update distances for training. Learning with dynamic (or incremental) data has become an active topic of research, e.g., [5, 1].

3.1 Dynamic/Streaming Data

We first consider only one data vector u of length D (viewed as one row in the data matrix). At each time t, there is an input stream element s_t = (i_t, I_t), i_t ∈ [1, D], which updates u (denoted by u_t) by u_t[i_t] = H(u_{t−1}[i_t], I_t), where I_t is the increment/decrement at time t and H is an updating function. The so-called Turnstile model [15] is extremely popular and assumes a linear updating function H, i.e., u_t[i_t] = u_{t−1}[i_t] + I_t.
(2)

For example, u_t[i_t] can represent the number of orders a "user" i has purchased up to time t, where a user may be identified by his/her IP address (i.e., i ∈ [1, D = 2^64]); I_t is the number of orders the user i places (I_t > 0) or cancels (I_t < 0) at time t. In terms of the data matrix A ∈ R^{n×D}, we can view it as a collection of n data streams.

3.2 CRS for Streaming Data

For each stream u_t, we maintain a sketch K with length (i.e., capacity) k. Each entry of K is a tuple "ID{val}." Initially, all entries are empty. The procedure for sketch construction works as follows:

1. Generate a random permutation π : [1, D] → [1, D].
2. For each s_t = (i_t, I_t), if π[i_t] > max(ID(K)) and the capacity of K is reached, do nothing.
3. Suppose π[i_t] ≤ max(ID(K)) or the capacity of K is not reached. If an entry with ID = π[i_t] does not exist, insert a new entry. Otherwise, update that entry according to H.¹
4. Apply the procedure to each data stream, using the same random permutation mapping π.

Once the sketches are constructed, the estimation procedure is the same regardless of whether the original data are dynamic or static. Thus, we will use static data to verify some estimators of CRS.

3.3 (Symmetric) Stable Random Projections (SRP)

Since the method of (symmetric) stable random projections (SRP) [8, 10] has become a standard algorithm for data stream computations, we very briefly introduce SRP for the sake of comparison. The procedure of SRP is to multiply the data matrix A ∈ R^{n×D} by a random matrix R ∈ R^{D×k}, whose entries are i.i.d. samples from a standard (symmetric) stable distribution S(p, 1), 0 < p ≤ 2. Consider two rows, u1 and u2, in A. By properties of stable distributions, the projected vectors v1 = R^T u1 and v2 = R^T u2 have i.i.d. stable entries, i.e., for j = 1 to k,

    v_{1,j} ∼ S(p, F_p = Σ_{i=1}^D |u_{1,i}|^p),    v_{1,j} − v_{2,j} ∼ S(p, d_p = Σ_{i=1}^D |u_{1,i} − u_{2,i}|^p).

Thus, one can estimate an individual norm or distance from the k samples.
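The four steps above can be sketched as follows. Evicting the entry with the largest ID when an insertion overflows the capacity is our reading of step 3 (the text does not spell out the overflow case), and the stream statistics are arbitrary test settings.

```python
import numpy as np

# Sketch maintenance for a Turnstile stream: K maps permuted ID -> value,
# with capacity k; pi is a fixed random permutation shared by all streams.
# Eviction of the largest ID on overflow is our assumption for step 3.

def update_sketch(K, k, pi, i, I):
    """Process one stream element (i, I) with linear update H(v, I) = v + I."""
    pid = pi[i]
    if len(K) >= k and pid > max(K):
        return                        # outside the sketch's range: do nothing
    K[pid] = K.get(pid, 0) + I        # insert a new entry or update in place
    if len(K) > k:
        del K[max(K)]                 # keep only the k smallest permuted IDs

rng = np.random.default_rng(7)
D, k = 100, 5
pi = rng.permutation(D) + 1           # random permutation onto 1..D
K = {}
for _ in range(500):
    i = int(rng.integers(D))          # which coordinate is updated
    I = int(rng.integers(1, 4))       # insertion-only increments
    update_sketch(K, k, pi, i, I)
print(sorted(K))                      # the k smallest permuted IDs touched so far
```

Because `pi` is shared across streams, sketches built this way for different rows remain mutually comparable, which is what the pairwise estimators of Section 5 rely on.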
SRP is applicable to dynamic/streaming data, provided the data follow the Turnstile model in (2). Because the Turnstile model is linear and matrix multiplication is also linear, one can conduct A × R incrementally. Compared with Conditional Random Sampling (CRS), SRP has an elegant mathematical derivation, with various interesting estimators and rigorous sample complexity bounds, i.e., k can be predetermined in a fully rigorous fashion. The accuracy of SRP is not affected by heavy-tailed data. CRS, however, exhibits certain advantages over SRP:

• CRS is "one-sketch-for-all." The same sketch of CRS can approximate any linear summary statistic (1). SRP is limited to the lp norm and distance with 0 < p ≤ 2. One has to conduct SRP 10 times (and store 10 sets of sketches) if 10 different p values are needed.
• CRS allows "term-weighting" in dynamic data. In machine learning, distances are often computed on weighted data (e.g., √u_{1,i} or log(1 + u_{1,i})), which is critical for good performance. For static data, one can first term-weight the data before applying SRP. For dynamic data, however, there is no way to recover the original data after the projections.
• CRS is not restricted to the Turnstile model.
• CRS is not necessarily less accurate, especially for sparse or binary data.

4 Approximating Hamming Norms in Dynamic Data

Counting the Hamming norm (i.e., the number of non-zeros) of an exceptionally long, dynamic vector has important applications [4, 15]. For example, if a vector u_t records the numbers of items users have ordered, one meaningful question to ask may be "how many distinct users are there?" The purpose of this section is three-fold. (1) This is the case in which we can rigorously analyze CRS and propose a truly unbiased estimator. (2) The analysis brings better insights and more reasonable estimators for pairs of data vectors. (3) In this case, despite its simplicity, CRS theoretically achieves accuracy similar to that of stable random projections (SRP).
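For intuition about SRP, note that at p = 2 the stable distribution S(2, 1) is Gaussian, so SRP reduces to classical Gaussian random projections; the sample variance of v1 − v2 then estimates the squared l2 distance. All sizes and densities in this sketch are our own test settings.

```python
import numpy as np

# SRP at p = 2: Gaussian random projections. Each coordinate of v1 - v2 is
# N(0, d_2) with d_2 = sum_i (u1_i - u2_i)^2, so the mean of squares over the
# k projected coordinates estimates d_2.

rng = np.random.default_rng(3)
D, k = 10_000, 500
u1 = rng.random(D) * (rng.random(D) < 0.05)   # sparse test vectors
u2 = rng.random(D) * (rng.random(D) < 0.05)

R = rng.standard_normal((D, k))               # i.i.d. S(2, 1) entries
v1, v2 = u1 @ R, u2 @ R
d2_hat = np.mean((v1 - v2) ** 2)              # estimator of d_2
d2 = np.sum((u1 - u2) ** 2)
print(round(float(d2_hat), 2), round(float(d2), 2))
```

The relative standard deviation of this estimator is √(2/k) (about 6% here), independent of the data, which illustrates the "rigorous sample complexity" of SRP mentioned above; for p < 2 the Gaussian is replaced by a heavier-tailed stable law and the mean-of-squares estimator by, e.g., the harmonic mean estimator of Section 4.3.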
Empirically, CRS (slightly) outperforms SRP.

¹We leave it to particular applications to decide whether an entry updated to zero should be discarded or kept in the sketch. In reality, this case does not occur often. For example, the most important type of data stream [15] is "insertion-only," meaning that the values never decrease.

4.1 The Proposed (Unbiased) Estimator and Variance

Suppose we have obtained the sketch K. For example, consider the first row in Figure 1: D = 16, k = 4, and the number of non-zeros is f = 7. Lemma 1 (whose proof is omitted) proposes an unbiased estimator of f, denoted by f̂, and a biased estimator based on the maximum likelihood, f̂_mle.

Lemma 1 Let Z = max(ID(K)). Then

    f̂ = D(k − 1)/(Z − 1),    E(f̂) = f,    D ≥ f ≥ k > 1,
    Var(f̂) < V_f^U = [(f² − f)/(k − 2)] [D/(D − 1)] − (D − f)f/(D − 1),    (k > 2),
    Var(f̂) > V_f^L = V_f^U − (k − 1)f(f − 1)(f − 2)D / [(k − 2)(k − 3)(D − 1)(D − 2)],    (k > 3).

Assuming f/D is small and k/f is also small, Var(f̂) = f²/k + O(1/k²). The maximum likelihood estimator is f̂_mle = k(D + 1)/Z − 1.

Note that, since Var(f̂)/f² ≈ 1/k, independent of the data, the estimator f̂ actually has a worst-case complexity bound similar to that of SRP [10], although the precise constant is not easy to obtain.

4.2 The Approximation Using the Conditioning Argument

Interestingly, this estimator, f̂ = D(k − 1)/(max(ID(K)) − 1), appears to be the estimator for a hypergeometric random sample of size D_s = max(ID(K)) − 1. That is, suppose we randomly pick D_s balls (without replacement) from a pool of D balls and observe that k′ of them are red; then a natural (and unbiased) estimator of the total number of red balls is (D/D_s) k′; here k′ = k − 1. This seems to imply that the "conditioning" argument of the original CRS in Section 2 is "correct" if we make a simple modification: use the D_s which is the original D_s minus 1. While this is what we will recommend as the modified CRS, it is only a close approximation.
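Lemma 1 can be checked by simulation. The sketch below draws fresh random permutations and compares the empirical mean and variance of f̂ with the lemma's predictions; the values of D, f, and k are arbitrary test settings.

```python
import numpy as np

# Monte Carlo check of Lemma 1: f_hat = D(k-1)/(Z-1) is unbiased for the
# number of non-zeros f, with Z = max(ID(K)) under a random column permutation.

def f_hat(u, k, rng):
    D = len(u)
    nz = np.flatnonzero(u)
    perm = rng.permutation(D) + 1          # 1-based permuted column IDs
    Z = np.sort(perm[nz])[k - 1]           # largest ID in the size-k sketch
    return D * (k - 1) / (Z - 1)

rng = np.random.default_rng(0)
D, f, k = 1000, 120, 20
u = np.zeros(D)
u[:f] = 1.0                                # f non-zero entries
est = np.array([f_hat(u, k, rng) for _ in range(20_000)])
print(round(est.mean(), 1))                # close to f = 120 (unbiasedness)
print(round(est.var() / f**2, 4))          # roughly 1/k = 0.05
```

The normalized variance lands between the lemma's bounds V_f^L/f² and V_f^U/f² (about 0.040 and 0.048 for these settings), consistent with the f²/k approximation.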
Consider f̂_app = f̂, where we regard f̂_app as the estimator under the hypergeometric distribution; then

    Var(f̂_app | D_s = Z − 1) = (D²/D_s²) D_s (f/D)(1 − f/D) × (D − D_s)/(D − 1) = [D/(D − 1)] (D/D_s − 1) f (1 − f/D),

    Var(f̂_app) = E(Var(f̂_app | D_s)) = [D/(D − 1)] (E(D/(Z − 1)) − 1) f (1 − f/D) = [Df/(D − 1)] (f/(k − 1) − 1)(1 − f/D).          (3)

4.3 Comparisons with Stable Random Projections (SRP)

Based on the observation that f = lim_{p→0+} Σ_{i=1}^D |u_i|^p, [4] proposed using SRP to approximate the lp norm with a very small p, as an approximation to f. For p → 0+, the recent work on SRP [10] proposed the harmonic mean estimator. Recall that after projection, v = R^T u ∈ R^k consists of i.i.d. stable samples with scale parameter F_p = Σ_{i=1}^D |u_i|^p. The harmonic mean estimator is

    F̂_{p,hm} = [ −(2/π) Γ(−p) sin(πp/2) / Σ_{j=1}^k |v_j|^{−p} ] × [ k − ( −π Γ(−2p) sin(πp) / [Γ(−p) sin(πp/2)]² − 1 ) ],

    Var(F̂_{p,hm}) = (F_p²/k) ( −π Γ(−2p) sin(πp) / [Γ(−p) sin(πp/2)]² − 1 ) + O(1/k²),

with lim_{p→0+} −(2/π) Γ(−p) sin(πp/2) = 1 and lim_{p→0+} [ −π Γ(−2p) sin(πp) / [Γ(−p) sin(πp/2)]² − 1 ] = 1.

Denote this estimator by f̂_srp (using p as small as possible); its variance is Var(f̂_srp) ≈ f²/k, which is roughly equivalent to the variance of f̂, the unbiased estimator for CRS.

We empirically compared CRS with SRP. Four word vectors were selected; the entries of each vector record the numbers of occurrences of the word in D = 2^16 Web pages. The data are very heavy-tailed. The percentage of zero elements (i.e., the sparsity) varies from 58% to 95%. Figure 2 presents the comparisons. (1) It is possible for CRS to outperform SRP non-negligibly. (2) The variance (3), based on the approximate "conditioning" argument, is very accurate. (3) The unbiased estimator f̂ is more accurate than f̂_mle; the latter actually uses one more sample.

[Figure 2 appears here: four panels (THIS, HAVE, ADDRESS, CUSTOMER), each plotting the standardized MSE against k for CRS, CRS+mle, SRP, 1/k, and the approximate variance.]
Figure 2: Comparing CRS with SRP for approximating Hamming norms in Web crawl data (four word vectors), in terms of the normalized mean square error (MSE, normalized by f²). "CRS" and "CRS+mle" respectively correspond to f̂ and f̂_mle, derived in Lemma 1. "SRP" corresponds to the harmonic mean estimator of SRP using p = 0.04. "1/k" is the theoretical asymptotic variance of both CRS and SRP. The curve labeled "Approx. Var" is the approximate variance in (3).

5 The Modified CRS Estimation Procedure

The modified CRS estimation procedure is based on the theoretical analysis of using CRS to approximate Hamming norms. Suppose we are interested in the distance between rows u1 and u2 and we have access to sketches K1 and K2. Our suggested "equivalent" sample size D_s is

    D_s = min{Z_1 − 1, Z_2 − 1},    Z_1 = max(ID(K1)),    Z_2 = max(ID(K2)).          (4)

We should not include elements of K1 and K2 whose IDs are larger than D_s. For the K1 and K2 in Figure 1, the modified CRS adopts D_s = min(10 − 1, 8 − 1) = min(9, 7) = 7. Removing 10{8} from K1 and 8{7} from K2, we obtain a sample for u1 and u2: ũ_{1,1} = 5, ũ_{1,4} = 1, ũ_{1,6} = 7, ũ_{2,2} = 9, ũ_{2,3} = 2, ũ_{2,5} = 6. All other sample entries are zero: ũ_{1,2} = ũ_{1,3} = ũ_{1,5} = ũ_{1,7} = 0, ũ_{2,1} = ũ_{2,4} = ũ_{2,6} = ũ_{2,7} = 0.

5.1 A Generic Estimator and Approximate Variance

Rigorous theoretical analysis of a pair of sketches is difficult. We resort to the approximate "conditioning" argument using the modified D_s in (4). We consider a generic distance d_g(u1, u2) = Σ_{i=1}^D g(u_{1,i}, u_{2,i}), and assume that, conditioned on D_s, the sample {ũ_{1,j}, ũ_{2,j}}_{j=1}^{D_s} is exactly equivalent to a sample from D_s randomly selected columns, drawn without replacement.
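The sample construction of the modified CRS can be written out directly; the code below reproduces the worked example above.

```python
# Constructing the "equivalent" sample of the modified CRS from two sketches,
# per (4): Ds = min(Z1, Z2) - 1, dropping sketch entries with ID > Ds.

def crs_sample(K1, K2):
    """Return (Ds, u1_tilde, u2_tilde), where the samples are dense lists of length Ds."""
    Ds = min(max(i for i, _ in K1), max(i for i, _ in K2)) - 1
    def densify(K):
        u = [0] * Ds
        for i, val in K:
            if i <= Ds:              # exclude entries with ID > Ds
                u[i - 1] = val
        return u
    return Ds, densify(K1), densify(K2)

K1 = [(1, 5), (4, 1), (6, 7), (10, 8)]   # sketches from Figure 1
K2 = [(2, 9), (3, 2), (5, 6), (8, 7)]
Ds, t1, t2 = crs_sample(K1, K2)
print(Ds)   # 7
print(t1)   # [5, 0, 0, 1, 0, 7, 0]
print(t2)   # [0, 9, 2, 0, 6, 0, 0]
```

Note that densifying is safe: since D_s ≤ Z_1 − 1 and D_s ≤ Z_2 − 1, each sketch contains every non-zero of its row with permuted ID up to D_s, so the zeros filled in are genuine zeros.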
Under this assumption, an "unbiased" estimator of $d_g(u_1, u_2)$ (and two special cases) would be

$$\hat{d}_g(u_1, u_2) = \frac{D}{D_s}\sum_{j=1}^{D_s} g(\tilde{u}_{1,j}, \tilde{u}_{2,j}), \qquad \hat{d}_p = \frac{D}{D_s}\sum_{j=1}^{D_s} |\tilde{u}_{1,j} - \tilde{u}_{2,j}|^p, \qquad \hat{d}_{\chi^2} = \frac{D}{D_s}\sum_{j=1}^{D_s} \frac{(\tilde{u}_{1,j} - \tilde{u}_{2,j})^2}{\tilde{u}_{1,j} + \tilde{u}_{2,j}}.$$

A generic (approximate) variance formula can be obtained as follows:

$$\mathrm{Var}\left(\hat{d}_g(u_1, u_2)\,\middle|\,D_s\right) \approx \frac{D-D_s}{D-1}\times\frac{D^2}{D_s^2}\, D_s \left(E\left(g^2(\tilde{u}_{1,j}, \tilde{u}_{2,j})\right) - E^2\left(g(\tilde{u}_{1,j}, \tilde{u}_{2,j})\right)\right)$$
$$= \frac{D-D_s}{D-1}\,\frac{D^2}{D_s^2}\, D_s \left(\frac{1}{D}\sum_{i=1}^{D} g^2(u_{1,i}, u_{2,i}) - \left(\frac{1}{D}\sum_{i=1}^{D} g(u_{1,i}, u_{2,i})\right)^2\right) = \frac{D}{D-1}\left(\frac{D}{D_s}-1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right).$$

$$\mathrm{Var}\left(\hat{d}_g(u_1, u_2)\right) \approx E\left(\mathrm{Var}\left(\hat{d}_g(u_1, u_2)\,\middle|\,D_s\right)\right) = \frac{D}{D-1}\left(E\left(\frac{D}{D_s}\right)-1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right)$$
$$= \frac{D}{D-1}\left(E\left(\max\left\{\frac{D}{Z_1-1}, \frac{D}{Z_2-1}\right\}\right)-1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right) \approx \frac{D}{D-1}\left(\max\left\{E\left(\frac{D}{Z_1-1}\right), E\left(\frac{D}{Z_2-1}\right)\right\}-1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right)$$
$$= \frac{D}{D-1}\left(\max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\}-1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right). \qquad (5)$$

Here, $k_1$ and $k_2$ are the sketch sizes of $K_1$ and $K_2$, and $f_1$ and $f_2$ are the numbers of nonzeros in the original data $u_1$ and $u_2$, respectively. We have used the results in Lemma 1 and a common statistical approximation: $E(\max(x, y)) \approx \max(E(x), E(y))$.

From (5), we know the variance is affected by two factors. If the data are very sparse, i.e., $\max\{f_1/(k_1-1), f_2/(k_2-1)\}$ is small, then the variance also tends to be small. If the data are heavy-tailed, i.e., $D\,d_{g^2} \gg d_g^2$, then the variance tends to be large. Text data are often highly sparse and heavy-tailed, but machine learning applications often use weighted data (e.g., after taking logarithms or binary quantization). This is why we expect CRS to be successful in real applications, although in general it does not have worst-case performance guarantees. The next two subsections apply CRS to estimating the Hamming distance and the χ² distance. Empirical studies [3, 7, 9] have demonstrated that, in text and image data, using the Hamming distance or the χ² distance in kernel SVMs achieves good performance.
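A minimal sketch of the generic estimator and the approximate variance (5), under the same "as if sampled without replacement" assumption; the helper names are ours, not the paper's.

```python
# Generic CRS distance estimator d_hat (Section 5.1) and the approximate
# variance of Eq. (5), computed from the full data for checking purposes.

def d_hat(g, s1, s2, D):
    """Estimate d_g = sum_i g(u1_i, u2_i) from Ds aligned sample entries."""
    Ds = len(s1)
    return (D / Ds) * sum(g(a, b) for a, b in zip(s1, s2))

def approx_var(g, u1, u2, k1, k2):
    """Approximate variance of d_hat, Eq. (5), given sketch sizes k1, k2."""
    D = len(u1)
    f1 = sum(a != 0 for a in u1)                  # nonzeros in u1
    f2 = sum(b != 0 for b in u2)                  # nonzeros in u2
    dg = sum(g(a, b) for a, b in zip(u1, u2))
    dg2 = sum(g(a, b) ** 2 for a, b in zip(u1, u2))
    lead = max(f1 / (k1 - 1), f2 / (k2 - 1)) - 1
    return D / (D - 1) * lead * (dg2 - dg ** 2 / D)

# Example: the chi^2 summand as the generic g.
g_chi2 = lambda a, b: (a - b) ** 2 / (a + b) if a + b else 0.0
print(d_hat(g_chi2, [5, 0, 1], [0, 9, 2], D=10))  # (10/3) * (5 + 9 + 1/3)
```

Plugging in $g(a,b) = 1\{a \neq b\}$ or $g(a,b) = |a-b|^p$ recovers the other special cases above.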
5.2 Estimating the Hamming Distance

Following the definition of Hamming distance in [4], $h(u_1, u_2) = \sum_{i=1}^{D} 1\{u_{1,i} - u_{2,i} \neq 0\}$, we estimate $h$ using the modified CRS procedure; denote the estimator by $\hat{h}$. The approximate variance (5) becomes

$$\mathrm{Var}\left(\hat{h}\right) \approx \frac{D}{D-1}\left(\max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\}-1\right)\left(h - \frac{h^2}{D}\right). \qquad (6)$$

We also apply SRP using small $p$ and its most accurate harmonic mean estimator [10]. The empirical comparisons in Figure 3 verify two points. (1): CRS can be considerably more accurate than SRP for estimating Hamming distances as defined in [4]. (2): The approximate variance formula (6) is very accurate.

Figure 3: Approximating Hamming distances ($h$) using two pairs of words (THIS-HAVE and ADDRESS-CUSTOMER). The results are presented in terms of the normalized (by $h^2$) MSE. The curves labeled "Approx. Var" correspond to the approximate variance of CRS in (6).

In this example, the seemingly impressive improvement of CRS over SRP is actually due to our using the definition of Hamming distance in [4]. An alternative definition is $h(u_1, u_2) = \sum_{i=1}^{D}\left[1\{u_{1,i} \neq 0 \text{ and } u_{2,i} = 0\} + 1\{u_{1,i} = 0 \text{ and } u_{2,i} \neq 0\}\right]$, which is basically the $l_p$ distance after binary term-weighting. As we have commented, term-weighting is not possible when using SRP on dynamic data; thus we only experimented with the definition in [4].

5.3 Estimating the χ² Distance

We apply CRS to estimating the χ² distance between $u_1$ and $u_2$: $d_{\chi^2}(u_1, u_2) = \sum_{i=1}^{D} \frac{(u_{1,i}-u_{2,i})^2}{u_{1,i}+u_{2,i}}$. According to (5), the estimation variance should be approximately

$$\frac{D}{D-1}\left(\max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\}-1\right)\left(\sum_{i=1}^{D} \frac{(u_{1,i}-u_{2,i})^4}{(u_{1,i}+u_{2,i})^2} - \frac{d_{\chi^2}^2}{D}\right), \qquad (7)$$

which is affected only by the second moments, because $\sum_{i=1}^{D} \frac{(u_{1,i}-u_{2,i})^4}{(u_{1,i}+u_{2,i})^2} \le \sum_{i=1}^{D}(u_{1,i}+u_{2,i})^2$.
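The two definitions of Hamming distance discussed above can be checked side by side. This small example (data values are ours) shows that they differ on general count data but coincide once the data are binarized, which is the sense in which the alternative definition is "the $l_p$ distance after binary term-weighting."

```python
# The two Hamming-distance definitions from Section 5.2.

def h_v1(u1, u2):
    """Definition in [4]: count positions where the entries differ."""
    return sum(a != b for a, b in zip(u1, u2))

def h_v2(u1, u2):
    """Alternative definition: count positions where exactly one entry is zero."""
    return sum((a != 0) != (b != 0) for a, b in zip(u1, u2))

u1, u2 = [3, 0, 2, 5], [1, 4, 2, 0]
print(h_v1(u1, u2), h_v2(u1, u2))          # 3 2  (they disagree on counts)

# After binary term-weighting the two definitions coincide:
b1 = [int(a != 0) for a in u1]
b2 = [int(b != 0) for b in u2]
print(h_v1(b1, b2) == h_v2(u1, u2))        # True
```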
There are proven negative results [6] showing that in the worst case no efficient algorithms exist for approximating the χ² distance. CRS does not provide any worst-case guarantees; its performance relies on the assumption that, in machine learning applications, the data are often reasonably sparse and the second moments are reasonably bounded.

Figure 4 presents an empirical study, using the same four words plus the UCI Dexter data. Even though the four words are fairly common (i.e., not very sparse) and heavy-tailed (no term-weighting was applied), CRS still achieved good performance in terms of the normalized MSE (e.g., ≤ 0.1) at reasonably small k. And again, the approximate variance formula (7) is accurate. Results on the Dexter data set (which is more realistic for machine learning) are encouraging: only about k = 10 is needed to achieve a small MSE.

Figure 4: Left two panels: CRS for approximating the χ² distance using two pairs of words (THIS-HAVE and ADDRESS-CUSTOMER; D = 2^16). The curves report the normalized MSE and the approximate variance in (7). Right-most panel: the Dexter data, with D = 20000 and 300 data points. We estimate all pairwise (i.e., 44850) χ² distances using CRS. The three curves report the 10%, 50%, and 90% quantiles of the normalized MSEs.

6 Conclusion

The ubiquitous phenomenon of massive, high-dimensional, and possibly dynamic data has brought serious challenges. It is highly desirable to achieve compact data representations and to compute and retrieve summary statistics, in particular various types of distances, efficiently. Conditional Random Sampling (CRS) provides a simple and effective mechanism to achieve this goal.
Compared with other "mainstream" sketching algorithms such as stable random projections (SRP), the major advantage of CRS is that it is "one-sketch-for-all": the same set of sketches can approximate any linear summary statistics. This is very convenient in practice. The major disadvantage of CRS is that it relies heavily on data sparsity and on the assumption that in machine learning applications the "worst-case" data distributions are often avoided (e.g., through term-weighting). Also, the theoretical analysis is difficult, even though CRS is a simple algorithm.

Originally based on a heuristic argument, the preliminary version of CRS was proposed as a tool for computing pairwise l2 and l1 distances in static data. This paper provides a partial theoretical justification of CRS and various modifications, to make the algorithm more rigorous and to extend CRS to handling dynamic/streaming data. We demonstrate, empirically and theoretically, the effectiveness of CRS in approximating Hamming norms/distances and χ² distances.

Acknowledgement

Ping Li is partially supported by grant DMS-0808864 from the National Science Foundation, and a gift from Microsoft. Trevor Hastie was partially supported by grant DMS-0505676 from the National Science Foundation, and grant 2R01 CA 72028-07 from the National Institutes of Health.

References

[1] Charu C. Aggarwal, Jiawei Han, Jianyong Wang, and Philip S. Yu. On demand classification of data streams. In KDD, pages 503–508, 2004.
[2] Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors. Large-Scale Kernel Machines. The MIT Press, 2007.
[3] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999.
[4] Graham Cormode, Mayur Datar, Piotr Indyk, and S. Muthukrishnan. Comparing data streams using Hamming norms (how to zero in).
IEEE Transactions on Knowledge and Data Engineering, 15(3):529–540, 2003.
[5] Carlotta Domeniconi and Dimitrios Gunopulos. Incremental support vector machine construction. In ICDM, pages 589–592, 2001.
[6] Sudipto Guha, Piotr Indyk, and Andrew McGregor. Sketching information divergence. In COLT, pages 424–438, 2007.
[7] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, 2005.
[8] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. J. of ACM, 53(3):307–323, 2006.
[9] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494–501, 2007.
[10] Ping Li. Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. In SODA, 2008.
[11] Ping Li. Compressed Counting. In SODA, 2009.
[12] Ping Li and Kenneth W. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, 33(3):305–354, 2007. Preliminary results appeared in HLT/EMNLP, 2005.
[13] Ping Li, Kenneth W. Church, and Trevor J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873–880, 2007.
[14] Ping Li. Computationally efficient estimators for dimension reductions using stable random projections. In ICDM, 2008.
[15] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1(2):117–236, 2005.
[16] John C. Platt. Using analytic QP and sparseness to speed training of support vector machines. In NIPS, pages 557–563, 1998.
[17] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, 2002.
The Mondrian Process

Daniel M. Roy, Massachusetts Institute of Technology, droy@mit.edu
Yee Whye Teh, Gatsby Unit, University College London, ywteh@gatsby.ucl.ac.uk

Abstract

We describe a novel class of distributions, called Mondrian processes, which can be interpreted as probability distributions over kd-tree data structures. Mondrian processes are multidimensional generalizations of Poisson processes, and this connection allows us to construct multidimensional generalizations of the stick-breaking process described by Sethuraman (1994), recovering the Dirichlet process in one dimension. After introducing the Aldous-Hoover representation for jointly and separately exchangeable arrays, we show how the process can be used as a nonparametric prior distribution in Bayesian models of relational data.

1 Introduction

Relational data are observations of relationships between sets of objects, and it is therefore natural to consider representing relations1 as arrays of random variables, e.g., (Ri,j), where i and j index objects xi ∈ X and yj ∈ Y. Nonrelational data sets (e.g., observations about individual objects in X) are simply one-dimensional arrays (Ri) from this viewpoint. A common Bayesian approach in the one-dimensional setting is to assume there is cluster structure and use a mixture model with a prior distribution over partitions of the objects in X. A similar approach for relational data would naïvely require a prior distribution on partitions of the product space X × Y = {(x, y) | x ∈ X, y ∈ Y}. One choice is to treat each pair (x, y) atomically, clustering the product space directly, e.g., by placing a Chinese restaurant process (CRP) prior on partitions of X × Y. An unsatisfactory implication of this choice is that the distribution on partitions of (Ri,j) is exchangeable, i.e., invariant to swapping any two entries; this implies that the identity of objects is ignored when forming the partition, violating common sense.
Stochastic block models2 place prior distributions on partitions of X and Y separately, which can be interpreted as inducing a distribution on partitions of the product space by considering the product of the partitions. By arranging the rows and columns of (Ri,j) so that clustered objects have adjacent indices, such partitions look like regular grids (Figure 1.1). An unfortunate side effect of this form of prior is that the "resolution" needed to model fine detail in one area of the array necessarily causes other parts of the array to be dissected, even if the data suggest there is no such structure. The annotated hierarchies described by Roy et al. (2007) generate random partitions which are not constrained to be regular grids (Figure 1.2), but the prior is inconsistent in light of missing data. Motivated by the need for a consistent distribution on partitions of product spaces with more structure than classic block models, we define a class of nonparametric distributions we have named Mondrian processes, after Piet Mondrian and his abstract grid-based paintings. Mondrian processes are random partitions on product spaces not constrained to be regular grids. Much like kd-trees, Mondrian processes partition a space with nested, axis-aligned cuts; see Figure 1.3 for examples. We begin by introducing the notion of partially exchangeable arrays by Aldous (1981) and Hoover (1979), a generalization of exchangeability on sequences appropriate for modeling relational data. We then define the Mondrian process, highlight a few of its elegant properties, and describe two nonparametric models for relational data that use the Mondrian process as a prior on partitions.

1We consider binary relations here, but the ideas generalize easily to multidimensional relations.
2Holland et al. (1983) introduced stochastic block models. Recent variations (Kemp et al., 2006; Xu et al., 2006; Roy et al., 2007) descend from Wasserman and Anderson (1987) and Nowicki and Snijders (2001).
2 Exchangeable Relational Data

The notion of exchangeability3, that the probability of a sequence of data items does not depend on the ordering of the items, has played a central role in hierarchical Bayesian modeling (Bernardo and Smith, 1994). A classic result by de Finetti (1931), later extended by Ryll-Nardzewski (1957), states that if x1, x2, ... is an exchangeable sequence, then there exists a random parameter θ such that the sequence is conditionally iid given θ:

$$p(x_1, \ldots, x_n) = \int p_\theta(\theta) \prod_{i=1}^{n} p_x(x_i \mid \theta)\, d\theta \qquad (1)$$

That is, exchangeable sequences arise as a mixture of iid sequences, where the mixing distribution is p(θ). The notion of exchangeability has been generalized to a wide variety of settings. In this section we describe notions of exchangeability for relational data originally proposed by Aldous (1981) and Hoover (1979) in the context of exchangeable arrays. Kallenberg (2005) significantly expanded on the concept, and Diaconis and Janson (2007) showed a strong correspondence between such exchangeable relations and a notion of limits on graph structures (Lovász and Szegedy, 2006). Here we shall only consider binary relations, those involving pairs of objects. Generalizations to relations with arbitrary arity can be gleaned from Kallenberg (2005).

For i, j = 1, 2, ... let Ri,j denote a relation between two objects xi ∈ X and yj ∈ Y from possibly distinct sets X and Y. We say that R is separately exchangeable if its distribution is invariant to separate permutations on its rows and columns. That is, for each n, m ≥ 1 and each pair of permutations π ∈ Sn and σ ∈ Sm,

$$p(R_{1:n,1:m}) = p(R_{\pi(1:n),\sigma(1:m)}) \qquad (2)$$

in MATLAB notation. Aldous (1981) and Hoover (1979) showed that separately exchangeable relations can always be represented in the following way: each object i (and j) has a latent representation ξi (ηj) drawn iid from some distribution pξ (pη); independently let θ be an additional random parameter.
Then,

$$p(R_{1:n,1:m}) = \int p_\theta(\theta) \prod_i p_\xi(\xi_i) \prod_j p_\eta(\eta_j) \prod_{i,j} p_R(R_{i,j} \mid \theta, \xi_i, \eta_j)\, d\theta\, d\xi_{1:n}\, d\eta_{1:m} \qquad (3)$$

As opposed to (1), the variables ξi and ηj capture additional dependencies specific to each row and column. If the two sets of objects are in fact the same, i.e., X = Y, then the relation R is a square array. We say R is jointly exchangeable if it is invariant to jointly permuting rows and columns; that is, for each n ≥ 1 and each permutation π ∈ Sn we have

$$p(R_{1:n,1:n}) = p(R_{\pi(1:n),\pi(1:n)}) \qquad (4)$$

Such jointly exchangeable relations also have a form similar to (3). The differences are that we have one latent variable ξi for each object xi, and that Ri,j and Rj,i need not be independent anymore:

$$p(R_{1:n,1:n}) = \int p_\theta(\theta) \prod_i p_\xi(\xi_i) \prod_{i \le j} p_R(R_{i,j}, R_{j,i} \mid \theta, \xi_i, \xi_j)\, d\theta\, d\xi_{1:n} \qquad (5)$$

In (5) it is important that $p_R(s, t \mid \theta, \xi_i, \xi_j) = p_R(t, s \mid \theta, \xi_j, \xi_i)$ to ensure joint exchangeability. The first impression from (5) is that joint exchangeability implies a more restricted functional form than separate exchangeability (3). In fact, the reverse holds: (5) means that the latent representations of row i and column i need not be independent, and that Ri,j and Rj,i need not be conditionally independent given the row and column representations, while (3) assumes independence of both. For example, a symmetric relation, i.e., Ri,j = Rj,i, can only be represented using (5).

The above Aldous-Hoover representation serves as the theoretical foundation for hierarchical Bayesian modeling of exchangeable relational data, just as de Finetti's representation serves as a foundation for the modeling of exchangeable sequences. In Section 5, we cast the Infinite Relational Model (Kemp et al., 2006) and a model based on the Mondrian process into this representation.

Figure 1: (1) Stochastic block models like the Infinite Relational Model (Kemp et al., 2006) induce regular partitions on the product space, introducing structure where the data do not support it.
(2) Axis-aligned partitions, like those produced by annotated hierarchies and the Mondrian process, provide (a posteriori) resolution only where it is needed. (3) Mondrian process on the unit square, [0, 1]². (4) We can visualize the sequential hierarchical process by spreading the cuts out over time. The third dimension is λ. (5) Mondrian process with beta Lévy measure, µ(dx) = x⁻¹dx, on [0, 1]². (6) 10× zoom of (5) at the origin. (7) Mondrian on [ε, 1]³ with beta measure.

3 The Mondrian Process

The Mondrian process can be expressed as a recursive generative process that randomly makes axis-aligned cuts, partitioning the underlying product space in a hierarchical fashion akin to decision trees or kd-trees. The distinguishing feature of this recursive stochastic process is that it assigns probabilities to the various events in such a way that it is consistent (in a sense we make precise later). The implication of consistency is that we can extend the Mondrian process to infinite spaces and use it as a nonparametric prior for modeling exchangeable relational data.

3.1 The one-dimensional case

The simplest space on which to introduce the Mondrian process is the unit interval [0, 1]. Starting with an initial "budget" λ, we make a sequence of cuts, splitting the interval into subintervals. Each cut costs a random amount, eventually exhausting the budget and resulting in a finite partition m of the unit interval. The cost, E_I, to cut an interval I is exponentially distributed with inverse mean given by the length of the interval. Therefore, the first cut costs E_[0,1] ∼ Exp(1). Let λ′ = λ − E_[0,1]. If λ′ < 0, we make no cuts and the process returns the trivial partition m = {[0, 1]}. Otherwise, we make a cut uniformly at random, splitting the unit interval into two subintervals A and B. The process recurses independently on A and B, with independent budgets λ′, producing partitions m_A and m_B, which are then combined into a partition m = m_A ∪ m_B of [0, 1].
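The one-dimensional "budget and cut" process just described can be sketched as follows (a minimal illustration; function and variable names are ours):

```python
# One-dimensional Mondrian generative process on [a, b] with budget lam.
# Returns the partition m as a list of subintervals, in left-to-right order.

import random

def mondrian_1d(lam, a, b):
    # The cost to cut interval I is Exp-distributed with rate = length of I,
    # so longer intervals are cheaper to cut.
    cost = random.expovariate(b - a)
    if lam - cost < 0:
        return [(a, b)]                 # budget exhausted: trivial partition
    x = random.uniform(a, b)            # cut uniformly at random
    lam2 = lam - cost                   # diminished budget on both sides
    return mondrian_1d(lam2, a, x) + mondrian_1d(lam2, x, b)

random.seed(0)
parts = mondrian_1d(3.0, 0.0, 1.0)
print(parts)
```

Whatever the random draws, the returned subintervals tile [0, 1] exactly, which mirrors the combination m = m_A ∪ m_B above.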
The resulting cuts can be shown to be a Poisson (point) process. Unlike the standard description of the Poisson process, the cuts in this "break and branch" process are organized in a hierarchy. As the Poisson process is a fundamental building block for random measures such as the Dirichlet process (DP), we will later exploit this relationship to build various multidimensional generalizations.

3.2 Generalizations to higher dimensions and trees

We begin in two dimensions by describing the generative process for a Mondrian process m ∼ MP(λ, (a, A), (b, B)) on the rectangle (a, A) × (b, B). Again, let λ′ = λ − E, where E ∼ Exp((A − a) + (B − b)) is drawn from an exponential distribution with rate given by the sum of the interval lengths. If λ′ < 0, the process halts and returns the trivial partition {(a, A) × (b, B)}. Otherwise, an axis-aligned cut is made uniformly at random along the combined lengths of (a, A) and (b, B); that is, the cut lies along a particular dimension with probability proportional to its length, and is drawn uniformly within that interval. W.l.o.g., a cut x ∈ (a, A) splits the interval into (a, x) and (x, A). The process then recurses, generating independent Mondrian processes with diminished rate parameter λ′ on both sides of the cut: m_< ∼ MP(λ′, (a, x), (b, B)) and m_> ∼ MP(λ′, (x, A), (b, B)). The partition on (a, A) × (b, B) is then m_< ∪ m_>. Like the one-dimensional special case, the λ parameter controls the number of cuts, with the process more likely to cut rectangles with large perimeters.

The process can be generalized in several ways. In higher dimensions, the cost E to make an additional cut is exponentially distributed with rate given by the sum over all dimensions of the interval lengths. Similarly, the cut point is chosen uniformly at random from all intervals, splitting only that interval in the recursion. Like non-homogeneous Poisson processes, the cut point need not

3In this paper we shall always mean infinite exchangeability when we state exchangeability.
be chosen uniformly at random, but can instead be chosen according to a non-atomic rate measure µ_d associated with each dimension. In this case, lengths (A − a) become measures µ_1(a, A). The process can also be generalized beyond products of intervals. The key property of intervals that the Mondrian process relies upon is that any point cuts the space into one-dimensional, simply-connected pieces. Trees also have this property: a cut along an edge splits a tree into two trees. We denote a Mondrian process m with rate λ on a product of one-dimensional, simply-connected domains Θ_1 × ··· × Θ_D by m ∼ MP(λ, Θ_1, ..., Θ_D), with the dependence on µ_1, ..., µ_D left implicit. A description of the recursive generative model for the conditional Mondrian (see Section 4) is given in Algorithm 1.

4 Properties of the Mondrian Process

This section describes a number of interesting properties of the Mondrian process. The most important property of the Mondrian process is its self-consistency. Instead of representing a draw from a Mondrian as an unstructured partition of Θ_1 × ··· × Θ_D, we will represent the whole history of the generative process. Thus a draw from the Mondrian process is either a trivial partition or a tuple m = ⟨d, x, λ′, m_<, m_>⟩, representing a cut at x along the d'th dimension Θ_d, with nested Mondrians m_< and m_> on either side of the cut. Therefore, m is itself a tree of axis-aligned cuts (a kd-tree data structure), with the leaves of the tree forming the partition of the original product space.

Conditional Independencies: The generative process for the Mondrian produces a tree of cuts, where each subtree is itself a draw from a Mondrian. The tree structure precisely reflects the conditional independencies of the Mondrian; e.g., the two subtrees m_< and m_> are conditionally independent given λ′, d, and x at the first cut.
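Under the uniform (Lebesgue) rate measure, the two-dimensional generative process of Section 3.2, combined with the tree-of-cuts representation above, can be sketched as follows (a toy illustration, not the authors' code):

```python
# Two-dimensional Mondrian process MP(lam, (ax,bx), (ay,by)) with Lebesgue
# rate measure, returning the tree-of-cuts representation <d, x, lam', m<, m>>.

import random

def mondrian_2d(lam, ax, bx, ay, by):
    lin = (bx - ax) + (by - ay)              # sum of interval lengths
    lam2 = lam - random.expovariate(lin)     # cut cost E ~ Exp(lin)
    if lam2 < 0:
        return ('leaf', (ax, bx, ay, by))    # trivial partition
    # Choose the dimension with probability proportional to its length,
    # then place the cut uniformly within that interval.
    if random.uniform(0, lin) < (bx - ax):
        x = random.uniform(ax, bx)
        return ('cut', 0, x, lam2,
                mondrian_2d(lam2, ax, x, ay, by),
                mondrian_2d(lam2, x, bx, ay, by))
    y = random.uniform(ay, by)
    return ('cut', 1, y, lam2,
            mondrian_2d(lam2, ax, bx, ay, y),
            mondrian_2d(lam2, ax, bx, y, by))

def leaves(m):
    """The leaves of the tree of cuts form the partition of the rectangle."""
    if m[0] == 'leaf':
        return [m[1]]
    return leaves(m[4]) + leaves(m[5])

random.seed(1)
tree = mondrian_2d(2.0, 0.0, 1.0, 0.0, 1.0)
blocks = leaves(tree)
print(len(blocks))
```

The leaf rectangles always tile the original rectangle, reflecting the fact that m_< ∪ m_> partitions (a, A) × (b, B).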
Consistency: The Mondrian process satisfies an important self-consistency property: given a draw from a Mondrian on some domain, the partition on any subdomain has the same distribution as if we sampled a Mondrian process directly on that subdomain. More precisely, let m ∼ MP(λ, Θ_1, ..., Θ_D) and, for each dimension d, let Φ_d be a connected subdomain of Θ_d. The restriction ρ(m, Φ_1, ..., Φ_D) of m to Φ_1 × ··· × Φ_D is the subtree of cuts within Φ_1 × ··· × Φ_D. We define restrictions inductively. If there are no cuts in m, i.e., m = Θ_1 × ··· × Θ_D, then ρ(m, Φ_1, ..., Φ_D) is simply Φ_1 × ··· × Φ_D. Otherwise m = ⟨d, x, λ, m_<, m_>⟩ for some d, x, and λ, where m_< and m_> are the two subtrees. Let Θ_d^{<x} and Θ_d^{>x} be the d'th domains of m_< and m_>, respectively. If x ∉ Φ_d, this implies that Φ_d must be on exactly one side of x (because Φ_d and Θ_d are connected); w.l.o.g., assume Φ_d ⊂ Θ_d^{<x}. In this case, ρ(m, Φ_1, ..., Φ_D) = ρ(m_<, Φ_1, ..., Φ_D). If x ∈ Φ_d, then both Θ_d^{<x} and Θ_d^{>x} overlap Φ_d, and ρ(m, Φ_1, ..., Φ_D) = ⟨d, x, λ, ρ(m_<, Φ_1, ..., Φ_d ∩ Θ_d^{<x}, ..., Φ_D), ρ(m_>, Φ_1, ..., Φ_d ∩ Θ_d^{>x}, ..., Φ_D)⟩. By integrating out the variables on nodes not contained in the restriction, it can be shown that the restriction ρ(m, Φ_1, ..., Φ_D) is itself distributed according to a Mondrian MP(λ, Φ_1, ..., Φ_D).

So far the construction of the Mondrian process assumes that each domain Θ_d has finite measure. A consequence of this consistency property is that we can now use the Daniell-Kolmogorov extension theorem to extend the Mondrian process to σ-finite domains (those that can be written as a countable union of finite domains). For example, from a Mondrian process on products of intervals, we can construct a Mondrian process on all of R^D. Note that if the domains have infinite measure, the tree of cuts will be infinitely deep with no root and infinitely many leaves (being the infinite partition of the product space).
However, the restriction of the tree to any given finite subdomains will be finite, with a root (with probability one).

Mondrian Slices: One interesting special case of consistency under restriction is worth mentioning. Suppose that our subdomains are Φ_1 = {y} and Φ_d = Θ_d for d ≥ 2; that is, we consider the restriction of the Mondrian to the slice of the space where the first coordinate takes the value y. The consistency property shows that the restriction ρ = ρ(m, Φ_1, ..., Φ_D) onto these subdomains is distributed according to a Mondrian as well. But since µ_1 is non-atomic, µ_1({y}) = 0, so ρ will have no cuts in the first domain (with probability 1). That is, we can interpret ρ as a draw from a (D−1)-dimensional Mondrian with domains Θ_2, ..., Θ_D. This is true of any lower-dimensional slice of the Mondrian. One particular extreme is that, since a one-dimensional Mondrian is simply the break-and-branch generative process for a Poisson process, any one-dimensional slice of a Mondrian gives a Poisson point process.

Figure 2: Modeling a Mondrian with a Mondrian: a posterior sample given relational data created from an actual Mondrian painting. (From left) (1) Composition with Large Blue Plane, Red, Black, Yellow, and Gray (1921). (2) Raw relational data, randomly shuffled. These synthetic data were generated by fitting a regular 6 × 7 point array over the painting (6 row objects, 7 column objects) and using the blocks in the painting to determine the block structure of these 42 relations. We then sampled 18 relational arrays with this block structure. (3) Posterior sample of a Mondrian process on the unit square. The colors are for visual effect only, as the partitions are contiguous rectangles. The small black dots are the embedding of the pairs (ξi, ηj) into the unit square. Each point represents a relation Ri,j; each row of points is the set of relations (Ri,·) for an object ξi, and similarly for columns. Relations in the same block are clustered together. (4) Induced partition on the (discrete) relational array, matching the painting. (5) Partitioned and permuted relational data showing block structure.

Conditional Mondrians: Using the consistency property, we can derive the conditional distribution of a Mondrian m with rate λ on Θ_1 × ··· × Θ_D given its restriction ρ = ρ(m, Φ_1, ..., Φ_D). To do so, we have to consider three possibilities: m contains no cuts; the first cut of m is in ρ; and the first cut of m is above ρ. Fortunately, the probability of each of these events can be computed easily; doing so amounts to drawing an exponential sample E ∼ Exp(Σ_d µ_d(Θ_d \ Φ_d)) and comparing it against the diminished rate after the first cut in ρ. Pseudocode for generating from a conditional Mondrian is given in Algorithm 1. When every domain of ρ has zero measure, i.e., µ_d(Φ_d) = 0 for all d, the conditional Mondrian reduces to an unconditional Mondrian.

Algorithm 1: Conditional Mondrian m ∼ MP(λ, Θ_1, ..., Θ_D | ρ)   [ρ with all Φ_d = ∅ is the unconditioned case]
1. Let λ′ ← λ − E, where E ∼ Exp(Σ_{d=1}^{D} µ_d(Θ_d \ Φ_d)).
2. If ρ has no cuts then λ′′ ← 0, else ⟨d′, x′, λ′′, ρ_<, ρ_>⟩ ← ρ.
3. If λ′ < λ′′ then   [take the root form of ρ]
4.   If ρ has no cut then
5.     return m ← Θ_1 × ··· × Θ_D.
6.   Else   [(d′, x′) is the first cut in m]
7.     return m ← ⟨d′, x′, λ′′, MP(λ′′, Θ_1, ..., Θ_{d′}^{<x′}, ..., Θ_D | ρ_<), MP(λ′′, Θ_1, ..., Θ_{d′}^{>x′}, ..., Θ_D | ρ_>)⟩.
8. Else   [λ′′ < λ′ and there is a cut in m above ρ]
9.   Draw a cut (d, x) outside ρ, i.e., p(d) ∝ µ_d(Θ_d \ Φ_d) and x | d ∼ µ_d / µ_d(Θ_d \ Φ_d); w.l.o.g. suppose Φ_d ⊂ Θ_d^{<x}.
10.  return m ← ⟨d, x, λ′, MP(λ′, Θ_1, ..., Θ_d^{<x}, ..., Θ_D | ρ), MP(λ′, Θ_1, ..., Θ_d^{>x}, ..., Θ_D)⟩.

Partition Structure: The Mondrian is simple enough that we can characterize a number of its other properties. As an example, the expected numbers of slices along the two dimensions of (0, A) × (0, B) are λA and λB, while the expected total number of partitions is (1 + λA)(1 + λB).
Interestingly, this is also the expected number of partitions in a biclustering model where we first have two independent Poisson processes with rate λ partition (0, A) and (0, B), and then form the product partition of (0, A) × (0, B).

Figure 3: Trade and diplomacy relations between 24 countries in 1984 (relation types: food/animals, crude materials, minerals/fuels, basic goods, diplomats), together with a posterior sample of a Mondrian on [0, 1]². Rij = 1 (black squares) implies that country i imports R from country j. The colors are for visual effect only, as the partitions are contiguous rectangles.
Figure 4: (Clockwise from bottom left) (1) Nine samples from the Mondrian process on Kingman coalescents with rate λ = 0.25, 0.5, and 1, respectively. As the rate increases, partitions become finer. Note that partitions are not necessarily contiguous; we use color to identify partitions. The partition structure is related to the annotated hierarchies model (Roy et al., 2007). (2) Kingman (1982a,b) describes the relationship between random trees and the DP, which we exploit to define a nonparametric, hierarchical block model. (3) A sequence of cuts; each cut separates a subtree. (4) Posterior trees and Mondrian processes on a synthetic social network (relations: Friends?, Works With?, Gives orders to?).

5 Relational Modeling

To illustrate how the Mondrian process can be used to model relational data, we describe two nonparametric block models for exchangeable relations. While we will only consider binary data and assume that each block is conditionally iid, the ideas can be extended to many likelihood models. Recall the Aldous-Hoover representation (θ, ξi, ηj, pR) for exchangeable arrays. Using a Mondrian process with beta Lévy measure µ(dx) = αx⁻¹dx, we first sample a random partition of the unit square into blocks and assign each block a probability:

$$M \sim \mathrm{MP}(\lambda, [0,1], [0,1]) \quad \text{(slices up the unit square into blocks)} \qquad (6)$$
$$\phi_S \mid M \sim \mathrm{Beta}(a_0, a_1), \;\; \forall S \in M. \quad \text{(each block } S \text{ gets a probability } \phi_S\text{)} \qquad (7)$$

The pair (M, φ) plays the role of θ in the Aldous-Hoover representation. We next sample row and column representations (ξi and ηj, respectively), which have a geometrical interpretation as x, y coordinates (ξi, ηj) in the unit square:

$$\xi_i \sim U[0,1], \;\; i \in \{1, \ldots, n\} \quad \text{(shared } x \text{ coordinate for each row)} \qquad (8)$$
$$\eta_j \sim U[0,1], \;\; j \in \{1, \ldots, n\}. \quad \text{(shared } y \text{ coordinate for each column)} \qquad (9)$$

Let Sij be the block S ∈ M such that (ξi, ηj) ∈ S.
We finally sample the array R of relations:

    Rij | ξ, η, φ, M ∼ Bernoulli(φSij), i, j ∈ {1, . . . , n}.    (Rij is true w.p. φSij)    (10)

This model clusters relations together whose (ξi, ηj) pairs fall in the same blocks in the Mondrian partition and models each cluster with a beta-binomial likelihood model. By mirroring the Aldous-Hoover representation, we guarantee that R is exchangeable and that there is no order dependence. This model is closely related to the IRM (Kemp et al., 2006) and IHRM (Xu et al., 2006), where rows and columns are first clustered using a CRP prior, and then each relation Rij is conditionally independent of the others given the clusters that row i and column j belong to. In particular, if we replace Eq. (6) with

    M ∼ MP(λ, [0, 1]) × MP(λ, [0, 1]),    (product of partitions of unit intervals)    (11)

then we recover the same marginal distribution over relations as the IRM/IHRM. To see this, recall that a Mondrian process in one dimension produces a partition whose cut points follow a Poisson point process. Teh et al. (2007) show that the stick lengths (i.e., partitions) induced by a Poisson point process on [0, 1] with the beta Lévy measure have the same distribution as those in the stick-breaking construction of the DP. Therefore, (11) is the product of two stick-breaking priors. In comparison, any one-dimensional slice of (6), e.g., each column or row of the relation, is marginally distributed as a DP, but is more flexible than the product of one-dimensional Mondrian processes. We can also construct an exchangeable variant of the Annotated Hierarchies model (a hierarchical block model) by moving from the unit square to a product of random trees drawn from Kingman's coalescent prior (Kingman, 1982a). Let µd be Lebesgue measure.

    Td ∼ KC(λ), ∀d ∈ {1, . . . , D}          (for each dimension, sample a tree)        (12)
    M | T ∼ MP(2α, T1, . . . , TD)           (partition the cross product of trees)     (13)
    φS | M ∼ Beta(a0, a1), ∀S ∈ M.
    (each block S gets a probability φS)    (14)

Let Sij be the subset S ∈ M where leaves (i, j) fall in S. Then

    Rij | φ, M ∼ Bernoulli(φSij), i, j ∈ {1, . . . , n}.    (Rij is true w.p. φSij)    (15)

Figure 4 shows some samples from this prior. Again, this model is related to the DP. Kingman shows that the partition on the leaves of a coalescent tree when its edges are cut by a Poisson point process is the same as that of a DP (Figure 4). Therefore, the partition structure along every row and column is marginally the same as a DP. Both the unit square and product of random trees models give DP-distributed partitions on each row and column, but they have different inductive biases.

6 Experiments

The first data set was synthetically created using an actual painting by Piet Mondrian, whose grid-based paintings were the inspiration for the name of this process. Using the model defined by (10) and a uniform rate measure, we performed a Markov chain Monte Carlo (MCMC) simulation of the posterior distribution over the Mondrian, ξ's, η's, and hyperparameters. We employed a number of Metropolis-Hastings (MH) proposals that rotated, scaled, flipped, and resampled portions of the Mondrian. It can be shown that the conditional distribution of each ξi and ηj is piecewise constant; given the conjugacy of the beta-binomial, we can Gibbs sample the ξ's and η's. Figure 2 shows a sample after 1500 iterations (starting from a random initialization) where the partition on the array is exactly recovered. This was a typical attractor state for random initializations. While the data are sufficient to recover the partition on the array, they are not sufficient to recover the underlying Mondrian process. It is an open question as to its identifiability in the limit of infinite data.
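The block model of Eqs. (6)-(10), whose posterior the MCMC simulation targets, can also be simulated forward. The sketch below draws a relation from the tractable product-of-partitions special case (Eq. (11)), using a homogeneous Poisson(λ) cut process as a proper stand-in for the improper beta Lévy measure; all rates and hyperparameters are illustrative:

```python
import random
import bisect

def sample_relation(n, lam=3.0, a0=1.0, a1=1.0, seed=0):
    """Sample an n x n binary relation from the product-partition block
    model of Eq. (11): cut [0, 1] on each axis with a homogeneous
    Poisson(lam) process, give every rectangular block a Beta(a0, a1)
    probability (Eq. (7)), and sample R_ij given the block containing
    (xi_i, eta_j) as in Eqs. (8)-(10)."""
    rng = random.Random(seed)

    def cuts():
        out, t = [], rng.expovariate(lam)
        while t < 1.0:
            out.append(t)
            t += rng.expovariate(lam)
        return out

    cx, cy = cuts(), cuts()                       # M: a grid of blocks
    phi = [[rng.betavariate(a0, a1)               # phi_S for each block S
            for _ in range(len(cy) + 1)] for _ in range(len(cx) + 1)]

    xi = [rng.random() for _ in range(n)]         # row coordinates, Eq. (8)
    eta = [rng.random() for _ in range(n)]        # column coordinates, Eq. (9)

    R = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            bi = bisect.bisect(cx, xi[i])         # block index of row i
            bj = bisect.bisect(cy, eta[j])        # block index of column j
            R[i][j] = int(rng.random() < phi[bi][bj])   # Eq. (10)
    return R

R = sample_relation(6, seed=1)
print(len(R), len(R[0]))   # 6 6
```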
We next analyzed the classic Countries data set from the network analysis literature (Wasserman and Faust, 1994), which reports trade in 1984 between 24 countries in food and live animals; crude materials; minerals and fuels; basic manufactured goods; and exchange of diplomats. We applied the model defined by (10). Figure 3 illustrates the type of structure the model uncovers during MCMC simulation; it has recognized several salient groups of countries acting in blocs; e.g., Japan, the UK, Switzerland, Spain and China export to nearly all countries, although China behaves more like the other Pacific Rim countries as an importer. The diplomats relation is nearly symmetric, but the model does not represent symmetry explicitly and must redundantly learn the entire relation. Reflecting the Mondrian about the line y = x is one way to enforce symmetry in the partition. In our final experiment, we analyzed a synthetic social network consisting of nine university employees: 3 janitors, 3 professors and 3 students. Given three relations (friends, works-with, and gives-orders-to), the maximum a posteriori Mondrian process partitions the relations into homogeneous blocks. Tree structures around the MAP clustered the janitors, professors and students into three close-knit groups, and preferred to put the janitors and students more closely together in the tree. Inference in this model is particularly challenging given the large space of trees and partitions.

7 Discussion

While the Mondrian process has many elegant properties, much more work is required to determine its usefulness for relational modeling. Just as effective inference procedures preceded the popularity of the Dirichlet process, a similar leap in inference sophistication will be necessary to assess the Mondrian process on large data sets.
We are currently investigating improved MCMC sampling schemes for the Mondrian process, as well as working to develop a combinatorial representation of the distribution on partitions induced by the Mondrian process. Such a representation is of practical interest (possibly leading to improved inference schemes) and of theoretical interest, being a multidimensional generalization of Chinese restaurant processes. The axis-aligned partitions of [0, 1]ⁿ produced by the Mondrian process have been studied extensively in combinatorics and computational geometry, where they are known as guillotine partitions. Guillotine partitions have wide-ranging applications including circuit design, approximation algorithms and computer graphics. However, the question of consistent stochastic processes over guillotine partitions, i.e., the question addressed here, has not, to our knowledge, been studied before. At a high level, we believe that developing nonparametric priors on complex data structures from computer science may successfully bridge the gap between old-fashioned Artificial Intelligence and modern statistical approaches. Developing representations for these typically recursive structures will require us to go beyond graphical models; stochastic lambda calculus is an appealing option.

References

D. J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11:581-598, 1981.
J. M. Bernardo and A. F. M. Smith. Bayesian Theory. John Wiley & Sons, 1994.
B. de Finetti. Funzione caratteristica di un fenomeno aleatorio. Atti della R. Academia Nazionale dei Lincei, Serie 6. Memorie, Classe di Scienze Fisiche, Mathematice e Naturale, 4:251-299, 1931.
P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. arXiv:0712.2749v1, 2007.
P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
D. Hoover. Relations on probability spaces and arrays of random variables. Technical report, Institute for Advanced Study, Princeton, NJ, 1979.
O. Kallenberg. Probabilistic Symmetries and Invariance Principles. Springer, 2005.
C. Kemp, J. Tenenbaum, T. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence, 2006.
J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27-43, 1982a.
J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235-248, 1982b.
L. Lovász and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96:933-957, 2006.
K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96:1077-1087, 2001.
D. M. Roy, C. Kemp, V. Mansinghka, and J. B. Tenenbaum. Learning annotated hierarchies from relational data. In Advances in Neural Information Processing Systems 19, 2007.
C. Ryll-Nardzewski. On stationary sequences of random variables and the de Finetti's equivalence. Colloquium Mathematicum, 4:149-156, 1957.
J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
S. Wasserman and C. Anderson. Stochastic a posteriori blockmodels: Construction and assessment. Social Networks, 9(1):1-36, 1987.
S. Wasserman and K. Faust. Social Network Analysis: Methods and Applications, pages 64-65. Cambridge University Press, 1994.
Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
Semi-supervised Learning with Weakly-Related Unlabeled Data: Towards Better Text Categorization

Liu Yang, Machine Learning Dept., Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, liuy@cs.cmu.edu
Rong Jin, Dept. of Computer Sci. and Eng., 3115 Engineering Building, Michigan State University, East Lansing, MI 48824, rongjin@cse.msu.edu
Rahul Sukthankar, Intel Research Pittsburgh and Carnegie Mellon Univ., 4720 Forbes Avenue, #410, Pittsburgh, PA 15213, rahuls@cs.cmu.edu

Abstract

The cluster assumption is exploited by most semi-supervised learning (SSL) methods. However, if the unlabeled data is merely weakly related to the target classes, it becomes questionable whether driving the decision boundary to the low-density regions of the unlabeled data will help the classification. In such a case, the cluster assumption may not be valid, and consequently how to leverage this type of unlabeled data to enhance classification accuracy becomes a challenge. We introduce "Semi-supervised Learning with Weakly-Related Unlabeled Data" (SSLW), an inductive method that builds upon the maximum-margin approach, towards a better usage of weakly-related unlabeled information. Although SSLW could improve a wide range of classification tasks, in this paper we focus on text categorization with a small training pool. The key assumption behind this work is that, even across corpora with different topics, word usage patterns tend to be consistent. To this end, SSLW estimates the optimal word-correlation matrix that is consistent with both the co-occurrence information derived from the weakly-related unlabeled documents and the labeled documents. For empirical evaluation, we present a direct comparison with a number of state-of-the-art methods for inductive semi-supervised learning and text categorization.
We show that SSLW results in a significant improvement in categorization accuracy, equipped with a small training set and an unlabeled resource that is weakly related to the test domain.

1 Introduction

Semi-supervised learning (SSL) takes advantage of a large amount of unlabeled data to enhance classification accuracy. Its application to text categorization is stimulated by the easy availability of an overwhelming number of unannotated web pages, in contrast to the limited number of annotated ones. Intuitively, corpora with different topics may not be content-wise related; however, word usage exhibits consistent patterns within a language. The question, then, is what would be an effective SSL strategy to extract these valuable word usage patterns embedded in the unlabeled corpus? In this paper, we aim to identify a new data representation that is, on one hand, informative to the target class (category), and on the other hand, consistent with the feature coherence patterns exhibited in the weakly related unlabeled data. We further turn it into a convex optimization problem, and solve it efficiently by an approximate approach. In this section, we first review the two types of semi-supervised learning: transductive SSL and inductive SSL. Then we state SSL with weakly related unlabeled data as a new challenge. Finally, we provide a strategy for how to address this challenge in the domain of text categorization, as well as a brief summary of related work in text categorization. A variety of methods have been developed for transductive SSL [14, 21]. These methods can be grouped as: EM with generative mixture models, bootstrapping methods (Self-training, Co-training and the Yarowsky Algorithm), discriminative models (Transductive Support Vector Machines (TSVM) [2]), and data-based methods, including Manifold Regularization [1], Information Regularization [17], and Low Density Separation (LDS) [11].
Specifically, TSVM extends the maximum-margin principle of SVM to unlabeled data. It combines the regularization of SVMs on the labeled points with the cluster assumption on the unlabeled points, to force the decision boundary to lie in low-density regions. Data-based methods discover an inherent geometry in the data and exploit it in finding a good classifier, to which additional regularization based on unlabeled data is added to avoid overfitting. Manifold Regularization uses the combinatorial Laplacian as a smoothness term. Based on the assumption that different classes usually form separate manifolds, it constructs decision functions that vary little along the data manifolds. Information Regularization seeks a good conditional Pr(y|x), assuming that the decision boundary lies in a low-density area and that Pr(y|x) varies only a little in areas of high density. Low Density Separation makes a similar assumption to Manifold Regularization and Information Regularization. In addition, it further computes a new data representation based on the unlabeled data, which often results in better classification performance for SSL. Not many inductive SSL approaches have been presented. In general, the essential distinction between transductive learning and inductive learning is that transductive learning produces labels only for the available unlabeled data, while inductive learning not only produces labels for the unlabeled data, but also learns a classifier that can be used to predict labels for new data. In this sense, some SSL algorithms, though named "transductive", have an inductive nature. For example, TSVM is an inductive learner, because it learns a classifier from a mixture of labeled and unlabeled data. Similarly, as an inductive component of Low Density Separation (LDS) [11], ∇TSVM learns the SVM classification model in the primal, which can be used for predicting new data.
However, the graph part of LDS is transductive, because the kernel and the graph distances are addressed by a prior eigen-decomposition and re-representation (MDS); thus, it is unclear how to make a prediction for a new test point other than by rebuilding the graph with the new test point. Manifold Regularization [1] also has an implementation with an inductive nature. Harmonic Mixtures [22] is a recent work that aims to overcome the limitations of non-inductive inference. It models the data by a generative mixture of Gaussians, and adds discriminative regularization using the graph Laplacian. In this paper, we focus on inductive SSL. In contrast to previous work in this area, we focus on the following important problem that has been overlooked before. As stated in [11], either directly or indirectly, all successful semi-supervised algorithms typically make the cluster assumption, which puts the decision boundary in low-density areas without crossing the high-density regions. Note that the cluster assumption is only meaningful when the labeled and unlabeled data are somehow closely related. When the unlabeled data come from arbitrary data sources, their input patterns may not be closely related to those of the labeled ones. As a result, the labeled and unlabeled data could be well separated, which makes it difficult, if not impossible, to exploit the cluster assumption. Hence, the key challenge is how to leverage the seemingly unrelated unlabeled data to improve the classification accuracy on the target classes. Analogous to transfer learning, in which information from one category may be generalized to others, we propose a scheme that helps the categorization of one data source by making use of information from other unlabeled data sources with little relevance. Our study stands in contrast to previous ones in that we aim to make maximum use of unlabeled data that is weakly related to the test bed.
We refer to this problem as "SSL with weakly related unlabeled data", or SSLW for short. We first build a maximum-margin framework for SSL with weakly related unlabeled data. We then cast the framework as a Second-Order Cone Programming (SOCP) problem that can be efficiently solved. A typical approach for semi-supervised learning with weakly related unlabeled data, presented in the recent study [13], is to first derive a new data representation from the unlabeled data, and then apply a supervised learning technique to the derived new data representation. In [13], the authors proposed an SSL scheme termed self-taught learning, which essentially conducts unsupervised dimension reduction using sparse coding [10]. The new dimensions derived from the unlabeled data can then be used to represent the labeled data points for supervised learning. Notably, self-taught learning [13] performs coding and classification in two separate stages. In contrast, in our method, the construction of a good data representation is combined with the training of a maximum-margin classifier under a unified framework. In particular, the data representation generated by our method exploits both labeled and unlabeled data, which differentiates the proposed framework from self-taught learning. In general, SSLW could improve a wide range of classification tasks. However, in this study, we focus on text categorization with a small training set. Text categorization has been actively studied in the communities of Web data mining, information retrieval and statistical learning [9, 20]. A number of statistical learning techniques have been applied to text categorization [19], including K Nearest Neighbor approaches, decision trees, Bayesian classifiers, inductive rule learning, neural networks, support vector machines (SVM), and logistic regression. Empirical studies [7] have shown that the support vector machine (SVM) is the leading technique for text categorization.
Given the limited amount of labeled documents, the key to semi-supervised text categorization is to exploit the unlabeled documents. The popular implementations of semi-supervised SVMs in [8, 15] are considered to be state-of-the-art in text categorization. For text categorization with a small training pool, it is very likely that a large portion of the words used by the testing documents are unseen in the training set, which could lead to a poor estimation of the similarity between documents. If we can identify the coherence information of words (e.g., word correlation) from both the labeled and unlabeled documents, we will be able to estimate document similarity more accurately, particularly for documents sharing few or no common words, thus improving the overall classification accuracy. A straightforward approach is to utilize word co-occurrence information for computing document similarity. However, this straightforward approach may not serve the best interests of word correlation, because not all of the co-occurrence patterns are useful. Some co-occurrence patterns (e.g., co-occurrence with common words) do not reflect the semantic relations among words, and some are not related to the target class. Consequently, it is critical to identify a subset of co-occurrence patterns that are most informative for the target classification problems. To address this problem, SSLW explicitly estimates the optimal word-correlation matrix for the target document categorization problem. The rest of the paper is organized as follows. Section 2 introduces the basic notation and gives a brief review of the SVM dual formalism. In Section 3, we propose the framework of SSL with weakly-related unlabeled data, followed by an efficient algorithm for its computation in Section 4. Section 5 evaluates SSLW; and in Section 6 we provide some insights into the experimental evidence and discuss future work.
2 Preliminaries

We introduce the notation used throughout this paper and briefly review the SVM dual formulation. Denote L = {(x1, y1), . . . , (xl, yl)} as the collection of labeled documents, where yi is +1 when document xi belongs to a given document category and −1 when it does not (the text categorization problem for multi-labeled documents can be treated as a set of independent binary classification problems). Let U = {xl+1, . . . , xn} be the unlabeled collection of documents. Let V denote the size of the vocabulary. Importantly, as an SSL task with weakly-related unlabeled data, U comes from external resources that are weakly related to the test domain. To facilitate our discussion, we denote the document-word matrix on L by D = (d1, d2, . . . , dl), where di ∈ N^V represents the word-frequency vector for document di. The word-document matrix on L + U is denoted by G = (g1, g2, . . . , gV ), where gi = (gi,1, gi,2, . . . , gi,n) represents the occurrences of the ith word in all the n documents. Recall the dual formalism for SVM:

    max_α  α⊤e − (1/2)(α ◦ y)⊤K(α ◦ y)
    s.t.   α⊤y = 0,  0 ≤ αi ≤ C,  i = 1, 2, . . . , n,    (1)

where α = (α1, α2, . . . , αn) are the weights assigned to the training documents, e is a vector with all elements equal to 1, and the symbol ◦ denotes the element-wise product between two vectors. K ∈ R^{n×n} is the kernel matrix representing the pairwise document similarity, with K = D⊤D.

3 The Framework of Semi-supervised Learning with Weakly-Related Unlabeled Data

In this section, we present the algorithm of Semi-supervised Learning with Weakly-Related Unlabeled Data (SSLW). As analyzed in Section 1, the kernel similarity measure in the standard SVM dual formalism, K = D⊤D, is problematic in the sense that the similarity between two documents will be zero if they do not share any common words, even if a large collection of documents exhibits pairwise relationships between the seen words and the unseen ones.
To solve this problem, we take a word-correlation matrix into account when computing the kernel similarity matrix, and we search for an optimal word-correlation matrix that maximizes the categorization margin. Specifically, we define the kernel matrix as K = D⊤RD, introducing the word-correlation matrix R ∈ R^{V×V}, where each element Ri,j represents the correlation between the ith and the jth words. Note that G⊤G is not a desirable solution for R, because it is improper to assign a high correlation to two words simply because of their high co-occurrence; the two words may not be closely related as judged by the maximum-margin criterion. Therefore, it is important to search for the optimal word-correlation matrix R, in addition to the maximization in Eqn. (1), so as to maximize the categorization margin. We denote the optimal value of the objective function in Eqn. (1) as κ(K):

    κ(K) = max_α  α⊤e − (1/2)(α ◦ y)⊤K(α ◦ y)    (2)

Given that κ(K) is inversely related to the categorization margin [4], minimizing κ(K) is equivalent to maximizing the categorization margin. Now we consider how to make maximum use of the weakly-related source U. The G matrix is crucial in capturing the word correlation information from the weakly-related external source U. Thus, to incorporate the external source into the learning of the word-correlation matrix R, we regularize R according to G by introducing an internal representation of words W = (w1, w2, . . . , wV ), where vector wi is the internal representation of the ith word (this idea is similar to non-negative matrix factorization (NMF) [6]). We expect that W carries an equivalent amount of information as G does, i.e., G and W are roughly equivalent representations of words. As there exists a matrix U such that G can be recovered from W by a linear transformation G = UW, the word-correlation matrix can be computed as R = W⊤W.
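The effect of the word-correlation kernel K = D⊤RD can be seen on a toy example (the vocabulary and correlation values here are invented purely for illustration): two documents that share no words have zero similarity under K = D⊤D, but become similar once R records that two of the words are correlated:

```python
# Two documents with no shared words: under K = D^T D their similarity is
# zero, but with a word-correlation matrix R the kernel K = D^T R D
# recovers their relatedness.  (Toy vocabulary and values, not from data.)
vocab = ["car", "automobile", "banana"]
d1 = [1, 0, 0]   # document 1 contains only "car"
d2 = [0, 1, 0]   # document 2 contains only "automobile"

def kernel(di, dj, R):
    """Kernel entry d_i^T R d_j for a word-correlation matrix R."""
    return sum(di[a] * R[a][b] * dj[b]
               for a in range(len(di)) for b in range(len(dj)))

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # R = identity reduces K to D^T D
R = [[1.0, 0.8, 0.0],                    # "car" and "automobile" correlated
     [0.8, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

print(kernel(d1, d2, I))   # 0   -- no shared words, zero similarity
print(kernel(d1, d2, R))   # 0.8 -- correlation carries the similarity
```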
Further, the constraints G = UW and R = WW⊤ can be combined to obtain the following positive semi-definite constraint:

    [ R   G⊤ ]
    [ G   T  ]  ⪰ 0,    (3)

where T = UU⊤ [18]. Another strategy we use to involve the unlabeled data in the learning of word correlation is to construct the word-correlation matrix R as a non-negative linear combination of the top p right eigenvectors of G, i.e.,

    R = ξ I_V + Σ_{i=1}^{p} (αi − ξ) si si⊤,    (4)

where {si, i = 1, 2, . . . , n} denote the right eigenvectors of matrix G, sorted in descending order of their eigenvalues θi, I_V is the V × V identity matrix, and αi ≥ 0, i = 1, . . . , p, and ξ ≥ 0 are non-negative combination weights. Note that introducing ξ I_V ensures the non-singularity of the matrix R, which is important when computing the expression for matrix T. This simplification of R allows us to effectively extract and utilize the word co-occurrence information in the external source U. Additionally, the positive semi-definite constraint R ⪰ 0 is converted into simple non-negative constraints, i.e., ξ ≥ 0 and {αi ≥ 0}, i = 1, . . . , p. The number of variables in R, which was originally O(V²), is now reduced to p + 1. A further insight into the combination weights reveals that both the straightforward co-occurrence matrix G⊤G and Manifold Regularization use predefined weights for the eigenvector combination, and thus can be seen as special cases of SSLW. Precisely speaking, the straightforward co-occurrence matrix G⊤G directly uses the eigenvalues as the weights. Manifold Regularization does a slightly better job by defining the weights as a strict function of the eigenvalues. Different from both, we give SSLW the entire freedom to learn the weights from data. In this sense, SSLW generalizes these two methods. Based on the above analysis, we reformulate an extension of the SVM dual in Eqn.
(1), to search for an optimal word-correlation matrix R by exploiting the word co-occurrence information in the external source U, under the maximum-margin criterion, i.e.,

    min_{R∈∆, U, W}  κ(D⊤RD)    (5)

where the word-correlation matrix R is restricted to the domain ∆, defined as

    ∆ = { R ∈ S_+^{V×V} :  [ R  G⊤ ; G  T ] ⪰ 0 }    (6)

if we use (3) for R, and

    ∆ = { R = ξ I_V + Σ_{i=1}^{p} (αi − ξ) si si⊤ :  ξ ≥ 0, αi ≥ 0, i = 1, . . . , p }    (7)

if we use Eqn. (4) for R. Given the definition of κ in Eqn. (2), Eqn. (5) is the following min-max problem with no analytic solution:

    min_{R∈∆, U, W}  max_α  α⊤e − (1/2)(α ◦ y)⊤(D⊤RD)(α ◦ y)    (8)

4 An Efficient Algorithm of SSLW

This section provides a computationally efficient and scalable algorithm for solving the min-max problem in Eqn. (8), with the domain ∆ defined in (6). We first rewrite the maximization problem in Eqn. (1) into a minimization problem by computing its dual form:

    min_{t, η, δ, ρ}  t + 2Cδ⊤e
    s.t.  [ K              ρ ◦ y + λe ]
          [ (ρ ◦ y + λe)⊤  t          ]  ⪰ 0,
          ρ = e + η − δ,
          δi ≥ 0, ηi ≥ 0, i = 1, 2, . . . , n.    (9)

Then, by plugging Eqn. (9) back into Eqn. (5), we transform the min-max problem in Eqn. (8) into the following minimization problem:

    min_{t, η, δ, ρ, R}  t + 2Cδ⊤e + Ct tr(T) + Cr tr(R)
    s.t.  [ D⊤RD           ρ ◦ y + λe ]
          [ (ρ ◦ y + λe)⊤  t          ]  ⪰ 0,
          δi ≥ 0, ηi ≥ 0, i = 1, 2, . . . , n,
          ρ = e + η − δ,
          [ R  G⊤ ; G  T ]  ⪰ 0.    (10)

Note that as our goal is to compute R and T, any valid (W, U) is sufficient, and no uniqueness constraints are imposed on W and U. In Eqn. (10), Ct tr(T) and Cr tr(R) serve as sparse regularizers for R and T. They are added to improve the stability of the optimal solution, as well as to favor a simpler model over sophisticated ones. The parameters Ct and Cr are used to weigh the importance of the two regularization terms. The trace heuristic has been widely used to enforce a low-rank matrix by minimizing its trace in place of its rank. In the generalization of the trace heuristic presented by [5], the dual of the spectral norm is the convex envelope of the rank on the set of matrices with norm less than one.
The rank objective can be replaced with the dual of the spectral norm, for rank minimization. In other words, the best convex regularizer one can get for rank minimization is the trace function. Eqn. (10) is a Semi-Definite Programming (SDP) problem [3], and in general can be solved using SDP packages such as SeDuMi [16]. However, solving a SDP problem is computationally expensive and does not easily scale to a large number of training examples. [18] recently provides an elegant scheme of rewriting a SDP problem into a Second Order Cone Programming (SOCP) problem that can be much more efficiently solved [3]. Technically, we adopt this procedure and rewrite Eqn. (10) into a typical SOCP problem that can be efficiently solved. Given the estimated word-correlation matrix R and K = D⊤RD, the example weights α in SVM model can be estimated using the KKT conditions α = (yy⊤◦K)−1(e + η −δ + λy). And the threshold b in SVM can be obtained by solving the primal SVM using the linear programming technique. 5 Evaluation In this section, we evaluate SSLW on text categorization with limited training data. The experiment set up is purely inductive, i.e., the testing feature space is invisible in the training phrase. As an SSL 5 task with weakly-related unlabeled data, the provided unlabeled data have little relevance to the test domain. We show that SSLW can achieve noticeable gains over the state-of-the-art methods in both inductive SSL and text categorization, and we provide insight into why this happens. Following [18], our implementation of SSLW selects the top 200 right eigenvectors of the document-word matrix G matrix to construct the R matrix. As defined in Section 3, the G matrix covers both the training sets and the weakly-related external collection. Evaluation datasets Two standard datasets for text categorization are used as the evaluation test bed: the Reuters-21578 dataset and the WebKB dataset. 
For computational simplicity, 1000 documents are randomly selected from the TREC AP88 dataset and used as an external information source for both datasets. The AP88 dataset is a collection of news documents reported by the Associated Press in 1988. The same pre-processing and indexing procedure is applied to all three datasets, using the Lemur Toolkit¹. For the Reuters-21578 dataset, among the 135 TOPICS categories, the 10 categories with the largest number of documents are selected (see Table 1). This results in a collection of 9,400 documents. For the WebKB dataset, which has seven categories (student, faculty, staff, department, course, project, and other), we discard the category "other" due to its unclear definition (see Table 2). This results in 4,518 data samples in the selected dataset. The Reuters-21578 dataset and the TREC AP88 dataset have very limited topical relevance; the WebKB dataset and the TREC AP88 dataset are even less content-wise related.

Category    earn  acq   money-fx  crude  grain  trade  interest  wheat  ship  corn
# Samples   3987  2448  801       634    628    552    513       306    305   254

Table 1: The ten categories of the Reuters-21578 dataset with the largest number of documents.

Category    course  department  faculty  project  staff  student
# Samples   930     182         1124     504      137    1641

Table 2: The six categories of the WebKB dataset.

Evaluation Methodology. We focus on binary classification. For each class, 4 positive samples and 4 negative samples are randomly selected to form the training set, and the rest of the data serve as the testing set. As a rare classification problem, the testing data is very unbalanced. Therefore, we adopt the area under the ROC curve (AUR) [12] as the quantitative measure of binary classification performance for text categorization. AUR is computed from the real-valued scores returned by the classifiers for the testing documents. Each experiment is repeated ten times, and the AUR averaged over these trials is reported.
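Since AUR is the central evaluation metric, a minimal reference implementation via the Mann-Whitney rank statistic may be useful (the scores in the example are invented for illustration):

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve (AUR) via the Mann-Whitney statistic: the
    probability that a randomly drawn positive outscores a randomly drawn
    negative, with ties counted as 1/2.  O(n^2), written for clarity."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == -1]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, -1, -1]
print(auc_from_scores([0.9, 0.8, 0.2, 0.1], labels))   # 1.0 (perfect ranking)
print(auc_from_scores([0.1, 0.2, 0.8, 0.9], labels))   # 0.0 (reversed ranking)
```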
Baseline Methods We use six baseline methods to demonstrate the strength of SSLW from different perspectives. The first two baselines are the standard SVM and the traditional TSVM. The third baseline is ∇TSVM^2, the inductive component of LDS, which delivers state-of-the-art SSL performance. The fourth baseline, Manifold Regularization^3 (ManifoldR for short), is included as a state-of-the-art SSL approach with an inductive nature and, more importantly, the ability to incorporate word relationships into the regularization. For the fifth baseline, we compare the word-correlation matrix estimated by SSLW with the trivial word-correlation matrix G⊤G; we name this baseline COR. Finally, self-taught learning [13] serves as our sixth baseline, named Self-taught. It uses the unlabeled data to find a low-dimensional representation, and then conducts standard classification in this new space. Text Categorization with Limited Training Data We describe the AUR results for both the Reuters-21578 and WebKB datasets under the different methods. For the Reuters-21578 dataset, Table 3 summarizes the AUR comparison between the six baseline methods and SSLW. Both the mean and the variance of AUR are shown in the table. We observe that SSLW consistently outperforms the six baselines in AUR across most of the ten categories. In general, a t-test shows our performance gain is statistically significant compared to all the baselines at a significance level of 0.05. Detailed analysis is provided below. First, TSVM and ∇TSVM overall perform even worse than the standard SVM.
This observation reveals that if the unlabeled data are only weakly relevant to the target class, they can harm categorization accuracy by simply pushing the decision boundary towards the low-density regions, and away from the high-density areas of the unlabeled data. It also justifies our intuitive hypothesis that the cluster assumption is not valid in this case. Second, the dramatic advantage of SSLW over the COR method confirms our previous analysis: learning a good word-correlation matrix that is jointly determined by the co-occurrence matrix and the classification margin (as SSLW does) achieves significant gains over simply using the trivial form G⊤G. Third, we observe that SSLW consistently improves over Manifold Regularization, except on the "trade" category, where ManifoldR has a slight advantage. Most noticeably, on the "wheat" and "ship" categories, SSLW improves the AUR by more than 10%. These results demonstrate that SSLW is effective in improving text categorization accuracy with a small amount of training data. We also notice that ∇TSVM outperforms TSVM on some categories, but is slightly worse than TSVM on others. The unstable performance of ∇TSVM can possibly be explained by its gradient-descent nature. Finally, our method gains over self-taught learning [13] on most categories. This shows that SSLW is more effective than self-taught learning in using unlabeled data to improve classification. The gains can be attributed to the fact that Self-taught does coding and classification in two separate stages, while SSLW achieves both purposes simultaneously. A more careful examination indicates that SSLW also reduces the standard deviation of the classification accuracy.

^1 http://www.lemurproject.org/
^2 http://www.kyb.tuebingen.mpg.de/bs/people/chapelle/lds/
^3 http://manifold.cs.uchicago.edu/manifold_regularization/software.html
The standard deviations of SSLW are mostly below 2.5%, while those of the baseline methods are mostly above 2.5%. Over all ten categories except "money-fx", SSLW delivers the lowest or second-lowest standard deviation among all the methods. We hypothesize that the large standard deviation of the baseline models is mainly due to the small number of training documents. In this situation, many words appear in only a few training documents. As a result, the association between these words and the class labels cannot be reliably established. In extreme cases where such words do not appear in any training document, no association can be established at all. Evidently, test documents related to these unseen words are likely to be classified incorrectly. By contrast, SSLW can resolve this problem by estimating the word correlation: for a missing word, its association with the class label can be reliably estimated through its correlation with other words that appear frequently in the training examples. Table 4 shows the AUR results on the WebKB dataset, where we observe trends similar to those described above for the Reuters-21578 dataset. SSLW maintains a clear advantage over the six baseline methods across all six categories.
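A toy numeric illustration of this mechanism (not SSLW itself; the correlation matrix and words below are made up): with a word-correlation matrix C, a weight vector w learned on the training words induces effective weights Cw, so a word unseen in training can still receive a nonzero weight through its correlation with observed words:

```python
# Toy illustration: a word with zero learned weight (unseen in training)
# inherits weight through its correlation with a seen word.

def effective_weights(C, w):
    # matrix-vector product C @ w, written out for a plain list-of-lists
    return [sum(C[i][j] * w[j] for j in range(len(w))) for i in range(len(C))]

# hypothetical vocabulary: 0 = "wheat" (seen), 1 = "grain" (unseen), 2 = "stock"
C = [[1.0, 0.8, 0.0],
     [0.8, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
w = [1.0, 0.0, 0.0]          # only "wheat" received weight from training
w_eff = effective_weights(C, w)
```

Here `w_eff[1]` is nonzero even though "grain" never appeared in a training document, while the uncorrelated word keeps zero weight.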
Category   SVM         TSVM        ∇TSVM       ManifoldR   COR         Self-taught  SSLW
earn       82.3 ± 2.1  70.9 ± 4.1  70.1 ± 5.2  86.4 ± 2.1  62.6 ± 5.8  65.9 ± 3.5   89.3 ± 1.6
acq        69.7 ± 3.0  63.1 ± 3.3  59.2 ± 4.1  70.1 ± 3.0  51.2 ± 4.7  68.2 ± 2.6   73.5 ± 3.3
money-fx   71.3 ± 2.6  67.4 ± 3.1  70.0 ± 2.0  74.0 ± 2.6  76.5 ± 4.6  75.7 ± 3.9   82.1 ± 4.4
crude      69.7 ± 3.3  68.6 ± 3.2  59.9 ± 4.7  71.5 ± 3.3  56.0 ± 5.7  67.6 ± 3.1   77.5 ± 1.7
grain      70.7 ± 3.5  68.7 ± 2.3  66.4 ± 3.5  75.1 ± 3.5  62.1 ± 5.4  69.0 ± 2.9   82.7 ± 2.0
trade      82.7 ± 3.4  65.1 ± 5.0  71.5 ± 4.2  85.1 ± 3.4  78.8 ± 5.2  78.5 ± 4.4   84.4 ± 3.9
interest   79.3 ± 1.5  60.2 ± 3.9  70.4 ± 3.1  85.0 ± 1.5  69.4 ± 4.7  76.5 ± 2.5   89.4 ± 1.8
wheat      77.6 ± 3.8  61.9 ± 3.6  64.7 ± 4.6  79.1 ± 3.8  54.4 ± 5.7  67.1 ± 2.6   89.4 ± 1.6
ship       70.4 ± 2.6  64.5 ± 2.9  65.8 ± 3.9  72.3 ± 2.6  52.1 ± 5.0  68.0 ± 2.1   82.8 ± 1.4
corn       80.8 ± 2.9  65.4 ± 2.1  66.5 ± 5.3  77.0 ± 5.0  54.5 ± 5.6  66.8 ± 3.7   86.4 ± 2.3

Table 3: The AUR results (%) on the Reuters-21578 dataset with 8 training examples per category.

Category   SVM         TSVM        ∇TSVM       ManifoldR   COR         Self-taught  SSLW
course     66.8 ± 2.2  61.5 ± 2.0  61.8 ± 2.9  68.4 ± 2.8  63.3 ± 5.4  66.0 ± 3.9   76.2 ± 2.5
dept.      72.2 ± 2.8  58.8 ± 5.2  63.7 ± 3.5  73.4 ± 5.9  58.3 ± 5.1  70.8 ± 3.6   87.6 ± 2.2
faculty    56.7 ± 3.4  56.4 ± 2.6  54.2 ± 3.0  56.9 ± 2.8  53.1 ± 4.6  61.7 ± 3.3   61.6 ± 3.4
project    59.6 ± 2.9  57.0 ± 2.3  60.3 ± 1.4  61.8 ± 3.1  50.0 ± 5.9  58.7 ± 3.0   69.5 ± 3.2
staff      58.1 ± 1.6  53.0 ± 1.1  51.6 ± 1.3  52.9 ± 0.9  46.4 ± 1.6  59.9 ± 1.9   58.3 ± 1.5
student    59.2 ± 2.7  54.0 ± 2.3  55.3 ± 2.7  59.4 ± 3.1  56.0 ± 4.1  61.0 ± 1.9   67.7 ± 2.6

Table 4: The AUR results (%) on the WebKB dataset with 8 training examples per category.

6 Conclusion

This paper explores a new challenge in semi-supervised learning: how to leverage unlabeled information that is only weakly related to the target classes to improve classification performance. We propose the algorithm of Semi-supervised Learning with Weakly-Related Unlabeled Data (SSLW) to address this challenge.
SSLW extends the theory of support vector machines to effectively identify the co-occurrence patterns that are most informative for the categorization margin and ignore those that are irrelevant to the categorization task. Applied to text categorization with a limited number of training samples, SSLW automatically estimates the word-correlation matrix by effectively exploiting the word co-occurrence embedded in the weakly-related unlabeled corpus. Empirical studies show that SSLW significantly improves both the accuracy and the reliability of text categorization, given a small training pool and additional unlabeled data that are only weakly related to the test bed. Although SSLW is presented in the context of text categorization, it potentially facilitates classification tasks in a variety of domains. In future work, we will evaluate the benefits of SSLW on larger data sets and in other domains. We will also investigate SSLW’s dependence on the number of eigenvectors used, and its behavior when varying the number of labeled training examples.

Acknowledgments

The work was supported by the National Science Foundation (IIS-0643494) and the National Institutes of Health (1R01GM079688-01). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF and NIH.

References
[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Technical report, Univ. of Chicago, Dept. of Comp. Sci., 2004.
[2] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In Proc. NIPS, 1998.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] C. Burges. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2), 1998.
[5] M. Fazel, H. Hindi, and S. Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proc. American Control Conf., 2001.
[6] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res., 5, 2004.
[7] T. Joachims. Text categorization with support vector machines: learning with many relevant features. In Proc. ECML, 1998.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In Proc. ICML, 1999.
[9] M. Lan, C. L. Tan, H.-B. Low, and S. Y. Sung. A comprehensive comparative study on term weighting schemes for text categorization with support vector machines. In Proc. WWW, 2005.
[10] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. In Proc. NIPS, 2007.
[11] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In Proc. Inter. Workshop on Artificial Intelligence and Statistics, 2005.
[12] F. Provost, T. Fawcett, and R. Kohavi. The case against accuracy estimation for comparing induction algorithms. In Proc. ICML, 1998.
[13] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled data. In Proc. ICML, 2007.
[14] M. Seeger. Learning with labeled and unlabeled data. Technical report, Univ. of Edinburgh, 2001.
[15] V. Sindhwani and S. S. Keerthi. Large scale semi-supervised linear support vector machines. In Proc. ACM SIGIR, 2006.
[16] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11/12(1–4), 1999.
[17] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In Proc. NIPS, 2002.
[18] L. Yang, R. Jin, C. Pantofaru, and R. Sukthankar. Discriminative cluster refinement: Improving object category recognition given limited training data. In Proc. CVPR, 2007.
[19] Y. Yang. An evaluation of statistical approaches to text categorization. Journal of Info. Retrieval, 1999.
[20] Y. Yang and J. O. Pedersen. A comparative study on feature selection in text categorization. In Proc. ICML, 1997.
[21] X. Zhu. Semi-supervised learning literature survey. Technical report, UW-Madison, Comp. Sci., 2006.
[22] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. ICML, 2003.
Scalable Algorithms for String Kernels with Inexact Matching

Pavel P. Kuksa, Pai-Hsi Huang, Vladimir Pavlovic
Department of Computer Science, Rutgers University, Piscataway, NJ 08854
{pkuksa,paihuang,vladimir}@cs.rutgers.edu

Abstract

We present a new family of linear-time algorithms for string comparison with mismatches under the string kernels framework. Based on sufficient statistics, our algorithms improve the theoretical complexity bounds of existing approaches while scaling well in the sequence alphabet size, the number of allowed mismatches, and the size of the dataset. In particular, on large alphabets and under loose mismatch constraints our algorithms are several orders of magnitude faster than the existing algorithms for string comparison under the mismatch similarity measure. We evaluate our algorithms on synthetic data and real applications in music genre classification, protein remote homology detection, and protein fold prediction. The scalability of the algorithms allows us to consider complex sequence transformations, modeled using longer string features and larger numbers of mismatches, leading to state-of-the-art performance with significantly reduced running times.

1 Introduction

Analysis of large-scale sequential data has become an important task in machine learning and data mining, inspired by applications such as biological sequence analysis and text and audio mining. Classification of string data, sequences of discrete symbols, has attracted particular interest and has led to a number of new algorithms [1, 2, 3, 4]. They exhibit state-of-the-art performance on tasks such as protein superfamily and fold prediction, music genre classification, and document topic elucidation. Classification of data in sequential domains is made challenging by the variability in the sequence lengths, the potential existence of important features on multiple scales, as well as the size of the alphabets and datasets.
Typical alphabet sizes can vary widely, from 4 nucleotides in DNA sequences up to thousands of words from a language lexicon for text documents. Strings within the same class, such as proteins in one fold or documents about politics, can exhibit wide variability in primary sequence content. Moreover, important datasets continue to increase in size, easily reaching millions of sequences. As a consequence, the resulting algorithms need to handle large alphabets and datasets efficiently and to establish measures of similarity under complex sequence transformations in order to classify the data accurately. A number of state-of-the-art approaches to scoring similarity between pairs of sequences in a database rely on fixed, spectral representations of sequential data and the notion of mismatch kernels, cf. [2, 3]. In that framework the induced representation of a sequence is typically the spectrum (counts) of all short substrings (k-mers) contained within it. The similarity score is established by allowing transformations of the original k-mers based on different models of deletions, insertions, and mutations. However, computing those representations efficiently for large alphabet sizes and "loose" similarity models can be computationally challenging. For instance, the complexity of an efficient trie-based computation [3, 5] of the mismatch kernel between two strings X and Y depends strongly on the alphabet size and the number of allowed mismatches: O(k^(m+1) |Σ|^m (|X| + |Y|)) for k-mers (contiguous substrings of length k) with up to m mismatches and alphabet size |Σ|. This limits the applicability of such algorithms to simpler transformation models (smaller k and m) and smaller alphabets, reducing their practical utility on complex real data. As an alternative, more complex transformation models such as [2] lead to state-of-the-art predictive performance at the expense of increased computational effort.
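For concreteness, the spectrum representation described above (counts of all contiguous k-mers in a sequence) can be sketched in a few lines of Python; the function name is ours, not the paper's:

```python
from collections import Counter

# Sketch: the k-mer spectrum of a sequence -- counts of every contiguous
# substring of length k. This is the exact-match (m = 0) feature map.

def spectrum(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
```

For example, `spectrum("abab", 2)` yields counts `{"ab": 2, "ba": 1}`; the spectrum kernel between two sequences is the dot product of two such count vectors.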
In this work we propose novel algorithms for modeling sequences under complex transformations (such as multiple insertions, deletions, and mutations) that exhibit state-of-the-art performance on a variety of distinct classification tasks. In particular, we present new algorithms for inexact (e.g. with mismatches) string comparison that improve currently known time bounds for such tasks and show orders-of-magnitude running time improvements. The algorithms rely on an efficient implicit computation of mismatch neighborhoods and k-mer statistics on sets of sequences. This leads to a mismatch kernel algorithm with complexity O(c_{k,m} (|X| + |Y|)), where c_{k,m} is independent of the alphabet Σ. The algorithm can be easily generalized to other families of string kernels, such as the spectrum and gapped kernels [6], as well as to semi-supervised settings such as the neighborhood kernel of [7]. We demonstrate the benefits of our algorithms on many challenging classification problems, such as detecting homology (evolutionary similarity) of remotely related proteins, recognizing protein folds, and classifying music samples. The algorithms display state-of-the-art classification performance and run substantially faster than existing methods. The low computational complexity of our algorithms opens the possibility of analyzing very large datasets under both fully-supervised and semi-supervised settings with modest computational resources.

2 Related Work

Over the past decade, various methods have been proposed to solve the string classification problem, including generative approaches, such as HMMs, and discriminative approaches. Among the discriminative approaches, kernel-based [8] machine learning methods provide the most accurate results in many sequence analysis tasks [2, 3, 4, 6]. Sequence matching is frequently based on the common co-occurrence of exact sub-patterns (k-mers, features), as in spectrum kernels [9] or substring kernels [10].
Inexact comparison in this framework is typically achieved using different families of mismatch [3] or profile [2] kernels. Both the spectrum-k and the mismatch(k,m) kernels directly extract string features from the observed sequence X. On the other hand, the profile kernel, proposed by Kuang et al. in [2], builds a profile [11] P_X and uses a similar |Σ|^k-dimensional representation derived from P_X. Constructing the profile for each sequence may not be practical in some application domains, since the size of the profile depends on the size of the alphabet set. While for bio-sequences |Σ| = 4 or 20, for music or text classification |Σ| can potentially be very large, on the order of tens of thousands of symbols. In this case, a very simple semi-supervised learning method, the sequence neighborhood kernel, can be employed [7] as an alternative to lone k-mers with many mismatches. The most efficient available trie-based algorithms [3, 5] for mismatch kernels have a strong dependency on the size of the alphabet set and the number of allowed mismatches, both of which need to be restricted in practice to control the complexity of the algorithm. Under the trie-based framework, the list of k-mers extracted from the given strings is traversed in a depth-first search with branches corresponding to all possible σ ∈ Σ. Each leaf node at depth k corresponds to a particular k-mer feature (either an exact or an inexact instance of the observed exact string features) and contains a list of matching features from each string. The kernel matrix is updated at leaf nodes with the corresponding counts. The complexity of the trie-based algorithm for mismatch kernel computation for two strings X and Y is O(k^(m+1) |Σ|^m (|X| + |Y|)) [3].
The algorithm complexity depends on the size of Σ since, during a trie traversal, possible substitutions are drawn from Σ explicitly; consequently, to control the complexity of the algorithm we need to restrict the number of allowed mismatches (m) as well as the alphabet size (|Σ|). Such limitations hinder wide application of this powerful computational tool: in biological sequence analysis, mutations, insertions and deletions frequently co-occur, establishing the need to relax the parameter m, while restricting the size of the alphabet strongly limits applications of the mismatch model. While other efficient string algorithms exist, such as [6, 12] and the suffix-tree based algorithms in [10], they do not readily extend to the mismatch framework. In this study, we aim to extend the works presented in [6, 10] and close the existing gap in theoretical complexity between the mismatch and other fast string kernels.

3 Combinatorial Algorithm

In this section we develop our first improved algorithm for kernel computations with mismatches, which serves as a starting point for our main algorithm in Section 4.

3.1 Spectrum and Mismatch Kernels Definition

Given a sequence X with symbols from alphabet Σ, the spectrum-k kernel [9] and the mismatch(k,m) kernel [3] induce the following |Σ|^k-dimensional representation for the sequence:

    Φ(X) = ( Σ_{α∈X} I_m(α, γ) )_{γ∈Σ^k},    (1)

where I_m(α, γ) = 1 if α ∈ N_{k,m}(γ), and N_{k,m}(γ) is the mutational neighborhood, the set of all k-mers that differ from γ by at most m mismatches. Note that, by definition, m = 0 for spectrum kernels. The mismatch kernel is then defined as

    K(X, Y | k, m) = Σ_{γ∈Σ^k} c_m(γ|X) c_m(γ|Y),    (2)

where c_m(γ|X) = Σ_{α∈X} I_m(γ, α) is the number of times a contiguous substring of length k (k-mer) γ occurs in X with no more than m mismatches.

3.2 Intersection-based Algorithm

Our first algorithm presents a novel way of performing local inexact string matching with the following key properties: a.
parameter independence: the complexity is independent of |Σ| and the mismatch parameter m; b. in-place operation: only min(2m, k) + 1 extra space is used, for an auxiliary look-up table; c. linear complexity in k, the length of the substring (as opposed to the exponential k^m). To develop the algorithm, we first write the mismatch kernel (Equation 2) in an equivalent form:

    K(X, Y | k, m) = Σ_{i_x=1}^{n_x−k+1} Σ_{i_y=1}^{n_y−k+1} Σ_{a∈Σ^k} I_m(a, x_{i_x:i_x+k−1}) I_m(a, y_{i_y:i_y+k−1})    (3)
                   = Σ_{i_x=1}^{n_x−k+1} Σ_{i_y=1}^{n_y−k+1} |N(x_{i_x:i_x+k−1}, m) ∩ N(y_{i_y:i_y+k−1}, m)|    (4)
                   = Σ_{i_x=1}^{n_x−k+1} Σ_{i_y=1}^{n_y−k+1} I(x_{i_x:i_x+k−1}, y_{i_y:i_y+k−1})    (5)

where I(a, b) is the number of induced (neighboring) k-mers common to a and b (i.e. I(a, b) is the size of the intersection of the mismatch neighborhoods of a and b). The key observation here is that if we can compute I(a, b) efficiently, then the kernel evaluation problem reduces to a pairwise comparison over all pairs of observed k-mers a and b in the two sequences. The complexity of such a procedure is O(c|X||Y|), where c is the cost of evaluating I(a, b) for any given k-mers a and b. In fact, for fixed k, m and Σ, this quantity depends only on the Hamming distance d(a, b) (i.e. the number of mismatches) and can be evaluated in advance, as we show in Section 3.3. As a result, the intersection values can be looked up in a table in constant time during matching. Note that the summation now shows no explicit dependency on |Σ| and m. In summary, given two strings X and Y, the algorithm (Algorithm 1) compares pairs of observed k-mers from X and Y and computes the mismatch kernel according to Equation 5. Algorithm 1.
(Hamming-Mismatch) Mismatch algorithm based on Hamming distance
Input: strings X, Y, |X| = n_x, |Y| = n_y; parameters k, m; lookup table I for intersection sizes
Evaluate the kernel using Equation 5:
    K(X, Y | k, m) = Σ_{i_x=1}^{n_x−k+1} Σ_{i_y=1}^{n_y−k+1} I(d(x_{i_x:i_x+k−1}, y_{i_y:i_y+k−1}) | k, m),
where I(d) is the intersection size for distance d.
Output: mismatch kernel value K(X, Y | k, m)

The overall complexity of the algorithm is O(k n_x n_y), since the Hamming distances between all k-mer pairs observed in X and Y need to be known. In the following section, we discuss how to compute the size of the intersection efficiently.

3.3 Intersection Size: Closed-Form Solution

The number of neighboring k-mers shared by two observed k-mers a and b can be computed directly, in closed form, from the Hamming distance d(a, b) for fixed k and m, requiring no explicit traversal of the k-mer space as in the case of trie-based computations. We first consider the case a = b (i.e. d(a, b) = 0). The intersection size then equals the size of the (k, m)-mismatch neighborhood (writing C(n, r) for the binomial coefficient):

    I(a, b) = |N_{k,m}| = Σ_{i=0}^{m} C(k, i) (|Σ| − 1)^i.

For higher values of the Hamming distance d, the key observation is that for fixed Σ, k, and m, given any distance d(a, b) = d, I(a, b) is also a constant, regardless of the mismatch positions. As a result, intersection values can always be pre-computed once, stored, and looked up when necessary. To illustrate this, we show two examples, for m = 1 and m = 2:

    I(a, b) (m = 1) = |N_{k,m}|,  if d(a, b) = 0
                    = |Σ|,        if d(a, b) = 1
                    = 2,          if d(a, b) = 2

    I(a, b) (m = 2) = |N_{k,m}|,                              if d(a, b) = 0
                    = 1 + k(|Σ| − 1) + (k − 1)(|Σ| − 1)^2,    if d(a, b) = 1
                    = 1 + 2(k − 1)(|Σ| − 1) + (|Σ| − 1)^2,    if d(a, b) = 2
                    = 6(|Σ| − 1),                             if d(a, b) = 3
                    = C(4, 2),                                if d(a, b) = 4

In general, the intersection size can be written in the weighted form Σ_i w_i (|Σ| − 1)^i and can be precomputed in constant time.
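As an illustrative sketch (not the paper's optimized implementation), Algorithm 1 can be realized directly in Python. Here the lookup table I(d) is built by brute-force intersection of (k, m)-mismatch neighborhoods over a tiny alphabet, rather than from the closed form, so it is only feasible for small k and |Σ|:

```python
from itertools import product

# Sketch of Algorithm 1 (Hamming-Mismatch). The table I(d) depends only on
# the Hamming distance d(a, b), so it is measured once on a canonical pair
# per distance; requires |sigma| >= 2. Brute force -- illustration only.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def neighborhood(a, m, sigma):
    # all k-mers within Hamming distance m of a (the set N(a, m))
    return {g for g in product(sigma, repeat=len(a)) if hamming(a, g) <= m}

def intersection_table(k, m, sigma):
    table = {}
    for d in range(min(2 * m, k) + 1):
        a = sigma[0] * k
        b = sigma[1] * d + sigma[0] * (k - d)   # a pair at distance exactly d
        table[d] = len(neighborhood(a, m, sigma) & neighborhood(b, m, sigma))
    return table

def mismatch_kernel(X, Y, k, m, sigma):
    I = intersection_table(k, m, sigma)
    kx = [X[i:i + k] for i in range(len(X) - k + 1)]
    ky = [Y[i:i + k] for i in range(len(Y) - k + 1)]
    # pairs at distance > 2m have empty neighborhood intersection
    return sum(I.get(hamming(a, b), 0) for a in kx for b in ky)
```

For k = 2, m = 1 over Σ = {a, b}, the table produced this way agrees with the closed form above: I(0) = |N_{2,1}| = 3, I(1) = |Σ| = 2, I(2) = 2.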
4 Mismatch Algorithm based on Sufficient Statistics

In this section we further develop the ideas of the previous section and present an improved mismatch algorithm that does not require pairwise comparison of the k-mers between two strings and depends linearly on the sequence length. The crucial observation is that in Equation 5, I(a, b) is non-zero only when d(a, b) ≤ 2m. As a result, the kernel computed in Equation 5 is incremented by only min(2m, k) + 1 distinct values, corresponding to the min(2m, k) + 1 possible intersection sizes. We can therefore rewrite the equation in the following form:

    K(X, Y | m, k) = Σ_{i_x=1}^{n_x−k+1} Σ_{i_y=1}^{n_y−k+1} I(x_{i_x:i_x+k−1}, y_{i_y:i_y+k−1}) = Σ_{i=0}^{min(2m,k)} M_i I_i,    (6)

where I_i is the size of the intersection of the k-mer mutational neighborhoods at Hamming distance i, and M_i is the number of observed k-mer pairs in X and Y with Hamming distance i. The problem of computing the kernel has thus been reduced to a single summation. We showed in Section 3.3 that, given any i, we can compute I_i in advance. The crucial task now becomes computing the sufficient statistics M_i efficiently. In the following, we show how to compute the mismatch statistics {M_i} in O(c_{k,m} (n_x + n_y)) time, where c_{k,m} is a constant that does not depend on the alphabet size. We formulate the task of inferring the matching statistics {M_i} as the following auxiliary counting problem:

Mismatch Statistic Counting: Given a set of n k-mers from two strings X and Y, for each Hamming distance i = 0, 1, ..., min(2m, k), output the number of k-mer pairs (a, b), a ∈ X, b ∈ Y, with d(a, b) = i.

In this problem it is not necessary to know the distance between each pair of k-mers; one only needs to know the number of pairs (M_i) at each distance i. We show next that the above problem of computing the matching statistics can be solved in time linear in the number n of k-mers, using multiple rounds of counting sort as a sub-algorithm.
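To make the reduction in Equation 6 concrete, the following sketch tallies the mismatch statistics M_i directly, in quadratic time (unlike the counting-sort scheme of Section 4), and then evaluates the kernel as the weighted sum Σ_i M_i I_i; the intersection sizes I_i are assumed to be precomputed and passed in:

```python
from collections import Counter

# Sketch of Equation (6): the mismatch kernel as sum_i M_i * I_i, where
# M_i counts observed k-mer pairs at Hamming distance i. M_i is tallied
# here by direct pairwise comparison -- illustration only, not the
# linear-time counting-sort algorithm of Section 4.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def mismatch_statistics(X, Y, k):
    kx = [X[i:i + k] for i in range(len(X) - k + 1)]
    ky = [Y[i:i + k] for i in range(len(Y) - k + 1)]
    return Counter(hamming(a, b) for a in kx for b in ky)

def kernel_from_statistics(M, I):
    # distances missing from I contribute nothing (empty intersections)
    return sum(M[d] * I[d] for d in M if d in I)
```

For X = "abba", Y = "baab", k = 2 the statistics are M_0 = 2, M_1 = 4, M_2 = 3 (nine k-mer pairs in total, matching T = (n_x − k + 1)(n_y − k + 1) = 9).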
We first consider the problem of computing the number of k-mer pairs at distance 0, i.e. the number of exact matches. In this case, we can apply counting sort to order all k-mers lexicographically and find the number of exact matches by scanning the sorted list. The counting then requires linear O(kn) time. Efficient direct computation of M_i for any i > 0 is difficult (it requires quadratic time); we therefore take another approach and first compute inexact cumulative mismatch statistics,

    C_i = M_i + Σ_{j=0}^{i−1} C(k−j, i−j) M_j,

which overcount the number of k-mer pairs at a given distance i (here C(·, ·) denotes the binomial coefficient, while C_i denotes a cumulative statistic), as follows. Consider two k-mers a and b. Pick i positions and remove the symbols at those positions from both k-mers to obtain the (k−i)-mers a′ and b′. The key observation is that d(a′, b′) = 0 ⇒ d(a, b) ≤ i. As a result, given n k-mers, we can compute the cumulative mismatch statistic C_i in linear time using C(k, i) rounds of counting sort on (k−i)-mers. The exact mismatch statistics M_i can then be obtained from C_i by subtracting the lower-order counts to compensate for the overcounting:

    M_i = C_i − Σ_{j=0}^{i−1} C(k−j, i−j) M_j,  i = 0, ..., min(min(2m, k), k−1).    (7)

The last mismatch statistic M_k can be computed by subtracting the preceding statistics M_0, ..., M_{k−1} from the total number of possible matches:

    M_k = T − Σ_{j=0}^{k−1} M_j,  where T = (n_x − k + 1)(n_y − k + 1).    (8)

Our algorithm for mismatch kernel computation based on sufficient statistics is summarized in Algorithm 2. The overall complexity of the algorithm is O(n c_{k,m}) with the constant c_{k,m} = Σ_{l=0}^{min(2m,k)} C(k, l)(k − l), independent of the size of the alphabet set, where C(k, l) is the number of rounds of counting sort for evaluating the cumulative mismatch statistic C_l.

Algorithm 2. (Mismatch-SS) Mismatch kernel algorithm based on sufficient statistics
Input: strings X, Y, |X| = n_x, |Y| = n_y; parameters k, m; pre-computed intersection values I
1. Compute the min(2m, k) cumulative matching statistics C_i using counting sort.
2.
Compute the exact matching statistics M_i using Equation 7.
3. Evaluate the kernel using Equation 6: K(X, Y | m, k) = Σ_{i=0}^{min(2m,k)} M_i I_i.
Output: mismatch kernel value K(X, Y | k, m)

5 Extensions

Our algorithmic approach can also be applied to a variety of existing string kernels, leading to very efficient and simple algorithms that could benefit many applications.

Spectrum Kernels. The spectrum kernel [9] in our notation is the first sufficient statistic M_0, i.e. K(X, Y | k) = M_0, which can be computed in k rounds of counting sort (i.e. in O(kn) time).

Gapped Kernels. The gapped kernels [6] measure similarity between strings X and Y based on the co-occurrence of gapped instances g, |g| = k + m > k, of k-long substrings:

    K(X, Y | k, g) = Σ_{γ∈Σ^k} ( Σ_{g∈X, |g|=k+m} I(γ, g) ) ( Σ_{g∈Y, |g|=k+m} I(γ, g) ),    (9)

where I(γ, g) = 1 when γ is a subsequence of g. Similarly to the approach for extracting cumulative mismatch statistics in Algorithm 2, to compute the gapped(g,k) kernel we perform a single round of counting sort over the k-mers contained in the g-mers. This gives a very simple and efficient O(C(g, k) k n)-time algorithm for gapped kernel computations.

Wildcard Kernels. The wildcard(k,m) kernel [6] in our notation is the sum of the cumulative statistics, K(X, Y | k, m) = Σ_{i=0}^{m} C_i, i.e. it can be computed in Σ_{i=0}^{m} C(k, i) rounds of counting sort, giving a simple and efficient O((Σ_{i=0}^{m} C(k, i)(k − i)) n) algorithm.

Spatial Kernels. The spatial(k,t,d) kernel [13] can be computed by sorting kt-mers iteratively for every arrangement of t k-mers spatially constrained by distance d.

Neighborhood Kernels. The sequence neighborhood kernels [7] have proved to be a powerful tool in many sequence analysis tasks.
The method uses the unlabeled data to form a set of neighbors for the train/test sequences and measures the similarity of two sequences X and Y using their neighborhoods:

    K(X, Y) = Σ_{x∈N(X)} Σ_{y∈N(Y)} K(x, y),    (10)

where N(X) is the sequence neighborhood that contains neighboring sequences from the unlabeled dataset, including X itself. Note that the kernel value, if computed directly using Equation 10, incurs quadratic complexity in the size of the neighborhoods. As in the single-string case, using our algorithmic approach we can compute the neighborhood kernel (over the string sets) by jointly sorting the observed k-mers in N(X) and N(Y) and applying the desired kernel evaluation method (spectrum, mismatch, or gapped). Under this setting, the neighborhood kernel can be evaluated in time linear in the neighborhood size. This leads to very efficient algorithms for computing sequence neighborhood kernels even for very large datasets, as we show in the experimental section.

6 Evaluation

We study the performance of our algorithms, both in running time and in predictive accuracy, on synthetic data and on standard benchmark datasets for protein sequence analysis and music genre classification. The reduced running time requirements of our algorithms open the possibility of considering "looser" mismatch measures with larger k and m. The results presented here demonstrate that such mismatch kernels with larger (k, m) can lead to state-of-the-art predictive performance even when compared with more complex models such as [2]. We use three standard benchmark datasets to compare with previously published results: the SCOP dataset (7329 sequences, 2862 of them labeled) [7] for remote protein homology detection, the Ding-Dubchak dataset^1 (27 folds, 694 sequences) [14, 15] for protein fold recognition, and the music genre data^2 (10 classes, 1000 sequences, |Σ| = 1024) [16] for multi-class genre prediction.
For protein sequence classification under the semi-supervised setting, we also use the Protein Data Bank (PDB, 17,232 sequences), Swiss-Prot (101,602 sequences), and the non-redundant (NR) database as unlabeled datasets, following the setup of [17]. All experiments are performed on a single 2.8 GHz CPU. The datasets used in our experiments and the supplementary data/code are available at http://seqam.rutgers.edu/new-inexact/new-inexact.html.

6.1 Running Time Analysis

We compare the running time of our algorithm on synthetic and real data with the trie-based computations. For the synthetic data, we generate strings of length n = 10^5 over alphabets of different sizes and measure the running time of the trie-based and our sufficient-statistics based algorithms for evaluating the mismatch string kernel. Figure 1 shows the relative running time T_trie/T_ss, on a logarithmic scale, of mismatch-trie and mismatch-SS as a function of the alphabet size. As can be seen from the plot, our algorithm demonstrates improvements of several orders of magnitude, especially for large alphabet sizes. Table 1 compares the running times of our algorithm and the trie-based algorithm on different real datasets (proteins, DNA, text, music) for a single kernel entry (pair of strings) computation. We observe speed improvements ranging from 100 to 10^6 times depending on the alphabet size. We also measure the running time for the full 7329-by-7329 mismatch(5,2) kernel matrix computation on the SCOP dataset under the supervised setting. The running time of our algorithm is 1525 seconds, compared to 196052 seconds for the trie-based computation. The obtained speed-up of 128 times is as expected from the theoretical analysis (our algorithm performs 31 counting-sort iterations in total over 5-, 4-, 3-, 2-, and 1-mers, which gives a running time ratio of approximately 125 when compared to the trie-based complexity). We observe similar improvements under
We observe similar improvements under the semi-supervised setting for neighborhood mismatch kernels; for example, computing a smaller neighborhood mismatch(5,2) kernel matrix for the labeled sequences only (a 2862-by-2862 matrix) using the Swiss-Prot unlabeled dataset takes 1,480 seconds with our algorithm, whereas performing the same task with the trie-based algorithm takes about 5 days.

1 http://ranger.uta.edu/~chqding/bioinfo.html
2 http://opihi.cs.uvic.ca/sound/genres

Figure 1: Relative running time Ttrie/Tss (on a logarithmic scale) of the mismatch-trie and mismatch-ss algorithms as a function of the alphabet size (mismatch(5,1) kernel, n = 10^5).

Table 1: Running time (in seconds) for kernel computation between two strings on real data

             long protein  protein  dna     text    music
n            36672         116      570     242     6892
|Σ|          20            20       4       29224   1024
(5,1)-trie   1.6268        0.0212   0.0260  20398   526.8
(5,1)-ss     0.1987        0.0052   0.0054  0.0178  0.0331
time ratio   8             4        5       10^6    16,000
(5,2)-trie   31.5519       0.2918   0.4800  -       -
(5,2)-ss     0.2957        0.0067   0.0064  0.0649  0.0941
time ratio   100           44       75      -       -

6.2 Empirical performance analysis

In this section we show predictive performance results for several sequence analysis tasks using our new algorithms. We consider the tasks of multi-class music genre classification [16], with results in Table 2, and protein remote homology (superfamily) prediction [9, 2, 18] in Table 3. We also include preliminary results for multi-class fold prediction [14, 15] in Table 4. On the music classification task, we observe significant improvements in accuracy for larger numbers of mismatches. The obtained error rate (35.6%) on this dataset compares well with the state-of-the-art results based on the same signal representation in [16].
Remote protein homology detection, as is evident from Table 3, clearly benefits from a larger number of allowed mismatches, because remotely related proteins are likely to be separated by multiple mutations or insertions/deletions. For example, we observe an improvement in the average ROC-50 score from 41.92 to 52.00 under the fully-supervised setting, and similarly significant improvements in the semi-supervised settings. In particular, the result on the Swiss-Prot dataset for the (7,3)-mismatch kernel is very promising and compares well with the best results of the state-of-the-art, but computationally more demanding, profile kernels [2]. The neighborhood kernels proposed by Weston et al. have already shown very promising results in [7], though slightly worse than the profile kernel. However, using our new algorithm that significantly improves the speed of the neighborhood kernels, we show that with a larger number of allowed mismatches the neighborhood kernel can perform even better than the state-of-the-art profile kernel: the (7,3)-mismatch neighborhood achieves an average ROC-50 score of 86.32, compared to 84.00 for the profile kernel on the Swiss-Prot dataset. This is an important result that addresses a main drawback of the neighborhood kernels, their running time [7, 2].

Table 2: Classification performance on music genre classification (multi-class)

Method          Error
Mismatch (5,1)  42.6 ± 6.34
Mismatch (5,2)  35.6 ± 4.99

Table 3: Classification performance on protein remote homology prediction

                    mismatch (5,1)   mismatch (5,2)   mismatch (7,3)
dataset             ROC    ROC50     ROC    ROC50     ROC    ROC50
SCOP (supervised)   87.75  41.92     90.67  49.09     91.31  52.00
SCOP (unlabeled)    90.93  67.20     91.42  69.35     92.27  73.29
SCOP (PDB)          97.06  80.39     97.24  81.35     97.93  84.56
SCOP (Swiss-Prot)   96.73  81.05     97.05  82.25     97.78  86.32

For multi-class protein fold recognition (Table 4), we similarly observe improvements in performance with larger numbers of allowed mismatches.
The balanced error of 25% for the (7,3)-mismatch neighborhood kernel using Swiss-Prot compares well with the best error rate of 26.5% for the state-of-the-art profile kernel with adaptive codes in [15], which used a much larger non-redundant (NR) dataset. Using NR, the balanced error of the (7,3)-mismatch further reduces to 22.5%.

Table 4: Classification performance on fold prediction (multi-class)

Method           Error  Top5 Error  Bal. Error  Top5 Bal. Error  Recall  Top5 Recall  Precision  Top5 Precision  F1     Top5 F1
Mismatch (5,1)   51.17  22.72       53.22       28.86            46.78   71.14        90.52      95.25           61.68  81.45
Mismatch (5,2)   42.30  19.32       44.89       22.66            55.11   77.34        67.36      84.77           60.62  80.89
Mismatch (5,2)†  27.42  14.36       24.98       13.36            75.02   86.64        79.01      91.02           76.96  88.78
Mismatch (7,3)   43.60  19.06       47.13       22.76            52.87   77.24        84.65      91.95           65.09  83.96
Mismatch (7,3)†  26.11  12.53       25.01       12.57            74.99   87.43        85.00      92.78           79.68  90.02
Mismatch (7,3)‡  23.76  11.75       22.49       12.14            77.59   87.86        84.90      91.99           81.04  89.88

† used the Swiss-Prot sequence database; ‡ used the NR (non-redundant) database

7 Conclusions

We presented new algorithms for inexact matching of discrete-valued string representations that reduce the computational complexity of current algorithms while demonstrating state-of-the-art performance and significantly improved running times. This improvement makes string kernels with approximate, looser matching a viable alternative for practical tasks of sequence analysis. Our algorithms work with large databases in supervised and semi-supervised settings and scale well in the alphabet size and the number of allowed mismatches. As a consequence, the proposed algorithms can be readily applied to other challenging problems in sequence analysis and mining.

References

[1] Jianlin Cheng and Pierre Baldi. A machine learning information retrieval approach to protein fold recognition. Bioinformatics, 22(12):1456–1463, June 2006.
[2] Rui Kuang, Eugene Ie, Ke Wang, Kai Wang, Mahira Siddiqi, Yoav Freund, and Christina S.
Leslie. Profile-based string kernels for remote homology detection and motif extraction. In CSB, pages 152–160, 2004.
[3] Christina S. Leslie, Eleazar Eskin, Jason Weston, and William Stafford Noble. Mismatch string kernels for SVM protein classification. In NIPS, pages 1417–1424, 2002.
[4] Sören Sonnenburg, Gunnar Rätsch, and Bernhard Schölkopf. Large scale genomic sequence SVM classifiers. In ICML '05, pages 848–855, New York, NY, USA, 2005.
[5] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[6] Christina Leslie and Rui Kuang. Fast string kernels using inexact matching for protein sequences. J. Mach. Learn. Res., 5:1435–1455, 2004.
[7] Jason Weston, Christina Leslie, Eugene Ie, Dengyong Zhou, Andre Elisseeff, and William Stafford Noble. Semi-supervised protein classification using cluster kernels. Bioinformatics, 21(15):3241–3247, 2005.
[8] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, September 1998.
[9] Christina S. Leslie, Eleazar Eskin, and William Stafford Noble. The spectrum kernel: A string kernel for SVM protein classification. In Pacific Symposium on Biocomputing, pages 566–575, 2002.
[10] S. V. N. Vishwanathan and Alex Smola. Fast kernels for string and tree matching. Advances in Neural Information Processing Systems, 15, 2002.
[11] M. Gribskov, A.D. McLachlan, and D. Eisenberg. Profile analysis: detection of distantly related proteins. Proceedings of the National Academy of Sciences, 84:4355–4358, 1987.
[12] Juho Rousu and John Shawe-Taylor. Efficient computation of gapped substring kernels on large alphabets. J. Mach. Learn. Res., 6:1323–1344, 2005.
[13] Pavel Kuksa, Pai-Hsi Huang, and Vladimir Pavlovic. Fast protein homology and fold detection with sparse spatial sample kernels. In ICPR 2008, 2008.
[14] Chris H.Q. Ding and Inna Dubchak. Multi-class protein fold recognition using support vector machines and neural networks.
Bioinformatics, 17(4):349–358, 2001.
[15] Iain Melvin, Eugene Ie, Jason Weston, William Stafford Noble, and Christina Leslie. Multi-class protein classification using adaptive codes. J. Mach. Learn. Res., 8:1557–1581, 2007.
[16] Tao Li, Mitsunori Ogihara, and Qi Li. A comparative study on content-based music genre classification. In SIGIR '03, pages 282–289, New York, NY, USA, 2003. ACM.
[17] Pavel Kuksa, Pai-Hsi Huang, and Vladimir Pavlovic. On the role of local matching for efficient semi-supervised protein sequence classification. In BIBM, 2008.
[18] Tommi Jaakkola, Mark Diekhans, and David Haussler. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 7:95–114, 2000.
Evaluating probabilities under high-dimensional latent variable models

Iain Murray and Ruslan Salakhutdinov
Department of Computer Science, University of Toronto
Toronto, ON. M5S 3G4. Canada.
{murray,rsalakhu}@cs.toronto.edu

Abstract

We present a simple new Monte Carlo algorithm for evaluating probabilities of observations in complex latent variable models, such as Deep Belief Networks. While the method is based on Markov chains, estimates based on short runs are formally unbiased. In expectation, the log probability of a test set will be underestimated, and this could form the basis of a probabilistic bound. The method is much cheaper than gold-standard annealing-based methods and only slightly more expensive than the cheapest Monte Carlo methods. We give examples of the new method substantially improving simple variational bounds at modest extra cost.

1 Introduction

Latent variable models capture underlying structure in data by explaining observations as part of a more complex, partially observed system. A large number of probabilistic latent variable models have been developed, most of which express a joint distribution P(v, h) over observed quantities v and their unobserved counterparts h. Although it is by no means the only way to evaluate a model, a natural question to ask is "what probability P(v) is assigned to a test observation?". In some models the latent variables associated with a test input can be easily summed out: P(v) = \sum_h P(v, h). As an example, standard mixture models have a single discrete mixture component indicator for each data point; the joint probability P(v, h) can be explicitly evaluated for each setting of the latent variable. More complex graphical models explain data through the combination of many latent variables. This provides richer representations, but poses greater computational challenges. In particular, marginalizing out many latent variables can require complex integrals or exponentially large sums.
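For the mixture-model case, summing out the single component indicator is direct. A hypothetical one-dimensional Gaussian-mixture sketch in Python (our own illustrative code, using log-sum-exp for numerical stability; all names are ours):

```python
import numpy as np

def mixture_log_pv(v, weights, means, sigmas):
    # log P(v) for a 1-D Gaussian mixture: the single discrete latent
    # (the mixture component indicator h) is summed out explicitly,
    # using log-sum-exp for numerical stability.
    log_joint = (np.log(weights)                          # log P(h)
                 - 0.5 * np.log(2 * np.pi * sigmas ** 2)
                 - 0.5 * (v - means) ** 2 / sigmas ** 2)  # + log P(v | h)
    m = log_joint.max()
    return m + np.log(np.exp(log_joint - m).sum())
```

For models with many interacting latent variables there is no such closed-form sum, which is the problem the rest of the paper addresses.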
One popular latent variable model, the Restricted Boltzmann Machine (RBM), is unusual in that the posterior over hiddens P(h|v) is fully factored, which allows efficient evaluation of P(v) up to a constant. Almost all other latent variable models have posterior dependencies amongst latent variables, even if they are independent a priori. Our current work is motivated by recent results on evaluating RBMs and their generalization to Deep Belief Networks (DBNs) [1]. For both types of models, a single constant was accurately approximated so that P(v, h) could be evaluated point-wise. For RBMs, the remaining sum over hidden variables was performed analytically. For DBNs, test probabilities were lower-bounded through a variational technique. Perhaps surprisingly, the bound was unable to reveal any significant improvement over RBMs in an experiment on MNIST digits. It was unclear whether this was due to looseness of the bound, or to there being no difference in performance. A more accurate method for summing over latent variables would enable better and broader evaluation of DBNs. In section 2 we consider existing Monte Carlo methods. Some of them are certainly more accurate, but prohibitively expensive for evaluating large test sets. We then develop a new cheap Monte Carlo procedure for evaluating latent variable models in section 3. Like the variational method used previously, our method is unlikely to spuriously overstate test-set performance. Our presentation is for general latent variable models; however, for a running example, we use DBNs (see section 4 and [2]). The benefits of our new approach are demonstrated in section 5.

2 Probability of observations as a normalizing constant

The probability of a data vector, P(v), is the normalizing constant relating the posterior over hidden variables to the joint distribution in Bayes rule, P(h|v) = P(h, v)/P(v). A large literature on computing normalizing constants exists in physics, statistics and computer science.
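The analytic sum over hidden units for a binary RBM is standard algebra: with P*(v, h) = exp(v·b + h·c + v·W·h), each hidden unit contributes an independent factor, so P*(v) = exp(v·b) ∏_j (1 + exp(c_j + (v·W)_j)). A small numerical sketch (our own code, not from the paper; variable names are ours):

```python
import itertools
import numpy as np

def rbm_unnorm_pv(v, W, b, c):
    # The sum over all hidden configurations done analytically: because the
    # posterior over hiddens is fully factored, each hidden unit j
    # contributes an independent factor (1 + exp(c_j + (v W)_j)).
    return np.exp(v @ b) * np.prod(1.0 + np.exp(c + v @ W))

def rbm_unnorm_pv_brute(v, W, b, c):
    # Same quantity by brute-force enumeration of all 2^|h| hidden states.
    return sum(np.exp(v @ b + h @ c + v @ W @ h)
               for h in (np.array(bits)
                         for bits in itertools.product([0, 1], repeat=len(c))))
```

Both functions return Z·P(v); the missing global constant Z is exactly the quantity that [1] estimated separately with AIS.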
In principle, there are many methods that could be applied to evaluating the probability assigned to data by a latent variable model. We review a subset of these methods, with notation and intuitions that will help motivate and explain our new algorithm. In what follows, all auxiliary distributions Q and transition operators T are conditioned on the current test case v; this is not shown in the notation to reduce clutter. Further, all of these methods assume that we can evaluate P(h, v). Graphical models with undirected connections will require the separate estimation of a single constant as in [1].

2.1 Importance sampling

Importance sampling can in principle find the normalizing constant of any distribution. The algorithm involves averaging a simple ratio under samples from some convenient tractable distribution over the hidden variables, Q(h). Provided Q(h) ≠ 0 whenever P(h, v) ≠ 0, we obtain:

P(v) = \sum_h \frac{P(h, v)}{Q(h)} Q(h) \approx \frac{1}{S} \sum_{s=1}^{S} \frac{P(h^{(s)}, v)}{Q(h^{(s)})}, \qquad h^{(s)} \sim Q(h).    (1)

Importance sampling relies on the sampling distribution Q(h) being similar to the target distribution P(h|v). Specifically, the variance of the estimator is an α-divergence between the distributions [3]. Finding a tractable Q(h) with small divergence is difficult in high-dimensional problems.

2.2 The Harmonic mean method

Using Q(h) = P(h|v) in (1) gives an "estimator" that requires knowing P(v). As an alternative, the harmonic mean method, also called the reciprocal method, gives an unbiased estimate of 1/P(v):

\frac{1}{P(v)} = \sum_h \frac{P(h)}{P(v)} = \sum_h \frac{P(h|v)}{P(v|h)} \approx \frac{1}{S} \sum_{s=1}^{S} \frac{1}{P(v|h^{(s)})}, \qquad h^{(s)} \sim P(h|v).    (2)

In practice correlated samples from MCMC are used; the estimator is then asymptotically unbiased. It was clear from the original paper and its discussion that the harmonic mean estimator can behave very poorly [4]. Samples in the tails of the posterior have large weights, which makes it easy to construct distributions where the estimator has infinite variance.
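Both estimators are easy to exercise on a toy discrete model where P(v) is known exactly; the harmonic-mean weights 1/P(v|h) also make visible how low-likelihood hidden states dominate. An illustrative sketch (our own toy numbers, assuming a uniform prior P(h)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: h in {0, 1, 2, 3}, fixed v, joint P(h, v) given as a table.
p_joint = np.array([0.30, 0.05, 0.01, 0.04])  # exact P(v) = 0.40
posterior = p_joint / p_joint.sum()           # P(h | v)
K = len(p_joint)

# Importance sampling (Eq. 1) with a uniform proposal Q(h) = 1/K.
h = rng.integers(K, size=100_000)
is_est = np.mean(p_joint[h] * K)

# Harmonic mean (Eq. 2): with a uniform prior P(h) = 1/K we have
# P(v | h) = P(h, v) / P(h); average 1/P(v|h) under posterior samples.
hs = rng.choice(K, size=100_000, p=posterior)
hm_est = 1.0 / np.mean((1.0 / K) / p_joint[hs])
```

Here both estimators land near the exact value 0.4; in high dimensions the uniform proposal for importance sampling would be hopeless, and the harmonic-mean weights can have infinite variance.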
A finite set of samples will rarely include any extremely large weights, so the estimator's empirical variance can be misleadingly low. In many problems, the estimate of 1/P(v) will be an underestimate with high probability. That is, the method will overestimate P(v) and often give no indication that it has done so. Sometimes the estimator will have manageable variance. Also, more expensive versions of the estimator exist with lower variance. However, it is still prone to overestimating test probabilities: if 1/\hat{P}_{HME}(v) is the harmonic mean estimator in (2), Jensen's inequality gives

P(v) = 1 \Big/ E\big[1/\hat{P}_{HME}(v)\big] \le E\big[\hat{P}_{HME}(v)\big].

Similarly, log P(v) will be overestimated in expectation. Hence the average of a large number of test log probabilities is highly likely to be an overestimate. Despite these problems, the estimator has received significant attention in statistics, and has been used for evaluating latent variable models in the recent machine learning literature [5, 6]. This is understandable: all of the existing, more accurate methods are harder to implement and take considerably longer to run. In this paper we propose a method that is nearly as easy to use as the harmonic mean method, but with better properties.

2.3 Importance sampling based on Markov chains

Paradoxically, introducing auxiliary variables and making a distribution much higher-dimensional than it was before can help find an approximating Q distribution that closely matches the target distribution. As an example we give a partial review of Annealed Importance Sampling (AIS) [7], a special case of a larger family of Sequential Monte Carlo (SMC) methods (see, e.g., [8]). Some of this theory will be needed in the new method we present in section 3. Annealing algorithms start with a sample from some tractable distribution P_1. Steps are taken with a series of operators T_2, T_3, ..., T_S, whose stationary distributions, P_s, are "cooled" towards the distribution of interest.
The probability of the resulting sequence H = {h^{(1)}, h^{(2)}, ..., h^{(S)}} is:

Q_{AIS}(H) = P_1(h^{(1)}) \prod_{s=2}^{S} T_s(h^{(s)} \leftarrow h^{(s-1)}).    (3)

To compute importance weights, we need to define a "target" distribution on the same state-space:

P_{AIS}(H) = P(h^{(S)}|v) \prod_{s=2}^{S} \tilde{T}_s(h^{(s-1)} \leftarrow h^{(s)}).    (4)

Because h^{(S)} has marginal P(h|v) = P(h, v)/P(v), P_{AIS}(H) has our target, P(v), as its normalizing constant. The \tilde{T} operators are the reverses of those used to define Q_{AIS}. For any transition operator T that leaves a distribution P(h|v) stationary, there is a unique corresponding "reverse operator" \tilde{T}, defined for any point h' in the support of P:

\tilde{T}(h \leftarrow h') = \frac{T(h' \leftarrow h)\, P(h|v)}{\sum_h T(h' \leftarrow h)\, P(h|v)} = \frac{T(h' \leftarrow h)\, P(h|v)}{P(h'|v)}.    (5)

The sum in the denominator is known because T leaves the posterior stationary. Operators that are their own reverse operator are said to satisfy "detailed balance" and are also known as "reversible". Many transition operators used in practice, such as Metropolis–Hastings, are reversible. Non-reversible operators are usually composed from a sequence of reversible operations, such as the component updates in a Gibbs sampler. The reverse of these (so-called) non-reversible operators is constructed from the same reversible base operations, but applied in reverse order. The definitions above allow us to write:

Q_{AIS}(H) = P_{AIS}(H) \frac{Q_{AIS}(H)}{P_{AIS}(H)} = P_{AIS}(H) \frac{P_1(h^{(1)})}{P(h^{(S)}|v)} \prod_{s=2}^{S} \frac{T_s(h^{(s)} \leftarrow h^{(s-1)})}{\tilde{T}_s(h^{(s-1)} \leftarrow h^{(s)})}
           = P_{AIS}(H)\, P(v) \left[ \frac{P_1(h^{(1)})}{P(h^{(S)}, v)} \prod_{s=2}^{S} \frac{P^*_s(h^{(s)})}{P^*_s(h^{(s-1)})} \right] \equiv \frac{P_{AIS}(H)\, P(v)}{w(H)}.    (6)

We can usually evaluate the P^*_s, which are unnormalized versions of the stationary distributions of the Markov chain operators. Therefore the AIS importance weight w(H) = 1/[···] is tractable as long as we can evaluate P(h, v). The AIS importance weight provides an unbiased estimate:

E_{Q_{AIS}(H)}\big[w(H)\big] = P(v) \sum_H P_{AIS}(H) = P(v).    (7)

As with standard importance sampling, the variance of the estimator depends on a divergence between P_{AIS} and Q_{AIS}.
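The reverse-operator definition in Eq. (5) and the composition rule for non-reversible operators can be checked numerically on a small discrete chain. In the sketch below (our own toy construction), K1 and K2 are Metropolis kernels that each satisfy detailed balance with respect to pi, so the reverse of the composite K2∘K1 computed from Eq. (5) should equal K1∘K2, the same base steps applied in reverse order:

```python
import numpy as np

def reverse_operator(T, pi):
    # Eq. 5: Ttilde(h <- h') = T(h' <- h) pi(h) / pi(h'), with the
    # convention T[new, old] for column-stochastic kernels.
    return T.T * pi[:, None] / pi[None, :]

pi = np.array([0.5, 0.3, 0.2])

# Metropolis kernel proposing a swap between states 0 and 1 (state 2 fixed):
K1 = np.array([[0.4, 1.0, 0.0],
               [0.6, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
# Metropolis kernel proposing a swap between states 1 and 2 (state 0 fixed):
K2 = np.array([[1.0, 0.0,     0.0],
               [0.0, 1.0 / 3, 1.0],
               [0.0, 2.0 / 3, 0.0]])

C = K2 @ K1                    # non-reversible composite: K1 then K2
C_rev = reverse_operator(C, pi)
```

The checks below confirm that the composite leaves pi stationary, that its Eq.-(5) reverse is the base steps in reverse order, and that a reversible kernel is its own reverse.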
This divergence can be made small, at large computational expense, by using hundreds or thousands of steps S, allowing the neighboring intermediate distributions P_s(h) to be close.

2.4 Chib-style estimators

Bayes rule implies that for any special hidden state h^*,

P(v) = P(h^*, v)/P(h^*|v).    (8)

This trivial identity suggests a family of estimators introduced by Chib [9]. First, we choose a particular hidden state h^*, usually one with high posterior probability, and then estimate P(h^*|v). We would like an estimator based on a sequence of states H = {h^{(1)}, h^{(2)}, ..., h^{(S)}} generated by a Markov chain that explores the posterior distribution P(h|v). The most naive estimate of P(h^*|v) is the fraction of states in H that are equal to the special state, \sum_s I(h^{(s)} = h^*)/S. Obviously this estimator is impractical, as it equals zero with high probability when applied to high-dimensional problems. A "Rao–Blackwellized" version of this estimator, \hat{p}(H), replaces the indicator function with the probability of transitioning from h^{(s)} to the special state under a Markov chain transition operator that leaves the posterior stationary. This can be derived directly from the operator's stationarity condition:

P(h^*|v) = \sum_h T(h^* \leftarrow h)\, P(h|v) \approx \hat{p}(H) \equiv \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}), \qquad \{h^{(s)}\} \sim P(H),    (9)

where P(H) is the joint distribution arising from S steps of a Markov chain. If the chain has stationary distribution P(h|v) and could be initialized at equilibrium, so that

P(H) = P(h^{(1)}|v) \prod_{s=2}^{S} T(h^{(s)} \leftarrow h^{(s-1)}),    (10)

then \hat{p}(H) would be an unbiased estimate of P(h^*|v). For ergodic chains the stationary distribution is achieved asymptotically, and the estimator is consistent regardless of how it is initialized. If T is a Gibbs sampling transition operator, the only way of moving from h to h^* is to draw each element of h^* in turn. If updates are made in index order from 1 to M, the move has probability:

T(h^* \leftarrow h) = \prod_{j=1}^{M} P\big(h^*_j \,\big|\, h^*_{1:(j-1)}, h_{(j+1):M}\big).    (11)
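The basic Chib-style recipe of Eqs. (8, 9) can be sketched on the same kind of toy discrete posterior. Here the transition operator resamples from the posterior with probability 0.7 and otherwise stays put, which leaves the posterior stationary; all numbers are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

p_joint = np.array([0.30, 0.05, 0.01, 0.04])   # P(h, v); exact P(v) = 0.40
pi = p_joint / p_joint.sum()                   # posterior P(h | v)
K = len(pi)

# T[new, old]: resample from the posterior w.p. 0.7, otherwise stay put.
T = 0.7 * np.tile(pi[:, None], (1, K)) + 0.3 * np.eye(K)

h_star, S = 0, 20_000
h = h_star                                     # start the chain at h*
p_hat = 0.0
for _ in range(S):
    h = rng.choice(K, p=T[:, h])               # one Markov chain step
    p_hat += T[h_star, h] / S                  # Rao-Blackwellized sum (Eq. 9)

p_v_est = p_joint[h_star] / p_hat              # Chib identity (Eq. 8)
```

On this toy problem the chain mixes in essentially one step, so the estimate is accurate; the pathologies discussed below arise when mixing is poor.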
Equations (9, 11) have been used in schemes for monitoring the convergence of Gibbs samplers [10]. It is worth emphasizing that we have only outlined the simplest possible scheme inspired by Chib's general approach. For some Markov chains, there are technical problems with the above construction, which require an extension explained in the appendix. Moreover, the approach above is not what Chib recommended. In fact, [11] explicitly favors a more elaborate procedure involving sampling from a sequence of distributions. This opens up the possibility of many sophisticated developments, e.g. [12, 13]. However, our focus in this work is on obtaining more useful results from simple, cheap methods. There are also well-known problems with the Chib approach [14], to which we will return.

3 A new estimator for evaluating latent-variable models

We start with the simplest Chib-inspired estimator, based on equations (8, 9, 11). Like many Markov chain Monte Carlo algorithms, (9) provides only (asymptotic) unbiasedness. For our purposes this is not sufficient. Jensen's inequality tells us

P(v) = \frac{P(h^*, v)}{P(h^*|v)} = \frac{P(h^*, v)}{E[\hat{p}(H)]} \le E\left[\frac{P(h^*, v)}{\hat{p}(H)}\right].    (12)

That is, we will overestimate the probability of a visible vector in expectation. Jensen's inequality also says that we will overestimate log P(v) in expectation. Ideally we would like an accurate estimate of log P(v). However, if we must suffer some bias, then a lower bound that does not overstate performance will usually be preferred. An underestimate of P(v) would result from overestimating P(h^*|v). The probability of the special state h^* will often be overestimated in practice if we initialize our Markov chain at h^*. There are, however, simple counter-examples where this does not happen. Instead we describe a construction, based on a sequence of Markov steps starting at h^*, that does have the desired effect.
We draw a state sequence from the following carefully designed distribution, using the algorithm in Figure 1:

Q(H) = \frac{1}{S} \sum_{s=1}^{S} \tilde{T}(h^{(s)} \leftarrow h^*) \prod_{s'=s+1}^{S} T(h^{(s')} \leftarrow h^{(s'-1)}) \prod_{s'=1}^{s-1} \tilde{T}(h^{(s')} \leftarrow h^{(s'+1)}).    (13)

If the initial state were drawn from P(h|v) instead of \tilde{T}(h^{(s)} \leftarrow h^*), then the algorithm would give a sample from an equilibrium sequence with distribution P(H) defined in (10). This can be checked by repeated substitution of (5). This allows us to express Q in terms of P, as we did for AIS:

Q(H) = \frac{1}{S} \sum_{s=1}^{S} \frac{\tilde{T}(h^{(s)} \leftarrow h^*)}{P(h^{(s)}|v)} P(H) = \frac{1}{P(h^*|v)} \left[ \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}) \right] P(H).    (14)

Figure 1: Algorithm for the proposed method.
  Inputs: v, observed test vector; h^*, a (preferably high posterior probability) hidden state; S, number of Markov chain steps; T, Markov chain operator that leaves P(h|v) stationary.
  1. Draw s ~ Uniform({1, ..., S})
  2. Draw h^{(s)} ~ \tilde{T}(h^{(s)} \leftarrow h^*)
  3. for s' = (s+1) : S
  4.   Draw h^{(s')} ~ T(h^{(s')} \leftarrow h^{(s'-1)})
  5. for s' = (s-1) : -1 : 1
  6.   Draw h^{(s')} ~ \tilde{T}(h^{(s')} \leftarrow h^{(s'+1)})
  7. P(v) \approx P(v, h^*) \Big/ \frac{1}{S} \sum_{s'=1}^{S} T(h^* \leftarrow h^{(s')})
  (The accompanying graphical model shows Q(H | s=3) for S=4.) At each generated state, T(h^* \leftarrow h^{(s')}) is evaluated (step 7), roughly doubling the cost of sampling.

The reverse operator, \tilde{T}, was defined in section 2.3. The quantity in square brackets in (14) is the estimator for P(h^*|v) given in (9). The expectation of the reciprocal of this quantity under draws from Q(H) is exactly the quantity needed to compute P(v):

E_{Q(H)}\left[ 1 \Big/ \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}) \right] = \frac{1}{P(h^*|v)} \sum_H P(H) = \frac{1}{P(h^*|v)}.    (15)

Although we are using the simple estimator from (9), by drawing H from a carefully constructed Markov chain procedure, the estimator is now unbiased in P(v). This is not an asymptotic result. As long as no division by zero has occurred in the above equations, the estimator is unbiased in P(v) for finite runs of the Markov chain. Jensen's inequality implies that log P(v) is underestimated in expectation.
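Because Q(H) in Eq. (13) is an explicit distribution over finite sequences, the unbiasedness claim of Eq. (15) can be verified exactly on a tiny model by enumerating every (s, H) pair, with no Monte Carlo error at all. A sketch with our own toy numbers (two hidden states, S = 3):

```python
import itertools
import numpy as np

p_joint = np.array([0.2, 0.1])                 # P(h, v); exact P(v) = 0.3
pv_true = p_joint.sum()
post = p_joint / pv_true                       # P(h | v)
K = len(post)

# T[new, old] leaves the posterior stationary (resample w.p. 0.7, else stay).
T = 0.7 * np.tile(post[:, None], (1, K)) + 0.3 * np.eye(K)
Ttil = T.T * post[:, None] / post[None, :]     # reverse operator (Eq. 5)

h_star, S = 0, 3
expectation = 0.0
for s in range(S):                             # position of the initial draw
    for H in itertools.product(range(K), repeat=S):
        # probability of (s, H) under Q(H) of Eq. 13
        q = Ttil[H[s], h_star] / S
        for sp in range(s + 1, S):             # forward steps
            q *= T[H[sp], H[sp - 1]]
        for sp in range(s - 1, -1, -1):        # backward (reverse) steps
            q *= Ttil[H[sp], H[sp + 1]]
        p_hat = np.mean([T[h_star, h] for h in H])   # estimator of P(h*|v)
        expectation += q * p_joint[h_star] / p_hat   # estimator of P(v)
```

The accumulated sum `expectation` equals P(v) up to floating-point precision for any S, illustrating that the unbiasedness is not an asymptotic property.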
Neal noted that Chib's method will return incorrect answers in cases where the Markov chain does not mix well amongst modes [14]. Our newly proposed method will suffer from the same problem. Even if no transition probabilities are exactly zero, unbiasedness does not exclude being on a particular side of the correct answer with very high probability. Poor mixing may cause P(h^*|v) to be overestimated with high probability, which would result in an underestimate of P(v), i.e., an overly conservative estimate of test performance. The variance of the estimator is generally unknown, as it depends on the (generally unavailable) auto-covariance structure of the Markov chain. We can note one positive property: for the ideal Markov chain operator that mixes in one step, the estimator has zero variance and gives the correct answer immediately. Although this extreme will not actually occur, it does indicate that on easy problems good answers can be returned more quickly than by AIS.

4 Deep Belief Networks

In this section we provide a brief overview of Deep Belief Networks (DBNs), recently introduced by [2]. DBNs are probabilistic generative models that can contain many layers of hidden variables. Each layer captures strong high-order correlations between the activities of hidden features in the layer below. The top two layers of the DBN model form a Restricted Boltzmann Machine (RBM), which is an undirected graphical model, but the lower layers form a directed generative model. The original paper introduced a greedy, layer-by-layer unsupervised learning algorithm that consists of learning a stack of RBMs one layer at a time. Consider a DBN model with two layers of hidden features. The model's joint distribution is:

P(v, h^1, h^2) = P(v|h^1)\, P(h^1, h^2),    (16)

where P(v|h^1) represents a sigmoid belief network, and P(h^1, h^2) is the joint distribution defined by the second-layer RBM. By explicitly summing out h^2, we can easily evaluate an unnormalized probability P^*(v, h^1) = Z\, P(v, h^1).
Using an approximating factorial posterior distribution Q(h^1|v), obtained as a byproduct of the greedy learning procedure, and an AIS estimate of the model's partition function Z, [1] proposed obtaining an estimate of a variational lower bound:

\log P(v) \ge \sum_{h^1} Q(h^1|v) \log P^*(v, h^1) - \log Z + H(Q(h^1|v)).    (17)

The entropy term H(·) can be computed analytically, since Q is factorial, and the expectation term was estimated by a simple Monte Carlo approximation:

\sum_{h^1} Q(h^1|v) \log P^*(v, h^1) \approx \frac{1}{S} \sum_{s=1}^{S} \log P^*(v, h^{1(s)}), \qquad h^{1(s)} \sim Q(h^1|v).    (18)

Figure 2: AIS, our proposed estimator and a variational method were used to sum over the hidden states for each of 50 randomly sampled test cases to estimate their average log probability (left panel: MNIST digits; right panel: image patches). Each panel plots the estimated test log probability against the number of Markov chain steps, for the estimate of the variational lower bound, the AIS estimator, and our proposed estimator. The three methods shared the same AIS estimate of a single global normalization constant Z.

Instead of the variational approach, we could also adopt AIS to estimate \sum_{h^1} P^*(v, h^1). This would be computationally very expensive, since we would need to run AIS for each test case. In the next section we show that variational lower bounds can be quite loose. Running AIS on the entire test set, containing many thousands of test cases, is computationally too demanding. Our proposed estimator requires the same single AIS estimate of Z as the variational method, so that we can evaluate P(v, h^1). It then provides better estimates of log P(v) by approximately summing over h^1 for each test case in a reasonable amount of computer time.
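The inequality in Eq. (17) holds for any factorial Q, which is easy to confirm on a toy joint where the exact log P(v) can be enumerated. A sketch with our own illustrative numbers, taking log Z = 0 so that P* is the joint itself:

```python
import itertools
import numpy as np

# Joint P(v, h^1) for a fixed v over two binary hidden units (log Z = 0).
p_vh = {(0, 0): 0.08, (0, 1): 0.02, (1, 0): 0.15, (1, 1): 0.05}
log_p_v = np.log(sum(p_vh.values()))           # exact log P(v)

def variational_bound(q):
    # Eq. 17 with log Z = 0: E_Q[log P*(v, h)] + H(Q) for a factorial
    # Q(h|v) with Q(h_j = 1) = q[j]; the entropy term is analytic.
    bound = 0.0
    for h in itertools.product([0, 1], repeat=2):
        qh = np.prod([q[j] if h[j] else 1.0 - q[j] for j in range(2)])
        bound += qh * np.log(p_vh[h])
    for qj in q:                               # + H(Q), computed analytically
        bound -= qj * np.log(qj) + (1.0 - qj) * np.log(1.0 - qj)
    return bound
```

Every choice of q gives a value at or below the exact log P(v); the gap is the looseness that the proposed estimator is designed to close.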
5 Experimental Results We present experimental results on two datasets: the MNIST digits and a dataset of image patches, extracted from images of natural scenes taken from the collection of Van Hateren (http://hlab.phys.rug.nl/imlib/). The MNIST dataset contains 60,000 training and 10,000 test images of ten handwritten digits (0 to 9), with 28×28 pixels. The image dataset consisted of 130,000 training and 20,000 test 20×20 patches. The raw image intensities were preprocessed and whitened as described in [15]. Gibbs sampling was used as a Markov chain transition operator throughout. All log probabilities quoted use natural logarithms, giving values in nats. 5.1 MNIST digits In our first experiment we used a deep belief network (DBN) taken from [1]. The network had two hidden layers with 500 and 2000 hidden units, and was greedily trained by learning a stack of two RBMs one layer at a time. Each RBM was trained using the Contrastive Divergence (CD) learning rule. The estimate of the lower bound on the average test log probability, using (17), was −86.22. To estimate how loose the variational bound is, we randomly sampled 50 test cases, 5 of each class, and ran AIS for each test case to estimate the true test log probability. Computationally, this is equivalent to estimating 50 additional partition functions. Figure 2, left panel, shows the results. The estimate of the variational bound was −87.05 per test case, whereas the estimate of the true test log probability using AIS was −85.20. Our proposed estimator, averaged over 10 runs, provided an answer of −85.22. The special state h∗for each test example v was obtained by first sampling from the approximating distribution Q(h|v), and then performing deterministic hill-climbing in log p(v, h) to get to a local mode. AIS used a hand-tuned temperature schedule designed to equalize the variance of the intermediate log weights [7]. 
We needed 10,000 intermediate distributions to get stable results, which took about 3.6 days on a Pentium Xeon 3.00GHz machine, whereas for our proposed estimator we only used S =40, which took about 50 minutes. For a more direct comparison we tried giving AIS 50 minutes, which allows 100 temperatures. This run gave an estimate of −89.59, which is lower than the lower bound and tells us nothing. Giving AIS ten times more time, 1000 temperatures, gave −86.05. This is higher than the lower bound, but still worse than our estimator at S = 40, or even S = 5. Finally, using our proposed estimator, the average test log probability on the entire MNIST test data was −84.55. The difference of about 2 nats shows that the variational bound in [1] was rather tight, although a very small improvement of the DBN over the RBM is now revealed. 5.2 Image Patches In our second experiment we trained a two-layer DBN model on the image patches of natural scenes. The first layer RBM had 2000 hidden units and 400 Gaussian visible units. The second layer represented a semi-restricted Boltzmann machine (SRBM) with 500 hidden and 2000 visible units. The SRBM contained visible-to-visible connections, and was trained using Contrastive Divergence together with mean-field. Details of training can be found in [15]. The overall DBN model can be viewed as a directed hierarchy of Markov random fields with hidden-to-hidden connections. To estimate the model’s partition function, we used AIS with 15,000 intermediate distributions and 100 annealing runs. The estimated lower bound on the average test log probability (see Eq. 17), using a factorial approximate posterior distribution Q(h1|v), which we also get as a byproduct of the greedy learning algorithm, was −583.73. The estimate of the true test log probability, using our proposed estimator, was −563.39. 
In contrast to the model trained on MNIST, the difference of over 20 nats shows that, for model comparison purposes, the variational lower bound is quite loose. For comparison, we also trained square ICA and a mixture of factor analyzers (MFA) using code from [16, 17]. Square ICA achieves a test log probability of −551.14, and MFA with 50 mixture components and a 30-dimensional latent space achieves −502.30, clearly outperforming DBNs.

6 Discussion

Our new Monte Carlo procedure is formally unbiased in estimating P(v). In practice it is likely to underestimate the (log-)probability of a test set. Although the algorithm involves Markov chains, importance sampling underlies the estimator. Therefore the methods discussed in [18] could be used to bound the probability of accidentally over-estimating a test set probability. In principle our procedure is a general technique for estimating normalizing constants. It would not always be appropriate, however, as it would suffer the problems outlined in [14]. As an example, our method will not succeed in estimating the global normalizing constant of an RBM. For our method to work well, a state drawn from \tilde{T}(h^{(s)} \leftarrow h^*) should look like it could be part of an equilibrium sequence H \sim P(H). The details of the algorithm arose by developing existing Monte Carlo estimators, but the starting state h^{(s)} could be drawn from any arbitrary distribution:

Q_{var}(H) = \frac{1}{S} \sum_{s=1}^{S} \frac{q(h^{(s)})}{P(h^{(s)}|v)} P(H) = P(v) \left[ \frac{1}{S} \sum_{s=1}^{S} \frac{q(h^{(s)})}{P(h^{(s)}, v)} \right] P(H).    (19)

As before, the reciprocal of the quantity in square brackets would give an estimate of P(v). If an approximation q(h) is available that captures more mass than \tilde{T}(h \leftarrow h^*), this generalized estimator could perform better. We are hopeful that our method will be a natural next step in a variety of situations where improvements are sought over a deterministic approximation.

Acknowledgments

This research was supported by NSERC and CFI. Iain Murray was supported by the government of Canada.
We thank Geoffrey Hinton and Radford Neal for useful discussions, Simon Osindero for providing preprocessed image patches of natural scenes, and the reviewers for useful comments.

References

[1] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of Deep Belief Networks. In Proceedings of the International Conference on Machine Learning, volume 25, pages 872–879, 2008.
[2] Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[3] Tom Minka. Divergence measures and message passing. Technical Report TR-2005-173, Microsoft Research, 2005.
[4] Michael A. Newton and Adrian E. Raftery. Approximate Bayesian inference with the weighted likelihood bootstrap. Journal of the Royal Statistical Society, Series B (Methodological), 56(1):3–48, 1994.
[5] Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. Integrating topics and syntax. In Advances in Neural Information Processing Systems (NIPS*17). MIT Press, 2005.
[6] Hanna M. Wallach. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning, pages 977–984. ACM Press, New York, NY, USA, 2006.
[7] Radford M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[8] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society B, 68(3):1–26, 2006.
[9] Siddhartha Chib. Marginal likelihood from the Gibbs output. Journal of the American Statistical Association, 90(432):1313–1321, December 1995.
[10] Christian Ritter and Martin A. Tanner. Facilitating the Gibbs sampler: the Gibbs stopper and the griddy-Gibbs sampler. Journal of the American Statistical Association, 87(419):861–868, 1992.
[11] Siddhartha Chib and Ivan Jeliazkov. Marginal likelihood from the Metropolis–Hastings output. Journal of the American Statistical Association, 96(453), 2001.
[12] Antonietta Mira and Geoff Nicholls.
Bridge estimation of the probability density at a point. Statistica Sinica, 14:603–612, 2004.
[13] Francesco Bartolucci, Luisa Scaccia, and Antonietta Mira. Efficient Bayes factor estimation from the reversible jump output. Biometrika, 93(1):41–52, 2006.
[14] Radford M. Neal. Erroneous results in “Marginal likelihood from the Gibbs output”, 1999. Available from http://www.cs.toronto.edu/~radford/chib-letter.html.
[15] Simon Osindero and Geoffrey Hinton. Modeling image patches with a directed hierarchy of Markov random fields. In Advances in Neural Information Processing Systems (NIPS*20). MIT Press, 2008.
[16] Aapo Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[17] Zoubin Ghahramani and Geoffrey E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1997.
[18] Vibhav Gogate, Bozhena Bidyuk, and Rina Dechter. Studies in lower bounding probability of evidence using the Markov inequality. In 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.

A Real-valued latents and Metropolis–Hastings

There are technical difficulties with the original Chib-style approach applied to Metropolis–Hastings and continuous latent variables. The continuous version of equation (9),

$$P(h^*\,|\,v) \;=\; \int T(h^* \leftarrow h)\,P(h\,|\,v)\,dh \;\approx\; \frac{1}{S}\sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}), \qquad h^{(s)} \sim P(H), \qquad (20)$$

doesn't work if T is the Metropolis–Hastings operator. The Dirac delta function at h = h* contains a significant part of the integral, which is ignored by samples from P(h|v) with probability one. Following [11], the fix is to instead integrate over the generalized detailed balance relationship (5). Chib and Jeliazkov implicitly took out the h* = h point from all of their integrals. We do the same:

$$P(h^*\,|\,v) \;=\; \int dh\; \tilde{T}(h^* \leftarrow h)\,P(h\,|\,v) \;\Big/\; \int dh\; T(h \leftarrow h^*). \qquad (21)$$

The numerator can be estimated as before.
As both integrals omit h = h*, the denominator is less than one when T contains a delta function. For Metropolis–Hastings,

$$T(h \leftarrow h^*) \;=\; q(h;\, h^*)\,\min\!\big(1,\; a(h;\, h^*)\big),$$

where a(h; h*) is an easy-to-compute acceptance ratio. Sampling from q(h; h*) and averaging min(1, a(h; h*)) provides an estimate of the denominator. In our importance sampling approach there is no need to separately approximate an additional quantity. The algorithm in figure 1 still applies if the T's are interpreted as probability density functions. If, due to a rejection, h* is drawn in step 2, then the sum in step 7 will contain an infinite term, giving a trivial underestimate P(v) = 0. (Steps 3–6 need not be performed in this case.) On repeated runs, the average estimate is still unbiased, or an underestimate for chains that can't mix. Alternatively, the variational approach (19) could be applied together with Metropolis–Hastings sampling.
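The denominator estimate just described is straightforward to prototype. The following sketch is ours, not the paper's code; the standard-normal target and Gaussian proposal are invented for illustration. It draws proposals h ~ q(·; h*) and averages the acceptance probability min(1, a(h; h*)).

```python
import math
import random

def mh_denominator(h_star, log_target, proposal_std, S=5000, seed=0):
    """Estimate the off-diagonal mass of a Metropolis-Hastings kernel,
    integral of T(h <- h*) dh excluding the h = h* atom: sample
    h ~ q(.; h*) and average min(1, a(h; h*))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(S):
        h = rng.gauss(h_star, proposal_std)
        # Symmetric proposal => acceptance ratio is a target-density ratio.
        a = math.exp(log_target(h) - log_target(h_star))
        total += min(1.0, a)
    return total / S

# Invented example: standard normal target, chain started at its mode.
log_target = lambda h: -0.5 * h * h
est = mh_denominator(0.0, log_target, proposal_std=1.0)
# Some proposals are rejected, so the estimate is strictly below 1.
print(0.0 < est < 1.0)
```

Here the exact value is E[exp(−h²/2)] for h ~ N(0, 1), i.e. 1/√2 ≈ 0.71, which the Monte Carlo average approaches as S grows.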
Counting Solution Clusters in Graph Coloring Problems Using Belief Propagation

Lukas Kroc, Ashish Sabharwal, Bart Selman
Department of Computer Science, Cornell University, Ithaca NY 14853-7501, U.S.A.
{kroc,sabhar,selman}@cs.cornell.edu

Abstract

We show that an important and computationally challenging solution space feature of the graph coloring problem (COL), namely the number of clusters of solutions, can be accurately estimated by a technique very similar to one for counting the number of solutions. This cluster counting approach can be naturally written in terms of a new factor graph derived from the factor graph representing the COL instance. Using a variant of the Belief Propagation inference framework, we can efficiently approximate cluster counts in random COL problems over a large range of graph densities. We illustrate the algorithm on instances with up to 100,000 vertices. Moreover, we supply a methodology for computing the number of clusters exactly using advanced techniques from the knowledge compilation literature. This methodology scales up to several hundred variables.

1 Introduction

Message passing algorithms, in particular Belief Propagation (BP), have been very successful in efficiently computing interesting properties of succinctly represented large spaces, such as joint probability distributions. Recently, these techniques have also been applied to compute properties of discrete spaces, in particular, properties of the space of solutions of combinatorial problems. For example, for propositional satisfiability (SAT) and graph coloring (COL) problems, marginal probability information about the uniform distribution over solutions (or similar combinatorial objects) has been the key ingredient in the success of BP-like algorithms. Most notably, the survey propagation (SP) algorithm utilizes this information to solve very large hard random instances of these problems [3, 11].
Earlier work on random ensembles of Constraint Satisfaction Problems (CSPs) has shown that the computationally hardest instances occur near phase boundaries, where instances go from having many globally satisfying solutions to having no solution at all (a “solution-focused picture”). In recent years, this picture has been refined: a key factor in determining the hardness of instances for search (or sampling) algorithms is how the solutions are spatially distributed within the search space. The clustering structure of the solution space has thus become a key factor in understanding the performance of combinatorial search methods (a “cluster-focused picture”). Can BP-like algorithms be used to provide such cluster-focused information? For example, how many clusters are there in a solution space? How big are the clusters? How are they organized? Answers to such questions will shed further light on our understanding of these hard combinatorial problems and lead to better algorithmic approaches for reasoning about them, be it for finding one solution or answering queries of probabilistic inference about the set of solutions. The study of the solution space geometry has indeed been the focus of a number of recent papers [e.g. 1, 2, 3, 7, 9, 11], especially by the statistical physics community, which has developed extensive theoretical tools to analyze such spaces under certain structural assumptions and large size limits. We provide a purely combinatorial method for counting the number of clusters, which is applicable even to small size problems and can be approximated very well by message passing techniques. (This work was supported by IISI, Cornell University (AFOSR grant FA9550-04-1-0151), DARPA (REAL grant FA8750-04-2-0216), and NSF (grant 0514429).)
Solutions can be thought of as ‘neighbors’ if they differ in the value of one variable, and the transitive closure of the neighbor relation defines clusters in a natural manner. Counting the number of clusters is a challenging problem. To begin with, it is not even clear what the best succinct way to represent clusters is. One relatively crude but useful way is to represent a cluster by the set of ‘backbone’ variables in that cluster, i.e., variables that take a fixed value in all solutions within the cluster. Interestingly, while it is easy (polynomial time) to verify whether a variable assignment is indeed a solution of a CSP, the same check is much harder for a candidate cluster represented by the set of its backbone variables. We propose one of the first scalable methods for estimating the number of clusters of solutions of graph coloring problems using a belief-propagation-like algorithm. While the naïve method, based on enumeration of solutions and pairwise distances, scales to graph coloring problems with 50 or so nodes, and a recently proposed local-search-based method provides estimates up to a few hundred node graphs [7], our approach, being based on BP, easily provides fast estimates for graphs with 100,000 nodes. We validate the accuracy of our approach by also providing a fairly non-trivial exact counting method for clusters, utilizing advanced knowledge compilation techniques. Our approach works with the factor graph representation of the graph coloring problem. Yedidia et al. [12] showed that if one can write the so-called “partition function”, Z, for a quantity of interest in a factor graph with non-negative weights, then there is a fairly mechanical variational method derivation that yields belief propagation equations for estimating Z. Under certain assumptions, we derive a partition function style quantity, Z(−1), to count the number of clusters. We then use the variational method to obtain BP equations for estimating Z(−1).
Our experiments with random graph coloring problems show that Z(−1) itself is an extremely accurate estimate of the number of clusters, and so is its approximation, ZBP(−1), obtained from our BP equations.

2 Preliminaries

The graph coloring problem can be expressed in the form of a factor graph, a bipartite graph with two kinds of nodes. The variable nodes, $\vec{x} = (x_1, \ldots, x_n)$, represent the variables in the problem (n vertices to be colored) with their discrete domain Dom = {c_1, ..., c_k} (k colors). The factor nodes, α, ..., with associated factor functions f_α, ..., represent the constraints of the problem (no two adjacent vertices have the same color). Each factor function is a Boolean function with arguments $\vec{x}_\alpha$ (a subset of variables from $\vec{x}$) and range {0, 1}, and evaluates to 1 if and only if (iff) the associated constraint is satisfied. An edge connects a variable x_i with factor f_α iff the variable appears in the constraint represented by the factor node, which we denote by i ∈ α. In the graph coloring problem, each factor function has exactly two variables. In the factor representation, each variable assignment $\vec{x}$ is thought of as having a weight equal to the product of the values that all factors evaluate to. We denote this product by $F(\vec{x}) := \prod_\alpha f_\alpha(\vec{x}_\alpha)$. In our case, the weight of an assignment $\vec{x}$ is 1 if all of the factors have value 1, and 0 otherwise. The assignments with weight 1 correspond precisely to legal colorings, or solutions to the problem. The number of solutions can thus be expressed as the weighted sum across all possible assignments. We denote this quantity by Z, the so-called partition function:

$$Z \;:=\; \sum_{\vec{x} \in \mathrm{Dom}^n} F(\vec{x}) \;=\; \sum_{\vec{x} \in \mathrm{Dom}^n} \prod_{\alpha} f_\alpha(\vec{x}_\alpha) \qquad (1)$$

We define the solution space of a graph coloring problem to be the set of all its legal colorings. Two legal colorings (or solutions) are called neighbors if they differ in the color of one vertex. Definition 1 (Solution Cluster).
A set of solutions C ⊆ S of a solution space S is a cluster if it is a maximal subset such that any two solutions in C can be connected by a sequence from C where consecutive solutions are neighbors. In other words, clusters are connected components of the “solution graph”, which has solutions as nodes and an edge between two solutions if they differ in the value of exactly one variable.

3 A Partition Function Style Expression for Counting Clusters

In this section we consider a method for estimating the number of solution clusters of a graph coloring problem. We briefly describe the concepts here; a more in-depth treatment, including formal results, may be found in [8]. First let us extend the definition of the function F so that it may be evaluated on an extended domain $\mathrm{DomExt} := \mathcal{P}(\{c_1, \ldots, c_k\}) \setminus \emptyset$, where c_1, ..., c_k are the k domain values (colors) of each of the problem variables, and $\mathcal{P}$ is the power set operator (so |DomExt| = 2^k − 1). Each generalized assignment $\vec{y} \in \mathrm{DomExt}^n$ thus associates a (nonempty) set of values with each original variable, defining a hypercube in the search space for F. We generalize F and f_α to this extended domain in the natural way,

$$F'(\vec{y}) := \prod_{\vec{x} \in \vec{y}} F(\vec{x}), \qquad f'_\alpha(\vec{y}_\alpha) := \prod_{\vec{x}_\alpha \in \vec{y}_\alpha} f_\alpha(\vec{x}_\alpha),$$

where the relation ∈ is applied point-wise, as will be the case with any relational operators used on vectors in this text. This means that F' evaluates to 1 on a hypercube iff F evaluates to 1 on all points within that hypercube. Let us first assume that the solution space we work with decomposes into a set of separated hypercubes, so clusters correspond exactly to the hypercubes; by separated hypercubes, we mean that points in one hypercube differ from points in others in at least two values. E.g., $\vec{y}_1 = (\{c_1\}, \{c_1\}, \{c_1\})$ and $\vec{y}_2 = (\{c_2\}, \{c_3\}, \{c_1, c_2\})$ are separated hypercubes in three dimensions.
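Definition 1 suggests a direct, exponential-time baseline: enumerate all solutions and count connected components of the solution graph. The following sketch is ours, for illustration only (the paper's exact method, described in Section 4, is far more scalable); it also computes Z of Eq. (1) as a byproduct, since it enumerates the proper colorings.

```python
from itertools import product

def count_clusters(n, edges, k):
    """Enumerate proper k-colorings of a graph on n vertices and count the
    connected components of the solution graph (colorings are adjacent when
    they differ in the color of exactly one vertex)."""
    sols = [x for x in product(range(k), repeat=n)
            if all(x[u] != x[v] for u, v in edges)]
    index = {x: i for i, x in enumerate(sols)}

    # Union-find over the enumerated solutions.
    parent = list(range(len(sols)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for x in sols:
        for v in range(n):                       # try recoloring one vertex
            for c in range(k):
                y = x[:v] + (c,) + x[v + 1:]
                if y in index:
                    parent[find(index[x])] = find(index[y])
    return len(sols), len({find(i) for i in range(len(sols))})

# Triangle K3 with 3 colors: every proper coloring is rigid (each vertex's
# color is forced by the other two), so 6 solutions fall into 6 clusters.
print(count_clusters(3, [(0, 1), (1, 2), (0, 2)], 3))  # (6, 6)
```

On a path with 3 vertices and 3 colors the same routine finds 12 solutions in a single cluster, matching the remark later in the paper that tree-structured solution spaces are connected.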
This allows us to develop a surprisingly simple expression for counting the number of clusters, and we will later see that the same expression applies with high precision also to solution spaces of much more complex instances of graph coloring problems. Consider the indicator function χ(⃗y) for the property that $\vec{y} \in \mathrm{DomExt}^n$ is a maximal solution hypercube contained in the solution space:

$$\chi(\vec{y}) := \underbrace{F'(\vec{y})}_{\vec{y}\text{ is legal}} \cdot \underbrace{\prod_i \prod_{v_i \notin y_i} \Big(1 - F'(\vec{y}[y_i \leftarrow y_i \cup \{v_i\}])\Big)}_{\text{no point-wise generalization is legal}}$$

Here $\vec{y}[y_i \leftarrow y'_i]$ denotes the substitution of $y'_i$ for $y_i$ in $\vec{y}$. Note that if the solution clusters are in fact hypercubes, then variable values that can be “extended” independently can also be extended all at once, that is, $F'(\vec{y}[y_i \leftarrow y_i \cup \{v_i\}]) = 1$ and $F'(\vec{y}[y_j \leftarrow y_j \cup \{v_j\}]) = 1$ imply $F'(\vec{y}[y_i \leftarrow y_i \cup \{v_i\},\, y_j \leftarrow y_j \cup \{v_j\}]) = 1$. Moreover, $F'(\vec{y}[y_i \leftarrow y_i \cup \{v_i\}]) = 1$ implies $F'(\vec{y}) = 1$. Using these observations, χ(⃗y) can be reformulated by factoring out the product as follows. Here $\#o(\vec{y})$ denotes the number of odd-size elements of $\vec{y}$, and $\#e(\vec{y})$ the number of even-size ones.

$$\chi(\vec{y}) = F'(\vec{y}) \sum_{\vec{y}'\,:\;y'_i \subseteq \mathrm{Dom} \setminus y_i} (-1)^{\#o(\vec{y}')} \underbrace{\prod_i \prod_{v_i \in y'_i} F'(\vec{y}[y_i \leftarrow y_i \cup \{v_i\}])}_{=\,F'(\vec{y} \cup \vec{y}')\text{ by hypercube assumption}} \;\overset{\vec{z} := \vec{y} \cup \vec{y}'}{=}\; \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#o(\vec{z} \setminus \vec{y})} F'(\vec{z}) \;=\; (-1)^{\#e(\vec{y})} \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#e(\vec{z})} F'(\vec{z})$$

Finally, to count the number of maximal hypercubes fitting into the set of solutions, we sum the indicator function χ(⃗y) across all vectors $\vec{y} \in \mathrm{DomExt}^n$:

$$\sum_{\vec{y}} \chi(\vec{y}) = \sum_{\vec{y}} (-1)^{\#e(\vec{y})} \sum_{\vec{z} \supseteq \vec{y}} (-1)^{\#e(\vec{z})} F'(\vec{z}) = \sum_{\vec{z}} (-1)^{\#e(\vec{z})} F'(\vec{z}) \sum_{\emptyset \notin \vec{y} \subseteq \vec{z}} (-1)^{\#e(\vec{y})} = \sum_{\vec{z}} (-1)^{\#e(\vec{z})} F'(\vec{z}) \prod_i \underbrace{\sum_{\emptyset \neq y_i \subseteq z_i} (-1)^{\delta_e(y_i)}}_{=\,1} = \sum_{\vec{z}} (-1)^{\#e(\vec{z})} F'(\vec{z})$$

The expression above is important for our study, and we denote it by Z(−1):

$$Z_{(-1)} := \sum_{\vec{z} \in \mathrm{DomExt}^n} (-1)^{\#e(\vec{z})} F'(\vec{z}) = \sum_{\vec{y} \in \mathrm{DomExt}^n} (-1)^{\#e(\vec{y})} \prod_\alpha f'_\alpha(\vec{y}_\alpha) \qquad (2)$$

The notation Z(−1) is chosen to emphasize its relatedness to the partition function (1) denoted by Z, and indeed the two expressions differ only in the (−1) term.
It is easily seen that if the solution space consists of a set of separated hypercubes, then Z(−1) exactly captures the number of clusters (each separated hypercube is a cluster). Surprisingly, this number is remarkably accurate even for random coloring problems, as we will see in Section 6, Figure 1.

4 Exact Computation of the Number of Clusters and Z(−1)

Obtaining the exact number of clusters for reasonable size problems is crucial for evaluating our proposed approach based on Z(−1) and the corresponding BP equations to follow in Section 5. A naïve way is to explicitly enumerate all solutions, compute their pairwise Hamming distances, and infer the cluster structure. Not surprisingly, this method does not scale well, because the number of solutions typically grows exponentially as the number of variables of the graph coloring problem increases. We discuss here a much more scalable approach that uses two advanced techniques to this effect: decomposable negation normal form (DNNF) and binary decision diagrams (BDDs). Our method scales to graph coloring problems with a few hundred variables (see experimental results) for computing both the exact number of clusters and the exact value of Z(−1). Both DNNF [6] and BDD [4] are graph-based data structures that have proven to be very effective in “knowledge compilation”, i.e., in converting a 0-1 function F into a (potentially exponentially long, but often reasonably sized) standard form from which various interesting properties of F can be inferred easily, often in time linear in the size of the DNNF formula or BDD. For our purposes, we use DNNF to succinctly represent all solutions of F and a set of BDDs to represent solution clusters that we create as we traverse the DNNF representation.
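For tiny instances, expression (2) can be evaluated by brute force over the extended domain. The sketch below is ours, for illustration; for graph coloring the extended factor is $f'_\alpha(y_u, y_v) = 1$ iff the two color sets are disjoint. On the triangle K3 the 6 rigid colorings form 6 separated (degenerate) hypercubes, and Z(−1) recovers exactly 6.

```python
from itertools import product, combinations

def z_minus_one(n, edges, k):
    """Z(-1) of Eq. (2): sum over generalized assignments (a nonempty color
    set per vertex) of (-1)^{#even-size sets} * F'(y), where for coloring
    F'(y) = 1 iff every edge's two color sets share no color."""
    colors = range(k)
    dom_ext = [frozenset(s) for r in range(1, k + 1)
               for s in combinations(colors, r)]
    total = 0
    for y in product(dom_ext, repeat=n):
        if all(not (y[u] & y[v]) for u, v in edges):   # F'(y) = 1
            num_even = sum(1 for s in y if len(s) % 2 == 0)
            total += (-1) ** num_even
    return total

# Triangle K3 with 3 colors: 6 clusters, and Z(-1) = 6 exactly.
print(z_minus_one(3, [(0, 1), (1, 2), (0, 2)], 3))  # 6
```

The same routine also reproduces the tree anomaly noted in Section 6: on a 3-vertex path with 3 colors, the positive and negative terms cancel and Z(−1) = 0, even though the solution space has one cluster.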
The only details of these two representations relevant for us are the following: (1) DNNF is represented as an acyclic directed graph with variables and their negations at the leaves and two kinds of internal nodes, “or” and “and”; “or” nodes split the set of solutions such that they differ in the value of the variable labeling the node but otherwise have identical variables; “and” nodes partition the space into disjoint sets of variables; (2) BDDs represent arbitrary sets of solutions and support efficient intersection and projection (onto a subset of variables) operations on these sets. We use the compiler c2d [5] to obtain the DNNF form for F. Since c2d works on Boolean formulas and our F often has non-Boolean domains, we first convert F to a Boolean function F' using a unary encoding, i.e., by replacing each variable x_i of F with domain size t with t Boolean variables x'_{i,j}, 1 ≤ j ≤ t, respecting the semantics: x_i = j iff x'_{i,j} = 1. In order to ensure that F and F' have a similar cluster structure of solutions, we relax the usual condition that only one of x'_{i,1}, ..., x'_{i,t} may be 1, thus effectively allowing the original x_i to take multiple values simultaneously. This yields a generalized function: the domains of the variables of F' correspond to the power sets of the domains of the respective variables of F. This generalization has the following useful property: if two solutions $\vec{x}^{(1)}$ and $\vec{x}^{(2)}$ are neighbors in the solution space of F, then the corresponding solutions $\vec{x}'^{(1)}$ and $\vec{x}'^{(2)}$ are in the same cluster in the solution space of F'.

Computing the number of clusters. Given F', we run c2d on it to obtain an implicit representation of all solutions as a DNNF formula F''. Next, we traverse F'' from the leaf nodes up, creating clusters as we go along. Specifically, with each node U of F'', we associate a set S_U of BDDs, one for each cluster in the sub-formula contained under U.
The set of BDDs for the root node of F'' then corresponds precisely to the set of solution clusters of F', and thus of F. These BDDs are computed as follows. If U is a leaf node of F'', it represents a Boolean variable or its negation, and S_U consists of the single one-node BDD corresponding to this Boolean literal. If U is an internal node of F'' labeled with the variable x_U and with children L and R, the set of BDDs S_U is computed as follows. If U is an “or” node, then we consider the union S_L ∪ S_R of the two sets of BDDs and merge any two of these BDDs if they are adjacent, i.e., have two solutions that are neighbors in the solution space (since the DNNF form guarantees that the BDDs in S_L and S_R already must differ in the value of the variable x_U labeling U, the adjacency check is equivalent to testing whether the two BDDs, with x_U projected out, have a solution in common; this is a straightforward projection and intersection operation for BDDs); in the worst case, this leads to |S_L| + |S_R| cluster BDDs in S_U. Similarly, if U is an “and” node, then S_U is constructed by considering the cross product $\{b_L \wedge b_R \mid b_L \in S_L,\, b_R \in S_R\}$ of the two sets of BDDs and merging adjacent resulting BDDs as before; in the worst case, this leads to |S_L| · |S_R| cluster BDDs in S_U.

Evaluating Z(−1). The exact value of Z(−1) on F' can also be evaluated easily once we have the DNNF representation F''. In fact, as is reflected in our experimental results, evaluating Z(−1) is a much more scalable process than counting clusters, because it requires a simple traversal of F'' without the need for maintaining BDDs. With each node U of F'', we associate a value V_U which equals precisely the difference between the number of solutions below U with an even number of positive literals and those with an odd number of positive literals; Z(−1) then equals (−1)^N times the value thus associated with the root node of F''. These values are computed bottom-up as follows.
If U is a leaf node labeled with a positive (or negative) literal, then V_U = −1 (or 1, resp.). If U is an “or” node with children L and R, then V_U = V_L + V_R. This works because L and R have identical variables. Finally, if U is an “and” node with children L and R, then V_U = V_L V_R. This last computation works because L and R are on disjoint sets of variables and because of the following observation. Suppose L has $V^e_L$ solutions with an even number of positive literals and $V^o_L$ solutions with an odd number of positive literals; similarly for R. Then

$$V_U = (V^e_L V^e_R + V^o_L V^o_R) - (V^e_L V^o_R + V^o_L V^e_R) = (V^e_L - V^o_L)(V^e_R - V^o_R) = V_L V_R.$$

5 Belief Propagation Inference for Clusters

We present a version of the Belief Propagation algorithm that allows us to deal with the alternating signs of Z(−1). The derivation follows closely the one given by Yedidia et al. [12] for standard BP, i.e., we will write equations for a stationary point of the KL divergence of two sequences (not necessarily probability distributions in our case). Since the Z(−1) expression involves both positive and negative terms, we must appropriately generalize some of the steps. Given a function p(⃗y) (the target function, with real numbers as its range) on DomExt^n that is known up to a normalization constant but with unknown marginal sums, we seek a function b(⃗y) (the trial function) to approximate p(⃗y), such that b's marginal sums are known. The target function p(⃗y) is defined as

$$p(\vec{y}) := \frac{1}{Z_{(-1)}}\, (-1)^{\#e(\vec{y})} \prod_\alpha f'_\alpha(\vec{y}_\alpha).$$

We adopt previously used notation [12]: $\vec{y}_\alpha$ are values in $\vec{y}$ of variables that appear in factor (i.e., vertex) $f'_\alpha$; $\vec{y}_{-i}$ are values of all variables in $\vec{y}$ except $y_i$. The marginal sums can be extended in a similar way to allow for any number of variables fixed in $\vec{y}$, specified by the subscript. When convenient, we treat the symbol α as a set of indices of variables in $f'_\alpha$, to be able to index them.
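The bottom-up rules (leaf: ∓1; “or”: sum; “and”: product) are easy to prototype. The following sketch is ours; the tuple-based node encoding is invented for illustration. It evaluates V on a small hand-built smooth DNNF for the formula x ∨ y, whose three solutions give an even-minus-odd positive-literal difference of 1 − 2 = −1.

```python
def V(node):
    """Bottom-up even-minus-odd solution count on a DNNF.  Nodes are
    ('lit', is_positive), ('or', left, right), or ('and', left, right)."""
    kind = node[0]
    if kind == 'lit':
        return -1 if node[1] else 1   # positive literal: -1; negated: +1
    l, r = V(node[1]), V(node[2])
    return l + r if kind == 'or' else l * r

# Smooth DNNF for (x OR y) over variables {x, y}:
#   (x AND (y OR not-y))  OR  (not-x AND y)
x, nx = ('lit', True), ('lit', False)
y, ny = ('lit', True), ('lit', False)
dnnf = ('or',
        ('and', x, ('or', y, ny)),
        ('and', nx, y))
# Solutions of x OR y: (1,1) even, (1,0) odd, (0,1) odd  =>  V = 1 - 2 = -1.
print(V(dnnf))  # -1
```

Tracing the evaluation: the inner “or” (y ∨ ¬y) contributes −1 + 1 = 0, so the left “and” contributes 0, and the right “and” contributes 1 · (−1) = −1, matching the direct count.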
We begin by listing the assumptions used in the derivation, both the ones that are used in “standard” BP, and two additional ones needed for the generalization. An assumption on b(⃗y) is legitimate if the corresponding condition holds for p(⃗y).

Assumptions: The standard assumptions, present in the derivation of standard BP [12], are:

• Marginalization: $b_i(y_i) = \sum_{\vec{y}_{-i}} b(\vec{y})$ and $b_\alpha(\vec{y}_\alpha) = \sum_{\vec{y}_{-\alpha}} b(\vec{y})$. This condition is legitimate, but cannot be enforced with a polynomial number of constraints. Moreover, it might happen that the solution found by BP does not satisfy it, which is a known problem with BP [10].

• Normalization: $\sum_{y_i} b_i(y_i) = \sum_{\vec{y}_\alpha} b_\alpha(\vec{y}_\alpha) = 1$. This is legitimate and explicitly enforced.

• Consistency: $\forall \alpha,\, i \in \alpha,\, y_i:\ b_i(y_i) = \sum_{\vec{y}_{\alpha \setminus i}} b_\alpha(\vec{y}_\alpha)$. This is legitimate and explicitly enforced.

• Tree-like decomposition: says that the weights b(⃗y) of each configuration can be obtained from the marginal sums as follows ($d_i$ is the degree of the variable node $y_i$ in the factor graph):

$$|b(\vec{y})| = \frac{\prod_\alpha |b_\alpha(\vec{y}_\alpha)|}{\prod_i |b_i(y_i)|^{d_i - 1}}.$$

(The standard assumption is without the absolute values.) This assumption is not legitimate, and it is built-in, i.e., it is used in the derivation of the BP equations.

To appropriately handle the signs of b(⃗y) and p(⃗y), we have two additional assumptions. These are necessary for the BP derivation applicable to Z(−1), but not for the standard BP equations.

• Sign-correspondence: For all configurations ⃗y, b(⃗y) and p(⃗y) have the same sign (zero, being a singular case, is treated as having a positive sign). This is a built-in assumption and legitimate.

• Sign-alternation: $b_i(y_i)$ is negative iff $|y_i|$ is even, and $b_\alpha(\vec{y}_\alpha)$ is negative iff $\#e(\vec{y}_\alpha)$ is odd. This is also a built-in assumption, but not necessarily legitimate; whether or not it is legitimate depends on the structure of the solution space of a particular problem.
The Sign-alternation assumption can be viewed as an application of the inclusion-exclusion principle, and is easy to illustrate on a graph coloring problem with only two colors. In this case, if F'(⃗y) = 1, then $y_i = \{c_1\}$ means that $y_i$ can have color 1, $y_i = \{c_2\}$ that $y_i$ can have color 2, and $y_i = \{c_1, c_2\}$ that $y_i$ can have both colors. The third event is included in the first two, and its probability must thus appear with a negative sign if the sum of probabilities is to be 1.

Kullback-Leibler divergence: The KL divergence is traditionally defined for probability distributions, in particular for sequences of non-negative terms. We need a more general measure, as our sequences p(⃗y) and b(⃗y) have alternating signs. But using the Sign-correspondence assumption, we observe that the usual definition of the KL divergence is still applicable, since the term inside the logarithm is non-negative:

$$D(b \,\|\, p) := \sum_{\vec{y} \in \mathrm{DomExt}^n} b(\vec{y}) \log \frac{b(\vec{y})}{p(\vec{y})} = \sum_{\vec{y} \in \mathrm{DomExt}^n} b(\vec{y}) \log \frac{|b(\vec{y})|}{|p(\vec{y})|}.$$

Moreover, the following Lemma shows that the two properties of the KL divergence that make it suitable for distance-minimization are still valid.

Lemma 1. Let b(.) and p(.) be (possibly negative) weight functions on the same domain D, with the property that they agree in sign on all states (i.e., $\forall \vec{y} \in D: \mathrm{sign}(b(\vec{y})) = \mathrm{sign}(p(\vec{y}))$), and that they sum to the same constant (i.e., $\sum_{\vec{y}} b(\vec{y}) = \sum_{\vec{y}} p(\vec{y}) = c$). Then the KL divergence D(b ∥ p) satisfies D(b ∥ p) ≥ 0 and D(b ∥ p) = 0 ⇔ b ≡ p.

The proof is essentially identical to that of the equivalent statement about the KL divergence of probability distributions. We omit it here for lack of space.

Minimizing D(b ∥ p): We write $p(\vec{y}) = \mathrm{sign}(p(\vec{y})) \cdot |p(\vec{y})|$, and analogously for b(⃗y). This allows us to isolate the signs, and the minimization follows exactly the steps of the standard BP derivation, namely we write a set of equations characterizing stationary points of D(b ∥ p). At the end, using the Sign-alternation assumption, we are able to implant the signs back.
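Lemma 1 can be checked numerically. The sketch below is ours; the signed weight vectors are invented examples that match in sign and sum to the same constant, as the Lemma requires.

```python
import math

def signed_kl(b, p):
    """Generalized KL divergence for sign-matched weight sequences:
    D(b||p) = sum_y b(y) * log(|b(y)| / |p(y)|)."""
    # Sign-correspondence must hold for the definition to make sense.
    assert all((bi > 0) == (pi > 0) or bi == pi == 0 for bi, pi in zip(b, p))
    return sum(bi * math.log(abs(bi) / abs(pi))
               for bi, pi in zip(b, p) if bi != 0)

# Signed weights with matching signs, both summing to 1.0 (invented example).
p = [0.6, 0.9, -0.5]
b = [0.5, 1.0, -0.5]
print(signed_kl(b, b) == 0.0)   # D(b||b) = 0
print(signed_kl(b, p) > 0.0)    # D(b||p) > 0 despite the negative terms
```

Without the equal-sum condition the non-negativity claim fails, which is why both the Normalization assumption and the sign agreement appear as hypotheses of the Lemma.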
BP equations: The resulting modified BP updates (denoted BP(−1)) are, for $y_i \in \mathrm{DomExt}$:

$$n_{i \to \alpha}(y_i) = \prod_{\beta \ni i,\ \beta \neq \alpha} m_{\beta \to i}(y_i) \qquad (3)$$

$$m_{\alpha \to i}(y_i) \;\propto\; \sum_{\vec{y}_{\alpha \setminus i} \in \mathrm{DomExt}^{|\alpha|-1}} f'_\alpha(\vec{y}_\alpha) \prod_{j \in \alpha \setminus i} (-1)^{\delta(|y_j| \text{ is even})}\, n_{j \to \alpha}(y_j) \qquad (4)$$

(Almost equivalent to standard BP, except for the (−1) term.) One would iterate these equations from a suitable starting point to find a fixed point, and then obtain the beliefs $b_i(y_i)$ and $b_\alpha(\vec{y}_\alpha)$ (i.e., estimates of marginal sums) using the Sign-alternation assumption and the standard BP relations:

$$b_i(y_i) \;\propto\; (-1)^{\delta(|y_i| \text{ is even})} \prod_{\alpha \ni i} m_{\alpha \to i}(y_i), \qquad b_\alpha(\vec{y}_\alpha) \;\propto\; (-1)^{\#e(\vec{y}_\alpha)} f'_\alpha(\vec{y}_\alpha) \prod_{i \in \alpha} n_{i \to \alpha}(y_i) \qquad (5)$$

To approximately count the number of clusters in large problems for which exact cluster counting or exact Z(−1) evaluation is infeasible, we employ the generic BP(−1) scheme derived above. We substitute the extended factors $f'_\alpha(\vec{y}_\alpha)$ into Equations (3) and (4), iterate from a random initial starting point to find a fixed point, and then use Equations (5) to compute the beliefs. The actual estimate of Z(−1) is obtained with the standard BP formula (with signs properly taken care of), where $d_i$ is the degree of the variable node $y_i$ in the factor graph:

$$\log Z_{BP(-1)} := -\sum_\alpha \sum_{\vec{y}_\alpha} b_\alpha(\vec{y}_\alpha) \log |b_\alpha(\vec{y}_\alpha)| \;+\; \sum_i (d_i - 1) \sum_{y_i} b_i(y_i) \log |b_i(y_i)| \qquad (6)$$

6 Experimental Evaluation

We empirically evaluate the accuracy of our Z(−1) and ZBP(−1) approximations on an ensemble of random graph 3-coloring instances. The results are discussed in this section.

Z(−1) vs. the number of clusters. The left panel of Figure 1 compares the number of clusters (on the x-axis, log-scale) with Z(−1) (on the y-axis, log-scale) for 2,500 colorable random 3-COL instances on graphs with 20, 50, and 100 vertices with average vertex degree ranging between 1.0 and 4.7 (the threshold for 3-colorability). As can be seen, the Z(−1) expression captures the number of clusters almost exactly. The inaccuracies come mostly from low graph density regions; in all instances we tried with density > 3.0, the Z(−1) expression was exact.
We remark that although uncolorable instances were not considered in this comparison, Z(−1) = 0 = num-clusters by construction. It is worth noting that for tree-structured graphs (with more than one vertex), the Z(−1) expression gives 0 for any k ≥ 3 colors although there is exactly one solution cluster. Moreover, given a disconnected graph with at least one tree component, Z(−1) also evaluates to 0, as it is the product of Z(−1) values over the different components. We have thus removed all tree components from the generated graphs prior to computing Z(−1); tree components are easily identified and removing them does not change the number of clusters. For low graph densities, there are still some instances for which Z(−1) evaluates to 0; these instances are not visible in Figure 1 due to the log-log scale. In fact, all our instances with fewer than 5 clusters have Z(−1) = 0. This is because of other substructures for which Z(−1) evaluates to 0, e.g., chordless cycles of length not divisible by 3 (for k = 3 coloring) with attached trees. These structures, however, become rare as the density increases.

Figure 1: Left: Z(−1) vs. number of clusters in random 3-COL problems with 20, 50 and 100 vertices, and average vertex degree between 1.0 and 4.7. Right: cluster marginals vs. Z(−1)-marginals for one instance of a random 3-COL problem with 100 vertices.

Figure 2: Average ZBP(−1) and Z(−1) for 3-COL vs. average vertex degree for small and large random graphs.

Z(−1) marginals vs. cluster marginals.
For a given problem instance, we can define the cluster marginal of a variable x_i to be the fraction of solution clusters in which x_i only appears with one particular value (i.e., x_i is a backbone of the cluster). Since Z(−1) counts the number of clusters well, it is natural to ask whether it is also possible to obtain the marginal information from it. Indeed, Z(−1) does provide an estimate of the cluster marginals, and we call them Z(−1)-marginals. Recall that the semantics of factors in the extended domain is such that a variable can assume a set of values only if every value in the set yields a solution to the problem. This extends to the Z(−1) estimate of the number of clusters, and one can therefore use the principle of inclusion-exclusion to compute the number of clusters where a variable can only assume one particular value. The definition of Z(−1) conveniently provides for correct signs, and the number of clusters where x_i is fixed to v_i is thus estimated by $\sum_{y_i \ni v_i} Z_{(-1)}(y_i)$, where $Z_{(-1)}(y_i)$ is the marginal sum of Z(−1). The Z(−1)-marginal is obtained by dividing this quantity by Z(−1). The right panel of Figure 1 shows the results on one random 3-COL problem with 100 vertices. The plot shows cluster marginals and Z(−1)-marginals for one color; the points correspond to individual variables. The Z(−1)-marginals are close to perfect. This is a typical situation, although it is important to mention that Z(−1)-marginals are not always correct, or even non-negative. They are merely an estimate of the true cluster marginals, and how well they work depends on the solution space structure at hand. They are exact if the solution space decomposes into separated hypercubes and, as the figure shows, remarkably accurate also for random coloring instances.

The number of clusters vs. ZBP(−1). Figure 3 depicts a comparison between ZBP(−1) and Z(−1) for the 3-COL problem on colorable random graphs of various sizes and graph densities.
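The marginal sum $\sum_{y_i \ni v_i} Z_{(-1)}(y_i)$ can again be evaluated by brute force on tiny instances. The sketch below is ours, for illustration: on the triangle K3 with 3 colors, exactly 2 of the 6 clusters fix vertex 0 to color 0, and the inclusion-exclusion sum recovers exactly that count (so the Z(−1)-marginal is 2/6 = 1/3).

```python
from itertools import product, combinations

def z_minus_one_restricted(n, edges, k, var, val):
    """Estimate (# clusters with variable `var` backboned to `val`) as the
    inclusion-exclusion sum over extended values containing `val`:
    sum over y with val in y[var] of (-1)^{#even-size sets} * F'(y)."""
    colors = range(k)
    dom_ext = [frozenset(s) for r in range(1, k + 1)
               for s in combinations(colors, r)]
    total = 0
    for y in product(dom_ext, repeat=n):
        if val in y[var] and all(not (y[u] & y[v]) for u, v in edges):
            num_even = sum(1 for s in y if len(s) % 2 == 0)
            total += (-1) ** num_even
    return total

# Triangle K3, 3 colors: 6 clusters overall; the 2 colorings fixing vertex 0
# to color 0 are each their own cluster.
tri = [(0, 1), (1, 2), (0, 2)]
print(z_minus_one_restricted(3, tri, 3, var=0, val=0))  # 2
```

Dividing by the full Z(−1) (here 6) gives the Z(−1)-marginal 1/3, matching the fraction of clusters in which vertex 0 is a backbone with color 0.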
It compares Z(−1) (on the x-axis, log scale) with ZBP(−1) (y-axis, log scale) for 1,300 colorable 3-COL instances on random graphs with 50, 100, and 200 vertices, with average vertex degree ranging from 1.0 to 4.7. The plots show that BP is quite accurate in estimating Z(−1) for individual instances, which in turn captures the number of clusters. Instances that are not 3-colorable are not shown; for them, BP in general incorrectly estimates a non-zero number of clusters. Estimates on very large graphs and for various graph densities. Figure 2 shows similar data from a different perspective: what is shown is a rescaled average estimate of the number of clusters (y-axis) for average vertex degrees 1.0 to 4.7 (x-axis). The average is taken across different colorable instances of a given size, and the rescaling assumes that the number of clusters = exp(|V|·Σ), where Σ is a constant independent of the number of vertices [3]. The three curves show, respectively, BP's estimate for graphs with 100,000 vertices, BP's estimate for graphs with 100 vertices, and Z(−1) for the same graphs of size 100. The averages are computed across 3,000 instances of the small graphs, and only 10 instances of the large ones, where the instance-to-instance variability is practically nonexistent. Figure 3: ZBP(−1) compared to Z(−1) for the 3-COL problem on random graphs with 50, 100 and 200 vertices and average vertex degree in the range 1.0 to 4.7. The fact that the curves nicely overlay shows that BP(−1) computes Z(−1) very accurately on average for colorable instances (where we can compare it with exact values), and that the estimate remains accurate for large problems. Note that the Survey Propagation algorithm developed by Braunstein et al.
[3] also aims at computing the number of certain clusters in the solution space. However, SP counts only the number of clusters with a “typical size”, and would show non-zero values in Figure 2 only for average vertex degrees between 4.42 and 4.7. Our algorithm counts clusters of all sizes, and is very accurate in the entire range of graph densities. 7 Conclusion We discuss a purely combinatorial construction for estimating the number of solution clusters in graph coloring problems with very high accuracy. The technique uses a hypercube-based inclusion-exclusion argument coupled with solution counting, and lends itself to an application of a modified belief propagation algorithm. In this way, the number of clusters in huge random graph coloring instances can be accurately and efficiently estimated. Our preliminary investigation has revealed that it is possible to use combinatorial arguments to formally prove that the cluster counts estimated by Z(−1) are exact on certain kinds of solution spaces (not necessarily only for graph coloring). We hope that such insights and the cluster-focused picture will lead to new techniques for solving hard combinatorial problems and for bounding solvability transitions in random problem ensembles. References [1] D. Achlioptas and F. Ricci-Tersenghi. On the solution-space geometry of random constraint satisfaction problems. In 38th STOC, pages 130–139, Seattle, WA, May 2006. [2] J. Ardelius, E. Aurell, and S. Krishnamurthy. Clustering of solutions in hard satisfiability problems. J. Statistical Mechanics, P10012, 2007. [3] A. Braunstein, R. Mulet, A. Pagnani, M. Weigt, and R. Zecchina. Polynomial iterative algorithms for coloring and analyzing random graphs. Physical Review E, 68:036702, 2003. [4] R. E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, 35(8):677–691, 1986. [5] A. Darwiche. New advances in compiling CNF into decomposable negation normal form. In 16th European Conf.
on AI, pages 328–332, Valencia, Spain, Aug. 2004. [6] A. Darwiche. Decomposable negation normal form. J. ACM, 48(4):608–647, 2001. [7] A. Hartmann, A. Mann, and W. Radenback. Clusters and solution landscapes for vertex-cover and SAT problems. In Workshop on Physics of Distributed Systems, Stockholm, Sweden, May 2008. [8] L. Kroc, A. Sabharwal, and B. Selman. Counting solution clusters of combinatorial problems using belief propagation, 2008. (in preparation). [9] F. Krzakala, A. Montanari, F. Ricci-Tersenghi, G. Semerjian, and L. Zdeborova. Gibbs states and the set of solutions of random constraint satisfaction problems. PNAS, 104(25):10318–10323, June 2007. [10] D. Mackay, J. Yedidia, W. Freeman, and Y. Weiss. A conversation about the Bethe free energy and sum-product, 2001. URL citeseer.ist.psu.edu/mackay01conversation.html. [11] M. Mézard, G. Parisi, and R. Zecchina. Analytic and algorithmic solution of random satisfiability problems. Science, 297(5582):812–815, 2002. [12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005.
2008
Supervised Bipartite Graph Inference Yoshihiro Yamanishi Mines ParisTech CBIO Institut Curie, INSERM U900, 35 rue Saint-Honore, Fontainebleau, F-77300 France yoshihiro.yamanishi@ensmp.fr Abstract We formulate the problem of bipartite graph inference as a supervised learning problem, and propose a new method to solve it from the viewpoint of distance metric learning. The method involves the learning of two mappings of the heterogeneous objects to a unified Euclidean space representing the network topology of the bipartite graph, where the graph is easy to infer. The algorithm can be formulated as an optimization problem in a reproducing kernel Hilbert space. We report encouraging results on the problem of compound-protein interaction network reconstruction from chemical structure data and genomic sequence data. 1 Introduction The problem of bipartite graph inference is to predict the presence or absence of edges between heterogeneous objects known to form the vertices of the bipartite graph, based on observations about the heterogeneous objects. This problem is becoming a challenging issue in bioinformatics and computational biology, because many biological networks are represented by a bipartite graph structure, with vertices being heterogeneous molecules and edges being interactions between them. Examples include compound-protein interaction networks consisting of interactions between ligand compounds and target proteins, metabolic networks consisting of interactions between substrates and enzymes, and host-pathogen protein-protein networks consisting of interactions between host proteins and pathogen proteins. In particular, the prediction of compound-protein interaction networks is a key issue in genomic drug discovery, because drug development depends heavily on the detection of interactions between ligand compounds and target proteins.
The human genome sequencing project has made available the sequences of a large number of human proteins, while the high-throughput screening of large-scale chemical compound libraries is enabling us to explore the chemical space of possible compounds [1]. However, our knowledge of such compound-protein interactions is very limited. It is therefore important to detect unknown compound-protein interactions in order to identify potentially useful compounds, such as imaging probes and drug leads, from huge amounts of chemical and genomic data. A major traditional method for predicting compound-protein interactions is docking simulation [2]. However, docking simulation requires 3D structure information for the target proteins. Most pharmaceutically useful target proteins are membrane proteins such as ion channels and G protein-coupled receptors (GPCRs). It is still extremely difficult and expensive to determine the 3D structures of membrane proteins, which limits the use of docking. There is therefore a strong incentive to develop new useful prediction methods based on protein sequences, chemical compound structures, and the available known compound-protein interaction information simultaneously. Recently, several supervised methods for inferring a simple graph structure (e.g., protein network, enzyme network) have been developed in the framework of kernel methods [3, 4, 5]. The corresponding algorithms of the previous methods are based on kernel canonical correlation analysis [3], distance metric learning [4], and the em-algorithm [5], respectively. However, the previous methods can only predict edges between homogeneous objects, such as protein-protein interactions and enzyme-enzyme relations; it is not possible to predict edges between heterogeneous objects, such as compound-protein interactions and substrate-enzyme interactions, because their frameworks are based only on a simple graph structure with homogeneous vertices.
In contrast, in this paper we address the problem of supervised learning of a bipartite graph rather than a simple graph. In this contribution, we develop a new supervised method for inferring the bipartite graph, borrowing the idea of distance metric learning used in the framework for inferring the simple graph [4]. The proposed method involves the learning of two mappings of the heterogeneous objects to a unified Euclidean space representing the network topology of the bipartite graph, where the graph is easy to infer. The algorithm can be formulated as an optimization problem in a reproducing kernel Hilbert space. To our knowledge, there are no statistical methods to predict bipartite graphs from observed data in a supervised context. In the results, we show the usefulness of the proposed method on the prediction of compound-protein interaction networks from chemical structure data and genomic sequence data. Figure 1: An illustration of the supervised bipartite graph inference problem (legend: vertices with attribute 1 and attribute 2 in the known graph, additional vertices with attribute 1 and attribute 2, known edges, and predicted edges). 2 Formalism of the supervised bipartite graph inference problem Let us formally define the supervised bipartite graph inference problem. Suppose that we are given an undirected bipartite graph G = (U + V, E), where U = (u_1, · · · , u_{n1}) and V = (v_1, · · · , v_{n2}) are sets of heterogeneous vertices and E ⊂ (U × V) ∪ (V × U) is a set of edges. Note that the attribute of U is completely different from that of V. The problem is, given additional sets of vertices U′ = (u′_1, · · · , u′_{m1}) and V′ = (v′_1, · · · , v′_{m2}), to infer a set of new edges E′ ⊂ U′ × (V + V′) ∪ V′ × (U + U′) ∪ (U + U′) × V′ ∪ (V + V′) × U′ involving the additional vertices in U′ and V′. Figure 1 shows an illustration of this problem.
The prediction of compound-protein interaction networks is a typical problem which is suitable in this framework from a practical viewpoint. In this case, U corresponds to a set of compounds (known ligands), V corresponds to a set of proteins (known targets), and E corresponds to a set of known compound-protein interactions (known ligand-target interactions). U′ corresponds to a set of additional compounds (new ligand candidates), V′ corresponds to a set of additional proteins (new target candidates), and E′ corresponds to a set of unknown compound-protein interactions (potential ligand-target interactions). The prediction is performed based on available observations about the vertices. The sets of vertices U = (u_1, · · · , u_{n1}), V = (v_1, · · · , v_{n2}), U′ = (u′_1, · · · , u′_{m1}) and V′ = (v′_1, · · · , v′_{m2}) are represented by sets of observed data X = (x_1, · · · , x_{n1}), Y = (y_1, · · · , y_{n2}), X′ = (x′_1, · · · , x′_{m1}) and Y′ = (y′_1, · · · , y′_{m2}), respectively. For example, compounds are represented by molecular structures and proteins are represented by amino acid sequences. The question is how to predict unknown compound-protein interactions from compound structures and protein sequences using prior knowledge about known compound-protein interactions. The sets U and V (X and Y) are referred to as training sets, and heterogeneous objects are represented by u and v in the sense of vertices on the bipartite graph, or by x and y in the sense of objects in the observed data below. In order to deal with the data heterogeneity and take advantage of recent work on kernel similarity functions on general data structures [6], we will assume that X is a set endowed with a positive definite kernel k_u, that is, a symmetric function k_u : X² → R satisfying Σ_{i,j=1}^{n1} a_i a_j k_u(x_i, x_j) ≥ 0 for any n1 ∈ N, (a_1, a_2, · · · , a_{n1}) ∈ R^{n1} and (x_1, x_2, · · · , x_{n1}) ∈ X.
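On a finite sample, this positive-definiteness condition can be checked numerically through the eigenvalues of the Gram matrix. A minimal sketch, using a Gaussian RBF kernel as a stand-in (the paper's chemical and sequence kernels are not reproduced here):

```python
import numpy as np

def satisfies_pd_condition(K, tol=1e-10):
    # sum_{i,j} a_i a_j K[i, j] >= 0 for every coefficient vector a holds
    # iff the symmetric Gram matrix K has no eigenvalue below zero
    # (below -tol here, to tolerate floating-point round-off).
    K = np.asarray(K, dtype=float)
    return bool(np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() >= -tol)

X = np.random.default_rng(1).normal(size=(8, 3))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
print(satisfies_pd_condition(np.exp(-d2)))                     # True: RBF Gram matrix
print(satisfies_pd_condition(np.array([[0., 2.], [2., 0.]])))  # False: indefinite
```

The same check is what motivates the diagonal shift applied to the Smith-Waterman similarity matrix in the experimental section.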
Similarly, we will assume that Y is a set endowed with a positive definite kernel k_v, that is, a symmetric function k_v : Y² → R satisfying Σ_{i,j=1}^{n2} a_i a_j k_v(y_i, y_j) ≥ 0 for any n2 ∈ N, (a_1, a_2, · · · , a_{n2}) ∈ R^{n2} and (y_1, y_2, · · · , y_{n2}) ∈ Y. 3 Distance metric learning (DML) for the bipartite graph inference 3.1 Euclidean embedding and distance metric learning (DML) Suppose that a bipartite graph must be reconstructed from the similarity information about n1 objects (x_1, · · · , x_{n1}) in X (observed data for U) and n2 objects (y_1, · · · , y_{n2}) in Y (observed data for V). One difficulty is that the attributes of the observed data differ between X and Y in nature, so it is not possible to evaluate the link between (x_1, · · · , x_{n1}) and (y_1, · · · , y_{n2}) from the observed data directly. For example, in the case of compounds and proteins, each x has a chemical graph structure and each y has a sequence structure, so the data structures completely differ between x and y. Therefore, we make the assumption that the n1 objects (x_1, · · · , x_{n1}) and n2 objects (y_1, · · · , y_{n2}) are implicitly embedded in a unified Euclidean space R^d, and a graph is inferred on those heterogeneous points by the nearest neighbor approach, i.e., putting an edge between heterogeneous points that are close to each other. We propose the following two-step procedure for the supervised bipartite graph inference: 1. embed the heterogeneous objects into a unified Euclidean space representing the network topology of the bipartite graph, where connected heterogeneous vertices are close to each other, through mappings f : X → R^d and g : Y → R^d; 2. apply the mappings f and g to X′ and Y′ respectively, and predict new edges between the heterogeneous objects if the distance between the points {f(x), x ∈ X ∪ X′} and {g(y), y ∈ Y ∪ Y′} is smaller than a fixed threshold δ. While the second step in this procedure is fixed, the first step can be optimized by supervised learning of f and g using the known bipartite graph.
To do so, we require the mappings f and g to map adjacent heterogeneous vertices in the known bipartite graph onto nearby positions in a unified Euclidean space R^d, in order to ensure that the known bipartite graph can be recovered to some extent by the nearest neighbor approach. Given functions f : X → R and g : Y → R, a possible criterion to assess whether connected (resp. disconnected) heterogeneous vertices are mapped onto similar (resp. dissimilar) points in R is the following: R(f, g) = [ Σ_{(u_i,v_j)∈E} (f(x_i) − g(y_j))² − Σ_{(u_i,v_j)∉E} (f(x_i) − g(y_j))² ] / Σ_{(u_i,v_j)∈U×V} (f(x_i) − g(y_j))².   (1) A small value of R(f, g) ensures that connected heterogeneous vertices tend to be closer than disconnected heterogeneous vertices in the sense of quadratic error. To represent the connectivity between heterogeneous vertices on the bipartite graph G = (U + V, E), we define a kind of adjacency matrix A_uv, where element (A_uv)_{ij} is equal to 1 (resp. 0) if vertices u_i and v_j are connected (resp. disconnected). Note that the size of the matrix A_uv is n1 × n2. We also define a kind of degree matrix for the heterogeneous vertices, D_u and D_v, whose diagonal elements (D_u)_{ii} and (D_v)_{jj} are the degrees of vertices u_i and v_j (the numbers of edges involving vertices u_i and v_j), respectively. Note that all non-diagonal elements of D_u and D_v are zero, and the sizes of the matrices are n1 × n1 and n2 × n2, respectively. Let us denote by f_U = (f(x_1), · · · , f(x_{n1}))^T ∈ R^{n1} and g_V = (g(y_1), · · · , g(y_{n2}))^T ∈ R^{n2} the values taken by f and g on the training set. If we restrict f_U and g_V to have zero means, i.e., Σ_{i=1}^{n1} f(x_i) = 0 and Σ_{i=1}^{n2} g(y_i) = 0, then criterion (1) can be rewritten as follows: R(f, g) = 4 · [ (f_U; g_V)^T [ D_u  −A_uv ; −A_uv^T  D_v ] (f_U; g_V) ] / [ (f_U; g_V)^T (f_U; g_V) ] − 2,   (2) where (f_U; g_V) denotes the stacked column vector and [ · ; · ] a block matrix. To avoid the over-fitting problem and obtain meaningful solutions, we propose to regularize criterion (1) by a smoothness functional on f and g, following a classical approach in statistical learning [7, 8].
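The quadratic form appearing in the matrix rewriting of the criterion rests on the identity Σ_{(u_i,v_j)∈E} (f(x_i) − g(y_j))² = (f_U; g_V)^T [ D_u  −A_uv ; −A_uv^T  D_v ] (f_U; g_V), which can be confirmed numerically; in the sketch below, random zero-mean toy vectors stand in for f_U and g_V:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 5, 4
Auv = (rng.random((n1, n2)) < 0.5).astype(float)   # toy bipartite adjacency
fU = rng.normal(size=n1); fU -= fU.mean()          # zero-mean stand-ins for f_U
gV = rng.normal(size=n2); gV -= gV.mean()          # and g_V

# Edge term of criterion (1), summed directly over the adjacency matrix.
direct = sum((fU[i] - gV[j]) ** 2
             for i in range(n1) for j in range(n2) if Auv[i, j])

# The same quantity as a quadratic form with the degree/adjacency matrices.
Du, Dv = np.diag(Auv.sum(1)), np.diag(Auv.sum(0))
q = np.concatenate([fU, gV])
M = np.block([[Du, -Auv], [-Auv.T, Dv]])
print(np.isclose(direct, q @ M @ q))   # True
```

The identity follows by expanding each square: the f² and g² terms collect into the degree matrices, and the cross terms into −2 f_U^T A_uv g_V.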
We assume that f and g belong to the reproducing kernel Hilbert spaces (r.k.h.s.) H_U and H_V defined by the kernels k_u on X and k_v on Y, and use the norms of f and g as regularization operators. Let us denote by ||f|| and ||g|| the norms of f and g in H_U and H_V. Then, the regularized criterion to be minimized becomes: R(f, g) = [ (f_U; g_V)^T [ D_u  −A_uv ; −A_uv^T  D_v ] (f_U; g_V) + λ1||f||² + λ2||g||² ] / [ (f_U; g_V)^T (f_U; g_V) ],   (3) where λ1 and λ2 are regularization parameters which control the trade-off between minimizing the original criterion (1) and ensuring that the solution has a small norm in the r.k.h.s. The criterion is defined up to a scaling of the functions, and the solution is therefore a direction in the r.k.h.s. Here we set additional constraints: we impose the norms ||f|| = ||g|| = 1, which corresponds to an orthogonal projection onto the direction selected in the r.k.h.s. Note that the criterion can be used for extracting a one-dimensional feature of the objects. In order to obtain a d-dimensional feature representation of the objects, we propose to iterate the minimization of the regularized criterion (3) under orthogonality constraints in the r.k.h.s., that is, we recursively define the p-th features f_p and g_p for p = 1, · · · , d as follows: (f_p, g_p) = arg min [ (f_U; g_V)^T [ D_u  −A_uv ; −A_uv^T  D_v ] (f_U; g_V) + λ1||f||² + λ2||g||² ] / [ (f_U; g_V)^T (f_U; g_V) ]   (4) under the orthogonality constraints f ⊥ f_1, · · · , f_{p−1} and g ⊥ g_1, · · · , g_{p−1}. In the prediction process, we map any new objects x′ ∈ X′ and y′ ∈ Y′ by the mappings f and g respectively, and predict new edges between the heterogeneous objects if the distance between the points {f(x), x ∈ X ∪ X′} and {g(y), y ∈ Y ∪ Y′} is smaller than a fixed threshold δ. 3.2 Algorithm Let k_u and k_v be the kernels on the sets X and Y, where both kernels are centered in H_U and H_V.
According to the representer theorem [9] in the r.k.h.s., for any p = 1, · · · , d, the solution to equation (4) has the following expansions: f_p(x) = Σ_{j=1}^{n1} α_{p,j} k_u(x_j, x), g_p(y) = Σ_{j=1}^{n2} β_{p,j} k_v(y_j, y),   (5) for some vectors α_p = (α_{p,1}, · · · , α_{p,n1})^T ∈ R^{n1} and β_p = (β_{p,1}, · · · , β_{p,n2})^T ∈ R^{n2}. Let K_u and K_v be the Gram matrices of the kernels k_u and k_v, such that (K_u)_{ij} = k_u(x_i, x_j), i, j = 1, · · · , n1 and (K_v)_{ij} = k_v(y_i, y_j), i, j = 1, · · · , n2. The corresponding feature vectors f_{p,U} and g_{p,V} can be written as f_{p,U} = K_u α_p and g_{p,V} = K_v β_p, respectively. The squared norms of the features f and g in H_U and H_V are equal to ||f||² = α^T K_u α and ||g||² = β^T K_v β, so the normalization constraints for f and g can be written as α^T K_u α = β^T K_v β = 1. The orthogonality constraints f_p ⊥ f_q and g_p ⊥ g_q (p ≠ q) can be written as α_p^T K_u α_q = 0 and β_p^T K_v β_q = 0. Using the above representations, the minimization problem of R(f, g) is equivalent to finding α and β which minimize R(f, g) = [ (α; β)^T [ K_u D_u K_u  −K_u A_uv K_v ; −K_v A_uv^T K_u  K_v D_v K_v ] (α; β) + λ1 α^T K_u α + λ2 β^T K_v β ] / [ (α; β)^T [ K_u K_u  0 ; 0  K_v K_v ] (α; β) ],   (6) under the following orthogonality constraints: α^T K_u α_1 = · · · = α^T K_u α_{p−1} = 0 and β^T K_v β_1 = · · · = β^T K_v β_{p−1} = 0. Taking the differential of equation (6) with respect to α and β and setting it to zero, the solution for the first vectors α_1 and β_1 can be obtained as the eigenvector associated with the smallest (non-negative) eigenvalue of the following generalized eigenvalue problem: [ K_u D_u K_u + λ1 K_u  −K_u A_uv K_v ; −K_v A_uv^T K_u  K_v D_v K_v + λ2 K_v ] (α; β) = ρ [ K_u K_u  0 ; 0  K_v K_v ] (α; β).   (7) Sequentially, the solution vectors α_1, · · · , α_d and β_1, · · · , β_d can be obtained as the eigenvectors associated with the d smallest (non-negative) eigenvalues of the above generalized eigenvalue problem.
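Problem (7) is a dense generalized symmetric eigenvalue problem, so it can be solved with standard linear algebra. The sketch below is our own illustration on toy random data with RBF Gram matrices and arbitrarily chosen regularizers (not the paper's chemical or sequence kernels); it whitens with a Cholesky factor of the right-hand matrix and applies a symmetric eigensolver:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_gram(X, gamma=1.0):
    # Gram matrix of a Gaussian RBF kernel (positive definite for distinct points).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

n1, n2, lam1, lam2, d = 6, 5, 0.1, 0.1, 2
X = rng.normal(size=(n1, 3))                        # toy "compound" descriptors
Y = rng.normal(size=(n2, 4))                        # toy "protein" descriptors
Auv = (rng.random((n1, n2)) < 0.4).astype(float)    # toy bipartite adjacency
Du, Dv = np.diag(Auv.sum(1)), np.diag(Auv.sum(0))
Ku, Kv = rbf_gram(X), rbf_gram(Y)

# Left- and right-hand matrices of the generalized eigenvalue problem (7).
A = np.block([[Ku @ Du @ Ku + lam1 * Ku, -Ku @ Auv @ Kv],
              [-Kv @ Auv.T @ Ku, Kv @ Dv @ Kv + lam2 * Kv]])
B = np.block([[Ku @ Ku, np.zeros((n1, n2))],
              [np.zeros((n2, n1)), Kv @ Kv]])

# Whiten with a Cholesky factor of B:  A v = rho B v  <=>  (L^-1 A L^-T) w = rho w.
L = np.linalg.cholesky(B)
C = np.linalg.solve(L, np.linalg.solve(L, A).T).T
rho, W = np.linalg.eigh((C + C.T) / 2)              # eigenvalues in ascending order
V = np.linalg.solve(L.T, W)                         # columns are (alpha; beta) pairs

alphas, betas = V[:n1, :d], V[n1:, :d]              # d smallest eigenpairs
```

The expansion coefficients then yield the features f_{p,U} = Ku @ alphas[:, p] and g_{p,V} = Kv @ betas[:, p]; for real use one would additionally rescale each pair to satisfy the normalization α^T K_u α = β^T K_v β = 1.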
4 Relationship with other methods The process of embedding heterogeneous objects into the same space is similar to correspondence analysis (CA) [10] and Co-Occurrence Data Embedding (CODE) [11], which are unsupervised methods to embed the rows and columns of a contingency table (the adjacency matrix A_uv in this study) into a low-dimensional Euclidean space. However, the critical differences with our proposed method are as follows: i) the above methods cannot use observed data (X and Y in this study) about the heterogeneous nodes for prediction, because the algorithms are based only on co-occurrence information (A_uv in this study), and ii) we need to define a new representation of not only the objects in the training set but also additional objects outside of the training set. Therefore, it is not possible to directly apply the above methods to the bipartite graph inference problem. Recall that the goal of ordinary CA is to find embedding functions φ : U → R and ψ : V → R which maximize the following correlation coefficient: corr(φ, ψ) = Σ_{i,j} I{(u_i, v_j) ∈ E} φ(u_i) ψ(v_j) / ( sqrt(Σ_i d_{u_i} φ(u_i)²) · sqrt(Σ_j d_{v_j} ψ(v_j)²) ),   (8) where I{·} is an indicator function which returns 1 if the argument is true and 0 otherwise, d_{u_i} (resp. d_{v_j}) is the degree of node u_i (resp. v_j), and Σ_i φ(u_i) = 0 (resp. Σ_j ψ(v_j) = 0) is assumed [10]. Here we consider an extension of CA using the idea of kernel methods so that it can work in the context of the bipartite graph inference problem. The method is referred to as kernel correspondence analysis (KCA) below. To formulate KCA, we propose to replace the embedding functions φ : U → R and ψ : V → R by functions f : X → R and g : Y → R, where f and g belong to the r.k.h.s. H_U and H_V defined by the kernels k_u on X and k_v on Y.
Then, we consider maximizing the following regularized correlation coefficient: corr(f, g) = Σ_{i,j} I{(u_i, v_j) ∈ E} f(x_i) g(y_j) / ( sqrt(Σ_i d_{u_i} f(x_i)² + λ1||f||²) · sqrt(Σ_j d_{v_j} g(y_j)² + λ2||g||²) ),   (9) where λ1 and λ2 are regularization parameters which control the trade-off between maximizing the original correlation coefficient between the two features and ensuring that the solution has a small norm in the r.k.h.s. In order to obtain a d-dimensional feature representation and deal with the scale issue, we propose to iterate the maximization of the regularized correlation coefficient (9) under orthogonality constraints in the r.k.h.s., that is, we recursively define the p-th features f_p and g_p for p = 1, · · · , d as (f_p, g_p) = arg max corr(f, g) under the orthogonality constraints f ⊥ f_1, · · · , f_{p−1} and g ⊥ g_1, · · · , g_{p−1} and the normalization constraints ||f|| = ||g|| = 1. Using the function expansions in equation (5) and the related matrix representations defined in the previous section, the maximization problem of the regularized correlation coefficient in equation (9) is equivalent to finding α and β which maximize corr(f, g) = α^T K_u A_uv K_v β / ( sqrt(α^T K_u D_u K_u α + λ1 α^T K_u α) · sqrt(β^T K_v D_v K_v β + λ2 β^T K_v β) ).   (10) Taking the differential of equation (10) with respect to α and β and setting it to zero, the solution for the first vectors α_1 and β_1 can be obtained as the eigenvector associated with the largest eigenvalue of the following generalized eigenvalue problem: [ 0  K_u A_uv K_v ; K_v A_uv^T K_u  0 ] (α; β) = ρ [ K_u D_u K_u + λ1 K_u  0 ; 0  K_v D_v K_v + λ2 K_v ] (α; β).   (11) Sequentially, the solution vectors α_1, · · · , α_d and β_1, · · · , β_d can be obtained as the eigenvectors associated with the d largest eigenvalues of the above generalized eigenvalue problem. The final form of KCA is similar to that of kernel canonical correlation analysis (KCCA) [12, 13], so KCA can be regarded as a variant of KCCA.
However, the critical differences between KCA and KCCA are as follows: i) in KCCA the objects are the same across the two data sets, while in KCA the objects differ across the two data sets, and ii) KCCA cannot deal with co-occurrence information about the objects. In the experiment below, we are interested in the performance comparison between the distance learning in DML and the correlation maximization in KCA. A similar extension might be possible for CODE as well, but it is beyond the scope of this paper. 5 Experiment 5.1 Data In this study we focus on compound-protein interaction networks made by four pharmaceutically useful protein classes: enzymes, ion channels, G protein-coupled receptors (GPCRs), and nuclear receptors. The information about compound-protein interactions was obtained from the KEGG BRITE [14], SuperTarget [15] and DrugBank [16] databases. The number of known interactions involving enzymes, ion channels, GPCRs, and nuclear receptors is 5449, 3020, 1124, and 199, respectively. The number of proteins involved in the interactions is 1062, 242, 134, and 38, respectively, and the number of compounds involved in the interactions is 1123, 475, 417, and 115, respectively. The compound set includes not only drugs but also experimentally confirmed ligand compounds. These data are regarded as gold standard sets to evaluate the prediction performance below. Chemical structures of the compounds and amino acid sequences of the human proteins were obtained from the KEGG database [14]. We computed the kernel similarity values of the chemical structures between compounds using the SIMCOMP algorithm [17], where the kernel similarity value between two compounds is computed as the Tanimoto coefficient, defined as the ratio of common substructures between two compounds based on a graph alignment algorithm. We computed the sequence similarities between the proteins using Smith-Waterman scores based on the local alignment between two amino acid sequences [18].
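The Tanimoto coefficient itself is simply the ratio of shared to total substructures. A minimal set-based sketch with made-up substructure labels (SIMCOMP derives the common substructures from a graph alignment, which is not reproduced here):

```python
def tanimoto(a, b):
    # Tanimoto coefficient of two substructure sets: |a & b| / |a | b|.
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical substructure labels for two compounds: 2 shared out of 4 total.
print(tanimoto({"C=O", "c1ccccc1", "OH"}, {"C=O", "c1ccccc1", "NH2"}))  # 0.5
```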
In this study we used the above similarity measures as kernel functions, but the Smith-Waterman scores are not always positive definite, so we added an appropriate identity matrix such that the corresponding kernel Gram matrix is positive definite, which is related to [19]. All the kernel matrices are normalized such that all diagonals are ones. 5.2 Performance evaluation As a baseline method, we used the nearest neighbor (NN) method, because this idea has been used in traditional molecular screening in many public databases. Given a new ligand candidate compound, we find the known ligand compound (in the training set) sharing the highest structure similarity with the new compound, and predict the new compound to interact with the proteins known to interact with the nearest ligand compound. Likewise, given a new target candidate protein, we find the known target protein (in the training set) sharing the highest sequence similarity with the new protein, and predict the new protein to interact with the ligand compounds known to interact with the nearest target protein. Newly predicted compound-protein interaction pairs are assigned as prediction scores the highest structure or sequence similarity values involving the new compounds or new proteins, in order to draw the ROC curve below. We tested the three different methods NN, KCA, and DML on their abilities to reconstruct the four compound-protein interaction networks. We performed the following 5-fold cross-validation procedure: the gold standard set was split into 5 subsets of roughly equal size by compounds and proteins, each subset was then taken in turn as a test set, and the training was performed on the remaining 4 sets. We draw a receiver operating curve (ROC), the plot of true positives as a function of false positives based on various thresholds δ, where true positives are correctly predicted interactions and false positives are predicted interactions that are not present in the gold standard interactions. The performance was evaluated by the AUC (area under the ROC curve) score. The regularization parameter λ and the number of features d are optimized by applying internal cross-validation within the training set, with the AUC score as the target criterion, in the case of KCA and DML. To obtain robust results, we repeated the above cross-validation experiment five times, and computed the average and standard deviation of the resulting AUC scores.

Table 1: AUC (ROC scores) for each interaction class, where "train c.", "train p.", "test c.", and "test p." indicate training compounds, training proteins, test compounds and test proteins, respectively. Methods: nearest neighbor (NN), kernel correspondence analysis (KCA), distance metric learning (DML).

Data              Prediction class         NN              KCA             DML
Enzyme            i) test c. vs train p.   0.655 ± 0.011   0.741 ± 0.011   0.843 ± 0.006
                  ii) train c. vs test p.  0.758 ± 0.008   0.839 ± 0.009   0.878 ± 0.003
                  iii) test c. vs test p.  0.500 ± 0.000   0.692 ± 0.008   0.782 ± 0.013
                  iv) all c. vs all p.     0.684 ± 0.006   0.778 ± 0.008   0.852 ± 0.020
Ion channel       i) test c. vs train p.   0.712 ± 0.004   0.768 ± 0.008   0.800 ± 0.004
                  ii) train c. vs test p.  0.896 ± 0.008   0.927 ± 0.004   0.945 ± 0.002
                  iii) test c. vs test p.  0.500 ± 0.000   0.748 ± 0.004   0.771 ± 0.008
                  iv) all c. vs all p.     0.770 ± 0.004   0.838 ± 0.005   0.864 ± 0.002
GPCR              i) test c. vs train p.   0.714 ± 0.005   0.848 ± 0.002   0.882 ± 0.005
                  ii) train c. vs test p.  0.781 ± 0.026   0.895 ± 0.025   0.936 ± 0.004
                  iii) test c. vs test p.  0.500 ± 0.000   0.823 ± 0.038   0.864 ± 0.013
                  iv) all c. vs all p.     0.720 ± 0.013   0.866 ± 0.015   0.904 ± 0.003
Nuclear receptor  i) test c. vs train p.   0.715 ± 0.009   0.808 ± 0.018   0.832 ± 0.013
                  ii) train c. vs test p.  0.683 ± 0.010   0.784 ± 0.012   0.812 ± 0.036
                  iii) test c. vs test p.  0.500 ± 0.000   0.670 ± 0.053   0.747 ± 0.049
                  iv) all c. vs all p.     0.675 ± 0.004   0.784 ± 0.011   0.815 ± 0.024
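The AUC score used above equals the probability that a randomly chosen true interaction receives a higher prediction score than a randomly chosen non-interaction, with ties counted as one half. A minimal sketch of this rank-based computation on toy scores (not the paper's predictions):

```python
def auc(scores, labels):
    # AUC as a rank statistic: fraction of (positive, negative) pairs in
    # which the positive outscores the negative (ties count as one half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0]))  # 1.0: perfect ranking
print(auc([0.2, 0.8, 0.4, 0.9], [1, 1, 0, 0]))  # 0.25: mostly inverted
```

Equivalently, this is the area obtained by sweeping the threshold δ over the ROC curve described above.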
Table 1 shows the resulting AUC scores for different sets of predictions, depending on whether the compound and/or the protein were in the initial training set. Compounds and proteins in the training set are called training compounds and proteins, whereas those not in the training set are called test compounds and proteins. Four different classes are then possible: i) test compounds vs training proteins, ii) training compounds vs test proteins, iii) test compounds vs test proteins, and iv) all possible predictions (the average of the above three parts). Comparing the three methods, DML has the best performance for all four types of compound-protein interaction networks, and outperforms the other methods, KCA and NN, at a significant level. The poor performance of NN implies that raw compound structure or protein sequence similarities do not always reflect the tendency of interaction partners in true compound-protein interaction networks. Among the four prediction classes, predictions where neither the protein nor the compound is in the training set (iii) are the weakest, but even then reliable predictions are possible with DML. Note that the NN method cannot predict iii) test vs test interactions, because it depends on template information about known ligand compounds and known target proteins. These results suggest that the feature space learned by DML successfully represents the network topology of the bipartite graph structure of compound-protein networks, and that the correlation maximization learning used in KCA is not sufficient to reflect the network topology of the bipartite graph. 6 Concluding remarks In this paper, we developed a new supervised method to infer the bipartite graph from the viewpoint of distance metric learning (DML).
The originality of the proposed method lies in the embedding of the heterogeneous objects forming the vertices of the bipartite graph into a unified Euclidean space, and in the learning of the distance between heterogeneous objects with different data structures in the unified feature space. We also discussed the relationship with correspondence analysis (CA) and kernel canonical correlation analysis (KCCA). In the experiments, it was shown that the proposed method DML outperforms the other methods on the problem of compound-protein interaction network reconstruction from chemical structure and genomic sequence data. From a practical viewpoint, the proposed method is useful for the virtual screening of the huge numbers of ligand candidate compounds being generated with various biological assays and of target candidate proteins, toward genomic drug discovery. It should also be pointed out that the proposed method can be applied to other network prediction problems, such as metabolic network reconstruction, host-pathogen protein-protein interaction prediction, and customer-product recommendation systems, as long as they can be represented by bipartite graphs. References [1] C.M. Dobson. Chemical space and biology. Nature, 432:824–828, 2004. [2] M. Rarey, B. Kramer, T. Lengauer, and G. Klebe. A fast flexible docking method using an incremental construction algorithm. J Mol Biol, 261:470–489, 1996. [3] Y. Yamanishi, J.P. Vert, and M. Kanehisa. Protein network inference from multiple genomic data: a supervised approach. Bioinformatics, 20 Suppl 1:i363–370, 2004. [4] J.-P. Vert and Y. Yamanishi. Supervised graph inference. Advances in Neural Information Processing Systems, pages 1433–1440, 2005. [5] T. Kato, K. Tsuda, and K. Asai. Selective integration of multiple biological data for supervised network inference. Bioinformatics, 21:2488–2495, 2005. [6] B. Schölkopf, K. Tsuda, and J.P. Vert. Kernel Methods in Computational Biology. MIT Press, 2004. [7] G. Wahba.
Spline Models for Observational Data: Series in Applied Mathematics. SIAM, Philadelphia, 1990.
[8] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures. Neural Computation, 7:219–269, 1995.
[9] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[10] M.J. Greenacre. Theory and Applications of Correspondence Analysis. Academic Press, 1984.
[11] A. Globerson, G. Chechik, F. Pereira, and N. Tishby. Euclidean embedding of co-occurrence data. Advances in Neural Information Processing Systems, pages 497–504, 2005.
[12] S. Akaho. A kernel method for canonical correlation analysis. International Meeting of the Psychometric Society (IMPS2001), 2001.
[13] F.R. Bach and M.I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002.
[14] M. Kanehisa, S. Goto, M. Hattori, K.F. Aoki-Kinoshita, M. Itoh, S. Kawashima, T. Katayama, M. Araki, and M. Hirakawa. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res., 34(Database issue):D354–357, Jan 2006.
[15] S. Günther, M. Kuhn, M. Dunkel, et al. SuperTarget and Matador: resources for exploring drug-target relationships. Nucleic Acids Res, 2007.
[16] D.S. Wishart, C. Knox, A.C. Guo, D. Cheng, S. Shrivastava, D. Tzur, B. Gautam, and M. Hassanali. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Res, 2007.
[17] M. Hattori, Y. Okuno, S. Goto, and M. Kanehisa. Development of a chemical structure comparison method for integrated analysis of chemical and genomic information in the metabolic pathways. J. Am. Chem. Soc., 125:11853–11865, 2003.
[18] T.F. Smith and M.S. Waterman. Identification of common molecular subsequences. J Mol Biol, 147:195–197, 1981.
[19] H. Saigo, J.P. Vert, N. Ueda, and T. Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20:1682–1689, 2004.
Domain Adaptation with Multiple Sources Yishay Mansour Google Research and Tel Aviv Univ. mansour@tau.ac.il Mehryar Mohri Courant Institute and Google Research mohri@cims.nyu.edu Afshin Rostamizadeh Courant Institute New York University rostami@cs.nyu.edu Abstract This paper presents a theoretical analysis of the problem of domain adaptation with multiple sources. For each source domain, the distribution over the input points as well as a hypothesis with error at most ǫ are given. The problem consists of combining these hypotheses to derive a hypothesis with small error with respect to the target domain. We present several theoretical results relating to this problem. In particular, we prove that standard convex combinations of the source hypotheses may in fact perform very poorly and that, instead, combinations weighted by the source distributions benefit from favorable theoretical guarantees. Our main result shows that, remarkably, for any fixed target function, there exists a distribution weighted combining rule that has a loss of at most ǫ with respect to any target mixture of the source distributions. We further generalize the setting from a single target function to multiple consistent target functions and show the existence of a combining rule with error at most 3ǫ. Finally, we report empirical results for a multiple source adaptation problem with a real-world dataset. 1 Introduction A common assumption in theoretical models of learning such as the standard PAC model [16], as well as in the design of learning algorithms, is that training instances are drawn according to the same distribution as the unseen test examples. In practice, however, there are many cases where this assumption does not hold. There can be no hope for generalization, of course, when the training and test distributions vastly differ, but when they are less dissimilar, learning can be more successful. 
A typical situation is that of domain adaptation, where little or no labeled data is at one's disposal for the target domain, but large amounts of labeled data from a source domain somewhat similar to the target, or hypotheses derived from that source, are available instead. This problem arises in a variety of applications in natural language processing [4,7,10], speech processing [8,9,11,13–15], computer vision [12], and many other areas. This paper studies the problem of domain adaptation with multiple sources, which has also received considerable attention in areas such as natural language processing and speech processing. An example is the problem of sentiment analysis, which consists of classifying a text sample such as a movie review, a restaurant rating, a discussion-board post, or another web page. Information about a relatively small number of domains such as movies or books may be available, but little or none can be found for more difficult domains such as travel. We will consider the following problem of multiple source adaptation. For each source i ∈ [1, k], the learner receives the distribution D_i of the input points corresponding to that source as well as a hypothesis h_i with loss at most ǫ on that source. The learner's task consists of combining the k hypotheses h_i, i ∈ [1, k], to derive a hypothesis h with small loss with respect to the target distribution. The target distribution is assumed to be a mixture of the distributions D_i. We will discuss both the case where the mixture is known to the learner and the case where it is unknown. (Footnote 1: The distribution D_i is defined over the input points and bears no information about the labels. In practice, D_i is estimated from the large amounts of unlabeled points typically available from source i.)
An alternative set-up for domain adaptation with multiple sources is one where the learner is not supplied with a good hypothesis h_i for each source but instead has access to the labeled training data for each source domain. A natural solution then consists of combining the raw labeled data from each source domain to form a new sample more representative of the target distribution, and using that to train a learning algorithm. This set-up and the type of solutions just described have in fact been explored extensively in applications [8,9,11,13–15]. However, several empirical observations motivated our study of hypothesis combination, in addition to the theoretical simplicity and clarity of this framework. First, in some applications such as very large-vocabulary speech recognition, the original raw data used to derive each domain-dependent model is often no longer available [2,9]. This is because such models are typically obtained as a result of training based on many hours of speech, with files occupying hundreds of gigabytes of disk space, while the derived models require orders of magnitude less space. Thus, combining raw labeled data sets is not possible in such cases. Secondly, a combined data set can be substantially larger than each domain-specific data set, which can significantly increase the computational cost of training and make it prohibitive for some algorithms. Thirdly, combining labeled data sets requires the mixture parameters of the target distribution to be known, but it is not clear how to produce a hypothesis with a low error rate with respect to any mixture distribution. Few theoretical studies have been devoted to the problem of adaptation with multiple sources. Ben-David et al. [1] gave bounds for single source adaptation; then Blitzer et al. [3] extended that work to give a bound on the error rate of a hypothesis derived from a weighted combination of the source data sets for the specific case of empirical risk minimization. Crammer et al.
[5, 6] also addressed a problem where multiple sources are present, but the nature of the problem differs from adaptation since the distribution of the input points is the same for all these sources; only the labels change due to varying amounts of noise. We are not aware of a prior theoretical study of the problem of adaptation with multiple sources analyzed here. We present several theoretical results relating to this problem. We examine two types of hypothesis combination. The first type is simply based on convex combinations of the k hypotheses h_i. We show that this natural and widely used hypothesis combination may in fact perform very poorly in our setting. Namely, we give a simple example of two distributions and two matching hypotheses, each with zero error for its respective distribution, but such that any convex combination has expected absolute loss of 1/2 for the equal mixture of the distributions. This points out a potentially significant weakness of convex combinations. The second type of hypothesis combination, which is the main one we will study in this work, takes into account the probabilities derived from the distributions. Namely, the weight of hypothesis h_i on an input x is proportional to λ_i D_i(x), where λ is the vector of mixture weights. We will refer to this method as the distribution weighted hypothesis combination. Our main result shows that, remarkably, for any fixed target function, there exists a distribution weighted combining rule that has a loss of at most ǫ with respect to any mixture of the k distributions. We also show that there exists a distribution weighted combining rule that has loss at most 3ǫ with respect to any consistent target function (one for which each h_i has loss ǫ on D_i) and any mixture of the k distributions. In some sense, our results establish that the distribution weighted hypothesis combination is the “right” combination rule, and that it also benefits from a well-founded theoretical guarantee.
The remainder of this paper is organized as follows. Section 2 introduces our theoretical model for multiple source adaptation. In Section 3, we analyze the abstract case where the mixture parameters of the target distribution are known and show that the distribution weighted hypothesis combination that uses these mixture coefficients as weights achieves a loss of at most ǫ. In Section 4, we give a simple method to produce an error of Θ(kǫ) that does not require prior knowledge of the mixture parameters of the target distribution. Our main results showing the existence of a combined hypothesis performing well regardless of the target mixture are given in Section 5 for the case of a fixed target function, and in Section 6 for the case of multiple target functions. Section 7 reports empirical results for a multiple source adaptation problem with a real-world dataset.

2 Problem Set-Up

Let X be the input space, f : X → R the target function to learn, and L : R × R → R a loss function penalizing errors with respect to f. The loss of a hypothesis h with respect to a distribution D and loss function L is denoted by L(D, h, f) and defined as L(D, h, f) = E_{x∼D}[L(h(x), f(x))] = \sum_{x∈X} L(h(x), f(x)) D(x). We will denote by Δ the simplex Δ = {λ : λ_i ≥ 0 ∧ \sum_{i=1}^k λ_i = 1} of R^k. We consider an adaptation problem with k source domains and a single target domain. The input to the problem is the set of k source distributions D_1, ..., D_k and k corresponding hypotheses h_1, ..., h_k such that for all i ∈ [1, k], L(D_i, h_i, f) ≤ ǫ, for a fixed ǫ ≥ 0. The distribution of the target domain, D_T, is assumed to be a mixture of the k source distributions D_i, that is, D_T(x) = \sum_{i=1}^k λ_i D_i(x) for some unknown mixture weight vector λ ∈ Δ. The adaptation problem consists of combining the hypotheses h_i to derive a hypothesis with small loss on the target domain. Since the target distribution D_T is assumed to be a mixture, we will refer to this problem as the mixture adaptation problem.
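For a finite input space, the loss L(D, h, f) and the mixture target D_T(x) = \sum_{i=1}^k λ_i D_i(x) just defined can be computed directly; a minimal sketch (pure Python, absolute loss as the default; representing distributions as dicts over X is my choice of encoding, not the paper's):

```python
def expected_loss(D, h, f, L=lambda a, b: abs(a - b)):
    # L(D, h, f) = sum over x in X of L(h(x), f(x)) * D(x)
    return sum(L(h(x), f(x)) * p for x, p in D.items())

def mixture(dists, lam):
    # D_T(x) = sum_i lam_i * D_i(x); the support is the union of the supports
    support = set().union(*(D.keys() for D in dists))
    return {x: sum(l * D.get(x, 0.0) for l, D in zip(lam, dists))
            for x in support}
```

Hypotheses and the target function f are plain callables on X, and any loss function can be passed in place of the absolute loss.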
A combining rule for the hypotheses takes as input the h_i's and outputs a single hypothesis h : X → R. We define two combining rules of particular interest for our purpose: the linear combining rule, which is based on a parameter z ∈ Δ and sets the hypothesis to h(x) = \sum_{i=1}^k z_i h_i(x); and the distribution weighted combining rule, also based on a parameter z ∈ Δ, which sets the hypothesis to h(x) = \sum_{i=1}^k [z_i D_i(x) / \sum_{j=1}^k z_j D_j(x)] h_i(x) when \sum_{j=1}^k z_j D_j(x) > 0. This last condition always holds if D_i(x) > 0 for all x ∈ X and some i ∈ [1, k]. We define H to be the set of all distribution weighted combining rules. Given the input to the adaptation problem, we have implicit information about the target function f. We define the set of consistent target functions, F, as follows: F = {g : ∀i ∈ [1, k], L(D_i, h_i, g) ≤ ǫ}. By definition, the target function f is an element of F. We will assume that the following properties hold for the loss function L: (i) L is non-negative: L(x, y) ≥ 0 for all x, y ∈ R; (ii) L is convex with respect to the first argument: L(\sum_{i=1}^k λ_i x_i, y) ≤ \sum_{i=1}^k λ_i L(x_i, y) for all x_1, ..., x_k, y ∈ R and λ ∈ Δ; (iii) L is bounded: there exists M ≥ 0 such that L(x, y) ≤ M for all x, y ∈ R; (iv) L(x, y) is continuous in both x and y; and (v) L is symmetric: L(x, y) = L(y, x). The absolute loss defined by L(x, y) = |x − y| will serve as our primary motivating example.

3 Known Target Mixture Distribution

In this section we assume that the parameters of the target mixture distribution are known. Thus, the learning algorithm is given λ ∈ Δ such that D_T(x) = \sum_{i=1}^k λ_i D_i(x). A good starting point would be to study the performance of a linear combining rule, namely the classifier h(x) = \sum_{i=1}^k λ_i h_i(x). While this seems like a very natural classifier, the following example highlights the problematic aspects of this approach. Consider a discrete domain X = {a, b} and two distributions, D_a and D_b, such that D_a(a) = 1 and D_b(b) = 1.
Namely, each distribution puts all its weight on a single element of X. Consider the target function f with f(a) = 1 and f(b) = 0, and let the loss be the absolute loss. Let h_0 = 0 be the function that outputs 0 for all x ∈ X, and similarly h_1 = 1. The hypotheses h_1 and h_0 have zero expected absolute loss on the distributions D_a and D_b, respectively, i.e., ǫ = 0. Now consider the target distribution D_T with λ_a = λ_b = 1/2, thus D_T(a) = D_T(b) = 1/2. The hypothesis h(x) = (1/2)h_1(x) + (1/2)h_0(x) always outputs 1/2, and has an absolute loss of 1/2. Furthermore, for any other parameter z of the linear combining rule, the expected absolute loss of h(x) = z h_1(x) + (1 − z) h_0(x) with respect to D_T is exactly 1/2. We have established the following theorem.

Theorem 1. There is a mixture adaptation problem with ǫ = 0 for which any linear combination rule has expected absolute loss of 1/2.

Next we show that the distribution weighted combining rule produces a hypothesis with a low expected loss. Given a mixture D_T(x) = \sum_{i=1}^k λ_i D_i(x), we consider the distribution weighted combining rule with parameter λ, which we denote by h_λ. Recall that h_λ(x) = \sum_{i=1}^k [λ_i D_i(x) / \sum_{j=1}^k λ_j D_j(x)] h_i(x) = \sum_{i=1}^k [λ_i D_i(x) / D_T(x)] h_i(x). Using the convexity of L with respect to the first argument, the loss of h_λ with respect to D_T and a target f ∈ F can be bounded as follows: L(D_T, h_λ, f) = \sum_{x∈X} L(h_λ(x), f(x)) D_T(x) ≤ \sum_{x∈X} \sum_{i=1}^k λ_i D_i(x) L(h_i(x), f(x)) = \sum_{i=1}^k λ_i ǫ_i ≤ ǫ, where ǫ_i := L(D_i, h_i, f) ≤ ǫ. Thus, we have derived the following theorem.

Theorem 2. For any mixture adaptation problem with target distribution D_λ(x) = \sum_{i=1}^k λ_i D_i(x), the expected loss of the hypothesis h_λ is at most ǫ with respect to any target function f ∈ F: L(D_λ, h_λ, f) ≤ ǫ.

4 Simple Adaptation Algorithms

In this section we show how to construct a simple distribution weighted hypothesis that has an expected loss guarantee with respect to any mixture.
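Before moving on, the two combining rules and the contrast between Theorems 1 and 2 can be checked numerically on the two-point example above; a sketch (pure Python, absolute loss; the dict-based distribution encoding is an illustrative choice):

```python
def linear_rule(hs, z):
    # h(x) = sum_i z_i h_i(x)
    return lambda x: sum(zi * h(x) for zi, h in zip(z, hs))

def dist_weighted_rule(hs, dists, z):
    # h(x) = sum_i [z_i D_i(x) / sum_j z_j D_j(x)] h_i(x);
    # well defined when the denominator is positive at x
    def h(x):
        denom = sum(zj * D.get(x, 0.0) for zj, D in zip(z, dists))
        return sum(zi * D.get(x, 0.0) / denom * hi(x)
                   for zi, D, hi in zip(z, dists, hs))
    return h

def abs_loss(D, h, f):
    return sum(abs(h(x) - f(x)) * p for x, p in D.items())

# Two-point example: D_a puts all mass on a, D_b on b; f(a) = 1, f(b) = 0.
Da, Db = {"a": 1.0}, {"b": 1.0}
f = lambda x: 1.0 if x == "a" else 0.0
h1, h0 = (lambda x: 1.0), (lambda x: 0.0)
DT = {"a": 0.5, "b": 0.5}            # equal mixture of D_a and D_b

# Any linear combination has absolute loss exactly 1/2 on D_T (Theorem 1) ...
lin_losses = [abs_loss(DT, linear_rule([h1, h0], [z, 1 - z]), f)
              for z in (0.0, 0.25, 0.5, 1.0)]
# ... while the distribution weighted rule with z = lambda has loss 0 (Theorem 2).
dw_loss = abs_loss(DT, dist_weighted_rule([h1, h0], [Da, Db], [0.5, 0.5]), f)
```

Here the distribution weighted rule recovers h_1 wherever D_a dominates and h_0 wherever D_b does, which is exactly why it escapes the 1/2 lower bound of the linear rule.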
Our hypothesis h_u is simply based on equal weights, i.e., u_i = 1/k for all i ∈ [1, k]. Thus, h_u(x) = \sum_{i=1}^k [(1/k) D_i(x) / \sum_{j=1}^k (1/k) D_j(x)] h_i(x) = \sum_{i=1}^k [D_i(x) / \sum_{j=1}^k D_j(x)] h_i(x). We show for h_u an expected loss bound of kǫ with respect to any mixture distribution D_T and target function f ∈ F. (Proof omitted.)

Theorem 3. For any mixture adaptation problem, the expected loss of h_u is at most kǫ for any mixture distribution D_T and target function f ∈ F, i.e., L(D_T, h_u, f) ≤ kǫ.

Unfortunately, the hypothesis h_u can have an expected absolute loss as large as Ω(kǫ). (Proof omitted.)

Theorem 4. There is a mixture adaptation problem for which the expected absolute loss of h_u is Ω(kǫ). Also, for k = 2 there is an input to the mixture adaptation problem for which the expected absolute loss of h_u is 2ǫ − ǫ².

5 Existence of a Good Hypothesis

In this section, we will show that for any target function f ∈ F there is a distribution weighted combining rule h_z that has a loss of at most ǫ with respect to any mixture D_T. We will construct the proof in two parts. In the first part, we will show, using a simple reduction to a zero-sum game, that one can obtain a mixture of h_z's that guarantees a loss bounded by ǫ. In the second part, which is the more interesting scenario, we will show that for any target function f ∈ F there is a single distribution weighted combining rule h_z that has loss of at most ǫ with respect to any mixture D_T. This latter part will require the use of the Brouwer fixed point theorem to show the existence of such an h_z.

5.1 Zero-sum game

The adaptation problem can be viewed as a zero-sum game between two players, NATURE and LEARNER. Let the input to the mixture adaptation problem be D_1, ..., D_k, h_1, ..., h_k, and ǫ, and fix a target function f ∈ F. The player NATURE picks a distribution D_i, while the player LEARNER selects a distribution weighted combining rule h_z ∈ H. The loss when NATURE plays D_i and LEARNER plays h_z is L(D_i, h_z, f).
Let us emphasize that the target function f ∈ F is fixed beforehand. The objective of NATURE is to maximize the loss and the objective of LEARNER is to minimize it. We start with the following lemma.

Lemma 1. Given any mixed strategy of NATURE, i.e., a distribution µ over the D_i's, the following action h_µ ∈ H of LEARNER has expected loss at most ǫ, i.e., L(D_µ, h_µ, f) ≤ ǫ.

The proof is identical to that of Theorem 2. This almost establishes that the value of the game is at most ǫ. The technical part that we need to take care of is the fact that the action space of LEARNER is infinite. However, by an appropriate discretization of H we can derive the following theorem.

Theorem 5. For any target function f ∈ F and any δ > 0, there exists a function h(x) = \sum_{j=1}^m α_j h_{z_j}(x), where h_{z_j} ∈ H, such that L(D_T, h, f) ≤ ǫ + δ for any mixture distribution D_T(x) = \sum_{i=1}^k λ_i D_i(x).

Since we can fix δ > 0 to be arbitrarily small, this implies that a linear mixture of distribution weighted combining rules can guarantee a loss of almost ǫ with respect to any mixture distribution.

5.2 Single distribution weighted combining rule

In the previous subsection, we showed that a mixture of hypotheses in H would guarantee a loss of at most ǫ. Here, we will considerably strengthen the result and show that there is a single hypothesis in H for which this guarantee holds. Unfortunately our loss is not convex with respect to h ∈ H, so we need to resort to a more powerful technique, namely the Brouwer fixed point theorem. For the proof we will need the distribution weighted combining rule h_z to be continuous in the parameter z. In general, this does not hold, due to the possible existence of points x ∈ X for which \sum_{j=1}^k z_j D_j(x) = 0. To avoid this discontinuity, we modify the definition of h_z to h^η_z as follows.

Claim 1. Let U denote the uniform distribution over X. For any η > 0 and z ∈ Δ, let h^η_z : X → R be the function defined by h^η_z(x) = \sum_{i=1}^k [(z_i D_i(x) + ηU(x)/k) / (\sum_{j=1}^k z_j D_j(x) + ηU(x))] h_i(x).
Then, for any distribution D, L(D, h^η_z, f) is continuous in z. (Footnote 1: In addition to continuity, the perturbation of h_z to h^η_z also helps us ensure that none of the mixture weights z_i is zero in the proof of Lemma 2.)

Let us first state Brouwer's fixed point theorem.

Theorem 6 (Brouwer Fixed Point Theorem). For any compact and convex non-empty set A ⊂ R^n and any continuous function f : A → A, there is a point x ∈ A such that f(x) = x.

We first show that there exists a distribution weighted combining rule h^η_z for which the losses L(D_i, h^η_z, f) are all nearly the same.

Lemma 2. For any target function f ∈ F and any η, η′ > 0, there exists z ∈ Δ, with z_i ≠ 0 for all i ∈ [1, k], such that the following holds for the distribution weighted combining rule h^η_z ∈ H: L(D_i, h^η_z, f) = γ + η′ − η′/(z_i k) ≤ γ + η′ for any 1 ≤ i ≤ k, where γ = \sum_{j=1}^k z_j L(D_j, h^η_z, f).

Proof. Fix η′ > 0 and let L^z_i = L(D_i, h^η_z, f) for all z ∈ Δ and i ∈ [1, k]. Consider the mapping φ : Δ → Δ defined for all z ∈ Δ by [φ(z)]_i = (z_i L^z_i + η′/k) / (\sum_{j=1}^k z_j L^z_j + η′), where [φ(z)]_i is the ith coordinate of φ(z), i ∈ [1, k]. By Claim 1, φ is continuous. Thus, by Brouwer's Fixed Point Theorem, there exists z ∈ Δ such that φ(z) = z. This implies that z_i = (z_i L^z_i + η′/k) / (\sum_{j=1}^k z_j L^z_j + η′). Since η′ > 0, we must have z_i ≠ 0 for any i ∈ [1, k]. Thus, we can divide by z_i and write L^z_i + η′/(z_i k) = (\sum_{j=1}^k z_j L^z_j) + η′. Therefore, L^z_i = γ + η′ − η′/(z_i k) with γ = \sum_{j=1}^k z_j L^z_j.

Note that the lemma just presented does not use the structure of the distribution weighted combining rule, but only the fact that the loss is continuous in the parameter z ∈ Δ. The lemma applies as well to the linear combination rule and provides the same guarantee. The real crux of the argument is, as shown in the next lemma, that γ is small for a distribution weighted combining rule (while it can be very large for a linear combination rule).

Lemma 3.
For any target function f ∈ F and any η, η′ > 0, there exists z ∈ Δ such that L(D_λ, h^η_z, f) ≤ ǫ + ηM + η′ for any λ ∈ Δ.

Proof. Let z be the parameter guaranteed by Lemma 2. Then L(D_i, h^η_z, f) = γ + η′ − η′/(z_i k) ≤ γ + η′ for 1 ≤ i ≤ k. Consider the mixture D_z, i.e., set the mixture parameter to be z, and consider the quantity L(D_z, h^η_z, f). On the one hand, by definition, L(D_z, h^η_z, f) = \sum_{i=1}^k z_i L(D_i, h^η_z, f), and thus L(D_z, h^η_z, f) = γ. On the other hand,

L(D_z, h^η_z, f) = \sum_{x∈X} D_z(x) L(h^η_z(x), f(x))
≤ \sum_{x∈X} [D_z(x) / (D_z(x) + ηU(x))] \sum_{i=1}^k (z_i D_i(x) + ηU(x)/k) L(h_i(x), f(x))
≤ \sum_{x∈X} \sum_{i=1}^k z_i D_i(x) L(h_i(x), f(x)) + \sum_{x∈X} ηM U(x)
= \sum_{i=1}^k z_i L(D_i, h_i, f) + ηM = \sum_{i=1}^k z_i ǫ_i + ηM ≤ ǫ + ηM.

Therefore γ ≤ ǫ + ηM. To complete the proof, note that the following inequality holds for any mixture D_λ: L(D_λ, h^η_z, f) = \sum_{i=1}^k λ_i L(D_i, h^η_z, f) ≤ γ + η′, which is at most ǫ + ηM + η′.

By setting η = δ/(2M) and η′ = δ/2, we can derive the following theorem.

Theorem 7. For any target function f ∈ F and any δ > 0, there exist η > 0 and z ∈ Δ such that L(D_λ, h^η_z, f) ≤ ǫ + δ for any mixture parameter λ.

6 Arbitrary target function

The results of the previous section show that for any fixed target function there is a good distribution weighted combining rule. In this section, we wish to extend these results to the case where the target function is not fixed in advance. Thus, we seek a single distribution weighted combining rule that can perform well for any f ∈ F and any mixture D_λ. Unfortunately, we are not able to prove a bound of ǫ + o(ǫ), but only a bound of 3ǫ. To show this bound we will show that for any f_1, f_2 ∈ F and any hypothesis h, the difference of loss is bounded by at most 2ǫ.

Lemma 4. Assume that the loss function L obeys the triangle inequality, i.e., L(f, h) ≤ L(f, g) + L(g, h). Then for any f, f′ ∈ F and any mixture D_T, the inequality L(D_T, h, f′) ≤ L(D_T, h, f) + 2ǫ holds for any hypothesis h.

Proof.
Since our loss function obeys the triangle inequality, for any functions f, g, h the following holds: L(D, f, h) ≤ L(D, f, g) + L(D, g, h). In our case, we observe that replacing g with any f′ ∈ F gives L(D_λ, f, h) ≤ L(D_λ, f′, h) + L(D_λ, f, f′). We can bound the term L(D_λ, f, f′) with a similar inequality: L(D_λ, f, f′) ≤ L(D_λ, f, h_λ) + L(D_λ, f′, h_λ) ≤ 2ǫ, where h_λ is the distribution weighted combining rule produced by choosing z = λ and using Theorem 2. Therefore, for any f, f′ ∈ F we have L(D_λ, f, h) ≤ L(D_λ, f′, h) + 2ǫ, which completes the proof.

We derive the following corollary to Theorem 7.

Corollary 1. Assume that the loss function L obeys the triangle inequality. Then, for any δ > 0, there exist η > 0 and z ∈ Δ such that for any mixture parameter λ and any f ∈ F, L(D_λ, h^η_z, f) ≤ 3ǫ + δ.

[Figure 1 omitted: two panels of MSE plots.] Figure 1: (a) MSE performance for a target mixture of four domains (1: books, 2: dvd, 3: electronics, 4: kitchen, 5: linear, 6: weighted). (b) MSE performance under various mixtures of two source domains; left plot: book and kitchen, right plot: dvd and electronics.

7 Empirical results

This section reports the results of our experiments with a distribution weighted combining rule using real-world data. In our experiments, we fixed a mixture target distribution D_λ and considered the distribution weighted combining rule h_z with z = λ. Since we used real-world data, we did not have access to the domain distributions. Instead, we modeled each distribution and used large amounts of unlabeled data available for each source to estimate the model's parameters.
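The domain distributions D_i themselves must be estimated from unlabeled source data; the experiments below do this with unigram language models. A simplified stand-in (add-one smoothing and the out-of-vocabulary fallback are my assumptions — the paper built its models with the GRM library):

```python
from collections import Counter

def unigram_lm(corpus_tokens, vocab):
    # Add-one smoothed unigram model estimated from unlabeled source text;
    # returns P(token) for every token in `vocab`.
    counts = Counter(t for t in corpus_tokens if t in vocab)
    total = sum(counts.values())
    return {t: (counts[t] + 1) / (total + len(vocab)) for t in vocab}

def doc_prob(lm, doc_tokens):
    # D_i(x): probability of a document x under the source-i unigram model.
    p = 1.0
    for t in doc_tokens:
        p *= lm.get(t, 1.0 / (len(lm) + 1))  # crude fallback for unseen tokens
    return p
```

The per-source models then play the role of the D_i(x) values consumed by the distribution weighted combining rule.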
One could have thus expected potentially significantly worse empirical results than the theoretical ones, but this turned out not to be an issue in our experiments. We used the sentiment analysis dataset found in [4].² The data consists of review text and rating labels taken from amazon.com product reviews within four different categories (domains). These four domains consist of book, dvd, electronics, and kitchen reviews, where each domain contains 2000 data points.³ In our first experiment, we considered mixtures of all four domains, where the test set was a uniform mixture of 600 points, that is, the union of 150 points taken uniformly at random from each domain. The remaining 1,850 points from each domain were used to train the base hypotheses.⁴ We compared our proposed weighted combining rule to the linear combining rule. The results are shown in Figure 1(a). They show that the base hypotheses perform poorly on the mixture test set, which justifies the need for adaptation. Furthermore, the distribution weighted combining rule is shown to perform at least as well as the worst in-domain performance of a base hypothesis, as expected from our bounds. Finally, we observe that this real-world data experiment gives an example in which a linear combining rule performs poorly compared to the distribution weighted combining rule. In other experiments, we considered mixtures of two domains, where the mixture is varied according to the parameter α ∈ {0.1, 0.2, ..., 1.0}. For each plot in Figure 1(b), the test set consists of 600α points from the first domain and 600(1 − α) points from the second domain, where the first and second domains are made clear in the figure. The remaining points that were not used for testing were used to train the base hypotheses.
The results show the linear shift from one domain to the other, as is evident from the performance of the two base hypotheses. The distribution weighted combining rule outperforms the base hypotheses as well as the linear combining rule.

Footnote 2: http://www.seas.upenn.edu/~mdredze/datasets/sentiment/.
Footnote 3: The rating label, an integer between 1 and 5, was used as a regression label, and the loss measured by the mean squared error (MSE). All base hypotheses were generated using Support Vector Regression (SVR) [17] with the trade-off parameters C = 8, ǫ = 0.1, and a Gaussian kernel with parameter g = 0.00078. The SVR solutions were obtained using the libSVM software library (http://www.csie.ntu.edu.tw/~cjlin/libsvm/). Our features were defined as the set of unigrams appearing five times or more in all domains. This defined about 4000 unigrams. We used a binary feature vector encoding the presence or absence of these frequent unigrams to define our instances. To model the domain distributions, we used a unigram statistical language model trained on the same corpus as the one used to define the features. The language model was created using the GRM library (http://www.research.att.com/~fsmtools/grm/).
Footnote 4: Each experiment was repeated 20 times with random folds. The standard deviation found was far below what could be legibly displayed in the figures.

Thus, our preliminary experiments suggest that the distribution weighted combining rule performs well in practice and clearly outperforms a simple linear combining rule. Furthermore, using statistical language models as approximations to the distribution oracles seems to be sufficient in practice and can help produce a good distribution weighted combining rule.

8 Conclusion

We presented a theoretical analysis of the problem of adaptation with multiple sources.
Domain adaptation is an important problem that arises in a variety of modern applications where limited or no labeled data is available for a target application, and our analysis can be relevant in a variety of situations. The theoretical guarantees proven for the distribution weighted combining rule provide it with a strong foundation. Its empirical performance with a real-world data set further motivates its use in applications. Most of the results presented were based on the assumption that the target distribution is a mixture of the source distributions. A further analysis suggests, however, that our main results can be extended to arbitrary target distributions.

Acknowledgments

We thank Jennifer Wortman for helpful comments on an earlier draft of this paper and Ryan McDonald for discussions and pointers to data sets. The work of M. Mohri and A. Rostamizadeh was partly supported by the New York State Office of Science Technology and Academic Research (NYSTAR).

References

[1] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Proceedings of NIPS 2006. MIT Press, 2007.
[2] Jacob Benesty, M. Mohan Sondhi, and Yiteng Huang, editors. Springer Handbook of Speech Processing. Springer, 2008.
[3] John Blitzer, Koby Crammer, A. Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In Proceedings of NIPS 2007. MIT Press, 2008.
[4] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In ACL 2007, Prague, Czech Republic, 2007.
[5] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from Data of Variable Quality. In Proceedings of NIPS 2005, 2006.
[6] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. In Proceedings of NIPS 2006, 2007.
[7] Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, João Graça, and Fernando Pereira.
Frustratingly Hard Domain Adaptation for Parsing. In CoNLL 2007, Prague, Czech Republic, 2007.
[8] Jean-Luc Gauvain and Chin-Hui Lee. Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing, 2(2):291–298, 1994.
[9] Frederick Jelinek. Statistical Methods for Speech Recognition. The MIT Press, 1998.
[10] Jing Jiang and ChengXiang Zhai. Instance Weighting for Domain Adaptation in NLP. In Proceedings of ACL 2007, pages 264–271, Prague, Czech Republic, 2007. Association for Computational Linguistics.
[11] C. J. Leggetter and Phil C. Woodland. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language, pages 171–185, 1995.
[12] Aleix M. Martínez. Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class. IEEE Trans. Pattern Anal. Mach. Intell., 24(6):748–763, 2002.
[13] S. Della Pietra, V. Della Pietra, R. L. Mercer, and S. Roukos. Adaptive language modeling using minimum discriminant estimation. In HLT '91: Proceedings of the Workshop on Speech and Natural Language, pages 103–106, Morristown, NJ, USA, 1992. Association for Computational Linguistics.
[14] Brian Roark and Michiel Bacchiani. Supervised and unsupervised PCFG adaptation to novel domains. In Proceedings of HLT-NAACL, 2003.
[15] Roni Rosenfeld. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. Computer Speech and Language, 10:187–228, 1996.
[16] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[17] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Learning Bounded Treewidth Bayesian Networks Gal Elidan Department of Statistics Hebrew University Jerusalem, 91905, Israel galel@huji.ac.il Stephen Gould Department of Electrical Engineering Stanford University Stanford, CA 94305, USA sgould@stanford.edu Abstract With the increased availability of data for complex domains, it is desirable to learn Bayesian network structures that are sufficiently expressive for generalization while also allowing for tractable inference. While the method of thin junction trees can, in principle, be used for this purpose, its fully greedy nature makes it prone to overfitting, particularly when data is scarce. In this work we present a novel method for learning Bayesian networks of bounded treewidth that employs global structure modifications and that is polynomial in the size of the graph and the treewidth bound. At the heart of our method is a triangulated graph that we dynamically update in a way that facilitates the addition of chain structures that increase the bound on the model’s treewidth by at most one. We demonstrate the effectiveness of our “treewidth-friendly” method on several real-life datasets. Importantly, we also show that by using global operators, we are able to achieve better generalization even when learning Bayesian networks of unbounded treewidth. 1 Introduction Recent years have seen a surge of readily available data for complex and varied domains. Accordingly, increased attention has been directed towards the automatic learning of complex probabilistic graphical models [22], and in particular learning the structure of a Bayesian network. With the goal of making predictions or providing probabilistic explanations, it is desirable to learn models that generalize well and at the same time have low inference complexity or a small treewidth [23]. 
While learning optimal tree-structured models is easy [5], learning the optimal structure of general and even quite simple (e.g., poly-trees, chains) Bayesian networks is computationally difficult [8, 10, 19]. Several works attempt to generalize the tree-structure result of Chow and Liu [5], either by making assumptions about the true distribution (e.g., [1, 21]), by searching for a local maximum over tree mixtures [20], or by approximate methods that are polynomial in the size of the graph but exponential in the treewidth bound (e.g., [3, 15]). In the context of general Bayesian networks, the thin junction tree approach of Bach and Jordan [2] is a local greedy search procedure that relies at each step on tree-decomposition heuristics for computing an upper bound on the true treewidth of the model. Like any local search approach, this method does not provide performance guarantees, but it is appealing in its ability to efficiently learn models with an arbitrary treewidth bound. The thin junction tree method, however, suffers from two important limitations. First, while useful on average, even the best of the tree-decomposition heuristics exhibit some variance in the treewidth estimate [16]. As a result, a single edge addition can lead to a jump in the treewidth estimate despite the fact that it can increase the true treewidth by at most one. More importantly, structure learning scores (e.g., BIC, BDe) tend to learn spurious edges that result in overfitting when the number of samples is relatively small, a phenomenon that is made worse by a fully greedy approach. Intuitively, to generalize well, we want to learn bounded treewidth Bayesian networks where structure modifications are globally beneficial (i.e., contribute to the score in many regions of the network). In this work we propose a novel method for efficiently learning Bayesian networks of bounded treewidth that addresses these concerns.
At the heart of our method is a dynamic update of the triangulation of the model in a way that is treewidth-friendly: the treewidth of the triangulated graph (an upper bound on the model's true treewidth) is guaranteed to increase by at most one when an edge is added to the network. Building on the single-edge triangulation, we characterize sets of edges that are jointly treewidth-friendly. We use this characterization in a dynamic programming approach for learning the optimal treewidth-friendly chain with respect to a node ordering. Finally, we learn a bounded treewidth Bayesian network by iteratively augmenting the model with such chains. Instead of using local edge modifications, our method progresses by incrementally adding chain structures that are globally beneficial, improving our ability to generalize. We are also able to guarantee that the bound on the model's treewidth grows by at most one at each iteration. Thus, our method resembles the global nature of Chow and Liu [5] more closely than the thin junction tree approach of Bach and Jordan [2], while being applicable in practice to any desired treewidth. We evaluate our method on several challenging real-life datasets and show that our method is able to learn richer models that generalize better than the thin junction tree approach as well as an unbounded aggressive search strategy. Furthermore, we show that even when learning models with unbounded treewidth, by using global structure modification operators, we are better able to cope with the problem of local maxima and learn better models.

2 Background: Bayesian networks and tree decompositions

A Bayesian network [22] is a pair (G, Θ) that encodes a joint probability distribution over a finite set X = {X1, . . . , Xn} of random variables. G is a directed acyclic graph whose nodes correspond to the variables in X. The parameters Θ_{Xi|Pai} encode local conditional probability distributions (CPDs) for each node Xi given its parents in G.
Together, these define a unique joint probability distribution over X given by P(X1, . . . , Xn) = ∏_{i=1}^{n} P(Xi | Pai). Given a structure G and a complete training set D, estimating the (regularized) maximum likelihood (ML) parameters is easy for many choices of CPDs (see [14] for details). Learning the structure of a network, however, is generally NP-hard [4, 10, 19], as the number of possible structures is super-exponential in the number of variables. In practice, structure learning relies on a greedy search procedure that examines easy-to-evaluate local structure changes (add, delete or reverse an edge). This search is usually guided by a decomposable score that balances the likelihood of the data and the complexity of the model (e.g., BIC [24], Bayesian score [14]). Chow and Liu [5] showed that the ML tree can be learned efficiently. Their result is easily generalized to any decomposable score.

Given a model, we are interested in the task of inference, or evaluating queries of the form P(Y | Z) where Y and Z are arbitrary subsets of X. This task is, in general, NP-hard [7], except when G is tree-structured. The actual complexity of inference in a Bayesian network is proportional to its treewidth [23] which, roughly speaking, measures how closely the network resembles a tree. The notions of tree-decompositions and treewidth were introduced by Robertson and Seymour [23]:

Definition 2.1: A tree-decomposition of an undirected graph H = (V, E) is a pair ({Ci}_{i∈T}, T), where T is a tree and each Ci is a subset of V such that ⋃_{i∈T} Ci = V, and where
• for all edges (v, w) ∈ E there exists an i ∈ T with v ∈ Ci and w ∈ Ci.
• for all i, j, k ∈ T: if j is on the (unique) path from i to k in T, then Ci ∩ Ck ⊆ Cj.
The treewidth of a tree-decomposition is defined to be max_{i∈T} |Ci| − 1. The treewidth TW(H) of an undirected graph H is the minimum treewidth over all possible tree-decompositions of H.
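The two conditions of Definition 2.1 are mechanical to check. The sketch below (helper names are my own, not the paper's) verifies them for a small graph and reports the decomposition's width.

```python
def _connected(nodes, edges):
    """True if `nodes` form a connected subgraph under `edges`."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n])
    return seen == nodes

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    """Check Definition 2.1: bags is {i: set of vertices}, tree_edges a tree
    over the bag indices."""
    # Every vertex and every edge of H must be covered by some bag.
    if set().union(*bags.values()) != set(vertices):
        return False
    if not all(any({u, v} <= C for C in bags.values()) for u, v in edges):
        return False
    # Running intersection: the bags holding a vertex form a connected subtree.
    for x in vertices:
        holding = {i for i, C in bags.items() if x in C}
        sub = [(i, j) for i, j in tree_edges if i in holding and j in holding]
        if not _connected(holding, sub):
            return False
    return True

def width(bags):
    """Treewidth of the decomposition: max bag size minus one."""
    return max(len(C) for C in bags.values()) - 1
```

For the path a—b—c, the bags {a, b} and {b, c} joined by one tree edge form a valid decomposition of width one, matching the fact that trees have treewidth one.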
An equivalent notion of treewidth can be phrased in terms of a graph that is a triangulation of H.

Definition 2.2: An induced path P in an undirected graph H is a path such that for every pair of non-consecutive vertices pi, pj ∈ P there is no edge (pi—pj) ∈ H. A triangulated (chordal) graph is an undirected graph with no induced cycles of length four or more. Equivalently, it is an undirected graph in which every cycle of length four or more contains a chord.

It can be easily shown that the treewidth of a triangulated graph is the size of the maximal clique of the graph minus one [23]. The treewidth of an undirected graph H is then the minimum treewidth of all triangulations of H. For the underlying directed acyclic graph of a Bayesian network, the treewidth can be characterized via a triangulation of the moralized graph.

Definition 2.3: A moralized graph M of a directed acyclic graph G is an undirected graph that has an edge (i—j) for every (i → j) ∈ G and an edge (p—q) for every pair (p → i), (q → i) ∈ G.

(Footnote: The tree-decomposition properties are equivalent to the corresponding family-preserving and running-intersection properties of clique trees introduced by Lauritzen and Spiegelhalter [17] at around the same time.)

Input: dataset D, treewidth bound K
Output: a network with treewidth ≤ K
  G ← best scoring tree
  M+ ← undirected skeleton of G
  k ← 1
  While k < K
    O ← node ordering given G and M+
    C ← best chain with respect to O
    G ← G ∪ C
    Foreach (i → j) ∈ C do
      M+ ← EdgeUpdate(M+, (i → j))
    k ← maximal clique size of M+
  Greedily add edges while treewidth ≤ K
  Return G

Figure 1: (left) Outline of our algorithm for learning Bayesian networks of bounded treewidth. (right) An example of the different steps of our triangulation procedure (b)-(e) when (s → t) is added to the graph in (a). The blocks are {s, v1}, {v1, cM}, and {cM, v2, v3, p1, p2, t} with corresponding cut-vertices v1 and cM.
The augmented graph (e) has a treewidth of three (maximal clique of size four). An alternative triangulation (f), connecting cM to t, would result in a maximal clique of size five. The treewidth of a Bayesian network graph G is defined as the treewidth of its moralized graph M. It follows that the maximal clique of any moralized triangulation of G is an upper bound on the treewidth of the model, and thus its inference complexity. 3 Learning Bounded Treewidth Bayesian Networks In this section we outline our approach for learning Bayesian networks given an arbitrary treewidth bound that is polynomial in both the number of variables and the desired treewidth. We rely on global structure modifications that are optimal with respect to a node ordering. At the heart of our method is the idea of using a dynamically maintained triangulated graph to upper bound the treewidth of the current model. When an edge is added to the Bayesian network we update this triangulated graph in a way that is not only guaranteed to produce a valid triangulation, but that is also treewidth-friendly. That is, our update is guaranteed to increase the size of the maximal clique of the triangulated graph, and hence the treewidth bound, by at most one. An important property of our edge update is that we can characterize the parts of the network that are “contaminated” by the new edge. This allows us to define sets of edges that are jointly treewidth-friendly. Building on the characterization of these sets, we propose a dynamic programming approach for efficiently learning the optimal treewidth-friendly chain with respect to a node ordering. Figure 1 shows pseudo-code for our method. Briefly, we learn a Bayesian network with bounded treewidth K by starting from a Chow-Liu tree and iteratively augmenting the current structure with an optimal treewidth-friendly chain. 
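The loop in Figure 1 tracks the bound k through the maximal clique size of the triangulated graph M+. For small graphs this quantity can simply be brute-forced, as in the sketch below (my own helper, not the paper's adapted maximum cardinality search):

```python
from itertools import combinations

def max_clique_size(adj):
    """Largest clique of a small undirected graph (adjacency dict of sets)
    by brute force; on a triangulated M+ this minus one is the treewidth
    bound used in the algorithm of Figure 1. Exponential in general."""
    nodes = list(adj)
    for k in range(len(nodes), 0, -1):
        for subset in combinations(nodes, k):
            if all(v in adj[u] for u, v in combinations(subset, 2)):
                return k
    return 0
```

For example, a triangle with one pendant vertex has maximal clique size three, so the corresponding treewidth bound is two.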
During each iteration (below the treewidth bound) we apply our treewidth-friendly edge update procedure that maintains a moralized and triangulated graph for the model at hand. Appealingly, as each global modification can increase the treewidth by at most one, at least K such chains will be added before we face the problem of local maxima. In practice, as some chains do not increase the treewidth, many more such chains are added for a given K.

Theorem 3.1: Given a treewidth bound K and a dataset over N variables, the algorithm outlined in Figure 1 runs in time polynomial in N and K.

This result relies on the efficiency of each step of the algorithm and on the fact that there can be at most N · K iterations (≤ |edges|) before exceeding the treewidth bound. In the next sections we develop the edge update and best scoring chain procedures and show that both are polynomial in N and K.

4 Treewidth-Friendly Edge Update

The basic building block of our method is a procedure for maintaining a valid triangulation of the Bayesian network. An appealing feature of this procedure is that the treewidth bound is guaranteed to grow by at most one after the update. We first consider the addition of a single edge (s → t) to the model. For clarity of exposition, we start with a simple variant of our procedure, and later refine this to allow for multiple edge additions while maintaining our guarantee on the treewidth bound. To gain intuition into how the dynamic nature of our update is useful, we use the notion of induced paths, or paths with no shortcuts (see Section 2), and make explicit the following obvious fact:

Observation 4.1: Let G be a Bayesian network structure and let M+ be a moralized triangulation of G. Let M_{(s→t)} be M+ augmented with the edge (s—t) and with the edges (s—p) for every parent p of t in G. Then, every non-chordal cycle in M_{(s→t)} involves s and either t or a parent of t and an induced path between the two vertices.
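The moralization edges that Observation 4.1 refers to (Definition 2.3) are mechanical to compute; a sketch, with a hypothetical helper name of my own:

```python
from itertools import combinations

def moralize(parents):
    """Moral graph of a DAG given as {node: list of parents}: keep every
    parent-child edge (made undirected) and 'marry' all co-parents of
    each child. Edges are returned as frozensets (undirected)."""
    edges = set()
    for child, ps in parents.items():
        for p in ps:
            edges.add(frozenset((p, child)))
        for p, q in combinations(ps, 2):  # marry co-parents
            edges.add(frozenset((p, q)))
    return edges
```

For the v-structure a → c ← b this yields the triangle {a—c, b—c, a—b}.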
Stated simply, if the graph was triangulated before the addition of (s → t) to the Bayesian network, then we only need to triangulate cycles created by the addition of the new edge or those forced by moralization. This observation immediately suggests a straightforward single-source triangulation whereby we simply add an edge (s—v) for every node v on an induced path between s and t or its parents before the edge update. Clearly, this naive method results in a valid moralized triangulation of G ∪ (s → t). Surprisingly, we can also show that it is treewidth-friendly.

Theorem 4.2: The treewidth of the graph produced by the single-source triangulation procedure is greater than the treewidth of the input graph M+ by at most one.

Proof: (outline) For the treewidth to increase by more than one, some maximal clique C in M+ needs to connect to two new nodes. Since all edges are being added from s, this can only happen in one of two ways: (i) either t, a parent p of t, or a node v on an induced path between s and t is also connected to C, but not part of C, or (ii) two such (non-adjacent) nodes exist and s is in C. In either case one edge is missing after the update procedure, preventing the formation of a larger clique.

One problem with the proposed single-source triangulation, despite it being treewidth-friendly, is that many vertices are connected to the source node, making the triangulations shallow. This can have an undesirable effect on future edge additions and increases the chances of the formation of large cliques. We can alleviate this problem with a refinement of the single-source triangulation procedure that makes use of the concepts of cut-vertices, blocks, and block trees.

Definition 4.3: A block of an undirected graph H is a set of connected nodes that cannot be disconnected by the removal of a single vertex. By convention, if the edge (u—v) is in H then u and v are in the same block. Vertices that separate (are in the intersection of) blocks are called cut-vertices.
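Cut-vertices in the sense of Definition 4.3 can be found in linear time with the classic low-link DFS; a self-contained sketch (a standard algorithm, not code from the paper):

```python
def cut_vertices(adj):
    """Articulation points of an undirected graph (adjacency dict of
    sets), via the classic low-link depth-first search."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:                              # tree edge
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                # u separates v's subtree
        if parent is None and children > 1:
            cuts.add(u)                        # root with 2+ DFS subtrees

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts
```

On a "bowtie" of two triangles sharing a vertex (as with cM in Figure 1), the shared vertex is the unique cut-vertex.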
It is easy to see that between every two nodes in a block of size greater than two there are at least two distinct paths, i.e. a cycle. There are also no simple cycles involving nodes in different blocks. Definition 4.4: The (unique) block tree B of an undirected graph H is a graph with nodes that correspond both to cut-vertices and to blocks of H. The edges in the block tree connect any block node Bi with a cut-vertex node vj if and only if vj ∈Bi in H. It can be easily shown that any path in H between two nodes in different blocks passes through all the cut-vertices along the path between the blocks in B. An important consequence that follows from Dirac [11] is that an undirected graph whose blocks are triangulated is overall triangulated. Our refined treewidth-friendly triangulation procedure (illustrated via an example in Figure 1) makes use of this fact as follows. First, the triangulated graph is augmented with the edge (s—t) and any edges needed for moralization (Figure 1(b) and (c)). Second, a block level triangulation is carried out by zig-zagging across cut-vertices along the unique path between the blocks containing s and t and its parents (Figure 1(d)). Next, within each block (not containing s or t) along the path, a single-source triangulation is performed with respect to the “entry” and “exit” cut-vertices. This short-circuits any other node path through (and within) the block. For the block containing s the single-source triangulation is performed between s and the “exit” cut-vertex. The block containing t and its parents is treated differently: we add chords directly from s to any node v within the block that is on an induced path between s and t (or parents of t) (Figure 1(e)). This is required to prevent moralization and triangulation edges from interacting in a way that will increase the treewidth by more than one (e.g., Figure 1(f)). 
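The chords added from s in the last step run to nodes on induced s-t paths; for small graphs these can be enumerated by brute force. A sketch under that assumption (exponential in general, illustration only — the paper instead adapts maximum cardinality search):

```python
def induced_path_nodes(adj, s, t):
    """Interior vertices of induced (chordless) s-t paths, found by
    brute-force enumeration of simple paths; fine only for the small
    illustrative graphs used here."""
    found = set()

    def chordless(path):
        # No edge between vertices that are non-consecutive on the path.
        return all(path[j] not in adj[path[i]]
                   for i in range(len(path))
                   for j in range(i + 2, len(path)))

    def dfs(path):
        u = path[-1]
        if u == t:
            if chordless(path):
                found.update(path[1:-1])
            return
        for v in adj[u]:
            if v not in path:
                dfs(path + [v])

    dfs([s])
    return found

def single_source_chords(adj, s, t):
    """Chords (s, v) that the naive single-source triangulation adds:
    one edge from s to every node on an induced s-t path."""
    return {(s, v) for v in induced_path_nodes(adj, s, t)}
```

On the path s—a—b—t, the whole path is induced, so chords (s, a) and (s, b) are added.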
If s and t happen to be in the same block, then we only triangulate the induced paths between s and t, i.e., the last step outlined above. Finally, in the special case that s and t are in disconnected components of G, the only edges added are those required for moralization.

Theorem 4.5: Our revised edge update procedure results in a triangulated graph with a treewidth at most one greater than that of the input graph. Furthermore, it runs in polynomial time.

Proof: (outline) First, observe that the final step of adding chords emanating from s is a single-source triangulation once the other steps have been performed. Since each block along the block path between s and t is triangulated separately, we only need to consider the zig-zag triangulation between blocks. As this creates 3-cycles, the graph must also be triangulated. To see that the treewidth increases by at most one, we use similar arguments to those used in the proof of Theorem 4.2, and observe that the zig-zag triangulation only touches cut-vertices and that any three of these vertices could not have been in the same clique. The fact that the update procedure runs in polynomial time follows from the fact that an adaptation (not shown for lack of space) of maximum cardinality search (see, for example, [16]) can be used to efficiently identify all induced nodes between s and t.

Multiple Edge Updates. We now consider the addition of multiple edges to the graph G. To ensure that multiple edges do not interact in ways that will increase the treewidth bound by more than one, we need to characterize the nodes contaminated by each edge addition: a node v is contaminated by the addition of (s → t) to G if it is incident to a new edge added during our treewidth-friendly triangulation. Below are several examples of contaminated sets (solid nodes) incident to edges added (dashed) by our edge update procedure for different candidate edge additions (s → t) to the Bayesian network on the left.
In all examples except the last, the treewidth is increased by one. Using the notion of contamination, we can characterize sets of edges that are jointly treewidth-friendly. We will use this to learn optimal treewidth-friendly chains given an ordering in Section 5.

Theorem 4.6: (Treewidth-friendly set). Let G be a graph structure and M+ be its corresponding moralized triangulation. If {(si → ti)} is a set of candidate edges satisfying the following:
• the contaminated sets of any (si → ti) and (sj → tj) are disjoint, or,
• the contaminated sets overlap at a single cut-vertex, but the endpoints of each edge are not in the same block and the block paths between the endpoints do not overlap;
then adding all edges to G can increase the treewidth bound by at most one.

Proof: (outline) The theorem holds trivially for the first condition. Under the second condition, the only common vertex is a cut-vertex. However, since all other contaminated nodes are in different blocks, they cannot interact to form a large clique.

5 Learning Optimal Treewidth-Friendly Chains

In the previous section we described our edge update procedure and characterized edge chains that jointly increase the treewidth bound by at most one. We now use this to search for optimal chain structures that satisfy Theorem 4.6, and are thus treewidth-friendly, given a topological node ordering. On the surface, one might question the need for a specific node ordering altogether if global chain operators are to be used: given the result of Chow and Liu [5], one might expect that learning the optimal chain with respect to any ordering can be carried out efficiently. However, Meek [19] showed that learning an optimal chain over a set of random variables is computationally difficult, and the result can be generalized to learning a chain conditioned on the current model. Thus, during any iteration of our algorithm, we cannot expect to find the overall optimal chain.
Instead, we commit to a single node ordering that is topologically consistent (each node appears after its parents in the network) and learn the optimal treewidth-friendly chain with respect to that order (we briefly discuss the details of our ordering below). To find such a chain in polynomial time, we use a straightforward dynamic programming approach: the best treewidth-friendly chain that contains (Os → Ot) is the concatenation of:
• the best chain from the first node O1 to OF, the first node contaminated by (Os → Ot);
• the edge (Os → Ot);
• the best chain starting from the last contaminated node OL to the last node in the order, ON.
We note that when the end nodes are not separating cut-vertices, we maintain a gap so that the contamination sets are disjoint and the conditions of Theorem 4.6 are met.

Figure 2: Gene expression results: (left) 5-fold mean test log-loss per instance vs. treewidth bound. Our method (solid blue squares) is compared to the thin junction tree method (dashed red circles), and an unbounded aggressive search (dotted black). (middle) The treewidth estimate and the number of edges in the chain during the iterations of a typical run with the bound set to 10. (right) Running time as a function of the bound.

Formally, we define C[i, j] as the optimal chain whose contamination is limited to the range [Oi, Oj] and our goal is to compute C[1, N].
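One minimal way to realize this computation is an interval dynamic program. In the sketch below (a simplified numeric version, not the paper's implementation), each candidate edge is reduced to its contamination interval [F, L] in the ordering 1..N plus an illustrative score standing in for the BIC gain; two chosen edges may share at most an interval endpoint, mimicking the split rule:

```python
from functools import lru_cache

def best_chain_score(n, candidates):
    """Max total score of a set of candidate edges (F, L, score) whose
    contamination intervals tile [1, n], overlapping only at endpoints.
    Scores and candidates are illustrative inputs, not the paper's BIC."""
    by_interval = {}
    for F, L, score in candidates:
        key = (F, L)
        by_interval[key] = max(by_interval.get(key, float('-inf')), score)

    @lru_cache(maxsize=None)
    def C(i, j):
        best = 0.0                                # leave a gap
        if (i, j) in by_interval:                 # no split
            best = max(best, by_interval[(i, j)])
        for k in range(i + 1, j):                 # split at k
            best = max(best, C(i, k) + C(k, j))
        return best

    return C(1, n)
```

For instance, with candidates covering [1, 3] (score 2.0), [3, 5] (score 1.5), and [1, 5] (score 3.0), splitting at node 3 and taking the first two beats the single long edge.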
Using F to denote the first ordered node in the contamination set of (s → t) (and L for the last), we can compute C[1, N] via the following recursive update principle:

C[i, j] = max of:
  max_{(s→t) : F = i, L = j} (s → t)         (no split)
  max_{k = i+1, ..., j−1} C[i, k] ∪ C[k, j]  (split)
  ∅                                          (leave a gap)

where the maximization is with respect to the structure score (e.g., BIC). That is, the best chain in a subsequence [i, j] of the ordering is the maximum of three alternatives: edges whose contamination boundaries are exactly i and j (no split); two chains that are joined at some node i < k < j (split); a gap between i and j when there is no positive edge whose contamination is in [i, j].

Finally, for lack of space we only provide a brief description of our topological node ordering. Intuitively, since edges contaminate nodes along the block path between the edge's endpoints (see Section 4), we want to adopt a DFS ordering over the blocks so as to facilitate as many edges as possible between different branches of the block tree. We order nodes within a block by their distance from the "entry" vertex, as motivated by the following result on the distance d^M_min(u, v) between nodes u, v in the triangulated graph M+ (proof not shown for lack of space):

Theorem 5.1: Let r, s, t be nodes in a block B in the triangulated graph M+ with d^M_min(r, s) ≤ d^M_min(r, t). Then for any v on an induced path between s and t we have d^M_min(r, v) ≤ d^M_min(r, t).

The efficiency of our method outlined in Figure 1 in the number of variables and the treewidth bound (Theorem 3.1) now follows from the efficiency of the ordering and chain learning procedures.

6 Experimental Evaluation

We compare our approach on four real-world datasets to several methods. The first is an improved variant of the thin junction tree method [2]. We start (as in our method) with a Chow-Liu forest and iteratively add the single best scoring edge as long as the treewidth bound is not exceeded.
To make the comparison independent of the choice of triangulation method, at each iteration we replace the heuristic triangulation (best of maximum cardinality search or minimum fill-in [16], which in practice had negligible differences) with our triangulation if it results in a lower treewidth. The second baseline is an aggressive structure learning approach that combines greedy edge modifications with a TABU list (e.g., [13]) and random moves and that is not constrained by a treewidth bound. Where relevant we also compare our results to the results of Chechetka and Guestrin [3].

Gene Expression. We first consider a continuous dataset of the expression of yeast genes (variables) in 173 experiments (instances) [12]. We learn sigmoid Bayesian networks with the BIC structure score [24] using the fully observed set of 89 genes that participate in general metabolic processes. Here a learned model indicates possible regulatory or functional connections between genes. Figure 2(a) shows test log-loss as a function of the treewidth bound. The first obvious phenomenon is that both our method and the thin junction tree approach are superior to the aggressive baseline. As one might expect, the aggressive baseline achieves a higher BIC score on training data (not shown), but overfits due to the scarcity of the data. The consistent superiority of our method over thin junction trees demonstrates that a better choice of edges, i.e., ones chosen by a global operator, can lead to increased robustness and better generalization. Indeed, even when the treewidth bound is increased past the saturation point, our method surpasses both baselines. In this case, we are learning unbounded networks and all benefit comes from the global nature of our updates.

Figure 3: 5-fold mean test log-loss/instance for a treewidth bound of two vs. training set size for the temperature (left) and traffic (right) datasets. Compared are our approach (solid blue squares), the thin junction tree method (dashed red circles), an aggressive unbounded search (dotted black), and the method of Chechetka and Guestrin [3] (dash-dot magenta diamonds).

Figure 4: Average log-loss vs. treewidth bound for the Hapmap data. Compared are an unbounded aggressive search (dotted) and unconstrained (thin) and constrained by the DNA order (thick) variants of ours and the thin junction tree method.

To qualitatively illustrate the progression of our algorithm, in Figure 2(b) we plot the number of edges in the chain and the treewidth estimate at the end of each iteration for a typical run. Our algorithm aggressively adds multi-edge chains until the treewidth bound is reached, at which point (iteration 24) it becomes fully greedy. To appreciate the non-triviality of some of the chains learned with 4−7 edges, we recall that the chains are added after a Chow-Liu model was initially learned. It is also worth noting that despite their complexity, some chains do not increase the treewidth estimate and we typically have more than K iterations where chains with more than one edge are added. The number of such iterations is still polynomially bounded, as for a Bayesian network with N variables adding more than K · N edges will necessarily result in a treewidth that is greater than K. To evaluate the efficiency of our method we measured its running time as a function of the treewidth bound. Figure 2(c) shows results for the gene expression dataset.
Observe that our method and the greedy thin junction tree approach are both approximately linear in the treewidth bound. Appealingly, the additional computation our method requires is not significant (≤25%). This should not come as a surprise as the bulk of the time is spent on the collection of the data sufficient statistics. It is also worth discussing the range of treewidths we considered in the above experiment as well as the Haplotype experiment below. While treewidths greater than 25 seem excessive for exact inference, state-of-the-art techniques (e.g., [9, 18]) can reasonably handle inference in networks of this complexity. Furthermore, as our results show, it is beneficial in practice to learn such models. Thus, combining our method with state-of-the-art inference techniques can allow practitioners to push the envelope of the complexity of models learned for real applications that rely on exact inference. The Traffic and Temperature Datasets. We now compare our method to the mutual-information based LPACJT approach of Chechetka and Guestrin [3] (we compare to the better variant). As their method is exponential in the treewidth and cannot be used in the gene expression setting, we compare to it on the two discrete real-life datasets Chechetka and Guestrin [3] considered: the temperature data is from a deployment of 54 sensor nodes; the traffic dataset contains traffic flow information measured every 5 minutes in 32 locations in California. To make the comparison fair, we used the same discretization and train/test splits. Furthermore, as their method can only be applied to a small treewidth bound, we also limited our model to a treewidth of two. Figure 3 compares the different methods. Both our method and the thin junction tree approach significantly outperform the LPACJT on small sample sizes. 
This result is consistent with the results reported in Chechetka and Guestrin [3] and is due to the fact that the LPACJT method does not facilitate the use of regularization, which is crucial in the sparse-data regime. The performance of our method is comparable to the greedy thin junction tree approach with no obvious superiority of either method. This should not come as a surprise, since the fact that the unbounded aggressive search is not significantly better suggests that the strong signal in the data can be captured rather easily. In fact, Chechetka and Guestrin [3] show that even a Chow-Liu tree does rather well on these datasets (compare this to the gene expression dataset where the aggressive variant was superior even at a treewidth of five).

Haplotype Sequences. Finally, we consider a more difficult discrete dataset of a sequence of single nucleotide polymorphism (SNP) alleles from the Human HapMap project [6]. Our model is defined over 200 SNPs (binary variables) from chromosome 22 of a European population consisting of 60 individuals (we considered several different sequences along the chromosome with similar results). In this case, there is a natural ordering of variables that corresponds to the position of the SNPs in the DNA sequence. Figure 4 shows test log-loss results when this ordering is enforced (thicker) and when it is not (thinner). The superiority of our method when the ordering is used is obvious, while the performance of the thin junction tree method degrades. This can be expected, as the greedy method does not make use of a node ordering, while our method provides optimality guarantees with respect to a variable ordering at each iteration. Whether constrained to the natural variable ordering or not, our method ultimately also surpasses the unbounded aggressive search.
7 Discussion and Future Work In this work we presented a novel method for learning Bayesian networks of bounded treewidth in time that is polynomial in both the number of variables and the treewidth bound. Our method builds on an edge update algorithm that dynamically maintains a valid moralized triangulation in a way that facilitates the addition of chains that are guaranteed to increase the treewidth bound by at most one. We demonstrated the effectiveness of our treewidth-friendly method on real-life datasets, and showed that by utilizing global structure modification operators, we are able to learn better models than competing methods, even when the treewidth of the models learned is not constrained. Our method can be viewed as a generalization of the work of Chow and Liu [5] that is constrained to a chain structure but that provides an optimality guarantee (with respect to a node ordering) at every treewidth. In addition, unlike the thin junction trees approach of Bach and Jordan [2], we provide a guarantee that our estimate of the treewidth bound will not increase by more than one at each iteration. Furthermore, we add multiple edges at each iteration, which in turn allows us to better cope with the problem of local maxima in the search. To our knowledge, ours is the first method for efficiently learning Bayesian networks with an arbitrary treewidth bound that is not fully greedy. Our method motivates several exciting future directions. It would be interesting to see to what extent we could overcome the need to commit to a specific node ordering at each iteration. While we provably cannot consider every ordering, it may be possible to polynomially provide a reasonable approximation. Second, it may be possible to refine our characterization of the contamination that results from an edge update, which in turn may facilitate the addition of more complex treewidth-friendly structures at each iteration. 
Finally, we are most interested in exploring whether tools similar to the ones employed in this work could be used to dynamically update the bounded treewidth structure that is the approximating distribution in a variational approximate inference setting. References [1] P. Abbeel, D. Koller, and A. Ng. Learning factor graphs in poly. time & sample complexity. JMLR, 2006. [2] F. Bach and M. I. Jordan. Thin junction trees. In NIPS, 2001. [3] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In NIPS, 2008. [4] D. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: AI & Stats V, 1996. [5] C. Chow and C. Liu. Approx. discrete distrib. with dependence trees. IEEE Trans. on Info. Theory, 1968. [6] The International HapMap Consortium. The International HapMap Project. Nature, 2003. [7] G. F. Cooper. The computational complexity of probabilistic inference using belief networks. AI, 1990. [8] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. AI, 1993. [9] A. Darwiche. Recursive conditioning. Artificial Intelligence, 2001. [10] S. Dasgupta. Learning polytrees. In UAI, 1999. [11] G. A. Dirac. On rigid circuit graphs. Abhandlungen aus dem Math. Seminar der Univ. Hamburg 25, 1961. [12] A. Gasch et al. Genomic expression program in the response of yeast cells to environmental changes. Molecular Biology of the Cell, 2000. [13] F. Glover and M. Laguna. Tabu search. In Modern Heuristic Tech. for Comb. Problems, 1993. [14] D. Heckerman. A tutorial on learning Bayesian networks. Technical report, Microsoft Research, 1995. [15] D. Karger and N. Srebro. Learning Markov networks: maximum bounded tree-width graphs. In Symposium on Discrete Algorithms, 2001. [16] A. Koster, H. Bodlaender, and S. Van Hoesel. Treewidth: Computational experiments. Technical report, Universiteit Utrecht, 2001. [17] S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures. 
Journal of the Royal Statistical Society, 1988. [18] R. Marinescu and R. Dechter. AND/OR branch-and-bound for graphical models. IJCAI, 2005. [19] C. Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research, 2001. [20] M. Meila and M. I. Jordan. Learning with mixtures of trees. JMLR, 2000. [21] M. Narasimhan and J. Bilmes. PAC-learning bounded tree-width graphical models. In UAI, 2004. [22] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. [23] N. Robertson and P. Seymour. Graph minors II: Algorithmic aspects of tree-width. J. of Algorithms, 1987. [24] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning Paper #858 Abstract Randomized neural networks are immortalized in this AI Koan: In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. “What are you doing?” asked Minsky. “I am training a randomly wired neural net to play tic-tac-toe,” Sussman replied. “Why is the net wired randomly?” asked Minsky. Sussman replied, “I do not want it to have any preconceptions of how to play.” Minsky then shut his eyes. “Why do you close your eyes?” Sussman asked his teacher. “So that the room will be empty,” replied Minsky. At that moment, Sussman was enlightened. We analyze shallow random networks with the help of concentration of measure inequalities. Specifically, we consider architectures that compute a weighted sum of their inputs after passing them through a bank of arbitrary randomized nonlinearities. We identify conditions under which these networks exhibit good classification performance, and bound their test error in terms of the size of the dataset and the number of random nonlinearities. 1 Introduction In the earliest days of artificial intelligence, the bottom-most layer of neural networks consisted of randomly connected “associator units” that computed random binary functions of their inputs [1]. These randomized shallow networks have largely been superseded by optimally, or nearly optimally, tuned shallow architectures such as weighted sums of positive definite kernels (as in Support Vector Machines), or weighted sums of weak classifiers (as in Adaboost). But recently, architectures that randomly transform their inputs have been resurfacing in the machine learning community [2, 3, 4, 5], largely motivated by the fact that randomization is computationally cheaper than optimization. 
With the help of concentration of measure inequalities on function spaces, we show that training a shallow architecture by randomly choosing the nonlinearities in the first layer results in a classifier that is not much worse than one constructed by optimally tuning the nonlinearities. The main technical contributions of the paper are an approximation error bound (Lemma 1) and a synthesis of known techniques from learning theory to analyze random shallow networks.

Consider the problem of fitting a function $f : X \to \mathbb{R}$ to a training data set of $m$ input-output pairs $\{x_i, y_i\}_{i=1\ldots m}$, drawn iid from some unknown distribution $P(x, y)$, with $x_i \in X$ and $y_i \in \pm 1$. The fitting problem consists of finding an $f$ that minimizes the empirical risk

$$R_{\mathrm{emp}}[f] \equiv \frac{1}{m} \sum_{i=1}^{m} c(f(x_i), y_i). \quad (1)$$

The loss $c(y, y')$ penalizes the deviation between the prediction $y$ and the label $y'$. Popular choices for $c$ are the hinge loss, $\max(0, 1 - yy')$, used in the Support Vector Machine [6], the exponential loss, $e^{-yy'}$, used in Adaboost [7, 8], and the quadratic loss, $(y - y')^2$, used in matching pursuit [9] and regularized least squares classification [10].

Similarly to kernel machines and Adaboost, we will consider functions of the form $f(x) = \sum_{i=1}^{\infty} \alpha(w_i)\,\phi(x; w_i)$ or $f(x) = \int \alpha(w)\,\phi(x; w)\,dw$, where feature functions $\phi : X \times \Omega \to \mathbb{R}$, parameterized by some vector $w \in \Omega$, are weighted by a function $\alpha : \Omega \to \mathbb{R}$. In kernel machines, the feature functions $\phi$ are the eigenfunctions of a positive definite kernel $k$, and in Adaboost they are typically decision trees or stumps. Adaboost [8, 7] and matching pursuit [11, 9] find approximate empirical risk minimizers over this class of functions by greedily minimizing over a finite number of scalar weights $\alpha$ and parameter vectors $w$ jointly:

$$\underset{w_1, \ldots, w_K \in \Omega,\; \alpha \in A}{\text{minimize}} \quad R_{\mathrm{emp}}\!\left[ \sum_{k=1}^{K} \phi(x; w_k)\,\alpha_k \right]. \quad (2)$$

But it is also possible to randomize over $w$ and minimize over $\alpha$. 
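To make the loss choices above concrete, here is a minimal NumPy sketch (our own illustration, not from the paper; the function name and toy data are assumptions) that evaluates the empirical risk (1) under the three losses named:

```python
import numpy as np

def empirical_risk(f_vals, y, loss="hinge"):
    """R_emp[f] = (1/m) * sum_i c(f(x_i), y_i) for the three losses in the text."""
    margins = y * f_vals                       # y_i * f(x_i)
    if loss == "hinge":                        # Support Vector Machine
        c = np.maximum(0.0, 1.0 - margins)
    elif loss == "exponential":                # Adaboost
        c = np.exp(-margins)
    elif loss == "quadratic":                  # matching pursuit / regularized least squares
        c = (y - f_vals) ** 2
    else:
        raise ValueError(loss)
    return c.mean()

y = np.array([1.0, -1.0, 1.0])
f = np.array([2.0, -3.0, 1.5])                 # every margin y_i * f(x_i) is at least 1
print(empirical_risk(f, y, "hinge"))           # 0.0
print(empirical_risk(f, y, "quadratic"))       # 1.75
```

Note that the hinge loss vanishes as soon as every margin reaches 1, while the quadratic loss keeps penalizing predictions that overshoot the labels.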
Rather than jointly optimizing over $\alpha$ and $w$, the following algorithm first draws the parameters of the nonlinearities randomly from a pre-specified distribution $p$. Then, with $w$ fixed, it fits the weights $\alpha$ optimally via a simple convex optimization:

Algorithm 1: The Weighted Sum of Random Kitchen Sinks fitting procedure.
Input: A dataset $\{x_i, y_i\}_{i=1\ldots m}$ of $m$ points, a bounded feature function $|\phi(x; w)| \le 1$, an integer $K$, a scalar $C$, and a probability distribution $p(w)$ on the parameters of $\phi$.
Output: A function $\hat{f}(x) = \sum_{k=1}^{K} \phi(x; w_k)\,\alpha_k$.
Draw $w_1, \ldots, w_K$ iid from $p$.
Featurize the input: $z_i \leftarrow [\phi(x_i; w_1), \ldots, \phi(x_i; w_K)]^\top$.
With $w$ fixed, solve the empirical risk minimization problem

$$\underset{\alpha \in \mathbb{R}^K}{\text{minimize}} \quad \frac{1}{m} \sum_{i=1}^{m} c\left(\alpha^\top z_i,\, y_i\right) \quad (3)$$
$$\text{s.t.} \quad \|\alpha\|_\infty \le C/K. \quad (4)$$

In practice, we let $C$ be large enough that the constraint (4) remains inactive. When $c$ is the quadratic loss, the minimization (3) is simple linear least squares, and when $c$ is the hinge loss, it amounts to fitting a linear SVM to a dataset of $m$ $K$-dimensional feature vectors.

Randomly setting the nonlinearities is appealing for several reasons. First, the fitting procedure is simple: Algorithm 1 can be implemented in a few lines of MATLAB code even for complex feature functions $\phi$, whereas fitting nonlinearities with Adaboost requires much more care. This flexibility allows practitioners to experiment with a wide variety of nonlinear feature functions without first having to devise fitting procedures for them. Second, the algorithm is fast: experiments show between one and three orders of magnitude speedup over Adaboost. On the down side, one might expect to have to tune the sampling distribution $p$ for each dataset. But in practice, we find that to obtain accuracies that are competitive with Adaboost, the same sampling distribution can be used for all the datasets we considered if the coordinates of the data are first zero-meaned and rescaled to unit variance. 
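The procedure is short enough to sketch directly. The following is an illustrative NumPy translation of Algorithm 1 for the quadratic loss (the paper mentions MATLAB; the helper names, the tiny ridge term added for numerical stability, and the toy stump example are our own assumptions, and the constraint (4) is dropped, as the text says is done in practice):

```python
import numpy as np

def fit_kitchen_sinks(X, y, K, phi, sample_w, ridge=1e-6, rng=None):
    """Sketch of Algorithm 1 with the quadratic loss: draw w_1..w_K iid from p,
    featurize the input, then fit alpha by unconstrained linear least squares."""
    rng = np.random.default_rng(rng)
    ws = [sample_w(rng) for _ in range(K)]
    Z = np.column_stack([phi(X, w) for w in ws])          # m x K featurized input
    alpha = np.linalg.solve(Z.T @ Z + ridge * np.eye(K), Z.T @ y)
    return lambda Xnew: np.column_stack([phi(Xnew, w) for w in ws]) @ alpha

# hypothetical example: random decision stumps on a 1-d threshold problem
phi = lambda X, w: np.sign(X[:, w[0]] - w[1])             # stump on coordinate w[0]
sample_w = lambda rng: (int(rng.integers(0, X.shape[1])), float(rng.normal()))
X = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)
y = np.sign(X[:, 0])
f = fit_kitchen_sinks(X, y, K=50, phi=phi, sample_w=sample_w, rng=0)
print(np.mean(np.sign(f(X)) == y))                        # close to 1.0
```

Swapping in the hinge loss would replace the least-squares solve with a linear SVM fit on the same K-dimensional features, per the text.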
Formally, we show that Algorithm 1 returns a function that has low true risk. The true risk of a function $f$ is

$$R[f] \equiv \mathbb{E}_{(x,y)\sim P}\; c(f(x), y), \quad (5)$$

and measures the expected loss of $f$ on as-yet-unseen test points, assuming these test points are generated from the same distribution that generated the training data. The following theorem states that with very high probability, Algorithm 1 returns a function whose true risk is near the lowest true risk attainable by functions in the class $F_p$ defined below:

Theorem 1 (Main result). Let $p$ be a distribution on $\Omega$, and let $\phi$ satisfy $\sup_{x,w} |\phi(x; w)| \le 1$. Define the set

$$F_p \equiv \left\{ f(x) = \int_\Omega \alpha(w)\,\phi(x; w)\,dw \;\middle|\; |\alpha(w)| \le C\,p(w) \right\}. \quad (6)$$

Suppose $c(y, y') = c(yy')$, with $c(yy')$ $L$-Lipschitz. Then for any $\delta > 0$, if the training data $\{x_i, y_i\}_{i=1\ldots m}$ are drawn iid from some distribution $P$, Algorithm 1 returns a function $\hat{f}$ that satisfies

$$R[\hat{f}] - \min_{f \in F_p} R[f] \le O\!\left( \left( \frac{1}{\sqrt{m}} + \frac{1}{\sqrt{K}} \right) L C \sqrt{\log \tfrac{1}{\delta}} \right) \quad (7)$$

with probability at least $1 - 2\delta$ over the training dataset and the choice of the parameters $w_1, \ldots, w_K$.

Note that the dependence on $\delta$ in the bound is logarithmic, so even small $\delta$'s do not cause the bound to blow up. The set $F_p$ is a rich class of functions. It consists of functions whose weights $\alpha(w)$ decay more rapidly than the given sampling distribution $p$. For example, when $\phi(x; w)$ are sinusoids with frequency $w$, $F_p$ is the set of all functions whose Fourier transforms decay faster than $C\,p(w)$. We prove the theorem in the next section, and demonstrate the algorithm on some sample datasets in Section 4. The proof of the theorem provides explicit values for the constants in the big-O notation.

2 Proof of the Main Theorem

Algorithm 1 returns a function that lies in the random set

$$\hat{F}_w \equiv \left\{ f(x) = \sum_{k=1}^{K} \alpha_k\,\phi(x; w_k) \;\middle|\; |\alpha_k| \le \frac{C}{K} \right\}. \quad (8)$$

The bound in the main theorem can be decomposed in a standard way into two bounds: 1. 
An approximation error bound that shows that the lowest true risk attainable by a function in $\hat{F}_w$ is not much larger than the lowest true risk attainable in $F_p$ (Lemma 2). 2. An estimation error bound that shows that the true risk of every function in $\hat{F}_w$ is close to its empirical risk (Lemma 3).

The following Lemma is helpful in bounding the approximation error:

Lemma 1. Let $\mu$ be a measure on $X$, and $f^*$ a function in $F_p$. If $w_1, \ldots, w_K$ are drawn iid from $p$, then for any $\delta > 0$, with probability at least $1 - \delta$ over $w_1, \ldots, w_K$, there exists a function $\hat{f} \in \hat{F}_w$ so that

$$\sqrt{\int_X \left( \hat{f}(x) - f^*(x) \right)^2 d\mu(x)} \;\le\; \frac{C}{\sqrt{K}} \left( 1 + \sqrt{2 \log \tfrac{1}{\delta}} \right). \quad (9)$$

The proof relies on Lemma 4 of the Appendix, which states that the average of bounded vectors in a Hilbert space concentrates towards its expectation in the Hilbert norm exponentially fast.

Proof. Since $f^* \in F_p$, we can write $f^*(x) = \int_\Omega \alpha(w)\,\phi(x; w)\,dw$. Construct the functions $f_k = \beta_k\,\phi(\cdot\,; w_k)$, $k = 1 \ldots K$, with $\beta_k \equiv \frac{\alpha(w_k)}{p(w_k)}$, so that $\mathbb{E}\, f_k = f^*$. Let $\hat{f}(x) = \sum_{k=1}^{K} \frac{\beta_k}{K}\,\phi(x; w_k)$ be the sample average of these functions. Then $\hat{f} \in \hat{F}_w$ because $|\beta_k / K| \le C/K$. Also, under the inner product $\langle f, g \rangle = \int f(x)\,g(x)\,d\mu(x)$, $\|\beta_k\,\phi(\cdot\,; w_k)\| \le C$. The Lemma follows by applying Lemma 4 to $f_1, \ldots, f_K$ under this inner product.

Lemma 2 (Bound on the approximation error). Suppose $c(y, y')$ is $L$-Lipschitz in its first argument. Let $f^*$ be a fixed function in $F_p$. If $w_1, \ldots, w_K$ are drawn iid from $p$, then for any $\delta > 0$, with probability at least $1 - \delta$ over $w_1, \ldots, w_K$, there exists a function $\hat{f} \in \hat{F}_w$ that satisfies

$$R[\hat{f}] \le R[f^*] + \frac{L C}{\sqrt{K}} \left( 1 + \sqrt{2 \log \tfrac{1}{\delta}} \right). \quad (10)$$

Proof. For any two functions $f$ and $g$, the Lipschitz condition on $c$ followed by the concavity of square root gives

$$R[f] - R[g] = \mathbb{E}\left[ c(f(x), y) - c(g(x), y) \right] \le \mathbb{E}\, |c(f(x), y) - c(g(x), y)| \quad (11)$$
$$\le L\, \mathbb{E}\, |f(x) - g(x)| \le L \sqrt{\mathbb{E}\,(f(x) - g(x))^2}. \quad (12)$$

The lemma then follows from Lemma 1.

Next, we rely on a standard result from statistical learning theory to show that for a given choice of $w_1, \ldots$
, $w_K$, the empirical risk of every function in $\hat{F}_w$ is close to its true risk.

Lemma 3 (Bound on the estimation error). Suppose $c(y, y') = c(yy')$, with $c(yy')$ $L$-Lipschitz. Let $w_1, \ldots, w_K$ be fixed. If $\{x_i, y_i\}_{i=1\ldots m}$ are drawn iid from a fixed distribution, for any $\delta > 0$, with probability at least $1 - \delta$ over the dataset, we have

$$\forall f \in \hat{F}_w \quad |R[f] - R_{\mathrm{emp}}[f]| \le \frac{1}{\sqrt{m}} \left( 4LC + 2|c(0)| + LC \sqrt{\tfrac{1}{2} \log \tfrac{1}{\delta}} \right). \quad (13)$$

Proof sketch. By Hölder, the functions in $\hat{F}_w$ are bounded above by $C$. The Rademacher complexity of $\hat{F}_w$ can be shown to be bounded above by $C/\sqrt{m}$ (see the Appendix). The theorem follows by results from [12], which are summarized in Theorem 2 of the Appendix.

Proof of Theorem 1. Let $f^*$ be a minimizer of $R$ over $F_p$, $\hat{f}$ a minimizer of $R_{\mathrm{emp}}$ over $\hat{F}_w$ (the output of the algorithm), and $\hat{f}^*$ a minimizer of $R$ over $\hat{F}_w$. Then

$$R[\hat{f}] - R[f^*] = R[\hat{f}] - R[\hat{f}^*] + R[\hat{f}^*] - R[f^*] \quad (14)$$
$$\le |R[\hat{f}] - R[\hat{f}^*]| + R[\hat{f}^*] - R[f^*]. \quad (15)$$

The first term on the right side is an estimation error: by Lemma 3, with probability at least $1 - \delta$, $|R[\hat{f}^*] - R_{\mathrm{emp}}[\hat{f}^*]| \le \epsilon_{\mathrm{est}}$ and simultaneously $|R[\hat{f}] - R_{\mathrm{emp}}[\hat{f}]| \le \epsilon_{\mathrm{est}}$, where $\epsilon_{\mathrm{est}}$ is the right side of the bound in Lemma 3. By the optimality of $\hat{f}$, $R_{\mathrm{emp}}[\hat{f}] \le R_{\mathrm{emp}}[\hat{f}^*]$. Combining these facts gives that with probability at least $1 - \delta$,

$$|R[\hat{f}] - R[\hat{f}^*]| \le 2\epsilon_{\mathrm{est}} = \frac{2}{\sqrt{m}} \left( 4LC + 2|c(0)| + LC \sqrt{\tfrac{1}{2} \log \tfrac{1}{\delta}} \right).$$

The second term in Equation (15) is the approximation error, and by Lemma 2, with probability at least $1 - \delta$, it is bounded above by

$$\epsilon_{\mathrm{app}} = \frac{L C}{\sqrt{K}} \left( 1 + \sqrt{2 \log \tfrac{1}{\delta}} \right).$$

By the union bound, with probability at least $1 - 2\delta$, the right side of Equation (15) is bounded above by $2\epsilon_{\mathrm{est}} + \epsilon_{\mathrm{app}}$.

3 Related Work

Greedy algorithms for fitting networks of the form (2) have been analyzed, for example, in [7, 11, 9]. Zhang analyzed greedy algorithms and a randomized algorithm similar to Algorithm 1 for fitting sparse Gaussian processes to data, a more narrow setting than we consider here. 
He obtained bounds on the expected error for this sparse approximation problem by viewing these methods as stochastic gradient descent. Approximation error bounds such as that of Maurey [11, Lemma 1], Girosi [13], and Gnecco and Sanguineti [14] rely on random sampling to guarantee the existence of good parameters $w_1, \ldots, w_K$, but they require access to the representation of $f^*$ to actually produce these parameters. These approximation bounds cannot be used to guarantee the performance of Algorithm 1 because Algorithm 1 is oblivious of the data when it generates the parameters. Lemma 2 differs from these bounds in that it relies on $f^*$ only to generate the weights $\alpha_1, \ldots, \alpha_K$, but it remains oblivious to $f^*$ when generating the parameters by sampling them from $p$ instead. Furthermore, because $\hat{F}_w$ is smaller than the classes considered by [11, 14], the approximation error rate in Lemma 1 matches those of existing approximation error bounds.

Figure 1: Comparisons between 
Random Kitchen Sinks and Adaboosted decision stumps on adult (first row), activity (second row), and KDDCUP99 (third row). The first column plots test error of each classifier as a function of $K$. The accuracy of Random Kitchen Sinks catches up to that of Adaboost as $K$ grows. The second column plots the total training and testing time as a function of $K$. For a given $K$, Random Kitchen Sinks is between two and three orders of magnitude faster than Adaboost. The third column combines the previous two columns. It plots testing+training time required to achieve a desired error rate. For a given error rate, Random Kitchen Sinks is between one and three orders of magnitude faster than Adaboost.

4 Experiments

Since others have already empirically demonstrated the benefits of random featurization [2, 3, 4, 5], we only present a few illustrations in this section. We compared Random Kitchen Sinks with Adaboost on three classification problems: The adult dataset has roughly 32,000 training instances. Each categorical variable was replaced by a binary indicator variable over the categories, resulting in 123 dimensions per instance. The test set consists of 15,000 instances. KDDCUP99 is a network intrusion detection problem with roughly 5,000,000 127-dimensional training instances, subsampled to 50,000 instances. The test set consists of 150,000 instances. activity is a human activity recognition dataset with 200,000 223-dimensional instances, of which about 200 are irrelevant for classification. The test set consists of 50,000 instances. The datasets were preprocessed by zero-meaning and rescaling each dimension to unit variance. The feature functions in these experiments were decision stumps $\phi(x; w) = \mathrm{sign}(x_{w_d} - w_t)$, which simply determine whether the $w_d$th dimension of $x$ is smaller or greater than the threshold $w_t$. 
The sampling distribution $p$ for Random Kitchen Sinks drew the threshold parameter $w_t$ from a normal distribution and the coordinate $w_d$ from a uniform distribution over the coordinates. For some experiments, we could afford to run Random Kitchen Sinks for larger $K$ than Adaboost, and these runs are included in the plots. We used the quadratic loss, but find no substantial differences in quality under the hinge loss (though there is degradation in speed by a factor of 2-10). We used MATLAB optimized versions of Adaboost and Random Kitchen Sinks, and report wall clock time in seconds.

Figure 1 compares the results on these datasets. Adaboost expends considerable effort in choosing the decision stumps and achieves good test accuracy with a few of them.

Figure 2: The $L_\infty$ norm of $\alpha$ returned by RKS for 500 different runs of RKS with various settings of $K$ on adult. $\|\alpha\|_\infty$ decays with $K$, which justifies dropping the constraint (4) in practice.

Random Kitchen Sinks requires more nonlinearities to achieve similar accuracies. But because it is faster than Adaboost, it can produce classifiers that are just as accurate as Adaboost's with more nonlinearities in less total time. In these experiments, Random Kitchen Sinks is almost as accurate as Adaboost but faster by one to three orders of magnitude. We defer the details of the following experiments to a technical report: As an alternative to Adaboost, we have experimented with conjugate gradient-descent based fitting procedures for (2), and find again that randomly generating the nonlinearities produces equally accurate classifiers using many more nonlinearities but in much less time. We obtain similar results as those of Figure 1 with the random features of [4] and random sigmoidal ridge functions $\phi(x; w) = \sigma(w' x)$. To simplify the implementation of Random Kitchen Sinks, we ignore the constraint (4) in practice. 
The scalar $C$ controls the size of $\hat{F}_w$ and $F_p$, and to eliminate the constraint, we implicitly set $C$ to a large value so that the constraint is never tight. However, for the results of this paper to hold, $C$ cannot grow faster than $K$. Figure 2 shows that the $L_\infty$ norm of the unconstrained optimum of (3) for the adult dataset does decay with $K$, so that there exists a $C$ that does not grow with $K$ for which the constraint is never tight, thereby justifying dropping the constraint.

5 Discussion and Conclusions

Various hardness of approximation lower bounds for fixed basis functions exist (see, for example, [11]). The guarantee in Lemma 1 avoids running afoul of these lower bounds because it does not seek to approximate every function in $F_p$ simultaneously, but rather only the true risk minimizer with high probability. It may be surprising that Theorem 1 holds even when the feature functions $\phi$ are nearly orthogonal. The result works because the importance sampling constraint $|\alpha(w)| \le C\,p(w)$ ensures that a feature function does not receive a large weight if it is unlikely to be sampled by $p$. When the feature functions are highly linearly dependent, better bounds can be obtained because any $f(x) = \int \alpha(w)\,\phi(x; w)\,dw$ can be rewritten as $f(x) = \int \alpha'(w)\,\phi(x; w)\,dw$ with $|\alpha'|/p \le |\alpha|/p$, improving the importance ratio $C$. This intuition can be formalized via the Rademacher complexity of $\phi$, a result which we leave for future work. One may wonder whether Algorithm 1 has good theoretical guarantees on $F_p$ because $F_p$ is too small a class of functions. Indeed, when $\phi$ are the Fourier bases, $|\alpha|/p \le C$ implies $\int_\Omega |\alpha(w)|\,dw \le C$, so every function in $F_p$ has an absolutely integrable Fourier transform. Thus $F_p$ is smaller than the set considered by Jones [9] for greedy matching pursuit, and for which he obtained an approximation rate of $O(1/\sqrt{K})$. The most reliable way to show that $F_p$ is rich enough for practical applications is to conduct experiments with real data. 
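As a sanity check on the decay claim, here is an illustrative sketch under our own assumptions (random decision stumps, a synthetic 1-d labeling, and a small ridge term rather than the paper's exact adult setup): fit the unconstrained problem (3) at two values of $K$ and compare the resulting infinity norms.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal(500)                              # synthetic 1-d inputs
y = np.sign(X)                                            # synthetic labels

def inf_norm_of_alpha(K):
    """||alpha||_inf of the unconstrained least-squares fit over K random stumps."""
    thresholds = rng.standard_normal(K)
    Z = np.sign(X[:, None] - thresholds[None, :])         # m x K stump features
    alpha = np.linalg.solve(Z.T @ Z + 1e-2 * np.eye(K), Z.T @ y)
    return float(np.abs(alpha).max())

n25, n400 = inf_norm_of_alpha(25), inf_norm_of_alpha(400)
print(n25, n400)                                          # the second is smaller
```

With 16x more (largely redundant) stumps, the fitted weight mass spreads out and the largest single weight shrinks, mirroring the trend in Figure 2.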
The experiments show that $F_p$ indeed contains good predictors.

The convergence rate for Adaboost [7] is exponentially fast in $K$, which at first appears to be much faster than $1/\sqrt{K}$. However, the base of the exponent is the minimum weighted margin encountered by the algorithm through all iterations, a quantity that is difficult to bound a priori. This makes a direct comparison of the bounds difficult, though we have tried to provide empirical comparisons.

A Exponentially Fast Concentration of Averages towards the Mean in a Hilbert Space

Lemma 4. Let $X = \{x_1, \ldots, x_K\}$ be iid random variables in a ball $H$ of radius $M$ centered around the origin in a Hilbert space. Denote their average by $\bar{X} = \frac{1}{K} \sum_{k=1}^{K} x_k$. Then for any $\delta > 0$, with probability at least $1 - \delta$,

$$\left\| \bar{X} - \mathbb{E}\,\bar{X} \right\| \le \frac{M}{\sqrt{K}} \left( 1 + \sqrt{2 \log \tfrac{1}{\delta}} \right). \quad (16)$$

Proof. We use McDiarmid's inequality to show that the scalar function $f(X) = \| \bar{X} - \mathbb{E}\,\bar{X} \|$ is concentrated about its mean, which shrinks as $O(1/\sqrt{K})$.

The function $f$ is stable under perturbation of its $i$th argument. Let $\tilde{X} = \{x_1, \ldots, \tilde{x}_i, \ldots, x_K\}$ be a copy of $X$ with the $i$th element replaced by an arbitrary element of $H$. Applying the triangle inequality twice gives

$$|f(X) - f(\tilde{X})| = \left| \|\bar{X} - \mathbb{E}\,\bar{X}\| - \|\bar{\tilde{X}} - \mathbb{E}\,\bar{X}\| \right| \le \|\bar{X} - \bar{\tilde{X}}\| \le \frac{\|x_i - \tilde{x}_i\|}{K} \le \frac{2M}{K}. \quad (17)$$

To bound the expectation of $f$, use the familiar identity about the variance of the average of iid random variables,

$$\mathbb{E} \left\| \bar{X} - \mathbb{E}\,\bar{X} \right\|^2 = \frac{1}{K} \left( \mathbb{E}\,\|x\|^2 - \|\mathbb{E}\,x\|^2 \right), \quad (18)$$

in conjunction with Jensen's inequality and the fact that $\|x\| \le M$ to get

$$\mathbb{E}\, f(X) \le \sqrt{\mathbb{E}\, f^2(X)} = \sqrt{\mathbb{E} \left\| \bar{X} - \mathbb{E}\,\bar{X} \right\|^2} \le \frac{M}{\sqrt{K}}. \quad (19)$$

This bound for the expectation of $f$ and McDiarmid's inequality give

$$\Pr_X \left[ f(X) - \frac{M}{\sqrt{K}} \ge \epsilon \right] \le \Pr_X \left[ f(X) - \mathbb{E}\, f(X) \ge \epsilon \right] \le \exp\!\left( \frac{-K\epsilon^2}{2M^2} \right). \quad (20)$$

To get the final result, set $\delta$ to the right hand side, solve for $\epsilon$, and rearrange. 
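Lemma 4 is easy to probe numerically. The sketch below is our own illustration, taking $H = \mathbb{R}^{50}$ with iid points drawn uniformly on the unit sphere, so that $M = 1$ and $\mathbb{E}\,x = 0$; it estimates $\mathbb{E}\,\|\bar{X} - \mathbb{E}\,\bar{X}\|$ for several $K$ and exhibits the $M/\sqrt{K}$ scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 50, 500          # H = R^50, radius M = 1

def mean_deviation(K):
    """Monte Carlo estimate of E || Xbar - E Xbar || for K iid unit vectors."""
    devs = []
    for _ in range(trials):
        V = rng.standard_normal((K, d))
        V /= np.linalg.norm(V, axis=1, keepdims=True)   # uniform on the unit sphere
        devs.append(np.linalg.norm(V.mean(axis=0)))     # E x = 0, so this is ||Xbar - E Xbar||
    return float(np.mean(devs))

for K in (10, 40, 160):
    print(K, mean_deviation(K))   # roughly halves each time K quadruples, ~ 1/sqrt(K)
```

The observed deviation tracks $1/\sqrt{K}$ closely, consistent with equation (19)'s bound on the expectation of $f$.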
B Generalization bounds that use Rademacher complexity

One measure of the size of a class $F$ of functions is its Rademacher complexity:

$$R_m[F] \equiv \mathbb{E}_{\substack{x_1, \ldots, x_m \\ \sigma_1, \ldots, \sigma_m}} \left[ \sup_{f \in F} \frac{1}{m} \sum_{i=1}^{m} \sigma_i f(x_i) \right].$$

The variables $\sigma_1, \ldots, \sigma_m$ are iid Bernoulli random variables that take on the value $-1$ or $+1$ with equal probability and are independent of $x_1, \ldots, x_m$. The Rademacher complexity of $\hat{F}_w$ can be bounded as follows. Define $S \equiv \left\{ \alpha \in \mathbb{R}^K \mid \|\alpha\|_\infty \le \frac{C}{K} \right\}$:

$$R_m[\hat{F}_w] = \mathbb{E}_{\sigma, X} \sup_{\alpha \in S} \frac{1}{m} \sum_{i=1}^{m} \sigma_i \sum_{k=1}^{K} \alpha_k \phi(x_i; w_k) = \mathbb{E}_{\sigma, X} \sup_{\alpha \in S} \sum_{k=1}^{K} \alpha_k \frac{1}{m} \sum_{i=1}^{m} \sigma_i \phi(x_i; w_k) \quad (21)$$

$$\le \mathbb{E}_{\sigma, X}\, \frac{C}{K} \sum_{k=1}^{K} \left| \frac{1}{m} \sum_{i=1}^{m} \sigma_i \phi(x_i; w_k) \right| \le \mathbb{E}_{X}\, \frac{C}{K} \sum_{k=1}^{K} \sqrt{\mathbb{E}_\sigma \left( \frac{1}{m} \sum_{i=1}^{m} \sigma_i \phi(x_i; w_k) \right)^2} \quad (22)$$

$$= \mathbb{E}_{X}\, \frac{C}{K} \sum_{k=1}^{K} \sqrt{\mathbb{E}_\sigma\, \frac{1}{m^2} \sum_{i=1}^{m} \phi^2(x_i; w_k)} \le \frac{C}{K} \sum_{k=1}^{K} \sqrt{\frac{1}{m}} \le C/\sqrt{m}, \quad (23)$$

where the first inequality follows by Hölder, the second by the concavity of square root, the third by the fact that conditioned on $w$, $\mathbb{E}_\sigma\, \sigma_i \phi(x_i; w)\, \sigma_j \phi(x_j; w) = 0$ when $i \ne j$, and the fourth follows by the boundedness of $\phi$.

The following theorem is a summary of the results from [12]:

Theorem 2. Let $F$ be a class of bounded functions so that $\sup_x |f(x)| \le C$ for all $f \in F$, and suppose $c(y, y') = c(yy')$, with $c(yy')$ $L$-Lipschitz. Then with probability at least $1 - \delta$ with respect to training samples $\{x_i, y_i\}_{i=1\ldots m}$ drawn from a probability distribution $P$ on $X \times \{-1, +1\}$, every function in $F$ satisfies

$$R[f] \le R_{\mathrm{emp}}[f] + 4 L R_m[F] + \frac{2|c(0)|}{\sqrt{m}} + L C \sqrt{\frac{1}{2m} \log \frac{1}{\delta}}. \quad (24)$$

References [1] H. D. Block. The perceptron: a model for brain functioning. Reviews of Modern Physics, 34:123–135, January 1962. [2] Y. Amit and D. Geman. Shape quantization and recognition with randomized trees. Neural Computation, 9(7):1545–1588, 1997. [3] F. Moosmann, B. Triggs, and F. Jurie. Randomized clustering forests for building fast and discriminative visual vocabularies. In Advances in Neural Information Processing Systems (NIPS), 2006. [4] A. Rahimi and B. Recht. Random features for large-scale kernel machines. 
In Advances in Neural Information Processing Systems (NIPS), 2007. [5] W. Maass and H. Markram. On the computational power of circuits of spiking neurons. Journal of Computer and System Sciences, 69:593–616, December 2004. [6] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: an application to face detection. In Computer Vision and Pattern Recognition (CVPR), 1997. [7] R. E. Schapire. The boosting approach to machine learning: An overview. In D. D. Denison, M. H. Hansen, C. Holmes, B. Mallick, and B. Yu, editors, Nonlinear Estimation and Classification. Springer, 2003. [8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Technical report, Dept. of Statistics, Stanford University, 1998. [9] L. K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. The Annals of Statistics, 20(1):608–613, March 1992. [10] R. Rifkin, G. Yeo, and T. Poggio. Regularized least squares classification. Advances in Learning Theory: Methods, Model and Applications, NATO Science Series III: Computer and Systems Sciences, 190, 2003. [11] A. R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39:930–945, May 1993. [12] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research (JMLR), 3:463–482, 2002. [13] F. Girosi. Approximation error bounds that use VC-bounds. In International Conference on Neural Networks, pages 295–302, 1995. [14] G. Gnecco and M. Sanguineti. Approximation error bounds via Rademacher's complexity. Applied Mathematical Sciences, 2(4):153–176, 2008.
Regularized Learning with Networks of Features Ted Sandler, Partha Pratim Talukdar, and Lyle H. Ungar Department of Computer & Information Science, University of Pennsylvania {tsandler,partha,ungar}@cis.upenn.edu John Blitzer Department of Computer Science, U.C. Berkeley blitzer@cs.berkeley.edu Abstract For many supervised learning problems, we possess prior knowledge about which features yield similar information about the target variable. In predicting the topic of a document, we might know that two words are synonyms, and when performing image recognition, we know which pixels are adjacent. Such synonymous or neighboring features are near-duplicates and should be expected to have similar weights in an accurate model. Here we present a framework for regularized learning when one has prior knowledge about which features are expected to have similar and dissimilar weights. The prior knowledge is encoded as a network whose vertices are features and whose edges represent similarities and dissimilarities between them. During learning, each feature’s weight is penalized by the amount it differs from the average weight of its neighbors. For text classification, regularization using networks of word co-occurrences outperforms manifold learning and compares favorably to other recently proposed semi-supervised learning methods. For sentiment analysis, feature networks constructed from declarative human knowledge significantly improve prediction accuracy. 1 Introduction For many important problems in machine learning, we have a limited amount of labeled training data and a very high-dimensional feature space. A common approach to alleviating the difficulty of learning in these settings is to regularize a model by penalizing a norm of its parameter vector. The most commonly used norms in classification, L1 and L2, assume independence among model parameters [1]. However, we often have access to information about dependencies between parameters. 
For example, with spatio-temporal data, we usually know which measurements were taken at points nearby in space and time. And in natural language processing, digital lexicons such as WordNet can indicate which words are synonyms or antonyms [2]. For the biomedical domain, databases such as KEGG and DIP list putative protein interactions [3, 4]. And in the case of semi-supervised learning, dependencies can be inferred from unlabeled data [5, 6]. Consequently, we should be able to learn models more effectively if we can incorporate dependency structure directly into the norm used for regularization. Here we introduce regularized learning with networks of features, a framework for constructing customized norms on the parameters of a model when we have prior knowledge about which parameters are likely to have similar values. Since our focus is on classification, the parameters we consider are feature weights in a linear classifier. The prior knowledge is encoded as a network or graph whose nodes represent features and whose edges represent similarities between the features in terms of how likely they are to have similar weights. During learning, each feature’s weight is penalized by the amount it differs from the average weight of its neighbors. This regularization objective is closely connected to the unsupervised dimensionality reduction method, locally linear embedding (LLE), proposed by Roweis and Saul [7]. In LLE, each data instance is assumed to be a linear combination of its nearest neighbors on a low dimensional manifold. In this work, each feature’s weight is preferred (though not required) to be a linear combination of the weights of its neighbors. Similar to other recent methods for incorporating prior knowledge in learning, our framework can be viewed as constructing a Gaussian prior with non-diagonal covariance matrix on the model parameters [6, 8]. However, instead of constructing the covariance matrix directly, it is induced from a network. 
The network is typically sparse in that each feature has only a small number of neighbors. However, the induced covariance matrix is generally dense. Consequently, we can implicitly construct rich and dense covariance matrices over large feature spaces without incurring the space and computational blow-ups that would be incurred if we attempted to construct these matrices explicitly. Regularization using networks of features is especially appropriate for high-dimensional feature spaces such as are encountered in text processing, where the local distances required by traditional manifold classification methods [9, 10] may be difficult to estimate accurately, even with large amounts of unlabeled data. We show that regularization with feature networks derived from word co-occurrence statistics outperforms manifold regularization and another, more recent, semi-supervised learning approach [5] on the task of text classification. Feature network based regularization also supports extensions which provide flexibility in modeling parameter dependencies, allowing for feature dissimilarities and the introduction of feature classes whose weights have common but unknown means. We demonstrate that these extensions improve classification accuracy on the task of classifying product reviews in terms of how favorable they are to the products in question [11]. Finally, we contrast our approach with related regularization methods. 2 Regularized Learning with Networks of Features We assume a standard supervised learning framework in which we are given a training set of instances $T = \{(x_i, y_i)\}_{i=1}^n$ with $x_i \in \mathbb{R}^d$ and associated labels $y_i \in \mathcal{Y}$. We wish to learn a linear classifier parameterized by weight vector $w \in \mathbb{R}^d$ by minimizing a convex loss function $l(x, y; w)$ over the training instances, $(x_i, y_i)$. For many problems, the dimension, $d$, is much larger than the number of labeled instances, $n$. Therefore, it is important to impose some constraints on $w$. 
Here we do this using a directed network or graph, $G$, whose vertices, $V = \{1, \ldots, d\}$, correspond to the features of our model and whose edges link features whose weights are believed to be similar. The edges of $G$ are non-negative, with larger weights indicating greater similarity. Conversely, a weight of zero means that two features are not believed a priori to be similar. As has been shown elsewhere [5, 6, 8], such similarities can be inferred from prior domain knowledge, auxiliary task learning, and statistics computed on unlabeled data. For the time being we assume that $G$ is given and defer its construction to the experimental work in section 4. The weights of $G$ are encoded by a matrix, $P$, where $P_{ij} \geq 0$ gives the weight of the directed edge from vertex $i$ to vertex $j$. We constrain the out-degree of each vertex to sum to one, $\sum_j P_{ij} = 1$, so that no feature "dominates" the graph. Because the semantics of the graph are that linked features should have similar weights, we penalize each feature's weight by the squared amount it differs from the weighted average of its neighbors. This gives us the following criterion to optimize in learning: $$\mathrm{loss}(w) = \sum_{i=1}^n l(x_i, y_i; w) + \alpha \sum_{j=1}^d \Big( w_j - \sum_k P_{jk} w_k \Big)^2 + \beta \|w\|_2^2, \qquad (1)$$ where we have added a ridge term to make the loss strictly convex. The hyperparameters $\alpha$ and $\beta$ specify the amount of network and ridge regularization, respectively. The regularization penalty can be rewritten as $w^\top M w$ where $M = \alpha (I - P)^\top (I - P) + \beta I$. The matrix $M$ is symmetric positive definite, and therefore our criterion possesses a Bayesian interpretation in which the weight vector, $w$, is a priori normally distributed with mean zero and covariance matrix $2M^{-1}$. Minimizing equation (1) is equivalent to finding the MAP estimate for $w$. The gradient of (1) with respect to $w$ is $\nabla_w \mathrm{loss} = \sum_{i=1}^n \nabla_w l(x_i, y_i; w) + 2Mw$, and therefore requires only an additional matrix multiply on top of computing the loss over the training data. 
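As a concrete sketch of the penalty and its gradient (dense NumPy arrays for clarity; the function names are ours, not the paper's):

```python
import numpy as np

def network_penalty(w, P, alpha, beta):
    """Regularization term of equation (1):
    alpha * sum_j (w_j - sum_k P_jk w_k)^2 + beta * ||w||^2."""
    r = w - P @ w                     # residual of each weight vs. its neighbor average
    return alpha * (r @ r) + beta * (w @ w)

def penalty_gradient(w, P, alpha, beta):
    """Gradient contribution 2*M*w with M = alpha*(I-P)^T(I-P) + beta*I."""
    I = np.eye(len(w))
    M = alpha * (I - P).T @ (I - P) + beta * I
    return 2 * M @ w
```

For instance, with two mutually linked features, P = [[0, 1], [1, 0]], and w = (1, 3), the penalty with alpha = 1 and beta = 0 is (1 - 3)^2 + (3 - 1)^2 = 8.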
If $P$ is sparse, as it is in our experiments—i.e., it has only $kd$ entries for $k \ll d$—then the matrix multiply is $O(d)$. Thus equation (1) can be minimized very quickly. Additionally, the induced covariance matrix $M^{-1}$ will typically be dense even though $P$ is sparse, showing that we can construct dense covariance structures over $w$ without incurring storage and computation costs. 2.1 Relationship to Locally Linear Embedding Locally linear embedding (LLE) is an unsupervised learning method for embedding high dimensional data in a low dimensional vector space. The data $\{\vec{X}_i\}_{i=1}^n$ is assumed to lie on a low dimensional manifold of dimension $c$ within a high dimensional vector space of dimension $d$ with $c \ll d$. Since the data lies on a manifold, each point is approximately a convex combination of its nearest neighbors on the manifold. That is, $\vec{X}_i \approx \sum_{j \sim i} P_{ij} \vec{X}_j$, where $j \sim i$ denotes the samples, $j$, which lie close to $i$ on the manifold. As above, the matrix $P$ has non-negative entries and its rows sum to one. The set of low dimensional coordinates, $\{\vec{Y}_i\}_{i=1}^n$, $\vec{Y}_i \in \mathbb{R}^c$, are found by minimizing the sum of squares cost: $$\mathrm{cost}(\{\vec{Y}_i\}) = \sum_i \Big\| \vec{Y}_i - \sum_j P_{ij} \vec{Y}_j \Big\|_2^2, \qquad (2)$$ subject to the constraint that the $\{\vec{Y}_i\}$ have unit variance in each of the $c$ dimensions. The solution to equation (2) is found by performing eigen-decomposition on the matrix $(I - P)^\top (I - P) = U \Lambda U^\top$, where $U$ is the matrix of eigenvectors and $\Lambda$ is the diagonal matrix of eigenvalues. The LLE coordinates are obtained from the eigenvectors $u_1, \ldots, u_c$ whose eigenvalues $\lambda_1, \ldots, \lambda_c$ are smallest¹ by setting $\vec{Y}_i = (u_{1i}, \ldots, u_{ci})^\top$. Looking at equation (1) and ignoring the ridge term, it is clear that our feature network regularization penalty is identical to LLE except that the embedding is found for the feature weights rather than data instances. However, there is a deeper connection. 
If we let $L(Y, Xw)$ denote the unregularized loss over the training set, where $X$ is the $n \times d$ matrix of instances and $Y$ is the $n$-vector of class labels, we can express equation (1) in matrix form as $$w^* = \operatorname*{argmin}_w \; L(Y, Xw) + w^\top \big( \alpha (I - P)^\top (I - P) + \beta I \big) w. \qquad (3)$$ Defining $\tilde{X}$ to be $X U (\alpha \Lambda + \beta I)^{-1/2}$, where $U$ and $\Lambda$ are from the eigen-decomposition above, it is not hard to show that equation (3) is equivalent to the alternative ridge regularized learning problem $$\tilde{w}^* = \operatorname*{argmin}_{\tilde{w}} \; L(Y, \tilde{X} \tilde{w}) + \tilde{w}^\top \tilde{w}. \qquad (4)$$ That is, the two minimizers, $w^*$ and $\tilde{w}^*$, yield the same predictions: $\hat{Y} = X w^* = \tilde{X} \tilde{w}^*$. Consequently, we can view feature network regularization as: 1) finding an embedding for the features using LLE in which all of the eigenvectors are used and scaled by the inverse square-roots of their eigenvalues (plus a smoothing term, $\beta I$, that makes the inverse well-defined); 2) projecting the data instances onto these coordinates; and 3) learning a ridge-penalized model for the new representation. In using all of the eigenvectors, the dimensionality of the feature embedding is not reduced. However, in scaling the eigenvectors by the inverse square-roots of their eigenvalues, the directions of least cost in the network regularized problem become the directions of maximum variance in the associated ridge regularized problem, and hence are the directions of least cost in the ridge problem. As a result, the effective dimensionality of the learning problem is reduced to the extent that the distribution of inverted eigenvalues is sharply peaked. When the best representation for classification has high dimension, it is faster to solve (3) than to compute a large eigenvector basis and solve (4). In the high dimensional problems of section 4, we find that regularization with feature networks outperforms LLE-based regression. 
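The equivalence of (3) and (4) can be verified numerically with a squared loss, for which both minimizers have closed forms (all sizes and the random P below are purely illustrative):

```python
import numpy as np

# Numerical check that problems (3) and (4) yield identical predictions,
# using a squared loss so both minimizers have closed forms.
rng = np.random.default_rng(0)
n, d, alpha, beta = 20, 5, 2.0, 0.5
X = rng.normal(size=(n, d))
Y = rng.normal(size=n)

P = rng.random((d, d))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic similarity matrix

A = (np.eye(d) - P).T @ (np.eye(d) - P)
M = alpha * A + beta * np.eye(d)

# (3): network-regularized least squares
w = np.linalg.solve(X.T @ X + M, X.T @ Y)

# (4): unit-ridge regression on the transformed design
#      X~ = X U (alpha*Lambda + beta*I)^(-1/2)
lam, U = np.linalg.eigh(A)
Xt = X @ U / np.sqrt(alpha * lam + beta)   # columns scaled by inverse square-roots
wt = np.linalg.solve(Xt.T @ Xt + np.eye(d), Xt.T @ Y)

assert np.allclose(X @ w, Xt @ wt)         # identical predictions
```

The substitution $w = U(\alpha\Lambda + \beta I)^{-1/2}\tilde{w}$ maps one problem onto the other, which is why the two solutions agree.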
3 Extensions to Feature Network Regularization In this section, we pose a number of extensions and alternatives to feature network regularization as formulated in section 2, including the modeling of classes of features whose weights are believed to share the same unknown means, the incorporation of feature dissimilarities, and two alternative regularization criteria based on the graph Laplacian. ¹More precisely, eigenvectors $u_2, \ldots, u_{c+1}$ are used so that the $\{\vec{Y}_i\}$ are centered. 3.1 Regularizing with Classes of Features In machine learning, features can often be grouped into classes such that all the weights of the features in a given class are drawn from the same underlying distribution. For example, words can be grouped by part of speech, by meaning (as in WordNet's synsets), or by clustering based on the words they co-occur with or the documents they occur in. Using an appropriately constructed feature graph, we can model the case in which the underlying distributions are believed to be Gaussians with known, identical variances but unknown means. That is, the case in which there are $k$ disjoint classes of features $\{C_i\}_{i=1}^k$ whose weights are drawn i.i.d. $N(\mu_i, \sigma^2)$ with $\mu_i$ unknown but $\sigma^2$ known and shared across all classes. The straightforward approach to modeling this scenario might seem to be to link all the features within a class to each other, forming a clique, but this does not lead to the desired interpretation. Additionally, the number of edges in this construction scales quadratically in the clique sizes, resulting in feature graphs that are not sparse. Our approach is therefore to create $k$ additional "virtual" features, $f_1, \ldots, f_k$, that do not appear in any of the data instances but whose weights $\hat{\mu}_1, \ldots, \hat{\mu}_k$ serve as the estimates for the true but unknown means, $\mu_1, \ldots, \mu_k$. In creating the feature graph, we link each feature to the virtual feature for its class with an edge of weight one. 
The virtual features, themselves, do not possess any out-going links. Denoting the class of feature $i$ as $c(i)$, and setting the hyperparameters $\alpha$ and $\beta$ in equation (1) to $1/(2\sigma^2)$ and $0$, respectively, yields a network regularization cost of $\frac{1}{2\sigma^2} \sum_{i=1}^d (w_i - \hat{\mu}_{c(i)})^2$. Since the virtual features do not appear in any instances, i.e., their values are zero in every data instance, their weights are free to take on whatever values minimize the network regularization cost in (1), in particular the estimates of the class means, $\mu_1, \ldots, \mu_k$. Consequently, minimizing the network regularization penalty maximizes the log-likelihood for the intended scenario. We can extend this construction to model the case in which the feature weights are drawn from a mixture of Gaussians by connecting each feature to a number of virtual features with edge weights that sum to one. 3.2 Incorporating Feature Dissimilarities Feature network regularization can also be extended to induce features to have opposing weights. Such feature "dissimilarities" can be useful in tasks such as sentiment prediction, where we would like weights for words such as "great" or "fantastic" to have opposite signs from their negated bigram counterparts "not great" and "not fantastic," and from their antonyms. To model dissimilarities, we construct a separate graph whose edges represent anti-correlations between features. Regularizing over this graph enforces each feature's weight to be equal to the negative of the average of the neighboring weights. To do this, we encode the dissimilarity graph using a matrix $Q$, defined analogously to the matrix $P$, and add the term $\sum_i \big( w_i + \sum_j Q_{ij} w_j \big)^2$ to the network regularization criterion, which can be written as $w^\top (I + Q)^\top (I + Q) w$. The matrix $(I + Q)^\top (I + Q)$ is positive semidefinite, like its similarity graph counterpart. Goldberg et al. [12] use a similar construction with the graph Laplacian in order to incorporate dissimilarities between instances in manifold learning. 
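A minimal sketch of the two constructions above (function names are ours; giving each virtual feature a self-loop, rather than no outgoing links, is an implementation choice that leaves the virtual weights unpenalized under equation (1)):

```python
import numpy as np

def class_mean_graph(classes, k):
    """P for d real features plus k virtual 'class mean' features (section 3.1).

    classes[i] gives the class of real feature i.  Each real feature links
    with weight one to its class's virtual feature.  Self-loops on the
    virtual features (our choice) zero out their residuals, so only the real
    features' deviations from their class means are penalized."""
    d = len(classes)
    P = np.zeros((d + k, d + k))
    for i, c in enumerate(classes):
        P[i, d + c] = 1.0
    for c in range(k):
        P[d + c, d + c] = 1.0
    return P

def dissimilarity_penalty(w, Q):
    """Section 3.2 term: sum_i (w_i + sum_j Q_ij w_j)^2 = w^T (I+Q)^T (I+Q) w."""
    r = w + Q @ w
    return r @ r
```

With classes (0, 0, 1) and virtual mean weights (2, 5), the class penalty on w = (1, 3, 5) is (1-2)^2 + (3-2)^2 + (5-5)^2 = 2; and with Q linking two features, w = (1, -1) incurs zero dissimilarity penalty, as desired for opposing weights.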
3.3 Regularizing Features with the Graph Laplacian A natural alternative to the network regularization criterion given in section 2 is to regularize the feature weights using a penalty derived from the graph Laplacian [13]. Here, the feature graph's edge weights are given by a symmetric matrix, $W$, whose entries, $W_{ij} \geq 0$, give the weight of the edge between features $i$ and $j$. The Laplacian penalty is $\frac{1}{2} \sum_{i,j} W_{ij} (w_i - w_j)^2$, which can be written as $w^\top (D - W) w$, where $D = \mathrm{diag}(W \mathbf{1})$ is the vertex degree matrix. The main difference between the Laplacian penalty and the network penalty in equation (1) is that the Laplacian penalizes each edge equally (modulo the edge weights), whereas the network penalty penalizes each feature equally. In graphs where there are large differences in vertex degree, the Laplacian penalty will therefore focus most of the regularization cost on features with many neighbors. Experiments in section 4 show that the criterion in (1) outperforms the Laplacian penalty as well as a related penalty derived from the normalized graph Laplacian, $\frac{1}{2} \sum_{i,j} W_{ij} \big( w_i/\sqrt{D_{ii}} - w_j/\sqrt{D_{jj}} \big)^2$. The normalized Laplacian penalty assumes that $\sqrt{D_{jj}}\, w_i \approx \sqrt{D_{ii}}\, w_j$, which is different from assuming that linked features should have similar weights. Figure 1: Left: Accuracy of feature network regularization (FNR) and five baselines on "20 newsgroups" data. Right: Accuracy of FNR compared to reported accuracies of three other semi-supervised learning methods. 4 Experiments We evaluated logistic regression augmented with feature network regularization on two natural language processing tasks. The first was document classification on the 20 Newsgroups dataset, a well-known document classification benchmark. 
The second was sentiment classification of product reviews: the task of classifying user-written reviews according to whether they are favorable or unfavorable to the product under review, based on the review text [11]. Feature graphs for the two tasks were constructed using different information. For document classification, the feature graph was constructed using feature co-occurrence statistics gleaned from unlabeled data. In sentiment prediction, both co-occurrence statistics and prior domain knowledge were used. 4.1 Experiments on 20 Newsgroups We evaluated feature network based regularization on the 20 newsgroups classification task using all twenty classes. The feature set was restricted to the 11,376 words which occurred in at least 20 documents, not counting stop-words. Word counts were transformed by adding one and taking logs. To construct the feature graph, each feature (word) was represented by a binary vector denoting its presence or absence in each of the 20,000 documents of the dataset. To measure similarity between features, we computed cosines between these binary vectors. Each feature was linked to the 25 other features with the highest cosine scores, provided that the scores were above a minimum threshold of 0.10. The edge weights of the graph were set to these cosine scores, and the matrix $P$ was constructed by normalizing each vertex's out-degree to sum to one. Figure 1 (left) shows feature network regularization compared against five other baselines: logistic regression with an $L_2$ (ridge) penalty; principal components logistic regression (PCR), in which each instance was projected onto the largest 200 right singular vectors of the $n \times d$ matrix, $X$; LLE-logistic regression, in which each instance was projected onto the smallest 200 eigenvectors of the matrix $(I - P)^\top (I - P)$ described in section 2; and logistic regression regularized by the normalized and unnormalized graph Laplacians described in section 3.3. 
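The cosine-kNN graph construction described above can be sketched as follows (dense arrays for clarity; at the paper's scale a sparse matrix would be used, and the function name is ours):

```python
import numpy as np

def cosine_knn_graph(B, k=25, min_sim=0.10):
    """Row-normalized P from a d x n binary presence/absence matrix B
    (features by documents): link each feature to its k most cosine-similar
    features whose scores are at least min_sim, then normalize each row
    (out-degree) to sum to one."""
    Bn = B / np.maximum(np.linalg.norm(B, axis=1, keepdims=True), 1e-12)
    C = Bn @ Bn.T                              # pairwise cosine similarities
    np.fill_diagonal(C, 0.0)                   # no self-links
    d = B.shape[0]
    P = np.zeros((d, d))
    for i in range(d):
        top = np.argsort(C[i])[::-1][:k]       # k highest-scoring neighbors
        keep = top[C[i, top] >= min_sim]
        P[i, keep] = C[i, keep]
    row = P.sum(axis=1, keepdims=True)
    np.divide(P, row, out=P, where=row > 0)    # out-degrees sum to one
    return P
```

Features with no sufficiently similar neighbor simply get an all-zero row, contributing only a ridge-like penalty on their own weight.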
Results at each training set size are averages of five trials with training sets sampled to contain an equal number of documents per class. For ridge, the amount of $L_2$ regularization was chosen using cross validation on the training set. Similarly, for feature network regularization and the Laplacian regularizers, the hyperparameters $\alpha$ and $\beta$ were chosen through cross validation on the training set using a simple grid search. The ratio of $\alpha$ to $\beta$ tended to be around 100:1. For PCR and LLE-logistic regression, the number of eigenvectors used was chosen to give good performance on the test set at both large and small training set sizes. All models were trained using L-BFGS with a maximum of 200 iterations. Learning a single model took between 30 seconds and two minutes, with convergence typically achieved before the full 200 iterations. Figure 2: Accuracy of feature network regularization on the sentiment datasets using feature classes and dissimilarity edges to regularize the small set of SentiWordNet features. The results in figure 1 show that feature network regularization with a graph constructed from unlabeled data outperforms all baselines and increases accuracy by 4%-17% over the plain ridge penalty, an error reduction of 17%-30%. Additionally, it outperforms the related LLE regression. We conjecture this is because in tuning the hyperparameters, we can adaptively tune the dimensionality of the underlying data representation. 
Additionally, by scaling the eigenvectors by their eigenvalues, feature network regularization keeps more information about the directions of least cost in weight space than does LLE regression, which does not rescale the eigenvectors but simply keeps or discards them (i.e. scales them by 1 or 0). Figure 1 (right) compares feature network regularization against two external approaches that leverage unlabeled data: a multi-task learning approach called alternating structure optimization (ASO), and our reimplementation of a manifold learning method which we refer to as “local/global consistency” [5, 10]. To make a fair comparison against the reported results for ASO, training sets were sampled so as not to necessarily contain an equal number of documents per class. Accuracies are given for the highest and lowest performing variants of ASO reported in [5]. Our reimplementation of local/global consistency used the same document preprocessing described in [10]. However, the graph was constructed so that each document had only K = 10 neighbors (the authors in [10] use a fully connected graph which does not fit in memory for the entire 20 newsgroups dataset). Classification accuracy of local/global consistency did not vary much with K and up to 500 neighbors were tried for each document. Here we see that feature network regularization is competitive with the other semi-supervised methods and performs best at all but the smallest training set size. 4.2 Sentiment Classification For sentiment prediction, we obtained the product review datasets used in [11]. Each dataset consists of reviews downloaded from Amazon.com for one of four different product domains: books, DVDs, electronics, and kitchen appliances. The reviews have an associated number of “stars,” ranging from 0 to 5, rating the quality of a product. The goal of the task is to predict whether a review has more than (positive) or less than (negative) 3 stars associated with it based only on the text in the review. 
We performed two sets of experiments in which prior domain knowledge was incorporated using feature networks. In both, we used a list of sentimentally-charged words obtained from the SentiWordNet database [14], a database which associates positive and negative sentiment scores with each word in WordNet. In the first experiment, we constructed a set of feature classes in the manner described in section 3.1 to see if such classes could be used to bootstrap weight polarities for groups of features. In the second, we computed similarities between words in terms of the similarity of their co-occurrences with the sentimentally charged words. From SentiWordNet we extracted a list of roughly 200 words with high positive and negative sentiment scores that also occurred in the product reviews at least 100 times. Words to which SentiWordNet gave a high 'positive' score were placed in a "positive words" cluster, and words given a high 'negative' score were placed in a "negative words" cluster. As described in section 3.1, all words in the positive cluster were attached to a virtual feature representing the mean feature weight of the positive cluster words, and all words in the negative cluster were attached to a virtual feature representing the mean weight of the negative cluster words. We also added a dissimilarity edge (described in section 3.2) between the positive and negative clusters' virtual features to induce the two classes of features to have opposite means. Figure 3: Accuracy of feature network and ridge regularization on four sentiment classification datasets. 
As shown in figure 2, imposing feature clusters on the two classes of words improves performance noticeably, while the addition of the feature dissimilarity edge does not yield much benefit. When it helps, it is only for the smallest training set sizes. This simple set of experiments demonstrated the applicability of feature classes for inducing groups of features to have similar means, and that the words extracted from SentiWordNet were relatively helpful in determining the sentiment of a review. However, the number of features used in these experiments was too small to yield reasonable performance in an applied setting. Thus we extended the feature sets to include all unigram and bigram word-features which occurred in ten or more reviews. The total number of reviews and the size of the feature sets are given in table 1.

Dataset       Instances  Features  Edges
books         13,161     29,404    470,034
DVDs          13,005     31,475    419,178
electronics    8,922     15,104    343,890
kitchen        7,760     11,658    305,926

Table 1: Sentiment Data Statistics

The method used to construct the feature graph in the 20 newsgroups experiments was not well suited for sentiment prediction, since plain feature co-occurrence statistics tended to find groups of words that showed up in reviews for products of the same type, e.g., digital cameras or laptops. While such similarities are useful in predicting what type of product is being reviewed, they are of little help in determining whether a review is favorable or unfavorable. Thus, to align features along dimensions of 'sentiment,' we computed the correlations of all features with the SentiWordNet features, so that each word was represented as a 200 dimensional vector of correlations with these highly charged sentiment words. Distances between these correlation vectors were computed in order to determine which features should be linked. We next computed each feature's 100 nearest neighbors. Two features were linked if both were in the other's set of nearest 100 neighbors. 
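The mutual nearest-neighbor linking rule can be sketched as follows (Euclidean distance between correlation vectors is our assumption, since the paper does not state which distance it used; sizes are illustrative):

```python
import numpy as np

def mutual_knn_graph(V, k=100):
    """Link features i and j iff each is among the other's k nearest
    neighbors, with unit edge weights and rows normalized to sum to one.
    V holds one correlation vector per feature (d x m)."""
    d = V.shape[0]
    dist = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)             # a feature is not its own neighbor
    nn = np.argsort(dist, axis=1)[:, :k]
    member = np.zeros((d, d), dtype=bool)
    rows = np.repeat(np.arange(d), nn.shape[1])
    member[rows, nn.ravel()] = True
    P = (member & member.T).astype(float)      # keep only mutual links
    row = P.sum(axis=1, keepdims=True)
    np.divide(P, row, out=P, where=row > 0)
    return P
```

Requiring the link to be mutual prunes asymmetric matches, so isolated features end up with empty rows rather than spurious neighbors.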
For simplicity, the edge weights were set to one and the graph weight matrix was then row-normalized in order to construct the matrix $P$. The number of edges in each feature graph is given in table 1. The 'kitchen' dataset was used as a development dataset in order to arrive at the method for constructing the feature graph and for choosing the hyperparameter values: $\alpha = 9.9$ and $\beta = 0.1$. Figure 3 gives accuracy results for all four sentiment datasets at training sets of 50 to 1000 instances. The results show that linking features which are similarly correlated with sentiment-loaded words yields improvements on every dataset and at every training set size. 5 Related Work Most similar to the work presented here is that of the fused lasso (Tibshirani et al. [15]), which can be interpreted as using the graph Laplacian regularizer but with an $L_1$ norm instead of $L_2$ on the residuals of weight differences, $\sum_i \sum_{j \sim i} |w_i - w_j|$, and all edge weights set to one. As the authors discuss, an $L_1$ penalty prefers that the weights of linked features be exactly equal, so that the residual vector of weight differences is sparse. $L_1$ is appropriate if the true weights are believed to be exactly equal, but in many settings features are near copies of one another whose weights should be similar rather than identical. Thus, in these settings, penalizing squared differences rather than absolute ones is more appropriate. Optimizing $L_1$ feature weight differences also leads to a much harder optimization problem, making it less applicable in large scale learning. Li and Li [13] regularize feature weights using the normalized graph Laplacian in their work on biomedical prediction tasks. As shown, this criterion does not work as well on the text prediction problems considered here. Krupka and Tishby [8] proposed a method for inducing feature-weight covariance matrices using distances in a "meta-feature" space. 
Under their framework, two features positively covary if they are close in this space and approach independence as they grow distant. The authors represent each feature $i$ as a vector of meta-features, $u_i$, and compute the entries of the feature weight covariance matrix as $C_{ij} = \exp\big(-\frac{1}{2\sigma^2} \|u_i - u_j\|^2\big)$. Obviously, the choice of which is more appropriate, a feature graph or a metric space, is application dependent. However, it is less obvious how to incorporate feature dissimilarities in a metric space. A second difference is that our work defines the regularizer in terms of $C^{-1} \approx (I - P)^\top (I - P)$ rather than $C$ itself. While $C^{-1}$ is constructed to be sparse with a nearest neighbors graph, the induced covariance matrix, $C$, need not be sparse. Thus, working with $C^{-1}$ allows us to construct dense covariance matrices without having to explicitly store them. Finally, Raina et al. [6] learn a feature-weight covariance matrix via auxiliary task learning. Interestingly, the entries of this covariance matrix are learned jointly with a regression model for predicting feature weight covariances as a function of meta-features. However, since their approach explicitly predicts each entry of the covariance matrix, they are restricted to learning smaller models, consisting of hundreds rather than tens of thousands of features. 6 Conclusion We have presented regularized learning with networks of features, a simple and flexible framework for incorporating expectations about feature weight similarities in learning. Feature similarities are modeled using a feature graph, and the weight of each feature is preferred to be close to the average of its neighbors. On the task of document classification, feature network regularization is superior to several related criteria, as well as to a manifold learning approach where the graph models similarities between instances rather than between features. 
Extensions for modeling feature classes, as well as feature dissimilarities, yielded benefits on the problem of sentiment prediction. References [1] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer New York, 2001. [2] C. Fellbaum. WordNet: an electronic lexical database. MIT Press, 1998. [3] H. Ogata, S. Goto, K. Sato, W. Fujibuchi, H. Bono, and M. Kanehisa. KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Research, 27(1):29–34, 1999. [4] I. Xenarios, D.W. Rice, L. Salwinski, M.K. Baron, E.M. Marcotte, and D. Eisenberg. DIP: The Database of Interacting Proteins. Nucleic Acids Research, 28(1):289–291, 2000. [5] R.K. Ando and T. Zhang. A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. JMLR, 6:1817–1853, 2005. [6] R. Raina, A.Y. Ng, and D. Koller. Constructing informative priors using transfer learning. In ICML, 2006. [7] S.T. Roweis and L.K. Saul. Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science, 290(5500):2323–2326, 2000. [8] E. Krupka and N. Tishby. Incorporating Prior Knowledge on Features into Learning. In AISTATS, 2007. [9] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399–2434, 2006. [10] D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2004. [11] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In ACL, 2007. [12] A.B. Goldberg, X. Zhu, and S. Wright. Dissimilarity in Graph-Based Semi-Supervised Classification. In AISTATS, 2007. [13] C. Li and H. Li. Network-constrained regularization and variable selection for analysis of genomic data. Bioinformatics, 24(9):1175–1182, 2008. [14] A. Esuli and F. Sebastiani. SentiWordNet: A Publicly Available Lexical Resource For Opinion Mining. In LREC, 2006. [15] R. 
Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and Smoothness via the Fused Lasso. Journal of the Royal Statistical Society Series B, 67(1):91–108, 2005.
Breaking Audio CAPTCHAs Jennifer Tam Computer Science Department Carnegie Mellon University 5000 Forbes Ave, Pittsburgh 15217 jdtam@cs.cmu.edu Sean Hyde Electrical and Computer Engineering Carnegie Mellon University 5000 Forbes Ave, Pittsburgh 15217 sean.a.hyde@gmail.com Jiri Simsa Computer Science Department Carnegie Mellon University 5000 Forbes Ave, Pittsburgh 15217 jsimsa@cs.cmu.edu Luis Von Ahn Computer Science Department Carnegie Mellon University 5000 Forbes Ave, Pittsburgh 15217 biglou@cs.cmu.edu Abstract CAPTCHAs are computer-generated tests that humans can pass but current computer systems cannot. CAPTCHAs provide a method for automatically distinguishing a human from a computer program, and therefore can protect Web services from abuse by so-called “bots.” Most CAPTCHAs consist of distorted images, usually text, for which a user must provide some description. Unfortunately, visual CAPTCHAs limit access to the millions of visually impaired people using the Web. Audio CAPTCHAs were created to solve this accessibility issue; however, the security of audio CAPTCHAs was never formally tested. Some visual CAPTCHAs have been broken using machine learning techniques, and we propose using similar ideas to test the security of audio CAPTCHAs. Audio CAPTCHAs are generally composed of a set of words to be identified, layered on top of noise. We analyzed the security of current audio CAPTCHAs from popular Web sites by using AdaBoost, SVM, and k-NN, and achieved correct solutions for test samples with accuracy up to 71%. Such accuracy is enough to consider these CAPTCHAs broken. Training several different machine learning algorithms on different types of audio CAPTCHAs allowed us to analyze the strengths and weaknesses of the algorithms so that we could suggest a design for a more robust audio CAPTCHA. 
1 Introduction CAPTCHAs [1] are automated tests designed to tell computers and humans apart by presenting users with a problem that humans can solve but current computer programs cannot. Because CAPTCHAs can distinguish between humans and computers with high probability, they are used for many different security applications: they prevent bots from voting continuously in online polls, automatically registering for millions of spam email accounts, automatically purchasing tickets to buy out an event, etc. Once a CAPTCHA is broken (i.e., computer programs can successfully pass the test), bots can impersonate humans and gain access to services that they should not. Therefore, it is important for CAPTCHAs to be secure. To pass the typical visual CAPTCHA, a user must correctly type the characters displayed in an image of distorted text. Many visual CAPTCHAs have been broken with machine learning techniques [2]-[3], though some remain secure against such attacks. Because visually impaired users who surf the Web using screen-reading programs cannot see this type of CAPTCHA, audio CAPTCHAs were created. Typical audio CAPTCHAs consist of one or several speakers saying letters or digits at randomly spaced intervals. A user must correctly identify the digits or characters spoken in the audio file to pass the CAPTCHA. To make this test difficult for current computer systems, specifically automatic speech recognition (ASR) programs, background noise is injected into the audio files. Since no official evaluation of existing audio CAPTCHAs has been reported, we tested the security of audio CAPTCHAs used by many popular Web sites by running machine learning experiments designed to break them. In the next section, we provide an overview of the literature related to our project. Section 3 describes our methods for creating training data, and section 4 describes how we create classifiers that can recognize letters, digits, and noise. 
In section 5, we discuss how we evaluated our methods on widely used audio CAPTCHAs and we give our results. In particular, we show that the audio CAPTCHAs used by sites such as Google and Digg are susceptible to machine learning attacks. Section 6 mentions the proposed design of a new more secure audio CAPTCHA based on our findings. 2 Literature review To break the audio CAPTCHAs, we derive features from the CAPTCHA audio and use several machine learning techniques to perform ASR on segments of the CAPTCHA. There are many popular techniques for extracting features from speech. The three techniques we use are mel-frequency cepstral coefficients (MFCC), perceptual linear prediction (PLP), and relative spectral transform-PLP (RASTA-PLP). MFCC is one of the most popular speech feature representations used. Similar to a fast Fourier transform (FFT), MFCC transforms an audio file into frequency bands, but (unlike FFT) MFCC uses mel-frequency bands, which are better for approximating the range of frequencies humans hear. PLP was designed to extract speaker-independent features from speech [4]. Therefore, by using PLP and a variant such as RASTA-PLP, we were able to train our classifiers to recognize letters and digits independently of who spoke them. Since many different people recorded the digits used in one of the types of audio CAPTCHAs we tested, PLP and RASTA-PLP were needed to extract the features that were most useful for solving them. In [4]-[5], the authors conducted experiments on recognizing isolated digits in the presence of noise using both PLP and RASTA-PLP. However, the noise used consisted of telephone or microphone static caused by recording in different locations. The audio CAPTCHAs we use contain this type of noise, as well as added vocal noise and/or music, which is supposed to make the automated recognition process much harder. 
The authors of [3] emphasize how many visual CAPTCHAs can be broken by successfully splitting the task into two smaller tasks: segmentation and recognition. We follow a similar approach in that we first automatically split the audio into segments, and then we classify these segments as noise or words. In early March 2008, concurrent to our work, the blog of Wintercore Labs [6] claimed to have successfully broken the Google audio CAPTCHA. After reading their Web article and viewing the video of how they solve the CAPTCHAs, we are unconvinced that the process is entirely automatic, and it is unclear what their exact pass rate is. Because we are unable to find any formal technical analysis of this program, we can neither be sure of its accuracy nor the extent of its automation. 3 Creation of training data Since automated programs can attempt to pass a CAPTCHA repeatedly, a CAPTCHA is essentially broken when a program can pass it more than a non-trivial fraction of the time; e.g., a 5% pass rate is enough. Our approach to breaking the audio CAPTCHAs began by first splitting the audio files into segments of noise or words: for our experiments, the words were spoken letters or digits. We used manual transcriptions of the audio CAPTCHAs to get information regarding the location of each spoken word within the audio file. We were able to label our segments accurately by using this information. We gathered 1,000 audio CAPTCHAs from each of the following Web sites: google.com, digg.com, and an older version of the audio CAPTCHA in recaptcha.net. Each of the CAPTCHAs was annotated with the information regarding letter/digit locations provided by the manual transcriptions. For each type of CAPTCHA, we randomly selected 900 samples for training and used the remaining 100 for testing. 
Using the digit/letter location information provided in the manual CAPTCHA transcriptions, each training CAPTCHA is divided into segments of noise, the letters a-z, or the digits 0-9, and labeled as such. We ignore the annotation information of the CAPTCHAs we use for testing, and therefore we cannot identify the size of those segments. Instead, each test CAPTCHA is divided into a number of fixed-size segments. The segments with the highest energy peaks are then classified using machine learning techniques (Figure 1). Since the size of a feature vector extracted from a segment generally depends on the size of the segment, using fixed-size segments allows each segment to be described with a feature vector of the same length. We chose the window size by listening to a few training segments and adjusting it to ensure that a segment contained an entire digit or letter. There are undoubtedly better ways of selecting the window size; however, we were still able to break the three CAPTCHAs we tested with this method. Figure 1: A test audio CAPTCHA with the fixed-size segments containing the highest energy peaks highlighted. The information provided in the manual transcriptions of the audio CAPTCHAs contains a list of the time intervals within which words are spoken. However, these intervals are of variable size, and a word might be spoken anywhere within its interval. To provide fixed-size segments for training, we developed the following heuristic. First, divide each file into variable-size segments using the time intervals provided, and label each segment accordingly. Then, within each segment, detect the highest energy peak and return its fixed-size neighborhood labeled with the current segment's label. This heuristic achieved nearly perfect labeling accuracy for the training set. Rare mistakes occurred when the highest energy peak of a digit or letter segment corresponded to noise rather than to the digit or letter itself. 
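The peak-centered windowing heuristic described above can be sketched as follows. This is our reading of the procedure, not the authors' code; the signal layout and window size are illustrative assumptions.

```python
import numpy as np

def peak_centered_segment(audio, start, end, win_size):
    """Return a fixed-size window of the signal centered on the
    highest-energy sample of audio[start:end], clipped to the
    signal boundaries (shorter only if the whole signal is shorter)."""
    region = audio[start:end]
    peak = start + int(np.argmax(region ** 2))  # highest instantaneous energy
    half = win_size // 2
    lo = max(0, peak - half)
    hi = min(len(audio), lo + win_size)
    lo = max(0, hi - win_size)                  # re-anchor near the right edge
    return audio[lo:hi]
```

The same routine serves both training (peak inside an annotated interval) and testing (peak inside an as-yet-unlabeled region).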
To summarize this subsection, an audio file is transformed into a set of fixed-size segments labeled as noise, a digit between 0 and 9, or a letter between a and z. These segments are then used for training. Classifiers are trained for one type of CAPTCHA at a time. 4 Classifier construction From the training data we extracted five sets of features using twelve MFCCs and twelfth-order spectral (SPEC) and cepstral (CEPS) coefficients from PLP and RASTA-PLP. The Matlab functions for extracting these features were provided online at [7] and as part of the Voicebox package. We use AdaBoost, SVM, and k-NN algorithms to implement automated digit and letter recognition. We detail our implementation of each algorithm in the following subsections. 4.1 AdaBoost Using decision stumps as weak classifiers for AdaBoost, anywhere from 11 to 37 ensemble classifiers are built, depending on which type of CAPTCHA we are solving. Each classifier trains on all the segments associated with that type of CAPTCHA, and for the purpose of building a single classifier, segments are labeled as either -1 (negative example) or +1 (positive example). Using cross-validation, we chose to run our AdaBoost algorithm for 50 iterations. A segment is then classified as a particular letter, digit, or noise according to the ensemble classifier that outputs the number closest to 1. 4.2 Support vector machine To conduct digit recognition with SVM, we used the C++ implementation of libSVM [8] version 2.85 with C-SVC and an RBF kernel. First, all feature values are scaled to the range -1 to 1 as suggested by [8]. The scale parameters are stored so that test samples can be scaled accordingly. Then, a single multiclass classifier is created for each set of features using all the segments for a particular type of CAPTCHA. We use cross-validation and grid search to discover the optimal slack penalty (C=32) and kernel parameter (γ=0.011). 
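The AdaBoost setup above can be sketched as a binary ensemble of decision stumps plus a closest-to-1 selection rule. This is a compact illustration, not the authors' code; in particular, normalizing the ensemble score by the sum of the stump weights (so it lies roughly in [-1, 1]) is our assumption, since raw AdaBoost margins are unbounded.

```python
import numpy as np

def best_stump(X, y, w):
    """Exhaustively pick the stump h(x) = pol if x[f] >= thr else -pol
    with minimum weighted 0/1 error."""
    best = (np.inf, 0, 0.0, 1)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, f] >= thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost_train(X, y, T=50):
    """Binary AdaBoost (labels in {-1, +1}); returns weighted stumps."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(T):
        err, f, thr, pol = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = pol * np.where(X[:, f] >= thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)
        w = w / w.sum()
        stumps.append((alpha, f, thr, pol))
    return stumps

def ensemble_score(stumps, x):
    """Alpha-weighted vote, normalized to roughly [-1, 1]."""
    total = sum(a for a, _, _, _ in stumps)
    s = sum(a * (pol if x[f] >= thr else -pol) for a, f, thr, pol in stumps)
    return s / total
```

To classify a segment, one such binary ensemble would be trained per label (each digit, each letter, noise), and the segment receives the label whose ensemble score is closest to 1.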
4.3 k-nearest neighbor (k-NN) We use k-NN as our final method for classifying digits. For each type of CAPTCHA, five different classifiers are created by using all of the training data and the five sets of features associated with that particular type of CAPTCHA. Again we use cross-validation to discover the optimal parameter, in this case k=1. We use Euclidean distance as our distance metric. 5 Assessment of current audio CAPTCHAs Our method for solving CAPTCHAs iteratively extracts an audio segment from a CAPTCHA, inputs the segment to one of our digit or letter recognizers, and outputs the label for that segment. We continue this process until the maximum solution size is reached or there are no unlabeled segments left. Some of the CAPTCHAs we evaluated have solutions that vary in length. Our method ensures that we get solutions of varying length that are never longer than the maximum solution length. A segment to be classified is identified by taking the neighborhood of the highest energy peak of an as yet unlabeled part of the CAPTCHA. Once a prediction of the solution to the CAPTCHA is computed, it is compared to the true solution. Given that at least one of the audio CAPTCHAs allows users to make a mistake in one of the digits (e.g., reCAPTCHA), we compute the pass rate for each of the different types of CAPTCHAs under all of the following conditions: • The prediction matches the true solution exactly. • Inserting one digit into the prediction would make it match the solution exactly. • Replacing one digit in the prediction would make it match the solution exactly. • Removing one digit from the prediction would make it match the solution exactly. However, since we are only sure that these conditions apply to reCAPTCHA audio CAPTCHAs, we also calculate the percentage of exact solution matches in our results for each type of audio CAPTCHA. These results are described in the following subsections. 
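Taken together, the four "one mistake" passing conditions above amount to accepting any prediction within Levenshtein (edit) distance one of the true solution. A minimal sketch of that check:

```python
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance between strings."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def passes_one_mistake(prediction, solution):
    """Exact match, or one insertion/substitution/deletion away."""
    return edit_distance(prediction, solution) <= 1
```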
5.1 Google
Google audio CAPTCHAs consist of one speaker saying random digits 0-9, the phrase "once again," followed by the exact same recorded sequence of digits originally presented. The background noise consists of human voices speaking backwards at varying volumes. A solution can range in length from five to eight words. We set our classifier to find the 12 loudest segments and classify these segments as digits or noise. Because the phrase "once again" marks the halfway point of the CAPTCHA, we preprocessed the audio to only serve this half of the CAPTCHA to our classifiers. It is important to note, however, that the classifiers were always able to identify the segment containing "once again," and these segments were identified before all other segments. Therefore, if necessary, we could have had our system cut the file in half after first labeling this segment. For AdaBoost, we create 12 classifiers: one classifier for each digit, one for noise, and one for the phrase "once again." Our results (Table 1) show that at best we achieved a 90% pass rate using the "one mistake" passing conditions and a 66% exact solution match rate. Using SVM and the "one mistake" passing conditions, at best we achieve a 92% pass rate and a 67% exact solution match. For k-NN, the "one mistake" pass rate is 62% and the exact solution match rate is 26%.

Table 1: Google audio CAPTCHA results: Maximum 67% accuracy was achieved by SVM.

Features used  | AdaBoost (1 mistake / exact) | SVM (1 mistake / exact) | k-NN (1 mistake / exact)
MFCC           | 88% / 61%                    | 92% / 67%               | 30% / 1%
PLP-SPEC       | 90% / 66%                    | 90% / 67%               | 60% / 26%
PLP-CEPS       | 90% / 66%                    | 92% / 67%               | 62% / 23%
RASTA-PLP-SPEC | 88% / 48%                    | 90% / 61%               | 29% / 1%
RASTA-PLP-CEPS | 90% / 63%                    | 92% / 67%               | 33% / 2%

5.2 Digg
Digg CAPTCHAs also consist of one speaker, in this case saying a random combination of letters and digits. The background noise consists of static or what sounds like trickling water and is not continuous throughout the entire file. 
We noticed in our training data that the following characters were never present in a solution: 0, 1, 2, 5, 7, 9, i, o, z. Since the Digg audio CAPTCHA is also the verbal transcription of the visual CAPTCHA, we believe that these characters are excluded to avoid confusion between digits and letters that are similar in appearance. The solution length varies between three and six words. Using AdaBoost, we create 28 classifiers: one classifier for each digit or letter that appears in our training data and one classifier for noise. Perhaps because we had fewer segments to train with and there was a far higher proportion of noise segments, AdaBoost failed to produce any correct solutions. We believe that the overwhelming number of negative training examples versus the small number of positive training samples used to create each decision stump severely affected AdaBoost's ability to classify audio segments correctly. A histogram of the training samples is provided in Figure 2 to illustrate the amount of training data available for each character. When using SVM, the best feature set passed with 96% using "one mistake" passing conditions and passed with 71% when matching the solution exactly. For k-NN, the best feature set produced a 90% "one mistake" pass rate and a 49% exact solution match. Full results can be found in Table 2.

Table 2: Digg audio CAPTCHA results: Maximum 71% accuracy was achieved by SVM.

Features used  | AdaBoost (1 mistake / exact) | SVM (1 mistake / exact) | k-NN (1 mistake / exact)
MFCC           | - / -                        | 96% / 71%               | 89% / 49%
PLP-SPEC       | - / -                        | 94% / 65%               | 90% / 47%
PLP-CEPS       | - / -                        | 96% / 71%               | 64% / 17%
RASTA-PLP-SPEC | - / -                        | 17% / 3%                | 67% / 17%
RASTA-PLP-CEPS | - / -                        | 96% / 71%               | 82% / 34%

Figure 2: Digg CAPTCHA training data distribution (number of training segments per label). 
5.3 reCAPTCHA
The older version of reCAPTCHA's audio CAPTCHAs we tested consist of several speakers who speak random digits. The background noise consists of human voices speaking backwards at varying volumes. The solution is always eight digits long. For AdaBoost, we create 11 classifiers: one classifier for each digit and one classifier for noise. Because we know that the reCAPTCHA passing conditions are the "one mistake" passing conditions, SVM produces our best pass rate of 58%. Our best exact match rate is 45% (Table 3).

Table 3: reCAPTCHA audio CAPTCHA results: Maximum 45% accuracy was achieved by SVM.

Features used  | AdaBoost (1 mistake / exact) | SVM (1 mistake / exact) | k-NN (1 mistake / exact)
MFCC           | 18% / 6%                     | 56% / 43%               | 22% / 11%
PLP-SPEC       | 27% / 10%                    | 58% / 39%               | 43% / 25%
PLP-CEPS       | 23% / 10%                    | 56% / 45%               | 29% / 14%
RASTA-PLP-SPEC | 9% / 3%                      | 36% / 18%               | 24% / 4%
RASTA-PLP-CEPS | 9% / 3%                      | 46% / 30%               | 32% / 12%

6 Properties of weak versus strong CAPTCHAs
From our results, we note that the easiest CAPTCHAs to break were from Digg. Google had the next strongest CAPTCHAs followed by the strongest from reCAPTCHA. Although the Digg CAPTCHAs have the largest vocabulary, giving us less training data per label, the same woman recorded them all. More importantly, the same type of noise is used throughout the entire CAPTCHA. The noise sounds like running water and static, which sounds very different from the human voice and does not produce the same energy spikes needed to locate segments, therefore making segmentation quite easy. The CAPTCHAs from Google and reCAPTCHA used other human voices for background noise, making segmentation much more difficult. Although Google used a smaller vocabulary than Digg and also only used one speaker, Google's background noise made the CAPTCHA more difficult to solve. After listening to a few of Google's CAPTCHAs, we noticed that although the background noise consisted of human voices, the same background noise was repeated. 
reCAPTCHA had similar noise to Google, but they had a larger selection of noise thus making it harder to learn. reCAPTCHA also has the longest solution length making it more difficult to get perfectly correct. Finally, reCAPTCHA used many different speakers causing it to be the strongest CAPTCHA of the three we tested. In conclusion, an audio CAPTCHA that consists of a finite vocabulary and background noise should have multiple speakers and noise similar to the speakers. 7 Recommendations for creating stronger audio CAPTCHAs Due to our success in solving audio CAPTCHAs, we have decided to start developing new audio CAPTCHAs that our methods, and machine learning methods in general, will be less likely to solve. From our experiments, we note that CAPTCHAs containing longer solutions and multiple speakers tend to be more difficult to solve. Also, because our methods depend on the amount of training data we have, having a large vocabulary would make it more difficult to collect enough training data. Already since obtaining these results, reCAPTCHA.net has updated their audio CAPTCHA to contain more distortions and a larger vocabulary: the digits 0 through 99. In designing a new audio CAPTCHA we are also concerned with the human pass rate. The current human pass rate for the reCAPTCHA audio CAPTCHAs is only 70%. To develop an audio CAPTCHA with an improved human pass rate, we plan to take advantage of the human mind’s ability to understand distorted audio through context clues. By listening to a phrase instead of to random isolated words, humans are better able to decipher distorted utterances because they are familiar with the phrase or can use contextual clues to decipher the distorted audio. Using this idea, the audio for our new audio CAPTCHA will be taken from old-time radio programs in which the poor quality of the audio makes transcription by ASR systems difficult. Users will be presented with an audio clip consisting of a 4-6 word phrase. 
Half of the CAPTCHA consists of words, which validate a user to be human, while the other half of the words need to be transcribed. This is the same idea behind the visual reCAPTCHA that is currently digitizing text on which OCR fails. We expect that this new audio CAPTCHA will be more secure than the current version and easier for humans to pass. Initial experiments using this idea show this to be true [9]. 8 Conclusion We have succeeded in “breaking” three different types of widely used audio CAPTCHAs, even though these were developed with the purpose of defeating attacks by machine learning techniques. We believe our results can be improved by selecting optimal segment sizes, but that is unnecessary given our already high success rate. For our experiments, segment sizes were not chosen in a special way; occasionally yielding results in which a segment only contained half of a word, causing our prediction to contain that particular word twice. We also believe that the AdaBoost results can be improved, particularly for the Digg audio CAPTCHAs, by ensuring that the number of negative training samples is closer to the number of positive training samples. We have shown that our approach is successful and can be used with many different audio CAPTCHAs that contain small finite vocabularies. Acknowledgments This work was partially supported by generous gifts from the Heinz Endowment, by an equipment grant from Intel Corporation, and by the Army Research Office through grant number DAAD19-02-1-0389 to CyLab at Carnegie Mellon University. Luis von Ahn was partially supported by a Microsoft Research New Faculty Fellowship and a MacArthur Fellowship. Jennifer Tam was partially supported by a Google Anita Borg Scholarship. References [1] L. von Ahn, M. Blum, and J. Langford. “Telling Humans and Computers Apart Automatically,” Communications of the ACM, vol. 47, no. 2, pp. 57-60, Feb. 2004. [2] G. Mori and J. Malik. 
"Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA," In Computer Vision and Pattern Recognition CVPR'03, June 2003. [3] K. Chellapilla and P. Simard, "Using Machine Learning to Break Visual Human Interaction Proofs (HIPs)," Advances in Neural Information Processing Systems 17, Neural Information Processing Systems (NIPS'2004), MIT Press. [4] H. Hermansky, "Perceptual Linear Predictive (PLP) Analysis of Speech," J. Acoust. Soc. Am., vol. 87, no. 4, pp. 1738-1752, Apr. 1990. [5] H. Hermansky, N. Morgan, A. Bayya, and P. Kohn. "RASTA-PLP Speech Analysis Technique," In Proc. IEEE Int'l Conf. Acoustics, Speech & Signal Processing, vol. 1, pp. 121-124, San Francisco, 1992. [6] R. Santamarta. "Breaking Gmail's Audio Captcha," http://blog.wintercore.com/?p=11, 2008. [7] D. Ellis. "PLP and RASTA (and MFCC, and inversion) in Matlab using melfcc.m and invmelfcc.m," http://www.ee.columbia.edu/~dpwe/resources/matlab/rastamat/, 2006. [8] C. Chang and C. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm [9] A. Schlaikjer. "A Dual-Use Speech CAPTCHA: Aiding Visually Impaired Web Users while Providing Transcriptions of Audio Streams," Technical Report CMU-LTI-07-014, Carnegie Mellon University, November 2007.
2008
Efficient Direct Density Ratio Estimation for Non-stationarity Adaptation and Outlier Detection Takafumi Kanamori Nagoya University Nagoya, Japan kanamori@is.nagoya-u.ac.jp Shohei Hido IBM Research Kanagawa, Japan hido@jp.ibm.com Masashi Sugiyama Tokyo Institute of Technology Tokyo, Japan sugi@cs.titech.ac.jp Abstract We address the problem of estimating the ratio of two probability density functions (a.k.a. the importance). The importance values can be used for various succeeding tasks such as non-stationarity adaptation or outlier detection. In this paper, we propose a new importance estimation method that has a closed-form solution; the leave-one-out cross-validation score can also be computed analytically. Therefore, the proposed method is computationally very efficient and numerically stable. We also elucidate theoretical properties of the proposed method such as the convergence rate and approximation error bound. Numerical experiments show that the proposed method is comparable to the best existing method in accuracy, while it is computationally more efficient than competing approaches. 1 Introduction In the context of importance sampling, the ratio of two probability density functions is called the importance. The problem of estimating the importance is gathering a lot of attention these days since the importance can be used for various succeeding tasks, e.g., Covariate shift adaptation: Covariate shift is a situation in supervised learning where the distributions of inputs change between the training and test phases but the conditional distribution of outputs given inputs remains unchanged [8]. Covariate shift is conceivable in many real-world applications such as bioinformatics, brain-computer interfaces, robot control, spam filtering, and econometrics. 
Under covariate shift, standard learning techniques such as maximum likelihood estimation or cross-validation are biased and therefore unreliable—the bias caused by covariate shift can be compensated for by weighting the training samples according to the importance [8, 5, 1, 9]. Outlier detection: The outlier detection task addressed here is to identify irregular samples in an evaluation dataset based on a model dataset that only contains regular samples [7, 3]. The importance values for regular samples are close to one, while those for outliers tend to deviate significantly from one. Thus the values of the importance could be used as an index of the degree of outlyingness. Below, we refer to the two sets of samples as the training and test sets. A naive approach to estimating the importance is to first estimate the training and test densities from the sets of training and test samples separately, and then take the ratio of the estimated densities. However, density estimation is known to be a hard problem, particularly in high-dimensional cases. In practice, an appropriate parametric model may not be available, and therefore this naive approach is not very effective. To cope with this problem, we propose a direct importance estimation method that does not involve density estimation. The proposed method, which we call least-squares importance fitting (LSIF), is formulated as a convex quadratic program, and therefore the unique global solution can be obtained. We give a cross-validation method for model selection and a regularization path tracking algorithm for efficient computation [4]. This regularization path tracking algorithm turns out to be computationally very efficient since the entire solution path can be traced without a quadratic program solver. However, it tends to share a common weakness of path tracking algorithms, i.e., accumulation of numerical errors. 
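As a concrete illustration of how importance weights compensate for covariate shift, each training loss term can be multiplied by the importance $w(x_i)$. Below is a minimal importance-weighted least-squares sketch; the toy regression setup is our own example, not taken from the paper.

```python
import numpy as np

def weighted_least_squares(X, y, w):
    """Minimize sum_i w_i * (y_i - x_i^T theta)^2.
    Equivalent to ordinary least squares on sqrt(w)-scaled rows."""
    sw = np.sqrt(w)[:, None]
    theta, *_ = np.linalg.lstsq(X * sw, y * sw.ravel(), rcond=None)
    return theta
```

Under covariate shift, setting `w` to the estimated importance values down-weights training points that are unlikely under the test distribution.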
To overcome this drawback, we develop an approximation algorithm called unconstrained LSIF (uLSIF), which allows us to obtain a closed-form solution that can be stably computed just by solving a system of linear equations. Thus uLSIF is computationally efficient and numerically stable. Moreover, the leave-one-out error of uLSIF can also be computed analytically, which further improves the computational efficiency in model selection scenarios. We experimentally show that the accuracy of uLSIF is comparable to the best existing method while its computation is much faster than the others in covariate shift adaptation and outlier detection.

2 Direct Importance Estimation

Formulation and Notation: Let $\mathcal{D} \subset \mathbb{R}^d$ be the data domain and suppose we are given independent and identically distributed (i.i.d.) training samples $\{x^{\mathrm{tr}}_i\}_{i=1}^{n_{\mathrm{tr}}}$ from a training distribution with density $p_{\mathrm{tr}}(x)$ and i.i.d. test samples $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ from a test distribution with density $p_{\mathrm{te}}(x)$. We assume $p_{\mathrm{tr}}(x) > 0$ for all $x \in \mathcal{D}$. The goal of this paper is to estimate the importance
$$w(x) = \frac{p_{\mathrm{te}}(x)}{p_{\mathrm{tr}}(x)}$$
from $\{x^{\mathrm{tr}}_i\}_{i=1}^{n_{\mathrm{tr}}}$ and $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$. Our key restriction is that we want to avoid estimating the densities $p_{\mathrm{te}}(x)$ and $p_{\mathrm{tr}}(x)$ when estimating the importance $w(x)$.

Least-squares Approach: Let us model the importance $w(x)$ by the following linear model:
$$\widehat{w}(x) = \alpha^\top \phi(x), \qquad (1)$$
where $\top$ denotes the transpose, $\alpha = (\alpha_1, \ldots, \alpha_b)^\top$ is a parameter vector to be learned, $b$ is the number of parameters, and $\phi(x) = (\phi_1(x), \ldots, \phi_b(x))^\top$ are basis functions such that $\phi(x) \geq 0_b$ for all $x \in \mathcal{D}$; here $0_b$ denotes the $b$-dimensional vector with all zeros, and the inequality for vectors is applied element-wise. Note that $b$ and $\{\phi_\ell(x)\}_{\ell=1}^{b}$ could depend on the samples, i.e., kernel models are also allowed. We explain how the basis functions $\{\phi_\ell(x)\}_{\ell=1}^{b}$ are chosen later. 
We determine the parameter $\alpha$ so that the following squared error is minimized:
$$J_0(\alpha) = \frac{1}{2} \int \left( \widehat{w}(x) - \frac{p_{\mathrm{te}}(x)}{p_{\mathrm{tr}}(x)} \right)^2 p_{\mathrm{tr}}(x)\,dx = \frac{1}{2} \int \widehat{w}(x)^2 p_{\mathrm{tr}}(x)\,dx - \int \widehat{w}(x)\, p_{\mathrm{te}}(x)\,dx + C,$$
where $C = \frac{1}{2} \int w(x)\, p_{\mathrm{te}}(x)\,dx$ is a constant and can therefore be safely ignored. Let
$$J(\alpha) = J_0(\alpha) - C = \frac{1}{2} \alpha^\top H \alpha - h^\top \alpha, \qquad (2)$$
where
$$H = \int \phi(x)\,\phi(x)^\top p_{\mathrm{tr}}(x)\,dx, \qquad h = \int \phi(x)\, p_{\mathrm{te}}(x)\,dx.$$
Using the empirical approximation and taking into account the non-negativity of the importance function $w(x)$, we obtain
$$\min_{\alpha \in \mathbb{R}^b} \left[ \frac{1}{2} \alpha^\top \widehat{H} \alpha - \widehat{h}^\top \alpha + \lambda 1_b^\top \alpha \right] \quad \text{s.t.} \quad \alpha \geq 0_b, \qquad (3)$$
where
$$\widehat{H} = \frac{1}{n_{\mathrm{tr}}} \sum_{i=1}^{n_{\mathrm{tr}}} \phi(x^{\mathrm{tr}}_i)\,\phi(x^{\mathrm{tr}}_i)^\top, \qquad \widehat{h} = \frac{1}{n_{\mathrm{te}}} \sum_{j=1}^{n_{\mathrm{te}}} \phi(x^{\mathrm{te}}_j).$$
$\lambda 1_b^\top \alpha$ is a regularization term for avoiding overfitting, $\lambda \geq 0$, and $1_b$ is the $b$-dimensional vector with all ones. The above problem is a convex quadratic program, and therefore the global optimal solution can be obtained by standard software. We call this method Least-Squares Importance Fitting (LSIF).

Convergence Analysis of LSIF: Here, we theoretically analyze the convergence property of the solution $\widehat{\alpha}$ of the LSIF algorithm. Let $\alpha^*$ be the optimal solution of the 'ideal' problem:
$$\min_{\alpha \in \mathbb{R}^b} \left[ \frac{1}{2} \alpha^\top H \alpha - h^\top \alpha + \lambda 1_b^\top \alpha \right] \quad \text{s.t.} \quad \alpha \geq 0_b. \qquad (4)$$
Let $f(n) = \omega(g(n))$ mean that $f(n)$ asymptotically dominates $g(n)$, i.e., for all $C > 0$ there exists $n_0$ such that $|C g(n)| < |f(n)|$ for all $n > n_0$. Then we have the following theorem.

Theorem 1 Assume that (a) the optimal solution of the problem (4) satisfies the strict complementarity condition, and (b) $n_{\mathrm{tr}}$ and $n_{\mathrm{te}}$ satisfy $n_{\mathrm{te}} = \omega(n_{\mathrm{tr}}^2)$. Then we have
$$\mathbb{E}[J(\widehat{\alpha})] = J(\alpha^*) + O(n_{\mathrm{tr}}^{-1}),$$
where $\mathbb{E}$ denotes the expectation over all possible training samples of size $n_{\mathrm{tr}}$ and all possible test samples of size $n_{\mathrm{te}}$. Theorem 1 guarantees that LSIF converges to the ideal solution at order $n_{\mathrm{tr}}^{-1}$. It is possible to explicitly obtain the coefficient of the term of order $n_{\mathrm{tr}}^{-1}$, but we omit the detail due to lack of space. 
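The non-negatively constrained quadratic program (3) can be solved by any standard QP solver. As a small sketch, box-constrained L-BFGS works as an illustrative substitute for a dedicated QP solver (this is our choice for the example, not the paper's).

```python
import numpy as np
from scipy.optimize import minimize

def lsif(H_hat, h_hat, lam):
    """Solve min_a 0.5 a'Ha - h'a + lam * sum(a)  subject to  a >= 0."""
    b = len(h_hat)
    fun = lambda a: 0.5 * a @ H_hat @ a - h_hat @ a + lam * a.sum()
    jac = lambda a: H_hat @ a - h_hat + lam      # gradient of the objective
    res = minimize(fun, np.zeros(b), jac=jac, method="L-BFGS-B",
                   bounds=[(0.0, None)] * b)     # non-negativity constraint
    return res.x
```

With a diagonal $\widehat{H}$ the problem separates per coordinate, so the solution is simply $\max(0, \widehat{h} - \lambda)$ element-wise, which makes a convenient sanity check.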
Model Selection for LSIF: The performance of LSIF depends on the choice of the regularization parameter $\lambda$ and the basis functions $\{\phi_\ell(x)\}_{\ell=1}^{b}$ (which we refer to as a model). Since our objective is to minimize the cost function $J$, it is natural to determine the model such that $J$ is minimized. Here, we employ cross-validation for estimating $J(\widehat{\alpha})$, which has an accuracy guarantee for finite samples: First, the training samples $\{x^{\mathrm{tr}}_i\}_{i=1}^{n_{\mathrm{tr}}}$ and test samples $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ are divided into $R$ disjoint subsets $\{\mathcal{X}^{\mathrm{tr}}_r\}_{r=1}^{R}$ and $\{\mathcal{X}^{\mathrm{te}}_r\}_{r=1}^{R}$, respectively. Then an importance estimate $\widehat{w}_r(x)$ is obtained using $\{\mathcal{X}^{\mathrm{tr}}_j\}_{j \neq r}$ and $\{\mathcal{X}^{\mathrm{te}}_j\}_{j \neq r}$, and the cost $J$ is approximated using the held-out samples $\mathcal{X}^{\mathrm{tr}}_r$ and $\mathcal{X}^{\mathrm{te}}_r$ as
$$\widehat{J}^{(\mathrm{CV})}_r = \frac{1}{2|\mathcal{X}^{\mathrm{tr}}_r|} \sum_{x^{\mathrm{tr}} \in \mathcal{X}^{\mathrm{tr}}_r} \widehat{w}_r(x^{\mathrm{tr}})^2 - \frac{1}{|\mathcal{X}^{\mathrm{te}}_r|} \sum_{x^{\mathrm{te}} \in \mathcal{X}^{\mathrm{te}}_r} \widehat{w}_r(x^{\mathrm{te}}).$$
This procedure is repeated for $r = 1, \ldots, R$ and the average $\widehat{J}^{(\mathrm{CV})}$ is used as an estimate of $J$. We can show that $\widehat{J}^{(\mathrm{CV})}$ gives an almost unbiased estimate of the true cost $J$, where the 'almost'-ness comes from the fact that the number of samples is reduced due to data splitting.

Heuristics of Basis Function Design: A good model may be chosen by cross-validation, given that a family of promising model candidates is prepared. As model candidates, we propose using a Gaussian kernel model centered at the test input points $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$, i.e.,
$$\widehat{w}(x) = \sum_{\ell=1}^{n_{\mathrm{te}}} \alpha_\ell K_\sigma(x, x^{\mathrm{te}}_\ell), \quad \text{where} \quad K_\sigma(x, x') = \exp\!\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right). \qquad (5)$$
The reason why we chose the test input points $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ as the Gaussian centers, not the training input points $\{x^{\mathrm{tr}}_i\}_{i=1}^{n_{\mathrm{tr}}}$, is as follows. By definition, the importance $w(x)$ tends to take large values if the training input density $p_{\mathrm{tr}}(x)$ is small and the test input density $p_{\mathrm{te}}(x)$ is large; conversely, $w(x)$ tends to be small (i.e., close to zero) if $p_{\mathrm{tr}}(x)$ is large and $p_{\mathrm{te}}(x)$ is small. 
When a function is approximated by a Gaussian kernel model, many kernels may be needed in regions where the output of the target function is large; on the other hand, only a small number of kernels would be enough in regions where the output of the target function is close to zero. Following this heuristic, we allocate many kernels at high test input density regions, which can be achieved by setting the Gaussian centers at the test input points $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$. Alternatively, we may locate $(n_{\mathrm{tr}} + n_{\mathrm{te}})$ Gaussian kernels at both $\{x^{\mathrm{tr}}_i\}_{i=1}^{n_{\mathrm{tr}}}$ and $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$. However, in our preliminary experiments, this did not further improve the performance, but just slightly increased the computational cost. When $n_{\mathrm{te}}$ is large, just using all the test input points $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ as Gaussian centers is already computationally rather demanding. To ease this problem, we propose using a subset of $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ as Gaussian centers for computational efficiency, i.e.,
$$\widehat{w}(x) = \sum_{\ell=1}^{b} \alpha_\ell K_\sigma(x, c_\ell), \qquad (6)$$
where $c_\ell$ is a template point randomly chosen from $\{x^{\mathrm{te}}_j\}_{j=1}^{n_{\mathrm{te}}}$ and $b\ (\leq n_{\mathrm{te}})$ is a prefixed number. In the experiments shown later, we fix the number of template points at $b = \min(100, n_{\mathrm{te}})$, and optimize the kernel width $\sigma$ and the regularization parameter $\lambda$ by cross-validation with grid search.

Entire Regularization Path for LSIF: We can show that the LSIF solution $\widehat{\alpha}$ is piecewise linear with respect to the regularization parameter $\lambda$. Therefore, the regularization path (i.e., the solutions for all $\lambda$) can be computed efficiently based on the parametric optimization technique [4]. A basic idea of regularization path tracking is to check the violation of the Karush-Kuhn-Tucker (KKT) conditions—which are necessary and sufficient conditions for optimality of convex programs—when the regularization parameter $\lambda$ is changed. 
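Under the Gaussian-kernel model (6), the empirical quantities $\widehat{H}$ and $\widehat{h}$ from Section 2 reduce to simple matrix operations. A sketch, with template points subsampled from the test inputs as the paper describes (the subsampling seed is of course our own choice):

```python
import numpy as np

def gauss_design(X, C, sigma):
    """Kernel design matrix: K[i, l] = exp(-||x_i - c_l||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2.0 * sigma ** 2))

def empirical_H_h(X_tr, X_te, sigma, b=100, seed=0):
    """H_hat = Phi_tr' Phi_tr / n_tr,  h_hat = column means of Phi_te,
    with Gaussian centers drawn from the test inputs."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_te), size=min(b, len(X_te)), replace=False)
    C = X_te[idx]                         # centers at (a subset of) test points
    Phi_tr = gauss_design(X_tr, C, sigma)
    Phi_te = gauss_design(X_te, C, sigma)
    H_hat = Phi_tr.T @ Phi_tr / len(X_tr)
    h_hat = Phi_te.mean(axis=0)
    return H_hat, h_hat, C
```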
Although the detail of the algorithm is omitted due to lack of space, we can show that a quadratic programming solver is no longer needed for obtaining the entire solution path of LSIF—computing matrix inverses is enough. This contributes substantially to saving computation time. However, in our preliminary experiments, the regularization path tracking algorithm turned out to be numerically rather unreliable, since numerical errors tend to accumulate when tracking the regularization path. This seems to be a common pitfall of solution path tracking algorithms in general.

3 Approximation Algorithm

Unconstrained Least-squares Approach: The approximation idea we introduce here is very simple: we ignore the non-negativity constraint of the parameters in the optimization problem (3). Thus
$$\min_{\beta \in \mathbb{R}^b} \left[ \frac{1}{2} \beta^\top \widehat{H} \beta - \widehat{h}^\top \beta + \frac{\lambda}{2} \beta^\top \beta \right]. \qquad (7)$$
In the above, we included a quadratic regularization term $\lambda \beta^\top \beta / 2$ instead of the linear one $\lambda 1_b^\top \alpha$, since the linear penalty term does not work as a regularizer without the non-negativity constraint. Eq. (7) is an unconstrained convex quadratic program, so the solution can be computed analytically. However, since we dropped the non-negativity constraint $\beta \geq 0_b$, some of the learned parameters could be negative. To compensate for this approximation error, we modify the solution by
$$\widehat{\beta} = \max(0_b, \widetilde{\beta}), \qquad \widetilde{\beta} = (\widehat{H} + \lambda I_b)^{-1} \widehat{h}, \qquad (8)$$
where $I_b$ is the $b$-dimensional identity matrix and the 'max' operation for vectors is applied element-wise. This is the solution of the approximation method we propose in this section. An advantage of the above unconstrained formulation is that the solution can be computed just by solving a system of linear equations. Therefore, the computation is fast and stable. We call this method unconstrained LSIF (uLSIF). Due to the $\ell_2$ regularizer, the solution tends to be close to $0_b$ to some extent; thus, the effect of ignoring the non-negativity constraint may not be so strong. 
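The closed-form uLSIF solution of Eq. (8) is just a regularized linear solve followed by element-wise clipping; a minimal numpy sketch, taking $\widehat{H}$ and $\widehat{h}$ as already computed:

```python
import numpy as np

def ulsif(H_hat, h_hat, lam):
    """uLSIF solution: beta = max(0_b, (H + lam*I)^{-1} h), element-wise."""
    b = len(h_hat)
    beta_tilde = np.linalg.solve(H_hat + lam * np.eye(b), h_hat)
    return np.maximum(0.0, beta_tilde)
```

With a diagonal $\widehat{H} = I$, the unclipped solution is $\widehat{h}/(1+\lambda)$, which makes the clipping step easy to verify by hand.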
Below, we theoretically analyze the approximation error of uLSIF.

Convergence Analysis of uLSIF: Here, we theoretically analyze the convergence property of the solution $\widehat{\beta}$ of the uLSIF algorithm. Let $\beta^*$ be the optimal solution of the 'ideal' problem:

$$\beta^* = \max(\mathbf{0}_b, \beta^\circ), \qquad \beta^\circ = \arg\min_{\beta \in \mathbb{R}^b} \left[ \tfrac{1}{2}\beta^\top H \beta - h^\top \beta + \tfrac{\lambda}{2}\beta^\top \beta \right].$$

Then we have

Theorem 2. Assume that (a) $\beta^\circ_\ell \ne 0$ for $\ell = 1, \ldots, b$, and (b) $n_{tr}$ and $n_{te}$ satisfy $n_{te} = \omega(n_{tr}^2)$. Then $\mathbb{E}[J(\widehat{\beta})] = J(\beta^*) + O(n_{tr}^{-1})$.

Theorem 2 guarantees that uLSIF converges to the ideal solution at rate $n_{tr}^{-1}$. It is possible to obtain the coefficient of the $O(n_{tr}^{-1})$ term explicitly, but we omit the detail due to lack of space. We can also derive upper bounds on the difference between LSIF and uLSIF and show that uLSIF gives a good approximation to LSIF; we do not go into the detail due to space limitations.

Efficient Computation of the Leave-one-out Cross-validation Score: Another practically very important advantage of uLSIF is that the score of leave-one-out cross-validation (LOOCV) can also be computed analytically—thanks to this property, the computational complexity of performing LOOCV is of the same order as computing a single solution. In the current setting, we are given two sets of samples, $\{x^{tr}_i\}_{i=1}^{n_{tr}}$ and $\{x^{te}_j\}_{j=1}^{n_{te}}$, which generally have different sample sizes. For simplicity, we assume that $n_{tr} < n_{te}$ and that the $i$-th training sample $x^{tr}_i$ and the $i$-th test sample $x^{te}_i$ are held out at the same time; the test samples $\{x^{te}_j\}_{j=n_{tr}+1}^{n_{te}}$ are always used for importance estimation. Let $\widehat{\beta}^{(i)}_\lambda$ be the parameter learned without the $i$-th training sample $x^{tr}_i$ and the $i$-th test sample $x^{te}_i$. Then the LOOCV score is expressed as

$$\frac{1}{n_{tr}} \sum_{i=1}^{n_{tr}} \left[ \tfrac{1}{2}\Bigl(\varphi(x^{tr}_i)^\top \widehat{\beta}^{(i)}_\lambda\Bigr)^2 - \varphi(x^{te}_i)^\top \widehat{\beta}^{(i)}_\lambda \right].$$
Our approach to efficiently computing the LOOCV score is to use the Sherman-Morrison-Woodbury formula for computing matrix inverses; $\widehat{\beta}^{(i)}_\lambda$ can be expressed as

$$\widehat{\beta}^{(i)}_\lambda = \max\!\left(\mathbf{0}_b,\ \frac{(n_{tr}-1)\,n_{te}}{n_{tr}(n_{te}-1)}\left(a + \frac{a^\top \varphi(x^{tr}_i)}{n_{tr} - \varphi(x^{tr}_i)^\top a_{te}}\, a_{te}\right) - \frac{n_{tr}-1}{n_{tr}(n_{te}-1)}\left(a_{tr} + \frac{a_{te}^\top \varphi(x^{tr}_i)}{n_{tr} - \varphi(x^{tr}_i)^\top a_{tr}}\, a_{tr}\right)\right),$$

where $a = A^{-1}\widehat{h}$, $a_{tr} = A^{-1}\varphi(x^{tr}_i)$, $a_{te} = A^{-1}\varphi(x^{te}_i)$, and $A = \widehat{H} + \frac{(n_{tr}-1)\lambda}{n_{tr}} I_b$. This implies that the matrix inverse needs to be computed only once (namely $A^{-1}$) for calculating the LOOCV scores. Thus LOOCV can be carried out very efficiently without repeating hold-out loops.

4 Relation to Existing Methods

The kernel density estimator (KDE) is a non-parametric technique for estimating a probability density function. KDE can be used for importance estimation by first estimating $\widehat{p}_{tr}(x)$ and $\widehat{p}_{te}(x)$ separately from $\{x^{tr}_i\}_{i=1}^{n_{tr}}$ and $\{x^{te}_j\}_{j=1}^{n_{te}}$ and then estimating the importance by $\widehat{w}(x) = \widehat{p}_{te}(x)/\widehat{p}_{tr}(x)$. KDE is computationally efficient since no optimization is involved, and model selection is possible by likelihood cross-validation. However, KDE may suffer from the curse of dimensionality. The kernel mean matching (KMM) method allows us to directly obtain an estimate of the importance values at the training points without going through density estimation [5]. KMM can overcome the curse of dimensionality by directly estimating the importance using a special property of the Gaussian reproducing kernel Hilbert space. However, there is no objective model selection method for the regularization parameter and kernel width. For the regularization parameter, we may follow the suggestion in the original paper, which is justified by a theoretical argument to some extent [5]. For the Gaussian width, we may adopt the popular heuristic of using the median distance between samples, although there seems to be no strong justification for this. The computation of KMM is rather demanding since a quadratic programming problem has to be solved.
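The claim that the inverse needs to be computed only once rests on a standard rank-one update of a matrix inverse. The snippet below is a numerical sanity check, under our own toy setup, of the Sherman-Morrison identity (the rank-one special case of the Sherman-Morrison-Woodbury formula used above):

```python
import numpy as np

# (A - u v^T)^{-1} = A^{-1} + A^{-1} u v^T A^{-1} / (1 - v^T A^{-1} u),
# valid whenever A and A - u v^T are invertible.
rng = np.random.default_rng(1)
b = 5
A = rng.normal(size=(b, b)) + 10.0 * np.eye(b)  # well-conditioned test matrix
u = rng.normal(size=b)
v = rng.normal(size=b)

A_inv = np.linalg.inv(A)                        # computed once, then reused
denom = 1.0 - v @ A_inv @ u
updated_inv = A_inv + np.outer(A_inv @ u, v @ A_inv) / denom
direct_inv = np.linalg.inv(A - np.outer(u, v))  # recomputed from scratch
assert np.allclose(updated_inv, direct_inv)
```

In the LOOCV setting, holding out one training and one test sample perturbs $\widehat{H}$ and $\widehat{h}$ by rank-one terms, which is why each $\widehat{\beta}^{(i)}_\lambda$ can be read off from the single precomputed $A^{-1}$.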
Another approach to directly estimating the importance is to fit an importance model directly to the true importance: a method based on logistic regression (LogReg) [1], or a method based on the kernel model (6), called the Kullback-Leibler importance estimation procedure (KLIEP) [9, 6]. Model selection for these methods is possible by cross-validation, which is a significant advantage over KMM. However, LogReg and KLIEP are computationally rather expensive since non-linear optimization problems have to be solved. The proposed LSIF is qualitatively similar to LogReg and KLIEP: it avoids density estimation, model selection is possible, and non-linear optimization is involved. However, LSIF is advantageous over LogReg and KLIEP in that it is equipped with a regularization path tracking algorithm. Thanks to this, model selection for LSIF is computationally much more efficient than for LogReg and KLIEP. However, the regularization path tracking algorithm tends to be numerically unstable. The proposed uLSIF inherits the good properties of existing methods, such as avoiding density estimation and having a built-in model selection method. In addition to these preferable properties, the solution of uLSIF can be computed analytically through matrix inversion, and therefore uLSIF is computationally very efficient and numerically stable. Furthermore, the closed-form solution of uLSIF allows us to compute the LOOCV score analytically without repeating hold-out loops, which contributes greatly to reducing the computation time in the model selection phase.

5 Experiments

Importance Estimation: Let $p_{tr}(x)$ be the $d$-dimensional normal distribution with mean zero and identity covariance, and let $p_{te}(x)$ be the $d$-dimensional normal distribution with mean $(1, 0, \ldots, 0)^\top$ and identity covariance. The task is to estimate the importance at the training input points: $\{w(x^{tr}_i)\}_{i=1}^{n_{tr}}$.
We fixed the number of test input points at $n_{te} = 1000$ and consider the following two settings for the number $n_{tr}$ of training samples and the input dimension $d$: (a) $n_{tr} = 100$ and $d = 1, 2, \ldots, 20$; (b) $d = 10$ and $n_{tr} = 50, 60, \ldots, 150$. We run the experiments 100 times for each $d$, each $n_{tr}$, and each method, and evaluate the quality of the importance estimates $\{\widehat{w}_i\}_{i=1}^{n_{tr}}$ by the normalized mean squared error (NMSE):

$$\frac{1}{n_{tr}} \sum_{i=1}^{n_{tr}} \bigl( \widehat{w}(x^{tr}_i) - w(x^{tr}_i) \bigr)^2,$$

where $\sum_{i=1}^{n_{tr}} \widehat{w}(x^{tr}_i)$ and $\sum_{i=1}^{n_{tr}} w(x^{tr}_i)$ are each normalized to one.

[Figure 1: NMSEs averaged over 100 trials in log scale, for KDE, KMM, LogReg, KLIEP, and uLSIF: (a) as the input dimension d is changed; (b) as the training sample size ntr is changed.]
[Figure 2: Mean computation time (after model selection) over 100 trials: (a) as d is changed; (b) as ntr is changed.]
[Figure 3: Mean total computation time for LogReg and uLSIF (including model selection of σ and λ over a 9×9 grid): (a) as d is changed; (b) as ntr is changed.]

NMSEs averaged over 100 trials, (a) as a function of input dimension $d$ and (b) as a function of training sample size $n_{tr}$, are plotted in log scale in Figure 1. Error bars are omitted for clarity; instead, the best method in terms of the mean error and methods comparable to it according to the t-test at significance level 1% are indicated by '◦', and methods with a significant difference are indicated by '×'.
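For concreteness, the NMSE criterion above can be written as a short function; this is our own hedged sketch (the function name `nmse` is ours), matching the definition that both the estimated and true importance values are normalized to sum to one before comparison:

```python
import numpy as np

def nmse(w_hat, w_true):
    """Normalized mean squared error between importance estimates and truth."""
    w_hat = w_hat / w_hat.sum()     # normalize estimates to sum to one
    w_true = w_true / w_true.sum()  # normalize true values to sum to one
    return np.mean((w_hat - w_true) ** 2)

# Because of the normalization, an estimate that is correct up to a positive
# scale factor incurs (essentially) zero error.
w = np.array([0.5, 1.0, 2.0, 4.0])
assert nmse(3.0 * w, w) < 1e-20
```

The scale invariance is deliberate: importance weights are often only needed up to a common positive factor, e.g. when they reweight a loss that is subsequently minimized.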
Figure 1(a) shows that the error of KDE increases sharply as the input dimension grows, while LogReg, KLIEP, and uLSIF tend to give much smaller errors than KDE. This would be the fruit of directly estimating the importance without going through density estimation. KMM tends to perform poorly, which is caused by an inappropriate choice of the Gaussian kernel width. This implies that the popular heuristic of using the median distance between samples as the Gaussian width is not always appropriate. On the other hand, model selection in LogReg, KLIEP, and uLSIF seems to work quite well. Figure 1(b) shows that the errors of all methods tend to decrease as the number of training samples grows. Again, LogReg, KLIEP, and uLSIF tend to give much smaller errors than KDE and KMM. Next we investigate the computation time. Each method has a different model selection strategy: KMM does not involve any cross-validation; KDE and KLIEP involve cross-validation over the kernel width; and LogReg and uLSIF involve cross-validation over both the kernel width and the regularization parameter. Thus a naive comparison of the total computation time is not very meaningful. For this reason, we first investigate the computation time of each importance estimation method after the model parameters are fixed. The average CPU computation time over 100 trials is summarized in Figure 2. Figure 2(a) shows that the computation time of KDE, KLIEP, and uLSIF is almost independent of the input dimensionality $d$, while that of KMM and LogReg depends rather strongly on $d$. Among them, the proposed uLSIF is one of the fastest methods. Figure 2(b) shows that the computation time of LogReg, KLIEP, and uLSIF is nearly independent of the training sample size $n_{tr}$, while that of KDE and KMM increases sharply as $n_{tr}$ increases. Both LogReg and uLSIF have very good accuracy, and their computation time after model selection is comparable.
Finally, we compare the entire computation time of LogReg and uLSIF including cross-validation, summarized in Figure 3. Note that the Gaussian width $\sigma$ and the regularization parameter $\lambda$ are chosen over a $9 \times 9$ equidistant grid in this experiment for both LogReg and uLSIF, so the comparison of the entire computation time is fair. Figures 3(a) and 3(b) show that uLSIF is approximately 5 to 10 times faster than LogReg. Overall, uLSIF is shown to be comparable to the best existing method (LogReg) in terms of accuracy, but computationally more efficient.

Covariate Shift Adaptation in Regression and Classification: Next, we illustrate how the importance estimation methods can be used in covariate shift adaptation [8, 5, 1, 9]. Covariate shift is a situation in supervised learning where the input distributions change between the training and test phases but the conditional distribution of outputs given inputs remains unchanged. Under covariate shift, standard learning techniques such as maximum likelihood estimation or cross-validation are biased; the bias caused by covariate shift can be asymptotically canceled by weighting the samples according to the importance. In addition to training input samples $\{x^{tr}_i\}_{i=1}^{n_{tr}}$ following a training input density $p_{tr}(x)$ and test input samples $\{x^{te}_j\}_{j=1}^{n_{te}}$ following a test input density $p_{te}(x)$, suppose that training output samples $\{y^{tr}_i\}_{i=1}^{n_{tr}}$ at the training input points $\{x^{tr}_i\}_{i=1}^{n_{tr}}$ are given. The task is to predict the outputs for the test inputs. We use the kernel model $\widehat{f}(x; \theta) = \sum_{\ell=1}^{t} \theta_\ell K_h(x, m_\ell)$ for function learning, where $K_h(x, x')$ is the Gaussian kernel (5) and $m_\ell$ is a template point randomly chosen from $\{x^{te}_j\}_{j=1}^{n_{te}}$. We set the number of kernels at $t = 50$. We learn the parameter $\theta$ by importance-weighted regularized least squares (IWRLS):

$$\min_{\theta} \left[ \sum_{i=1}^{n_{tr}} \widehat{w}(x^{tr}_i) \Bigl( \widehat{f}(x^{tr}_i; \theta) - y^{tr}_i \Bigr)^2 + \gamma \|\theta\|^2 \right]. \qquad (9)$$
It is known that IWRLS is consistent when the true importance $w(x^{tr}_i)$ is used as the weights; unweighted RLS is not consistent under covariate shift when the true learning target function $f(x)$ is not realizable by the model $\widehat{f}(x)$ [8]. The kernel width $h$ and the regularization parameter $\gamma$ in IWRLS (9) are chosen by importance-weighted cross-validation (IWCV) [9]. More specifically, we first divide the training samples $\{z^{tr}_i \mid z^{tr}_i = (x^{tr}_i, y^{tr}_i)\}_{i=1}^{n_{tr}}$ into $R$ disjoint subsets $\{Z^{tr}_r\}_{r=1}^{R}$. Then a function $\widehat{f}_r(x)$ is learned by IWRLS using $\{Z^{tr}_j\}_{j \ne r}$, and its mean test error on the remaining samples $Z^{tr}_r$ is computed:

$$\frac{1}{|Z^{tr}_r|} \sum_{(x, y) \in Z^{tr}_r} \widehat{w}(x)\, \mathrm{loss}\bigl(\widehat{f}_r(x), y\bigr), \qquad (10)$$

where $\mathrm{loss}(\widehat{y}, y)$ is $(\widehat{y} - y)^2$ in regression and $\frac{1}{2}(1 - \mathrm{sign}\{\widehat{y}y\})$ in classification. We repeat this procedure for $r = 1, \ldots, R$ and choose the kernel width $h$ and the regularization parameter $\gamma$ so that the average of the above mean test error over all $r$ is minimized. We set the number of folds in IWCV at $R = 5$. IWCV is shown to be an (almost) unbiased estimator of the generalization error, while unweighted CV with misspecified models is biased under covariate shift. The datasets provided by DELVE and IDA are used for performance evaluation, where training input points are sampled with bias in the same way as in [9]. We set the number of samples at $n_{tr} = 100$ and $n_{te} = 500$ for all datasets. We compare the performance of KDE, KMM, LogReg, KLIEP, and uLSIF, as well as the uniform weight (Uniform, i.e., no adaptation). The experiments are repeated 100 times for each dataset, and we evaluate the mean test error $\frac{1}{n_{te}} \sum_{j=1}^{n_{te}} \mathrm{loss}(\widehat{f}(x^{te}_j), y^{te}_j)$. The results are summarized in Table 1, where all error values are normalized by that of the uniform weight (no adaptation). For each dataset, the best method and those comparable to it according to the Wilcoxon signed-rank test at significance level 1% are set in bold face.
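For a model that is linear in its parameters (such as the kernel expansion above, with design matrix $\Phi_{i\ell} = K_h(x^{tr}_i, m_\ell)$), the IWRLS objective (9) has the closed-form minimizer $\theta = (\Phi^\top \mathrm{diag}(\widehat{w}) \Phi + \gamma I)^{-1} \Phi^\top \mathrm{diag}(\widehat{w})\, y$. The sketch below is our own minimal illustration of this closed form; the synthetic data and the name `iwrls` are ours:

```python
import numpy as np

def iwrls(Phi, y, w, gamma):
    """Minimize sum_i w_i (Phi_i theta - y_i)^2 + gamma ||theta||^2 in closed form."""
    d = Phi.shape[1]
    WPhi = Phi * w[:, None]  # rows scaled by importance weights
    return np.linalg.solve(Phi.T @ WPhi + gamma * np.eye(d), WPhi.T @ y)

# Sanity check: with uniform weights and vanishing gamma, IWRLS reduces to
# ordinary least squares and recovers the generating parameters exactly
# for noiseless data.
rng = np.random.default_rng(2)
Phi = rng.normal(size=(50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = Phi @ theta_true
theta = iwrls(Phi, y, np.ones(50), gamma=1e-8)
assert np.allclose(theta, theta_true, atol=1e-4)
```

Under covariate shift one would instead pass the estimated importance values $\widehat{w}(x^{tr}_i)$ as `w`, which up-weights training points that fall in regions where test inputs are dense.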
The upper half corresponds to regression datasets taken from DELVE, while the lower half corresponds to classification datasets taken from IDA. The table shows that the generalization performance of uLSIF tends to be better than that of Uniform, KDE, KMM, and LogReg, while it is comparable to the best existing method (KLIEP). The mean computation time over 100 trials is given in the bottom row of the table, normalized so that the computation time of uLSIF is one. This shows that uLSIF is computationally more efficient than KLIEP. Thus the proposed uLSIF is overall shown to work well in covariate shift adaptation with low computational cost.

Table 1: Covariate shift adaptation. Mean and standard deviation of test error over 100 trials (smaller is better).

Dataset   | Uniform     | KDE        | KMM         | LogReg      | KLIEP       | uLSIF
kin-8fh   | 1.00(0.34)  | 1.22(0.52) | 1.55(0.39)  | 1.31(0.39)  | ◦0.95(0.31) | ◦1.02(0.33)
kin-8fm   | 1.00(0.39)  | 1.12(0.57) | 1.84(0.58)  | 1.38(0.57)  | ◦0.86(0.35) | ◦0.88(0.39)
kin-8nh   | ◦1.00(0.26) | 1.09(0.20) | 1.19(0.29)  | 1.09(0.19)  | ◦0.99(0.22) | ◦1.02(0.18)
kin-8nm   | ◦1.00(0.30) | 1.14(0.26) | 1.20(0.20)  | 1.12(0.21)  | ◦0.97(0.25) | 1.04(0.25)
abalone   | ◦1.00(0.50) | 1.02(0.41) | ◦0.91(0.38) | ◦0.97(0.49) | ◦0.97(0.69) | ◦0.96(0.61)
image     | ◦1.00(0.51) | 0.98(0.45) | 1.08(0.54)  | ◦0.98(0.46) | ◦0.94(0.44) | ◦0.98(0.47)
ringnorm  | 1.00(0.04)  | 0.87(0.04) | ◦0.87(0.04) | 0.95(0.08)  | 0.99(0.06)  | 0.91(0.08)
twonorm   | 1.00(0.58)  | 1.16(0.71) | ◦0.94(0.57) | ◦0.91(0.61) | ◦0.91(0.52) | ◦0.88(0.57)
waveform  | 1.00(0.45)  | 1.05(0.47) | 0.98(0.31)  | ◦0.93(0.32) | ◦0.93(0.34) | ◦0.92(0.32)
Average   | 1.00(0.38)  | 1.07(0.40) | 1.17(0.37)  | 1.07(0.37)  | 0.95(0.35)  | 0.96(0.36)
Time      | —           | 0.82       | 3.50        | 3.27        | 3.64        | 1.00

Outlier Detection: Here, we consider an outlier detection problem: finding irregular samples in a dataset (the "evaluation dataset") based on another dataset (the "model dataset") that contains only regular samples. Defining the importance over the two sets of samples, we can see that the importance values for regular samples are close to one, while those for outliers tend to deviate significantly from one. Thus the importance values can be used as an index of the degree of outlyingness in this scenario. Since the evaluation dataset has wider support than the model dataset, we regard the evaluation dataset as the training set (i.e., the denominator of the importance) and the model dataset as the test set (i.e., the numerator of the importance). Then outliers tend to have small importance values (i.e., close to zero). We again test KMM, LogReg, KLIEP, and uLSIF for importance estimation; in addition, we test native outlier detection methods: the one-class support vector machine (OSVM) [7], the local outlier factor (LOF) [3], and the kernel density estimator (KDE). The datasets provided by IDA are used for performance evaluation. These are binary classification datasets consisting of training and test samples. We allocate all positive training samples to the "model" set, while all positive test samples and 1% of the negative test samples are assigned to the "evaluation" set. Thus, we regard the positive samples as regular and the negative samples as irregular.

Table 2: Outlier detection. Mean AUC values over 20 trials (larger is better).

Dataset  | uLSIF | KLIEP | LogReg | KMM  | OSVM | LOF  | KDE
banana   | .851  | .815  | .447   | .578 | .360 | .915 | .934
b-cancer | .463  | .480  | .627   | .576 | .508 | .488 | .400
diabetes | .558  | .615  | .599   | .574 | .563 | .403 | .425
f-solar  | .416  | .485  | .438   | .494 | .522 | .441 | .378
german   | .574  | .572  | .556   | .529 | .535 | .559 | .561
heart    | .659  | .647  | .833   | .623 | .681 | .659 | .638
image    | .812  | .828  | .600   | .813 | .540 | .930 | .916
splice   | .713  | .748  | .368   | .541 | .737 | .778 | .845
thyroid  | .534  | .720  | .745   | .681 | .504 | .111 | .256
titanic  | .525  | .534  | .602   | .502 | .456 | .525 | .461
t-norm   | .905  | .902  | .161   | .439 | .846 | .889 | .875
w-form   | .890  | .881  | .243   | .477 | .861 | .887 | .861
Average  | .661  | .685  | .530   | .608 | .596 | .629 | .623
Time     | 1.00  | 11.7  | 5.35   | 751  | 12.4 | 85.5 | 8.70
The mean AUC values over 20 trials as well as the computation time are summarized in Table 2, showing that uLSIF works fairly well. KLIEP works slightly better than uLSIF, but uLSIF is computationally much more efficient. LogReg works rather well overall, but it performs poorly on some datasets and therefore its average AUC value is small. KMM and OSVM are not comparable to uLSIF in either AUC or computation time. LOF and KDE work reasonably well in terms of AUC, but their computational cost is high. Thus the proposed uLSIF is overall shown to work well and to be computationally efficient in outlier detection as well.

6 Conclusions

We proposed a new method for importance estimation that avoids solving the substantially more difficult task of density estimation. We are currently exploring various possible applications of importance estimation methods beyond covariate shift adaptation and outlier detection, e.g., feature selection, conditional distribution estimation, and independent component analysis; we believe that importance estimation can serve as a new versatile tool in machine learning.

References
[1] S. Bickel et al. Discriminative learning for differing training and test distributions. ICML 2007.
[2] S. Bickel et al. Dirichlet-enhanced spam filtering based on biased samples. NIPS 2006.
[3] M. M. Breunig et al. LOF: Identifying density-based local outliers. SIGMOD 2000.
[4] T. Hastie et al. The entire regularization path for the support vector machine. JMLR 2004.
[5] J. Huang et al. Correcting sample selection bias by unlabeled data. NIPS 2006.
[6] X. Nguyen et al. Estimating divergence functions and the likelihood ratio. NIPS 2007.
[7] B. Schölkopf et al. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[8] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2):227–244, 2000.
[9] M. Sugiyama et al.
Direct importance estimation with model selection. NIPS 2007.
2008
Differentiable Sparse Coding

David M. Bradley, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, dbradley@cs.cmu.edu
J. Andrew Bagnell, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, dbagnell@ri.cmu.edu

Abstract

Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior, such as a Laplacian (L1) prior, that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance.

1 Introduction

Sparse approximation is a key technique developed in engineering and the sciences which approximates an input signal, $X$, in terms of a "sparse" combination of fixed bases $B$. Sparse approximation relies on an optimization algorithm to infer the Maximum A-Posteriori (MAP) weights $\widehat{W}$ that best reconstruct the signal, given the model $X \approx f(BW)$. In this notation, each input signal forms a column of an input matrix $X$, and is generated by multiplying a set of basis vectors $B$ by a column from a coefficient matrix $W$, while $f(z)$ is an optional transfer function. This relationship is only approximate, as the input data is assumed to be corrupted by random noise.
Priors which produce sparse solutions for $W$, especially L1 regularization, have gained attention because of their usefulness in ill-posed engineering problems [1], their ability to elucidate certain neuro-biological phenomena [2, 3], and their ability to identify useful features for classification from related unlabeled data [4]. Sparse coding [2] is closely connected to Independent Component Analysis as well as to certain approaches to matrix factorization. It extends sparse approximation by learning a basis matrix $B$ which represents a collection of related input signals (the input matrix $X$) well, in addition to performing optimization to compute the best set of weights $\widehat{W}$. Unfortunately, existing sparse coding algorithms that leverage an efficient, convex sparse approximation step to perform inference on the latent weight vector [4] are difficult to integrate into a larger learning architecture. It has been convincingly demonstrated that back-propagation is a crucial tool for tuning an existing generative model's output in order to improve supervised performance on a discriminative task. For example, greedy layer-wise strategies for building deep generative models rely upon a back-propagation step to achieve excellent model performance [5]. Unfortunately, existing sparse coding architectures produce a latent representation $\widehat{W}$ that is an unstable, discontinuous function of the inputs and bases; an arbitrarily small change in input can lead to the selection of a completely different set of latent weights. We present an advantageous new approach to coding that uses smoother priors which preserve the sparsity benefits of L1-regularization while allowing efficient convex inference and producing stable latent representations $\widehat{W}$. In particular, we examine a prior based on minimizing KL-divergence to the uniform distribution, which has long been used for approximation problems [6, 7].
We show that this increased stability leads to better semi-supervised classification performance across a wide variety of applications for classifiers using the latent representation $\widehat{W}$ as input. Additionally, because of the smoothness of the KL-divergence prior, $B$ can be optimized discriminatively for a particular application by gradient descent, leading to outstanding empirical performance.

2 Notation

Uppercase letters, $X$, denote matrices and lowercase letters, $x$, denote vectors. For matrices, superscripts and subscripts denote rows and columns respectively: $X_j$ is the $j$th column of $X$, $X^i$ is the $i$th row of $X$, and $X^i_j$ is the element in the $i$th row and $j$th column. Elements of vectors are indicated by subscripts, $x_j$, and superscripts on vectors are used for time indexing, $x^t$. $X^\top$ is the transpose of matrix $X$.

3 Generative Model

Sparse coding fits a generative model (1) to unlabeled data, and the MAP estimates of the latent variables of this model have been shown to be useful as input for prediction problems [4]. Model (1) divides the latent variables into two independent groups, the coefficients $W$ and the basis $B$, which combine to form the matrix of input examples $X$. Different examples (columns of $X$) are assumed to be independent of each other. The Maximum A-Posteriori (MAP) approximation replaces the integration over $W$ and $B$ in (1) with the maximum value of $P(X|W, B)P(W)P(B)$, and the values of the latent variables at the maximum, $\widehat{W}$ and $\widehat{B}$, are the MAP estimates. Finding $\widehat{W}$ given $B$ is an approximation problem; solving for $\widehat{W}$ and $\widehat{B}$ simultaneously over a set of independent examples is a coding problem.

$$P(X) = \int_B \int_W P(X \mid W, B)\, P(W)\, P(B)\, dW\, dB = \int_B P(B) \int_W \prod_i P(X_i \mid W_i, B)\, P(W_i)\, dW\, dB \qquad (1)$$

Given $B$, the negative log of the generative model can be optimized independently for each example, and it is denoted for a generic example $x$ by $L$ in (2).
$L$ decomposes into the sum of two terms: a loss function $D_L(x \| f(Bw))$ between an input example and the reconstruction produced by the transfer function $f$, and a regularization function $D_P(w \| p)$ that measures a distance between the coefficients $w$ for the example and a parameter vector $p$. A regularization constant $\lambda$ controls the relative weight of these two terms. For fixed $B$, minimizing (2) with respect to $w$ separately for each example is equivalent to maximizing (1).

$$L = D_L(x \| f(Bw)) + \lambda D_P(w \| p) \qquad (2)$$
$$\widehat{w} = \arg\min_w L \qquad (3)$$

In many applications, the anticipated distribution of $x$ after being corrupted by noise can be modeled by an exponential family distribution. Every exponential family distribution defines a Bregman divergence which serves as a matching loss function for estimating the parameters of the distribution.¹ One common choice for the loss/transfer pair is the squared loss with its matching linear transfer function, $D_L(x \| f(Bw)) = \sum_i (x_i - B^i w)^2$, which is the matching Bregman divergence for $x$ drawn from a multidimensional Gaussian distribution. The regularization function $D_P(w \| p)$ is also often a Bregman divergence, but may be chosen for other properties, such as the sparsity of the resulting MAP estimate $\widehat{w}$. A vector is commonly called sparse if many elements are exactly zero. The entropy [9, 10] and $L_p^p$-norm,² $p \le 1$, regularization functions [2, 3, 4] promote this form of sparsity, and all of them have shown the ability to learn bases

¹The maximum likelihood parameter estimate for any regular exponential family distribution can be found by minimizing the corresponding Bregman divergence for that family, and every Bregman divergence has a matching transfer function which leads to a convex minimization problem [8]. That matching transfer function is the gradient $\nabla\varphi$ of the function $\varphi$ associated with the Bregman divergence $D_\varphi(x \| y) = \varphi(x) - \varphi(y) - \langle x - y, \nabla\varphi(y) \rangle$.
²$L_p^p(x) = \sum_i |x_i|^p$ corresponds to the negative log of a generalized Gaussian prior.

containing interesting structure from unlabeled data. However, of these only L1 leads to an efficient, convex procedure for inference, and even this prior does not produce differentiable MAP estimates. We argue that if the latent weight vector $\widehat{w}$ is to be used as input to a classifier, a better definition of "sparsity" is that most elements in $\widehat{w}$ can be replaced by elements in a constant vector $p$ without significantly increasing the loss. One regularization function that produces this form of pseudo-sparsity is the KL-divergence $KL(w \| p)$. This regularization function has long been used for approximation problems in geophysics, crystallography, astronomy, and physics, where it is commonly referred to as Maximum Entropy on the Mean (MEM) [7], and has been shown in the online setting to compete with low-L1-norm solutions in terms of regret [11, 12]. L1 regularization provides sparse solutions because its Fenchel dual [13] is the max function, meaning only the most useful basis vectors participate in the reconstruction. A differentiable approximation to $\max_i x_i$ is a sum of exponentials, $\sum_i e^{x_i}$, whose dual is the KL-divergence (4). Regularization with KL has proven useful in online learning, where it is the implicit prior of the exponentiated gradient descent (EGD) algorithm. EGD has been shown to be "sparse" in the sense that it can select a few relevant features to use for a prediction task from many irrelevant ones. The form of KL we use (4) is the full Bregman divergence of the negative entropy function.³ Often KL is used to compute distances between probability distributions, in which case the KL we use reduces to the standard form. For sparse coding, however, it is inconvenient to assume that $\|\widehat{w}\|_1 = \|p\|_1 = 1$, so we use the full unnormalized KL instead.
$$D_P(w \| p) = \sum_i \left( w_i \log \frac{w_i}{p_i} - w_i + p_i \right) \qquad (4)$$

For the prior vector $p$ we use a uniform vector whose L1 magnitude equals the expected L1 magnitude of $w$. $p$ has an effect analogous to the $q$ parameter in $L_q$-norm regularization: $p \to 0$ approximates L1 and $p \to \infty$ approximates L2. Changing $p$ affects the magnitude of the KL term, so $\lambda$ in (2) must be adjusted to balance the loss term in the sparse coding objective function (small values of $p$ require small values of $\lambda$). Below we provide (a) an efficient procedure for inferring $\widehat{w}$ in this model; (b) an algorithm for iteratively updating the bases $B$; and (c) a proof that this model leads to differentiable estimates of $\widehat{w}$. We also provide the general form of the derivative for arbitrary Bregman losses.

4 Implementation

To compute $\widehat{w}$ with KL-regularization, we minimize (3) using exponentiated gradient descent (EGD) with backtracking until convergence (5). EGD automatically enforces positivity constraints on the coefficient vector $w$, and is particularly efficient for this optimization because it is the natural mirror descent rule for KL-regularization [12]. The gradient of the objective function (2) with respect to the coefficient for the $j$th basis vector, $w_j$, is given in (6) for matching loss/transfer function pairs.

$$w^{t+1}_j = w^t_j\, e^{-\alpha \frac{\partial L}{\partial w_j}} \qquad (5)$$
$$\frac{\partial L}{\partial w_j} = (f(Bw) - x)^\top B_j + \lambda \log \frac{w_j}{p_j} \qquad (6)$$

This iterative update is run until the maximum gradient element is less than a threshold, which is estimated by periodically running a random set of examples to the limits of machine precision and selecting the largest gradient threshold that produces $\widehat{w}$ within $\epsilon$ of the exact solution. The $\alpha$ parameter is continuously updated to balance the number of successful steps and the number of backtracking steps.⁴ Because L1-regularization produces both positive and negative weights, to compare L1 and KL regularization on the same basis we expand the basis used for KL by adding the negation of each basis vector, which is equivalent to allowing negative weights (see Appendix B).

³$-H(x) = x \log x$
⁴In our experiments, if the ratio of backtracking steps to total steps was more than 0.6, $\alpha$ was decreased by 10%. Similarly, $\alpha$ was increased by 10% if the ratio fell below 0.3.
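The inference step of Eqs. (5)-(6) can be sketched for the linear-transfer / squared-loss case as follows. This is a deliberately simplified toy: we use a fixed step size with no backtracking and no gradient-threshold stopping rule, and all constants, data, and names (`egd_infer`) are our own choices rather than the authors' implementation.

```python
import numpy as np

def egd_infer(x, B, p, lam, alpha=0.05, iters=3000):
    """EGD inference of w-hat for L = ||x - Bw||^2/2 + lam * KL(w || p)."""
    w = p.copy()  # start at the prior vector (strictly positive)
    for _ in range(iters):
        grad = B.T @ (B @ w - x) + lam * np.log(w / p)  # Eq. (6), f(z) = z
        w = w * np.exp(-alpha * grad)                   # Eq. (5): multiplicative
    return w

rng = np.random.default_rng(3)
B = rng.normal(size=(8, 4))
B /= np.linalg.norm(B, axis=0)          # unit-L2-norm basis columns
w_true = np.array([2.0, 0.0, 0.0, 1.0])
x = B @ w_true                          # noiseless example in span(B)
p = np.full(4, 0.1)
w_hat = egd_infer(x, B, p, lam=0.001)
# Positivity is preserved automatically by the multiplicative update, and
# with a small lam the reconstruction error is small.
assert (w_hat > 0).all()
assert np.linalg.norm(B @ w_hat - x) < 0.1
```

Note the pseudo-sparsity behavior discussed above: the coefficients corresponding to the zero entries of `w_true` do not become exactly zero, but settle at small values near the prior, since the KL gradient $\lambda \log(w_j/p_j)$ diverges as $w_j \to 0$.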
During sparse coding the basis matrix $B$ is updated by stochastic gradient descent (SGD), giving the update rule $B^{t+1} = B^t - \eta \frac{\partial L}{\partial B}$. This update equation does not depend on the prior chosen for $w$, and is given in (7) for matching loss/transfer function pairs. SGD implements an implicit L2 regularizer and is suitable for online learning; however, because the magnitude of $w$ is explicitly penalized, the columns of $B$ were constrained to have unit L2 norm to prevent the trivial solution of infinitely large $B$ and infinitely small $w$. The step size was adjusted for the magnitude of $\widehat{w}$ in each application, and was then decayed over time as $\eta \propto 1/\sqrt{t}$. The same SGD procedure was also used to optimize $B$ through backpropagation, as explained in the next section.

$$\frac{\partial L}{\partial B^i_j} = w_j \bigl( f(B^i w) - x_i \bigr) \qquad (7)$$

5 Modifying a Generative Model For A Discriminative Task

Sparse coding builds a generative model from unlabeled data that captures structure in that data by learning a basis $B$. Our hope is that the MAP estimate of basis coefficients $\widehat{w}$ produced for each input vector $x$ will be useful for predicting a response $y$ associated with $x$. However, the sparse coding objective function only cares about reconstructing the input well, and does not attempt to make $\widehat{w}$ useful as input for any particular task. Fortunately, since priors such as KL-divergence regularization produce solutions that are smooth with respect to small changes in $B$ and $x$, $B$ can be modified through back-propagation to make $\widehat{w}$ more useful for prediction.
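One SGD step of the basis update (7), including the unit-L2-norm column constraint described above, might look like the following toy sketch (the function name, data, and step size are our own assumptions, and we again specialize to the linear transfer function $f(z) = z$):

```python
import numpy as np

def basis_sgd_step(B, x, w, eta):
    """One SGD step on B for squared loss, then renormalize columns to unit L2 norm."""
    resid = B @ w - x                     # f(Bw) - x for f(z) = z
    B = B - eta * np.outer(resid, w)      # dL/dB^i_j = w_j (f(B^i w) - x_i), Eq. (7)
    return B / np.linalg.norm(B, axis=0)  # enforce the unit-norm column constraint

rng = np.random.default_rng(4)
B = rng.normal(size=(6, 3))
B /= np.linalg.norm(B, axis=0)
w = np.array([1.0, 0.5, 0.0])
x = rng.normal(size=6)
B1 = basis_sgd_step(B, x, w, eta=0.01)
assert np.allclose(np.linalg.norm(B1, axis=0), 1.0)  # constraint holds after the step
```

The renormalization is the projection that prevents the degenerate solution of inflating $B$ while shrinking $w$; in the full algorithm $\eta$ would also be decayed as $\eta \propto 1/\sqrt{t}$.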
The key to computing the derivatives required for backpropagation is noting that the gradient with respect to w of the optimization (3) at its minimum ŵ can be written as a set of fixed point equations where the gradient of the loss term balances the gradient of the regularization:

∇D_P(ŵ∥p) = −(1/λ) ∇D_L(x∥f(Bŵ)).    (8)

Then if the regularization function is twice differentiable with respect to w, we can use implicit differentiation on (8) to compute the gradient of ŵ with respect to B and x [14]. For KL-regularization and the simple case of a linear transfer function with squared loss, ∂ŵ/∂B is given in (9), where e⃗_i is a unit vector whose ith element is 1. A general derivation for matched loss/transfer function pairs as defined before is provided in Appendix C. Note that the ability to compute ∂ŵ/∂x means that multiple layers of sparse coding could be used.

∂ŵ/∂B^k_i = −( B^T B + diag(λ/ŵ) )^{−1} ( (B_k ŵ_i)^T + e⃗_i (f(B_k ŵ) − x_k) )    (9)

6 Experiments

We verify the performance of KL-sparse coding on several benchmark tasks including the MNIST handwritten digit recognition data-set, handwritten lowercase English character classification, movie review sentiment regression, and music genre classification (Appendix E). In each application, the ŵ produced using KL-regularization were more useful for prediction than those produced with L1 regularization due to the stability and differentiability provided by KL.

6.1 Sparsity

KL-regularization retained the desirable pseudo-sparsity characteristics of L1, namely that each example x produces only a few large elements in ŵ.
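Reading (9) with B_k as the kth row of B, the derivative of ŵ with respect to a single basis entry can be sketched as follows for the linear-transfer, squared-loss case (an illustration under our own naming, not the paper's code):

```python
import numpy as np

def dw_dB(B, x, w_hat, lam, k, i):
    """Implicit-differentiation gradient of Eq. (9) for linear transfer f(z) = z:
    d w_hat / d B[k, i] = -(B^T B + diag(lam / w_hat))^{-1}
                          (w_hat[i] * B[k, :] + e_i * (B[k, :] @ w_hat - x[k]))."""
    m = B.shape[1]
    H = B.T @ B + np.diag(lam / w_hat)  # Hessian of the objective at w_hat
    e_i = np.zeros(m)
    e_i[i] = 1.0
    rhs = w_hat[i] * B[k, :] + e_i * (B[k, :] @ w_hat - x[k])
    return -np.linalg.solve(H, rhs)
```

Because the diag(λ/ŵ) term is positive for ŵ > 0, the linear system is well posed wherever the MAP estimate is strictly positive.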
Figure 1 compares the mean sorted and normalized coefficient distribution over the 10,000 digit MNIST test set for KL-divergence and several L_p^p regularization functions, and shows that although the KL regularization function is not sparse in the traditional sense of setting many elements of ŵ to zero, it is sparse in the sense that ŵ contains only a few large elements in each example, lending support to the idea that this sense of sparsity is more important for classification.

6.2 Stability

Because the gradient of the KL-divergence regularization function goes to ∞ with increasing w, it produces MAP estimates ŵ that change smoothly with x and B (see Appendix A for more details).

Figure 1: Left: Mean coefficient distribution over the 10,000 digit MNIST test set for various regularization functions. Each example ŵ was sorted by magnitude and normalized by ∥ŵ∥∞ before computing the mean over all examples. Right: test set classification performance. Regularization functions that produced few large values in each example (such as KL and L1) performed the best. Forcing small coefficients to be exactly 0 was not necessary for good performance. Note the log scale on the horizontal axis.

                 Gaussian Noise (Standard Deviation)   Random Translations (pixels)
Regularization   0.01            0.1                   0.1            1
L1               0.0283±0.0069   0.285±0.056           0.138±0.026    1.211±0.213
KL               0.0172±0.0016   0.164±0.015           0.070±0.011    0.671±0.080

Table 1: The 10,000 images of handwritten digits in the MNIST test set were used to show the stability benefits of KL-regularization. Shown is the distance (in L1) between the representation for x, ŵ, and the representation after adding noise, divided by ∥ŵ∥_1. KL-regularization provides representations that are significantly more stable with respect to both uncorrelated additive Gaussian noise (left) and correlated noise from translating the digit image in a random direction (right).

Table 1 quantifies how KL regularization significantly reduces the effect on ŵ of adding noise to the input x.
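The stability numbers in Table 1 are relative L1 distances between representations before and after perturbing the input; the metric itself is simple to state in code (a sketch; `encode` stands for whatever routine produces the MAP coefficients ŵ from x):

```python
import numpy as np

def representation_stability(encode, x, sigma, rng):
    """Relative L1 change of the representation under additive Gaussian
    noise of standard deviation sigma, as reported in Table 1:
    ||encode(x + noise) - encode(x)||_1 / ||encode(x)||_1."""
    w = encode(x)
    w_noisy = encode(x + sigma * rng.standard_normal(x.shape))
    return np.linalg.norm(w_noisy - w, 1) / np.linalg.norm(w, 1)
```

The translation experiment in the right half of Table 1 would replace the additive noise with a random shift of the digit image.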
This stability improves the usefulness of ŵ for prediction. Figure 2 shows the most-discriminative 2-D subspace (as calculated by Multiple Discriminant Analysis [15]) for the input space, the L1 and KL coefficient spaces, and the KL coefficient space after it has been specialized by back-propagation. The L1 coefficients tame the disorder of the input space so that clusters for each class are apparent, although noisy and overlapping. The switch to KL regularization makes these clusters more distinct, and applying back-propagation further separates the clusters.

Figure 2: Shown is the distribution of the eight most confusable digit classes in the input space and in the coefficient spaces produced by sparse approximation. Multiple Discriminant Analysis was used to compute the most discriminative 2-D projection of each space. The PCA-whitened input space (left) contains a lot of overlap between the classes. L1 regularization (center) discovers structure in the unlabeled data, but still produces more overlap between classes than KL sparse approximation (right) does with the same basis trained with L1 sparse coding. Figure best seen in color.

6.3 Improved Prediction Performance

On all applications, the stability provided by KL-regularization improved performance over L1, and back-propagation further improved performance when the training set had residual error after an output classifier was trained.

6.3.1 Handwritten Digit Classification

We tested our algorithm on the benchmark MNIST handwritten digits dataset [16]. 10,000 of the 60,000 training examples were reserved for validation, and classification performance was evaluated on the separate 10,000 example test set. Each example was first reduced to 180D from 784D by PCA, and then sparse coding was performed using a linear transfer function and squared loss (see footnote 5). The validation set was used to pick the regularization constant, λ, and the prior mean for KL, p.
Maxent classifiers (see footnote 6) [17] were then learned on randomly sampled subsets of the training set of various sizes. Switching from L1-regularized to KL-regularized sparse approximation improved performance in all cases (Table 2). When trained on all 50,000 training examples, the test set classification error of KL coefficients, 2.21%, was 37% lower than the 3.53% error rate obtained on the L1-regularized coefficients. As shown in Table 3, this increase in performance was consistent across a diverse set of classification algorithms. After running back-propagation with the KL-prior, the test set error was reduced to 1.30%, which improves on the best results reported (see footnote 7) for other shallow-architecture permutation-invariant classifiers operating on the same data set without prior knowledge about the problem (see footnote 8 and Table 4).

Training Set Size             1000    2000    10000   20000   50000
L1 (Test Set)                 7.72%   6.63%   4.74%   4.16%   3.53%
KL (Test Set)                 5.87%   5.06%   3.00%   2.51%   2.21%
KL After Backprop (Test Set)  5.66%   4.46%   2.31%   1.78%   1.30%
Improvement from Backprop     3.6%    11.9%   23.0%   29.1%   43.0%
KL (Training Set)             0.00%   0.05%   1.01%   1.50%   1.65%

Table 2: The ability to optimize the generative model with back-propagation leads to significant performance increases when the training set is not separable by the model learned on the unlabeled data. Shown is the misclassification rate on the MNIST digit classification task. Larger training sets with higher residual error benefit more from back-propagation.

Classifier     PCA     L1      KL      KL+backprop
Maxent         7.49%   3.53%   2.21%   1.30%
2-layer NN     2.23%   2.13%   1.40%   1.36%
SVM (Linear)   5.55%   3.95%   2.16%   1.34%
SVM (RBF)      1.54%   1.94%   1.28%   1.31%

Table 3: The stability afforded by the KL-prior improves the performance of all classifier types over the L1 prior. In addition, back-propagation allows linear classifiers to do as well as more complicated non-linear classifiers.
Algorithm        L1      KL      KL+backprop   SVM    2-layer NN [18]   3-layer NN
Test Set Error   3.53%   2.21%   1.30%         1.4%   1.6%              1.53%

Table 4: Test set error of various classifiers on the MNIST handwritten digits database.

(Footnote 5: This methodology was chosen to match [4]. Footnote 6: Also known as multi-class logistic regression. Footnote 7: An extensive comparison of classification algorithms for this dataset can be found on the MNIST website, http://yann.lecun.com/exdb/mnist/. Footnote 8: Better results have been reported when more prior knowledge about the digit recognition problem is provided to the classifier, either through specialized preprocessing, or by giving the classifier a model of how digits are likely to be distorted by expanding the data set with random affine and elastic distortions of the training examples or training with vicinal risk minimization. Convolutional Neural Networks produce the best results on this problem, but they are not invariant to permutations in the input since they contain a strong prior about how pixels are connected.)

6.3.2 Transfer to Handwritten Character Classification

In [4], a basis learned by L1-regularized sparse coding on handwritten digits was shown to improve classification performance when used for the related problem of handwritten character recognition with small training data sets (< 5000 examples). The handwritten English characters dataset (see footnote 9) they used consists of 16x8 pixel images of lowercase letters. In keeping with their work, we padded and scaled the images to match the 28x28 pixel size of the MNIST data, projected onto the same PCA basis that was used for the MNIST digits, and learned a basis from the MNIST digits by L1-regularized sparse coding. This basis was then used for sparse approximation of the English characters, along with a linear transfer function and squared loss. In this application as well, Table 5 shows that simply switching to a KL prior from L1 for sparse approximation significantly improves the performance of a maxent classifier.
Furthermore, the KL prior allows online improvement of the sparse coding basis as more labeled data for the character-recognition task becomes available. This improvement increases with the size of the training set, as more information becomes available about the target character recognition task.

Training Set Size   Raw    PCA    L1     KL     KL+backprop
100                 44.3   46.9   44.0   49.4   50.7
500                 60.4   61.2   63.7   69.2   69.9
1000                66.3   66.7   69.5   75.0   76.4
5000                75.1   76.0   78.9   82.5   84.2
20000               79.3   79.7   83.3   86.0   89.1

Table 5: Classification accuracy on the 26-way English character classification task.

6.3.3 Comparison to sLDA: Movie Review Sentiment Regression

KL-regularized sparse coding bears some similarities to the supervised LDA (sLDA) model introduced in [19], and we provide results for the movie review sentiment classification task [20] used in that work. To match [19] we use vectors of normalized counts for the 5000 words with the highest tf-idf score among the 5006 movie reviews in the data set, use 5-fold cross validation, compute predictions with linear regression on ŵ, and report our performance in terms of predictive R² (the fraction of variability in the out-of-fold response values which is captured by the out-of-fold predictions ŷ: pR² := 1 − Σ(y − ŷ)² / Σ(y − ȳ)²). Since the input is a probability distribution, we use a normalized exponential transfer function, f(B, w) = e^{Bw} / ∥e^{Bw}∥_1, to compute the reconstruction of the input. For sparse coding we use KL-divergence for both the loss and the regularization functions, as minimizing the KL-divergence between the empirical probability distribution of the document given by each input vector x and f(B, w) is equivalent to maximizing the “constrained Poisson distribution” used to model documents in [21] (details given in Appendix D). Table 6 shows that the sparse coding generative model we use is competitive with and perhaps slightly better than LDA.
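The predictive R² reported for this task is straightforward to compute from out-of-fold predictions; a minimal sketch:

```python
import numpy as np

def predictive_r2(y_true, y_pred):
    """pR^2 := 1 - sum((y - yhat)^2) / sum((y - ybar)^2), the fraction of
    variability in the held-out responses captured by the predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Predicting the fold mean gives pR² = 0, and perfect prediction gives 1.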
After back-propagation, its performance is superior to the supervised version of LDA, sLDA (see footnote 10).

predictive R²   Algorithm
0.263           LDA [19]
0.264           64D unsupervised KL sparse coding
0.281           256D unsupervised KL sparse coding
0.457           L1-regularized regression [19]
0.500           sLDA [19]
0.507           L2-regularized regression
0.534           256D KL-regularized coding with backprop

Table 6: Movie review sentiment prediction task. KL-regularized sparse coding compares favorably with LDA and sLDA.

(Footnote 9: Available at http://ai.stanford.edu/~btaskar/ocr/. Footnote 10: Given that the word counts used as input are very sparse to begin with, classifiers whose regret bounds depend on the L2 norm of the gradient of the input (such as L2-regularized least squares) do quite well, achieving a predictive R² value on this application of 0.507.)

7 Conclusion

This paper demonstrates on a diverse set of applications the advantages of using a differentiable, smooth prior for sparse coding. In particular, a KL-divergence regularization function has significant advantages over other sparse priors such as L1 because it retains the important aspects of sparsity, while adding stability and differentiability to the MAP estimate ŵ. Differentiability in particular is shown to lead to state-of-the-art performance by allowing the generative model learned from unlabeled data by sparse coding to be adapted to a supervised loss function.

Acknowledgments

David M. Bradley is supported by an NDSEG fellowship provided by the Army Research Office. The authors would also like to thank David Blei, Rajat Raina, and Honglak Lee for their help.

References

[1] J. A. Tropp, “Algorithms for simultaneous sparse approximation: part II: Convex relaxation,” Signal Process., vol. 86, no. 3, pp. 589–602, 2006.
[2] B. Olshausen and D. Field, “Sparse coding with an overcomplete basis set: A strategy employed by V1?” Vision Research, 1997.
[3] Y. Karklin and M. S.
Lewicki, “A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals,” Neural Computation, vol. 17, no. 2, pp. 397–423, 2005.
[4] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng, “Self-taught learning: Transfer learning from unlabeled data,” in ICML ’07: Proceedings of the 24th International Conference on Machine Learning, 2007.
[5] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, “Greedy layer-wise training of deep networks,” in Advances in Neural Information Processing Systems 19, B. Schölkopf, J. Platt, and T. Hoffman, Eds. Cambridge, MA: MIT Press, 2007, pp. 153–160.
[6] E. Rietsch, “The maximum entropy approach to inverse problems,” Journal of Geophysics, vol. 42, pp. 489–506, 1977.
[7] G. Besnerais, J. Bercher, and G. Demoment, “A new look at entropy for solving linear inverse problems,” IEEE Trans. on Information Theory, vol. 45, no. 5, pp. 1565–1578, July 1999.
[8] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman divergences,” Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[9] M. Brand, “Pattern discovery via entropy minimization,” in AISTATS 99, 1999.
[10] M. Shashanka, B. Raj, and P. Smaragdis, “Sparse overcomplete latent variable decomposition of counts data,” in NIPS, 2007.
[11] J. Kivinen and M. Warmuth, “Exponentiated gradient versus gradient descent for linear predictors,” Information and Computation, pp. 1–63, 1997.
[12] N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games. Cambridge University Press, 2006.
[13] R. Rifkin and R. Lippert, “Value regularization and Fenchel duality,” The Journal of Machine Learning Research, vol. 8, pp. 441–479, 2007.
[14] D. Widder, Advanced Calculus, 2nd ed. Dover Publications, 1989.
[15] R. Duda, P. Hart, and D. Stork, Pattern Classification. Wiley, New York, 2001.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P.
Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[17] K. Nigam, J. Lafferty, and A. McCallum, “Using maximum entropy for text classification,” 1999. [Online]. Available: citeseer.ist.psu.edu/article/nigam99using.html
[18] P. Y. Simard, D. Steinkraus, and J. C. Platt, “Best practices for convolutional neural networks applied to visual document analysis,” in ICDAR ’03: Proceedings of the Seventh International Conference on Document Analysis and Recognition. Washington, DC, USA: IEEE Computer Society, 2003, p. 958.
[19] D. M. Blei and J. D. McAuliffe, “Supervised topic models,” in NIPS 19, 2007.
[20] B. Pang and L. Lee, “Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales,” in Proceedings of the ACL, 2005, pp. 115–124.
[21] R. Salakhutdinov and G. Hinton, “Semantic hashing,” in SIGIR Workshop on Information Retrieval and Applications of Graphical Models, 2007.
Inferring rankings under constrained sensing Srikanth Jagabathula Devavrat Shah Laboratory of Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, MA 02139. {jskanth, devavrat}@mit.edu Abstract Motivated by applications like elections, web-page ranking, revenue maximization etc., we consider the question of inferring popular rankings using constrained data. More specifically, we consider the problem of inferring a probability distribution over the group of permutations using its first order marginals. We first prove that it is not possible to recover more than O(n) permutations over n elements with the given information. We then provide a simple and novel algorithm that can recover up to O(n) permutations under a natural stochastic model; in this sense, the algorithm is optimal. In certain applications, the interest is in recovering only the most popular (or mode) ranking. As a second result, we provide an algorithm based on the Fourier Transform over the symmetric group to recover the mode under a natural majority condition; the algorithm turns out to be a maximum weight matching on an appropriately defined weighted bipartite graph. The questions considered are also thematically related to Fourier Transforms over the symmetric group and the currently popular topic of compressed sensing. 1 Introduction We consider the question of determining a real-valued function on the space of permutations of n elements with very limited observations. Such a question naturally arises in many applications including efficient web-page rank aggregation, choosing the winner in a sport season, setting odds in gambling for revenue maximization, estimating popularity of candidates pre-election and the list goes on (for example, see references [1], [2], [3]). In what follows, we give a motivating example for the pursuit of this quest. A motivating example. Consider a pre-election scenario in a democratic country with n potential candidates. 
Each person (or voter) has a certain ranking of these candidates in mind (consciously or sub-consciously). For example, let n = 3 and the candidates be A, B and C. Each person believes in one of the 3! = 6 possible orderings of these candidates. For example, let 50% of people believe in A > B > C, 30% of people believe in B > A > C and 20% of people believe in C > A > B. We wish to infer these preferences of the population by means of a limited set of questions. Specifically, suppose we can interview a representative collection (i.e. a reasonably large random collection) of people for this purpose. However, in the interview we may not be able to ask them their complete ranking of all candidates. This may be because a person may not be able to articulate it clearly. Or, in situations (e.g. gambling) where there is a financial significance associated with information of complete ranking, an individual may not be ready to provide that information. In such a situation, we will have to settle for restricted questions of the following type: what will be the rank of candidate A in your opinion? Or, whom would you rank second? Given answers to such restricted questions, we would like to infer what fraction of the population prefers which ordering of candidates. Clearly, such restricted information cannot lead to any useful inference of prevalent orderings of candidates in the population if there are too many of them (for large n). Now, in a real world scenario, it is likely that people decide rankings of candidates based on a few issues such as war, abortion, economy and gay marriage. That is, an individual will decide the ranking of the candidates based on the opinions of candidates on these issues. Therefore, irrespective of the number of candidates, the number of distinct rankings that prevail in the population is likely to be very small.
In this paper, we are interested in inferring such few prevalent rankings of candidates and their popularity based on the restricted (or partial) information as explained above. Thematically, this question is similar to the pursuit of compressed sensing. However, as we explain in Section 2, standard compressed sensing does not apply under this setting. We also discuss a natural relation between the available information and the Fourier coefficients of the Fourier transformation based on group representation (see Proposition 1). It turns out that the problem we consider is equivalent to that of recovery of a function over a symmetric group using the first order Fourier coefficients. Thus, our problem is thematically related to the recovery of functions over non-commutative groups using a limited set of Fourier coefficients. As we show in Section 2, a naive recovery by setting the unknown Fourier coefficients to zero yields a very bad result. Hence, our approach has potential applications to yielding a better recovery. In many applications, one is specifically interested in finding out the most popular ranking (or mode) rather than all the prevalent rankings. For this, we consider an approximation based on the Fourier transformation as a surrogate to find the mode. We establish that under the natural majority condition, our algorithm finds the correct mode (see Theorem 2). Interestingly enough, our algorithm to find an estimate of the mode corresponds to finding a maximum weight matching in a weighted bipartite graph of n nodes.

Organization. We start by describing the setup, the problem statement, and the relation to compressed sensing and Fourier transform based approaches in Section 2. In Section 3, we provide precise statements of the main results. In the remaining sections, we prove these results and discuss the relevant algorithms.

2 Background and preliminaries

Setup. Let Sn = {σ1, . . . , σN} denote the set of all possible N = n! permutations (orderings) of n elements.
Sn is also known as the symmetric group of degree n. Let f : Sn → [0, 1] denote a mapping from the symmetric group to the interval [0, 1]. We assume that the function f is normalized, i.e., ∥f∥_{ℓ1} = 1, where ∥·∥_{ℓ1} denotes the ℓ1 norm. Let p_k denote the value f(σ_k), for 1 ≤ k ≤ N. Without loss of generality we assume that the permutations are labeled such that p_k ≤ p_m for k < m. We write f(·) to denote the function and f to denote the vector (f(σ_k))_{N×1}. The set of permutations for which f(·) is non-zero will be called the support of f(·); also, the cardinality of the support will be called the sparsity of f and is denoted by K, i.e., K = ∥f∥_{ℓ0}. Each permutation σ will be represented by its corresponding permutation matrix denoted by P^σ, i.e., P^σ_{ij} = 1_{σ(j)=i}, where 1_E is the indicator variable of the event E. For brevity, we write P^σ to mean both the n × n matrix and the n² × 1 vector. We use the terms permutation and permutation matrix interchangeably. We think of permutations as complete matchings in a bipartite graph. Specifically, we consider an n × n bipartite graph and each permutation corresponds to a complete matching in the graph. The edges in a permutation will refer to the edges in the corresponding bipartite matching. For 1 ≤ i, j ≤ n, let

q_{ij} := Σ_{σ∈Sn: σ(j)=i} f(σ)    (1)

Let Q denote both the matrix (q_{ij})_{n×n} and the vector (q_{ij})_{n²×1}. It is easy to note that Q can be equivalently written as Σ_{σ∈Sn} f(σ)P^σ. From the definition, it also follows that Q is a doubly stochastic matrix. The matrix Q corresponds to the first order information about the function f(·). In the election example, it is easy to see that q_{ij} denotes the fraction of voters that have ranked candidate j in the ith position.

Problem statement and result. The basic objective is to determine the values of the function f(·) precisely, using only the values of the matrix Q.
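For concreteness, the marginals (1) can be assembled directly from a sparse f. A sketch, assuming (our encoding) that each permutation σ is stored as a 0-indexed tuple of its values:

```python
import numpy as np

def first_order_marginals(f):
    """Builds Q with q_ij = sum over sigma with sigma(j) = i of f(sigma),
    as in Eq. (1). f maps permutation tuples to probabilities; q_ij is then
    the total mass ranking element j in position i."""
    n = len(next(iter(f)))
    Q = np.zeros((n, n))
    for sigma, prob in f.items():
        for j in range(n):
            Q[sigma[j], j] += prob  # element j placed in position sigma(j)
    return Q
```

On the election example from the introduction (50% A>B>C, 30% B>A>C, 20% C>A>B), the resulting Q is doubly stochastic and its first entry says 50% of voters rank candidate A first.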
We will first prove, using information theoretic techniques, that recovery is asymptotically reliable (average probability of error goes to zero as n → ∞) only if K = O(n). We then provide a novel algorithm that recovers prevalent rankings and their popularity exactly under minimal (essentially necessary) conditions; under a natural stochastic model, this algorithm recovers up to O(n) permutations. In this sense, our algorithm is optimal. It is often the case that the full knowledge of functional values at all permutations is not required. Specifically, in scenarios such as ranked elections, interest is in finding the most likely permutation, i.e., argmax_σ f(σ). Theorem 2 proves that the max-weight matching yields the most likely permutation under a natural majority assumption.

2.1 Relation to Fourier Transform

The question we consider is thematically related to harmonic analysis of functions over noncommutative groups. As we shall show soon, the matrix Q is related to the first two Fourier coefficients of the Fourier Transform of the distribution over the permutation group. Thus, the problem we are considering can be restated as that of reconstructing a distribution over the permutation group from its first two Fourier coefficients. Reconstructing distributions over the permutation group from a limited number of Fourier coefficients has several applications. Specifically, there has been some recent work on multi-object tracking (see [4] and [3]), which approaches the daunting task of maintaining a distribution over the permutation group by approximating it using the first few Fourier coefficients. This requires reconstructing the function from a limited number of Fourier coefficients, where our solution can be potentially applied. We will now discuss the Fourier Transform of a function on the permutation group, which provides another possible approach for recovery of f.
Interestingly enough, the first order Fourier transform of f can be constructed using information based on Q = (q_{ij}). As we shall find, this approach fails to recover sparse f as it has a tendency to “spread” the mass on all n! elements given Q. However, as established in Theorem 2, this leads to recovery of the mode, or most likely assignment, of f under a natural majority condition. Next, some details on what the Fourier transform based approach is (an interested reader is requested to check [5] for missing details), how Q can be used to obtain an approximation of f, and why it does not recover f exactly. The details relevant to recovery of the mode of f will be associated with Theorem 2.

Fourier Transform: Definition. We can obtain a solution to the set of linear equations in (8) using the Fourier Transforms at symmetric group representations. For a function h : G → R on a group G, its Fourier Transform at a representation ρ of G is defined as ĥ_ρ = Σ_σ h(σ)ρ(σ). The collection of Fourier Transforms of h(·) at a complete set of inequivalent irreducible representations of G completely determines the function. This follows from the following expression for the inverse Fourier Transform:

h(σ) = (1/|G|) Σ_k d_{ρ_k} Tr[ ĥ_{ρ_k}^T ρ_k(σ) ]    (2)

where |G| denotes the cardinality of G, d_{ρ_k} denotes the degree of representation ρ_k, and k indexes over the complete set of inequivalent irreducible representations of G. The trivial representation of a group is the 1-dimensional representation ρ_0(σ) = 1, ∀σ ∈ G. Therefore, the Fourier Transform of h(·) at ρ_0 is Σ_σ h(σ).

Fourier Transform: Approximation. The above naturally suggests an approximation based on a limited number of Fourier coefficients with respect to a certain subset of irreducible representations. We will show that, indeed, the information matrix Q corresponds to the Fourier coefficient with respect to the first-order representation of the symmetric group Sn. Therefore, it yields a natural approximation.
It is known [5] that the first order permutation representation of Sn, denoted by τ_1, has degree n and maps every permutation σ to its corresponding permutation matrix P^σ. In other words, we have τ_1(σ) = P^σ. Thus, f̂_{τ_1} = Σ_{σ∈Sn} f(σ)τ_1(σ) = Q. Reconstruction of f requires Fourier Transforms at irreducible representations. Even though τ_1 is not an irreducible representation, it is known [5] that every representation of a group is equivalent to a direct sum of irreducible representations. In particular, τ_1 can be decomposed as τ_1 = ρ_0 ⊕ ρ_1, where ρ_0 is the aforementioned trivial representation of degree 1 and ρ_1 is an irreducible representation of degree n − 1. It is worth pointing out to a familiar reader that what we call ρ_1 is more appropriately denoted by ρ_{(n−1,1)} in the literature; but we will stick to ρ_1 for brevity. Thus, Q is related to the Fourier Transforms of the irreducible representations ρ_0 and ρ_1. We now have the following proposition:

Proposition 1. Consider a function f : Sn → R. Suppose that ∥f∥_{ℓ1} = 1 and we are given its corresponding Q. Then, its natural Fourier approximation obtained by looking at the Fourier coefficients of the relevant irreducible representations is given by the function f̃ : Sn → R defined as:

f̃(σ) = (n − 1)⟨Q, P^σ⟩/N − (n − 2)/N    (3)

for σ ∈ Sn, with N = n!, ∥f̃∥_{ℓ1} = ∥f∥_{ℓ1} and Σ_{σ∈Sn} f̃(σ)P^σ = Q.

Proof. We have:

Q = Σ_{σ∈Sn} f(σ)τ_1(σ) = Σ_{σ∈Sn} f(σ)(ρ_0(σ) ⊕ ρ_1(σ)) = f̂_{ρ_0} ⊕ f̂_{ρ_1}.    (4)

Therefore,

⟨Q, P^σ⟩ = Tr[Q^T P^σ] = Tr[ (f̂_{ρ_0}^T ⊕ f̂_{ρ_1}^T)(ρ_0(σ) ⊕ ρ_1(σ)) ]    (5)

Since Tr is independent of the basis, choosing an appropriate basis we can write:

⟨Q, P^σ⟩ = Tr[ f̂_{ρ_0}^T ρ_0(σ) ] + Tr[ f̂_{ρ_1}^T ρ_1(σ) ] = 1 + Tr[ f̂_{ρ_1}^T ρ_1(σ) ]    (6)

(6) is true because ρ_0(σ) = 1, ∀σ ∈ Sn, and ∥f∥_{ℓ1} = 1. f̃ is obtained by truncating the Inverse Fourier Transform expression to the first two terms.
Thus, from (2), it follows that:

f̃(σ) = (1/N)[ Tr( f̂_{ρ_0}^T ρ_0(σ) ) + (n − 1) Tr( f̂_{ρ_1}^T ρ_1(σ) ) ]    (7)

Using the fact that ρ_0(σ) = 1 ∀σ ∈ Sn and f̂_{ρ_0} = 1, and plugging (6) into (7), gives the result of the proposition.

Summary. Thus, the Fourier Transform technique yields a solution to the problem. Unfortunately, the solution is not sparse and the “mass” is distributed over all the permutations, yielding values of O(1/N) for all permutations. In summary, a naive approach to the reconstruction of a sparse distribution gives unsatisfactory results and requires a different approach.

2.2 Relation to Compressed Sensing

Here we discuss the relation of the above stated question to the recently popular topic of compressed sensing. Indeed, both share a commonality in the sense that the ultimate goal is to recover a sparse function (or vector) based on few samples. However, as we shall show, the setup of our work here is quite different. This is primarily because in the standard compressed sensing setup, samples are chosen as “random projections”, while here samples are highly constrained and provide the information matrix Q. Next, we provide details of this. Our problem can be formulated as a solution to a set of linear equations by defining a matrix A as the n² × N matrix with column vectors P^{σ_k}, 1 ≤ k ≤ N. Then, f is a solution to the following set of linear equations:

Ax = Q    (8)

Candes and Tao (2005) [6] provide an approach to solve this problem. They require the vector f to be sparse, i.e., ∥f∥_{ℓ0} = ρN, for some ρ > 0. As discussed earlier, this is a reasonable assumption in our case because: (a) the total number of permutations N can be very large even for a reasonably sized n, and (b) most functions f(·) that arise in practice are determined by a small (when compared to N) number of parameters. Under a restriction on the isometry constants of the matrix A, Candes and Tao prove that the solution f is the unique minimizer to the LP: min ∥x∥_{ℓ1} s.t.
Ax = Q    (9)

Unfortunately, the approach of Candes and Tao cannot be directly applied to our problem because the isometry constants of the matrix A do not satisfy the required conditions. We now take a closer look at the isometry constants of A. Gaussian random matrices form an important class of matrices with good isometry constants. Unfortunately, neither is our matrix A random, nor is there a straightforward random formulation of our problem. To see why the matrix A has bad isometry constants, we take a simple example. For any n ≥ 4, consider the following 4 permutations: σ1 = id, σ2 = (12), σ3 = (34) and σ4 = (12)(34). Here, id refers to the identity permutation and the permutations are represented using the cycle notation. It is easy to see that:

P^{σ1} + P^{σ4} = P^{σ2} + P^{σ3}    (10)

For any integer 1 ≤ S ≤ N, the S-restricted isometry constant δ_S of A is defined as the smallest quantity such that A_T c obeys:

(1 − δ_S)∥c∥²_{ℓ2} ≤ ∥A_T c∥²_{ℓ2} ≤ (1 + δ_S)∥c∥²_{ℓ2}    (11)

∀T ⊆ {1, 2, . . . , N} of cardinality at most S and all real vectors c. Here, A_T c denotes Σ_{k∈T} c_k P^{σ_k}. From this definition and (10), it follows that δ_S = 1 ∀S ≥ 4. Theorem 1.4 of [6] requires δ_S < 1 for perfect reconstruction of f when ∥f∥_{ℓ0} ≤ S. Therefore, the compressed sensing approach of Candes and Tao does not guarantee the unique reconstruction of f if ∥f∥_{ℓ0} ≥ 4.

3 Main results

Exact recovery. The main result of this paper is about the exact recovery of f from the given constrained information matrix Q = (q_{ij}) under the hypothesis that f is sparse, i.e. has small ∥f∥_{ℓ0}. We provide an algorithm that recovers f exactly if the underlying support and probabilities have the following two properties:

Property 1 (P1). Suppose the function f(·) is K-sparse. Let p_1, p_2, . . . , p_K be the function values. The following is true:

Σ_{j∈J} p_j ≠ Σ_{j∈J′} p_j   ∀J, J′ ⊆ {1, 2, . . . , K} s.t. J ∩ J′ = ∅, J ≠ J′

Property 2 (P2). Let {σ1, σ2, . . . , σK} be the support of f(·). For each 1 ≤ i ≤ K, there exists an η_i, 1 ≤ η_i ≤ n, such that σ_i(η_i) ≠ σ_j(η_i) ∀j ≠ i.
In other words, each permutation has at least one edge that is not shared with any of the others. When properties P1 and P2 are satisfied, the equation Q = Af has a unique solution and f can indeed be recovered; we will provide an algorithm for such recovery. The following is the formal statement of this result; it is proved later.

Theorem 1. Consider a function f : S_n → [0, 1] such that ∥f∥_{ℓ_0} = L, ∥f∥_{ℓ_1} = 1, and the functional values and the support possess properties P1 and P2. Then the matrix Q is sufficient to reconstruct f(·) exactly.

Random model, sparsity and Theorem 1. Theorem 1 asserts that when properties P1 and P2 are satisfied, exact recovery is possible. However, it is not immediately clear why these properties are reasonable. We now provide some motivation and prove that the algorithm is optimal in terms of the maximum sparsity it can recover. Let us go back to the counter-example mentioned before: for any n ≥ 4, consider the four permutations σ_1 = id, σ_2 = (12), σ_3 = (34) and σ_4 = (12)(34), for which P^{σ_1} + P^{σ_4} = P^{σ_2} + P^{σ_3}. Now, consider four values p_1, p_2, p_3 and p_4. Without loss of generality, suppose that p_1 ≤ p_4 and p_2 ≤ p_3. Using the identity P^{σ_1} + P^{σ_4} = P^{σ_2} + P^{σ_3}, we can write:

Q = p_1 P^{σ_1} + p_2 P^{σ_2} + p_3 P^{σ_3} + p_4 P^{σ_4}
  = (p_1 + p_2) P^{σ_1} + (p_2 + p_4) P^{σ_4} + (p_3 − p_2) P^{σ_3}
  = (p_1 + p_2) P^{σ_2} + (p_1 + p_3) P^{σ_3} + (p_4 − p_1) P^{σ_4}.

Thus, under the above setup, there is no unique solution to Q = Af. In addition, from the last two equalities, we conclude that even the sparsest solution is not unique. Hence, there is no hope of recovering f given only Q in this setup. The question we now ask is whether the above counter-example is contrived and specially constructed, or more prevalent. For that, we consider a random model which puts a uniform measure on all the permutations. The hope is that under this model, situations like the counter-example occur with vanishing probability.
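Both the linear dependence (10) and the failure of property P2 for this support can be checked directly in a few lines. The sketch below is our own illustration (plain Python, 0-indexed permutations, helper names ours):

```python
def perm_matrix(sigma):
    """Permutation sigma (0-indexed tuple) as an n x n 0/1 matrix with P[i][sigma[i]] = 1."""
    n = len(sigma)
    return [[1 if sigma[i] == j else 0 for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def has_P2(perms):
    """Property P2: every permutation in the support differs from all the others
    in at least one position."""
    n = len(perms[0])
    return all(
        any(all(perms[j][e] != perms[i][e] for j in range(len(perms)) if j != i)
            for e in range(n))
        for i in range(len(perms)))

sigma1 = (0, 1, 2, 3)   # id
sigma2 = (1, 0, 2, 3)   # (12)
sigma3 = (0, 1, 3, 2)   # (34)
sigma4 = (1, 0, 3, 2)   # (12)(34)

lhs = mat_add(perm_matrix(sigma1), perm_matrix(sigma4))
rhs = mat_add(perm_matrix(sigma2), perm_matrix(sigma3))
print(lhs == rhs)                                 # True: identity (10) holds
print(has_P2([sigma1, sigma2, sigma3, sigma4]))   # False: the counter-example support violates P2
print(has_P2([sigma1, sigma4]))                   # True: id and (12)(34) differ in every position
```

So the counter-example is exactly a support for which P2 breaks down, which is why the random model below is used to argue that such configurations are rare.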
We now describe the random model and then state important results on the sparsity of f that can be recovered from Q.

Random model. Under the random model, a function f with sparsity K is constructed as follows: choose K permutations uniformly at random, assign them non-trivial real functional values chosen uniformly at random from a bounded interval, and normalize. We call an algorithm producing an estimate ˆf of f asymptotically reliable if Pr[f ≠ ˆf] = ε(n), where ε(n) → 0 as n → ∞. We now have the following two important results.

Lemma 1. Consider a function f : S_n → R with sparsity K. Given the matrix Q = Af, and no additional information, recovery can be asymptotically reliable only if K ≤ 4n.

First note that a trivial bound of (n − 1)² can be readily obtained as follows: since Q is doubly stochastic, it can be written as a convex combination of permutation matrices [7], which form a space of dimension (n − 1)². Lemma 1 says that this bound is loose. It can be proved using standard arguments from information theory by viewing A as a channel with input f and output Q.

Lemma 2. Consider a function f : S_n → R with sparsity K constructed according to the random model described above. Then the support and functional values of f possess properties P1 and P2 with probability 1 − o(1) as long as K ≤ 0.6n.

It follows from Lemma 2 and Theorem 1 that f can be recovered exactly from Q if the sparsity K = O(n). Coupled with Lemma 1, we conclude that our algorithm is optimal in the sense that it achieves the sparsity bound of O(n).

Recovery of the mode. As mentioned before, we are often interested in obtaining only limited information about f(·). One such scenario is when we would like to find just the most likely permutation. For this purpose, we use the Fourier approximation ˜f (cf. Proposition 1) in place of f: that is, the mode of f is estimated as the mode of ˜f.
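As a concrete sketch of this estimate (our own toy example, 0-indexed permutations): build Q from a distribution whose mode carries 0.6 of the mass, then maximize the linear score Σ_u Q[u][σ(u)] over S_n. For small n a brute-force search stands in for an efficient matching solver:

```python
import itertools

def info_matrix(dist, n):
    """First order marginals: Q[u][v] = sum of f(sigma) over sigma with sigma(u) = v."""
    Q = [[0.0] * n for _ in range(n)]
    for sigma, p in dist.items():
        for u in range(n):
            Q[u][sigma[u]] += p
    return Q

def mode_estimate(Q, n):
    """argmax over sigma of sum_u Q[u][sigma(u)]; brute force for illustration."""
    return max(itertools.permutations(range(n)),
               key=lambda sigma: sum(Q[u][sigma[u]] for u in range(n)))

n = 5
dist = {(0, 1, 2, 3, 4): 0.6,    # the mode: its mass exceeds 1/2
        (1, 0, 2, 3, 4): 0.25,
        (0, 1, 2, 4, 3): 0.15}
Q = info_matrix(dist, n)
print(mode_estimate(Q, n))       # (0, 1, 2, 3, 4), the true mode
```

Here the linear score rewards σ for agreeing with heavily weighted entries of Q; the mode dominates every entry it touches because its mass exceeds 1/2.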
The following result states the correctness of this approximation under a majority condition.

Theorem 2. Consider a function f : S_n → [0, 1] such that ∥f∥_{ℓ_0} = L and ∥f∥_{ℓ_1} = 1. Suppose the majority condition holds, that is, max_{σ∈S_n} f(σ) > 1/2. Then

arg max_{σ∈S_n} f(σ) = arg max_{σ∈S_n} ˜f(σ) = arg max_{σ∈S_n} ⟨P^σ, Q⟩.

The mode of ˜f, i.e., the maximizer of ⟨P^σ, Q⟩, is essentially the maximum weight matching in a weighted bipartite graph: consider the complete bipartite graph G = ((V_1, V_2), E) with V_1 = V_2 = {1, . . . , n} and E = V_1 × V_2, where edge (i, j) ∈ E has weight q_{ij}. Then the weight of a matching (equivalently, a permutation σ) is exactly ⟨P^σ, Q⟩. The problem of finding a maximum weight matching is classical. It can be solved in O(n³) time using the algorithm due to Edmonds and Karp [8], or by max-product belief propagation as shown by Bayati, Shah and Sharma [9]. Thus, this approximation can be evaluated efficiently.

4 Theorem 1: Proof and Algorithm

Here we present a constructive proof of Theorem 1. Specifically, we describe an algorithm that determines the function values from Q; its output is the original f as long as properties P1 and P2 are satisfied. Let p_1, p_2, . . . , p_L denote the non-zero functional values and σ_1, σ_2, . . . , σ_L the corresponding permutations, i.e., f(σ_k) = p_k. Without loss of generality, assume the permutations are labeled such that p_i ≤ p_j for i < j. Let q_1, q_2, . . . , q_M, where M = n², denote the values of the matrix Q arranged in ascending order, so that q_i ≤ q_j for i < j. Let e_i denote the edge (u, v) such that q_i = q_{e_i} = q_{uv}, where recall that

q_{uv} = Σ_{k: σ_k(u)=v} f(σ_k) = Σ_{k: σ_k(u)=v} p_k.

Let A_k denote the set of edges corresponding to permutation σ_k, 1 ≤ k ≤ L; that is, A_k = {(u, σ_k(u)) : 1 ≤ u ≤ n}. The algorithm stated below determines L and (A_k, p_k), 1 ≤ k ≤ L, using only the information in Q. The algorithm works when properties P1 and P2 are satisfied.

Algorithm:
initialization: p_0 = 0, k(0) = 0 and A_k = ∅ for 1 ≤ k ≤ M.
for i = 1 to M
    if q_i = Σ_{j∈J} p_j for some J ⊆ {0, 1, . . . , k(i−1)}
        k(i) = k(i−1)
        A_j = A_j ∪ {e_i} for all j ∈ J
    else
        k(i) = k(i−1) + 1
        p_{k(i)} = q_i
        A_{k(i)} = A_{k(i)} ∪ {e_i}
    end if
end for
Output L = k(M) and (p_k, A_k), 1 ≤ k ≤ L.

By property P2, for each 1 ≤ k ≤ L there exists at least one q_i equal to p_k. Property P1 ensures that whenever q_i = p_{k(i)}, the condition in the "if" statement of the pseudocode is not satisfied. Therefore, the algorithm correctly assigns values to each of the p_k's. Note that when the condition in the "if" statement is true, edge e_i is present in all the permutations σ_j with j ∈ J. Property P1 ensures that such a J, if it exists, is unique. Therefore, when the condition is satisfied, the only permutations that contain edge e_i are σ_j, j ∈ J. When the condition in the "if" statement fails, it again follows from properties P1 and P2 that edge e_i is contained only in permutation σ_{k(i)}. From this discussion, we conclude that at the end of the iterations each A_i contains complete information about its corresponding permutation. The algorithm thus completely determines the function f(·). Finally, note that the algorithm does not require knowledge of ∥f∥_{ℓ_0}.

5 Theorem 2: Proof and Algorithm

Here, our interest is in finding the mode of f. The algorithm we have proposed is to use the mode of ˜f as an estimate of the mode of f. We wish to establish that when max_{σ∈S_n} f(σ) > 1/2, then ˜σ* = σ*, where

˜σ* = arg max_{σ∈S_n} ˜f(σ);   σ* = arg max_{σ∈S_n} f(σ).

Since we have assumed that f(σ*) > 1/2 and ∥f∥_{ℓ_1} = 1, we must have Σ_{σ∈S} f(σ) < 1/2 for any S ⊂ S_n with σ* ∉ S. Therefore, there is exactly one entry in each column of the matrix Q that is greater than 1/2, and the corresponding edge must be part of σ*. Thus, keeping only the edges (i, j) such that Q_{i,j} > 1/2, we recover the matching σ*. It is clear from this construction that σ* indeed has the maximum weight among all matchings. The result follows.
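Returning to the recovery algorithm of Section 4, a direct transcription in Python is short. This is our own sketch (names ours; Fraction arithmetic keeps the subset-sum test in the "if" branch exact). The toy support has values 1/7, 2/7, 4/7, whose subset sums are all distinct (P1), and each permutation owns at least one private edge (P2):

```python
from fractions import Fraction
from itertools import combinations

def recover(Q):
    """Recover a sparse f on S_n from its first order matrix Q (Fraction entries),
    assuming properties P1 and P2 hold. Returns a list of (p_k, sigma_k), 0-indexed."""
    n = len(Q)
    entries = sorted((Q[u][v], u, v) for u in range(n) for v in range(n))
    ps, edge_sets = [], []                      # recovered p_k values and edge sets A_k
    for q, u, v in entries:
        if q == 0:                              # corresponds to p_0 = 0 in the pseudocode
            continue
        hit = None                              # J with q = sum of p_j, j in J (unique under P1)
        for r in range(1, len(ps) + 1):
            for J in combinations(range(len(ps)), r):
                if sum(ps[j] for j in J) == q:
                    hit = J
                    break
            if hit is not None:
                break
        if hit is not None:                     # shared edge: add it to every sigma_j, j in J
            for j in hit:
                edge_sets[j].add((u, v))
        else:                                   # a fresh value p_{k(i)} = q_i
            ps.append(q)
            edge_sets.append({(u, v)})
    perms = []
    for A in edge_sets:                         # assemble each permutation from its edges
        sigma = [None] * n
        for u, v in A:
            sigma[u] = v
        perms.append(tuple(sigma))
    return list(zip(ps, perms))

n = 4
support = {(0, 1, 2, 3): Fraction(1, 7),
           (1, 0, 2, 3): Fraction(2, 7),
           (1, 2, 3, 0): Fraction(4, 7)}
Q = [[Fraction(0)] * n for _ in range(n)]
for sigma, p in support.items():
    for u in range(n):
        Q[u][sigma[u]] += p
print(dict((sigma, p) for p, sigma in recover(Q)) == support)  # True: exact recovery
```

The subset-sum test makes this illustration exponential in L in the worst case; it is meant only to mirror the pseudocode, not to be an optimized implementation.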
6 Conclusion

In summary, we considered the problem of inferring popular rankings from highly constrained information. Since ranking data naturally arises in several diverse practical situations, an answer to this question has wide-ranging implications. Specifically, we considered the problem of inferring a sparse normalized function on the symmetric group using only first order information about the function. In the election example, this first order information corresponds to the fraction of people who have ranked candidate i in the jth position. We provide a novel algorithm to precisely recover the permutations and their associated popularity under minimal, and essentially necessary, conditions. We justify the necessity of our assumptions and consider a natural random model to quantify the sparsity that can be supported. We also provide an algorithm, based on a Fourier transform approximation, to determine the most popular ranking (the mode of the function). The algorithm is essentially a max-weight matching with the q_{ij} values as weights. Under a natural majority assumption, the algorithm finds the correct mode.

The question considered is thematically related to harmonic analysis of functions over the symmetric group and also to the currently popular topic of compressed sensing. The problem we consider can be restated as the reconstruction of a function from its first order Fourier representation, which has several applications, particularly in the multi-object tracking problem. On the other hand, the parallels to the standard compressed sensing setup are limited, because the available information is highly constrained; thus, existing compressed sensing approaches cannot be applied to the problem.

Next steps. We concentrated on the recovery of the distribution from its first order marginals. A possible next step would be to consider recovery under different forms of partial information.
More specifically, practical applications motivate considering the recovery of the distribution from pair-wise information: the probability of candidate i being ranked above candidate j. Another natural practical consideration would be to address the presence of noise in the available information. Understanding the recovery of distributions under these considerations is a natural next step.

References

[1] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation revisited. In Proceedings of WWW10, 2001.
[2] Y. Chen, L. Fortnow, E. Nikolova, and D. M. Pennock. Betting on permutations. In EC '07: Proceedings of the 8th ACM Conference on Electronic Commerce, pages 326–335, New York, NY, USA, 2007. ACM.
[3] J. Huang, C. Guestrin, and L. Guibas. Efficient inference for distributions on permutations. In Advances in Neural Information Processing Systems (NIPS), 2007.
[4] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, 2007.
[5] P. Diaconis. Group Representations in Probability and Statistics. IMS Lecture Notes–Monograph Series, 11, 1988.
[6] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203–4215, Dec. 2005.
[7] G. Birkhoff. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman Rev. Ser. A, 5:147–151, 1946.
[8] J. Edmonds and R. Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM, 19:248–264, 1972.
[9] M. Bayati, D. Shah, and M. Sharma. Max-product for maximum weight matching: convergence, correctness and LP duality. IEEE Transactions on Information Theory, March 2008.
2008
Sparse Convolved Gaussian Processes for Multi-output Regression

Mauricio Alvarez, School of Computer Science, University of Manchester, U.K. alvarezm@cs.man.ac.uk
Neil D. Lawrence, School of Computer Science, University of Manchester, U.K. neill@cs.man.ac.uk

Abstract

We present a sparse approximation approach for dependent output Gaussian processes (GPs). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance that is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and for real world applications in pollution prediction and a sensor network.

1 Introduction

We consider the problem of modeling correlated outputs from a single Gaussian process (GP). Applications of modeling multiple outputs include multi-task learning (see e.g. [1]) and jointly predicting the concentrations of different heavy metal pollutants [5]. Modelling multiple output variables is a challenge because we are required to compute cross covariances between the different outputs. In geostatistics this is known as cokriging. Whilst cross covariances allow us to improve predictions of one output given the others, because the correlations between outputs are modelled [6, 2, 15, 12], they also come with a computational and storage overhead. The main aim of this paper is to address these overheads in the context of convolution processes [6, 2].

One neat approach to accounting for non-trivial correlations between outputs employs convolution processes (CPs). When using CPs, each output can be expressed as the convolution between a smoothing kernel and a latent function [6, 2].
Let us assume that the latent function is drawn from a GP. If we also share the same latent function across several convolutions (each with a potentially different smoothing kernel) then, since convolution is a linear operator on a function, the outputs of the convolutions can be expressed as a jointly distributed GP. It is this GP that is used to model the multi-output regression. This approach was proposed by [6, 2], who focussed on a white noise process for the latent function.

Even though the CP framework is an elegant way of constructing dependent output processes, the fact that the full covariance function of the joint GP must be considered results in significant storage and computational demands. For Q output dimensions and N data points, the covariance matrix is of size QN × QN, leading to O(Q³N³) computational complexity and O(N²Q²) storage. Whilst other approaches to multiple output regression are typically more constraining in the types of cross covariance that can be expressed [1, 15], these constraints also lead to structured covariance functions for which inference and learning are typically more efficient (typically, for N > Q these methods have O(N³Q) computation and O(N²Q) storage). We are interested in exploiting the richer class of covariance structures allowed by the CP framework, but without the additional computational overhead they imply.

We propose a sparse approximation for the full covariance matrix involved in the multiple output convolution process, exploiting the fact that each of the outputs is conditionally independent of all others given the input process. This leads to an approximation of the covariance matrix which keeps intact the covariances of each output and approximates the cross-covariance terms with a low rank matrix. Inference and learning can then be undertaken with the same computational complexity as a set of independent GPs.
The approximation turns out to be strongly related to the partially independent training conditional (PITC) approximation [10] for a single output GP. This inspires us to consider a further conditional independence assumption across data points, leading to an approximation which shares the form of the fully independent training conditional (FITC) approximation [13, 10], reducing computational complexity to O(NQM²) and storage to O(NQM), where M is a user specified value.

To introduce our sparse approximation, some review of the CP framework is required (Section 2). In Section 3, we present sparse approximations for the multi-output GP. We discuss relations to other approaches in Section 4. Finally, in Section 5, we demonstrate the approach on both synthetic and real datasets.

2 Convolution Processes

Consider a set of Q functions {f_q(x)}_{q=1}^Q, where each function is expressed as the convolution between a smoothing kernel k_q(x) and a latent function u(z),

f_q(x) = ∫_{−∞}^{∞} k_q(x − z) u(z) dz.

More generally, we can consider the influence of more than one latent function, {u_r(z)}_{r=1}^R, and corrupt each of the outputs of the convolutions with an independent process (which could also include a noise term), w_q(x), to obtain

y_q(x) = f_q(x) + w_q(x) = Σ_{r=1}^R ∫_{−∞}^{∞} k_{qr}(x − z) u_r(z) dz + w_q(x).   (1)

The covariance between two different functions y_q(x) and y_s(x′) is then recovered as

cov[y_q(x), y_s(x′)] = cov[f_q(x), f_s(x′)] + cov[w_q(x), w_s(x′)] δ_{qs},

where

cov[f_q(x), f_s(x′)] = Σ_{r=1}^R Σ_{p=1}^R ∫_{−∞}^{∞} k_{qr}(x − z) ∫_{−∞}^{∞} k_{sp}(x′ − z′) cov[u_r(z), u_p(z′)] dz′ dz.   (2)

This equation is a general result; in [6, 2] the latent functions u_r(z) are assumed to be independent white Gaussian noise processes, i.e. cov[u_r(z), u_p(z′)] = σ²_{u_r} δ_{rp} δ_{z,z′}, so expression (2) simplifies to

cov[f_q(x), f_s(x′)] = Σ_{r=1}^R σ²_{u_r} ∫_{−∞}^{∞} k_{qr}(x − z) k_{sr}(x′ − z) dz.

We relax this constraint on the latent processes: we assume that each inducing function is an independent GP, i.e.
cov[u_r(z), u_p(z′)] = k_{u_r u_p}(z, z′) δ_{rp}, where k_{u_r u_r}(z, z′) is the covariance function of u_r(z). With this simplification, (2) can be written as

cov[f_q(x), f_s(x′)] = Σ_{r=1}^R ∫_{−∞}^{∞} k_{qr}(x − z) ∫_{−∞}^{∞} k_{sr}(x′ − z′) k_{u_r u_r}(z, z′) dz′ dz.   (3)

As well as this correlation across outputs, the correlation between the latent function u_r(z) and any given output f_q(x) can be computed:

cov[f_q(x), u_r(z)] = ∫_{−∞}^{∞} k_{qr}(x − z′) k_{u_r u_r}(z′, z) dz′.   (4)

3 Sparse Approximation

Given the convolution formalism, we can construct a full GP over the set of outputs. The likelihood of the model is given by

p(y|X, φ) = N(0, K_{f,f} + Σ),   (5)

where y = [y_1^⊤, . . . , y_Q^⊤]^⊤ is the set of output functions with y_q = [y_q(x_1), . . . , y_q(x_N)]^⊤; K_{f,f} ∈ ℝ^{QN×QN} is the covariance matrix relating all data points at all outputs, with elements cov[f_q(x), f_s(x′)] in (3); Σ = Σ ⊗ I_N, where Σ is a diagonal matrix with elements {σ²_q}_{q=1}^Q; φ is the set of parameters of the covariance matrix; and X = {x_1, . . . , x_N} is the set of training input vectors at which the covariance is evaluated. The predictive distribution for a new set of input vectors X* is [11]

p(y*|y, X, X*, φ) = N(K_{f*,f}(K_{f,f} + Σ)^{−1} y, K_{f*,f*} − K_{f*,f}(K_{f,f} + Σ)^{−1} K_{f,f*} + Σ),

where we have used K_{f*,f*} as compact notation to indicate that the covariance matrix is evaluated at the inputs X*, with a similar notation for K_{f*,f}. Learning from the log-likelihood involves computing the inverse of K_{f,f} + Σ, which has complexity O((NQ)³). Once the parameters have been learned, prediction is O(NQ) for the predictive mean and O((NQ)²) for the predictive variance.

Our strategy for approximate inference is to exploit the natural conditional dependencies in the model. If we had observed the entire length of each latent function u_r(z), then from (1) we see that each y_q(x) would be independent, i.e. we can write

p({y_q(x)}_{q=1}^Q | {u_r(z)}_{r=1}^R, θ) = Π_{q=1}^Q p(y_q(x) | {u_r(z)}_{r=1}^R, θ),

where θ are the parameters of the kernels and covariance functions.
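The double integral in (3) can be sanity-checked numerically for one latent function (R = 1) by a Riemann sum on a truncated grid. The sketch below is our own illustration with one-dimensional squared exponential profiles (the kernel choices and all names are assumptions for the example, not the paper's closed-form computation); it verifies the symmetry cov[f_q(x), f_s(x′)] = cov[f_s(x′), f_q(x)]:

```python
import math

def sq_exp(tau, ell):
    """One-dimensional squared exponential profile exp(-0.5 tau^2 / ell^2)."""
    return math.exp(-0.5 * tau * tau / (ell * ell))

def cross_cov(x, xp, ell_q, ell_s, ell_u, zs, dz):
    """Riemann-sum approximation of eq. (3) with R = 1:
    cov[f_q(x), f_s(x')] ~ sum_z sum_z' k_q(x - z) k_s(x' - z') k_u(z - z') dz^2."""
    total = 0.0
    for z in zs:
        kq = sq_exp(x - z, ell_q)
        for zp in zs:
            total += kq * sq_exp(xp - zp, ell_s) * sq_exp(z - zp, ell_u)
    return total * dz * dz

dz = 0.25
zs = [-6.0 + dz * i for i in range(49)]        # grid covering [-6, 6]
c_qs = cross_cov(0.3, -0.2, 1.0, 0.7, 0.5, zs, dz)   # cov[f_q(0.3), f_s(-0.2)]
c_sq = cross_cov(-0.2, 0.3, 0.7, 1.0, 0.5, zs, dz)   # cov[f_s(-0.2), f_q(0.3)]
print(c_qs > 0.0, abs(c_qs - c_sq) < 1e-9)     # positive and symmetric, as a covariance must be
```

For the squared exponential combinations used in the paper the integrals are available in closed form (see the supplementary material); the numerical version is only a cheap way to check such derivations.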
Our key assumption is that this independence will hold even if we have only observed M samples from each u_r(z) rather than the whole function. The observed values of these M samples are then marginalized (as they are in the exact case) to obtain the approximation to the likelihood. Our intuition is that the approximation should be more accurate for larger M and smoother latent functions, since in this regime the latent function can be well characterized from only a few samples.

We define u = [u_1^⊤, . . . , u_R^⊤]^⊤ as the samples from the latent functions, with u_r = [u_r(z_1), . . . , u_r(z_M)]^⊤; K_{u,u} is the covariance matrix between the samples from the latent functions u_r(z), with elements given by k_{u_r u_r}(z, z′); K_{f,u} = K_{u,f}^⊤ are the cross-covariance matrices between the latent functions u_r(z) and the outputs f_q(x), with elements cov[f_q(x), u_r(z)] in (4); and Z = {z_1, . . . , z_M} is the set of input vectors at which the covariance K_{u,u} is evaluated.

We now make the conditional independence assumption given the samples from the latent functions,

p(y|u, Z, X, θ) = Π_{q=1}^Q p(y_q|u, Z, X, θ) = Π_{q=1}^Q N(K_{f_q,u} K_{u,u}^{−1} u, K_{f_q,f_q} − K_{f_q,u} K_{u,u}^{−1} K_{u,f_q} + σ²_q I).

We rewrite this product as a single Gaussian with a block diagonal covariance matrix,

p(y|u, Z, X, θ) = N(K_{f,u} K_{u,u}^{−1} u, D + Σ),   (6)

where D = blockdiag[K_{f,f} − K_{f,u} K_{u,u}^{−1} K_{u,f}], and we have used the notation blockdiag[G] to indicate that the block associated with each output of the matrix G is retained while all other elements are set to zero. We can also write this as D = (K_{f,f} − K_{f,u} K_{u,u}^{−1} K_{u,f}) ⊙ M, where ⊙ is the Hadamard product and M = I_Q ⊗ 1_N, with 1_N the N × N matrix of ones and ⊗ the Kronecker product.

We now marginalize the values of the samples from the latent functions using their process priors, i.e. p(u|Z) = N(0, K_{u,u}). This leads to the following marginal likelihood,

p(y|Z, X, θ) = ∫ p(y|u, Z, X, θ) p(u|Z) du = N(0, D + K_{f,u} K_{u,u}^{−1} K_{u,f} + Σ).
(7)

Notice that, compared to (5), the full covariance matrix K_{f,f} has been replaced by the low rank covariance K_{f,u} K_{u,u}^{−1} K_{u,f} in all entries except the diagonal blocks corresponding to K_{f_q,f_q}. When using the marginal likelihood for learning, the computational load is associated with the calculation of the inverse of D. The complexity of this inversion is O(N³Q) + O(NQM²), and storage of the matrix is O(N²Q) + O(NQM). Note that if we set M = N these reduce to O(N³Q) and O(N²Q) respectively, which matches the computational complexity of applying Q independent GPs to model the multiple outputs.

Combining eq. (6) with p(u|Z) using Bayes' theorem, the posterior distribution over u is obtained as

p(u|y, X, Z, θ) = N(K_{u,u} A^{−1} K_{u,f}(D + Σ)^{−1} y, K_{u,u} A^{−1} K_{u,u}),   (8)

where A = K_{u,u} + K_{u,f}(D + Σ)^{−1} K_{f,u}. The predictive distribution is obtained by integrating (6), evaluated at X*, against (8), giving

p(y*|y, X, X*, Z, θ) = ∫ p(y*|u, Z, X*, θ) p(u|y, X, Z, θ) du = N(K_{f*,u} A^{−1} K_{u,f}(D + Σ)^{−1} y, D* + K_{f*,u} A^{−1} K_{u,f*} + Σ),   (9)

with D* = blockdiag[K_{f*,f*} − K_{f*,u} K_{u,u}^{−1} K_{u,f*}].

The functional form of (7) is almost identical to that of the PITC approximation [10], with the samples we retain from the latent function playing the same role as the inducing values in the partially independent training conditional (PITC) approximation. This is perhaps not surprising given that the nature of the conditional independence assumption in PITC is similar to the one we have made. A key difference is that in PITC it is not obvious which variables should be grouped together when making the conditional independence assumption; here it is clear from the structure of the model that each of the outputs should be grouped separately. However, the similarities are such that we find it convenient to follow the terminology of [10] and also refer to our approximation as a PITC approximation.
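The structure of the approximate covariance D + K_{f,u} K_{u,u}^{−1} K_{u,f} is easy to visualize on a tiny synthetic example: it agrees with the full K_{f,f} on each output's own N × N block and replaces every cross-output block with the low rank term. The sketch below (pure Python; the toy construction of K_{f,f} as low rank plus a PSD remainder is our own assumption, not the model's actual kernel) checks exactly that:

```python
import random

def matmul(A, B):
    Bt = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(K):
    # closed-form inverse of a 2 x 2 matrix (enough for M = 2 inducing points)
    (a, b), (c, d) = K
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

random.seed(0)
Q_out, N, M = 2, 3, 2                      # 2 outputs, 3 points each, 2 inducing points
QN = Q_out * N

Kuu = [[2.0, 0.5], [0.5, 1.5]]
W = [[random.gauss(0, 1) for _ in range(M)] for _ in range(QN)]
Kfu = matmul(W, Kuu)                        # toy cross-covariance between f and u
low_rank = matmul(Kfu, matmul(inv2(Kuu), transpose(Kfu)))   # Kfu Kuu^{-1} Kuf

B = [[random.gauss(0, 1) for _ in range(QN)] for _ in range(QN)]
BBt = matmul(B, transpose(B))               # PSD remainder, so Kff - low_rank is PSD
Kff = [[low_rank[i][j] + BBt[i][j] for j in range(QN)] for i in range(QN)]

# D = blockdiag[Kff - Kfu Kuu^{-1} Kuf]: keep only each output's own N x N block
D = [[Kff[i][j] - low_rank[i][j] if i // N == j // N else 0.0
      for j in range(QN)] for i in range(QN)]
pitc = [[D[i][j] + low_rank[i][j] for j in range(QN)] for i in range(QN)]

exact_on_blocks = all(abs(pitc[i][j] - Kff[i][j]) < 1e-9
                      for i in range(QN) for j in range(QN) if i // N == j // N)
low_rank_across = all(pitc[i][j] == low_rank[i][j]
                      for i in range(QN) for j in range(QN) if i // N != j // N)
print(exact_on_blocks, low_rank_across)
```

Inverting the resulting block-diagonal-plus-low-rank matrix via the matrix inversion lemma is what yields the stated O(N³Q) + O(NQM²) cost.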
We have already noted that our sparse approximation reduces the computational complexity of multi-output regression with GPs to that of applying independent GPs to each output. For larger data sets, the N³ term in the computational complexity and the N² term in the storage are still likely to be prohibitive. However, inspired by the analogy between our approach and the PITC approximation, we can consider a more radical factorization of the outputs. In the fully independent training conditional (FITC) approximation [13, 14], a factorization across the data points is assumed. For us this leads to the following expression for the conditional distribution of the output functions given the inducing variables,

p(y|u, Z, X, θ) = Π_{q=1}^Q Π_{n=1}^N p(y_{qn}|u, Z, X, θ),

which can be compactly expressed through (6) with

D = diag[K_{f,f} − K_{f,u} K_{u,u}^{−1} K_{u,f}] = (K_{f,f} − K_{f,u} K_{u,u}^{−1} K_{u,f}) ⊙ M, where M = I_Q ⊗ I_N.

Similar expressions are obtained for the posterior (8), predictive (9) and marginal likelihood (7) distributions, leading to the fully independent training conditional (FITC) approximation [13, 10]. Note that the marginal likelihood may be optimized both with respect to the parameters associated with the covariance matrices and with respect to Z. In the supplementary material we include the derivatives of the marginal likelihood with respect to the matrices K_{f,f}, K_{u,f} and K_{u,u}.

4 Related work

There have been several suggestions for constructing multiple output GPs [2, 15, 1]. Under the convolution process framework, the semiparametric latent factor model (SLFM) proposed in [15] corresponds to a specific choice for the smoothing kernel function in (1), namely k_{qr}(x) = φ_{qr} δ(x). The latent functions are assumed to be independent GPs, in which case cov[f_q(x), f_s(x′)] = Σ_r φ_{qr} φ_{sr} k_{u_r u_r}(x, x′). This can be written in matrix notation as K_{f,f} = (Φ ⊗ I) K_{u,u} (Φ^⊤ ⊗ I). For computational speed up, the informative vector machine (IVM) is employed [8].
In the multi-task learning model (MTLM) proposed in [1], the covariance matrix is expressed as K_{f,f} = K_f ⊗ k(x, x′), with K_f constrained to be positive semi-definite and k(x, x′) a covariance function over inputs. The Nyström approximation is applied to k(x, x′). As stated in [1] with respect to SLFM, the convolution process is related to MTLM when the smoothing kernel function is again k_{qr}(x) = φ_{qr} δ(x) and there is only one latent function with covariance k_{uu}(x, x′) = k(x, x′). In this case, cov[f_q(x), f_s(x′)] = φ_q φ_s k(x, x′), and in matrix notation K_{f,f} = ΦΦ^⊤ ⊗ k(x, x′). In [2], the latent processes correspond to white Gaussian noise and the covariance matrix is given by eq. (3); the computational load is not discussed in that work. Finally, [12] uses a covariance function similar to the MTLM approach but applies an IVM style approach to sparsification. Note that in each of the approaches detailed above a δ function is introduced into the integral; in the dependent GP model of [2] it is introduced in the covariance function. Our approach considers the more general case in which neither the kernel nor the covariance function is given by a δ function.

5 Results

For all our experiments we considered squared exponential covariance functions for the latent processes, of the form

k_{u_r u_r}(x, x′) = exp[−½ (x − x′)^⊤ L_r (x − x′)],

where L_r is a diagonal matrix which allows for different length-scales along each dimension. The smoothing kernel had the same form,

k_{qr}(τ) = (S_{qr} |L_{qr}|^{1/2} / (2π)^{p/2}) exp[−½ τ^⊤ L_{qr} τ],

where S_{qr} ∈ ℝ and L_{qr} is a symmetric positive definite matrix. For this kernel/covariance function combination the necessary integrals are tractable (see supplementary material).

We first set up a toy problem in which we evaluate the quality of the prediction and the speed of the approximation. The toy problem consists of Q = 4 outputs, one latent function, R = 1, and N = 200 observation points for each output.
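The smoothing kernel above is straightforward to evaluate. A small sketch (our own helper; the diagonal L_{qr} is passed as a list of its diagonal entries, an assumption for the p-dimensional diagonal case), with a check against the standard normal density at zero (S = 1, L = [1], p = 1 gives 1/√(2π)):

```python
import math

def smoothing_kernel(tau, S, L_diag):
    """k_{qr}(tau) = S |L|^{1/2} / (2 pi)^{p/2} * exp(-0.5 tau^T L tau),
    for a diagonal positive definite L given by its diagonal entries."""
    p = len(tau)
    det = 1.0
    for l in L_diag:
        det *= l
    quad = sum(l * t * t for l, t in zip(L_diag, tau))
    return S * math.sqrt(det) / (2.0 * math.pi) ** (p / 2.0) * math.exp(-0.5 * quad)

val = smoothing_kernel([0.0], 1.0, [1.0])
print(abs(val - 1.0 / math.sqrt(2.0 * math.pi)) < 1e-12)   # Gaussian normalizing constant recovered
```

With this parameterization, larger diagonal entries of L_{qr} give narrower kernels, so each output can smooth the latent function over a different length-scale.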
The training data were sampled from the full GP with the following parameters: S_{11} = S_{21} = 1, S_{31} = S_{41} = 5, L_{11} = L_{21} = 50, L_{31} = 300, L_{41} = 200 for the outputs, and L_1 = 100 for the latent function. For the independent processes w_q(x) we simply added white noise with variances σ²_1 = σ²_2 = 0.0125, σ²_3 = 1.2 and σ²_4 = 1. For the sparse approximations we used M = 30 fixed inducing points equally spaced over the input range and R = 1. We sought the kernel parameters by maximizing the marginal likelihood using a scaled conjugate gradient algorithm. For test data we removed a portion of one output as shown in Figure 1 (points in the interval [−0.8, 0] were removed). The predictions shown correspond to the full GP (Figure 1(a)), an independent GP (Figure 1(b)), the FITC approximation (Figure 1(c)) and the PITC approximation (Figure 1(d)).

Due to the strong dependencies between the signals, our model is able to capture the correlations and accurately predicts the missing information. Table 1 shows prediction results over an independent test set. We used 300 points to compute the standardized mean square error (SMSE) [11] and ten repetitions of the experiment, so that we also include one standard deviation over the ten repetitions. The training times per iteration of each model are 1.45 ± 0.23 secs for the full GP, 0.29 ± 0.02 secs for FITC and 0.48 ± 0.01 secs for PITC. Table 1 shows that the SMSE of the sparse approximations is similar to that obtained with the full GP, with a considerable reduction in training time.

Method    Output 1      Output 2      Output 3      Output 4
Full GP   1.07 ± 0.08   0.99 ± 0.03   1.12 ± 0.07   1.05 ± 0.07
FITC      1.08 ± 0.09   1.00 ± 0.03   1.13 ± 0.07   1.04 ± 0.07
PITC      1.07 ± 0.08   0.99 ± 0.03   1.12 ± 0.07   1.05 ± 0.07

Table 1: Standardized mean square error (SMSE) for the toy problem over an independent test set. All numbers are to be multiplied by 10⁻². The experiment was repeated ten times.
The table includes the value of one standard deviation over the ten repetitions.

We now follow a similar analysis for a dataset consisting of weather data collected from a sensor network located on the south coast of England. The network includes four sensors (named Bramblemet, Sotonmet, Cambermet and Chimet), each of which measures several environmental variables [12]. We selected one of the sensor signals, tide height, and applied the PITC approximation scheme with an additional squared exponential independent kernel for each w_q(x) [11]. Here Q = 4 and we chose N = 1000 of the 4320 available points for the training set, leaving the remaining points for testing. For comparison we also trained a set of independent GP models. We followed [12] in simulating sensor failure by introducing missing ranges for these signals. In particular, we have a missing range of [0.6, 1.2] for the Bramblemet tide height sensor and [1.5, 2.1] for the Cambermet. For the other two sensors we used all 1000 training observations. For the sparse approximation we took M = 100 equally spaced inducing inputs.

Figure 1: Predictive mean and variance using the full multi-output GP, the sparse approximations and an independent GP for output 4. Panels: (a) output 4 using the full GP; (b) output 4 using an independent GP; (c) output 4 using the FITC approximation; (d) output 4 using the PITC approximation. The solid line corresponds to the predictive mean, the shaded region to 2 standard deviations away from the mean, and the dashed line to the actual value of the signal without noise. The dots are the noisy training points. There is a range of missing data in the interval [−0.8, 0.0]. The crosses in Figures 1(c) and 1(d) correspond to the locations of the inducing inputs.
We see from Figure 2 that the PITC approximation captures the dependencies and closely predicts the behavior of the signal in the missing range. This contrasts with the behavior of the independent model, which is not able to follow the original signal.

As another example we employ the Jura dataset, which consists of measurements of concentrations of several heavy metals collected in the topsoil of a 14.5 km² region of the Swiss Jura. The data are divided into a prediction set (259 locations) and a validation set (100 locations)¹. In a typical situation, referred to as the undersampled or heterotopic case, a few expensive measurements of the attribute of interest are supplemented by more abundant data on correlated attributes that are cheaper to sample. We follow the experiments described in [5, p. 248,249], in which a primary variable (cadmium or copper) at prediction locations, in conjunction with some secondary variables (nickel and zinc for cadmium; lead, nickel and zinc for copper) at prediction and validation locations, is employed to predict the concentration of the primary variable at validation locations. We compare results of the independent GP, the PITC approximation, the full GP and ordinary co-kriging. For the PITC experiments, a k-means procedure is employed first to find the initial locations of the inducing values; these locations are then optimized in the same optimization procedure used for the parameters. Each experiment is repeated ten times. The results for ordinary co-kriging were obtained from [5, p. 248,249]; in this case, no values for the standard deviation are reported. Figure 3 shows prediction results for cadmium (Cd) and copper (Cu). From Figure 3(a), it can be noticed that using 50 inducing values the approximation exhibits performance similar to the co-kriging method. As more
As more inducing values are included, the approximation follows the performance of the full GP, as would be expected. From figure 3(b), it can be observed that, although the approximation is better than the independent GP, it does not obtain results similar to those of the full GP. Summary statistics of the prediction data ([5, p. 15]) show higher variability for the copper dataset than for the cadmium dataset, which explains to some extent the different behaviors.

1 This data is available at http://www.ai-geostats.org/

Figure 2: Predictive mean and variance using independent GPs and the PITC approximation for the tide height signal in the sensor dataset: (a) Bramblemet using an independent GP, (b) Bramblemet using PITC, (c) Cambermet using an independent GP, (d) Cambermet using PITC. The dots indicate the training observations while the dashes indicate the testing observations. We have emphasized the size of the training points to differentiate them from the testing points. The solid line corresponds to the predictive mean. The crosses in figures 2(b) and 2(d) correspond to the locations of the inducing inputs.

Figure 3: Mean absolute error and standard deviation for ten repetitions of the experiment for the Jura dataset: (a) cadmium (Cd), (b) copper (Cu). At the bottom of each figure, IGP stands for independent GP, P(M) stands for PITC with M inducing values, FGP stands for full GP and CK stands for ordinary co-kriging (see [5] for a detailed description).
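For reference, the construction that PITC approximates in these experiments expresses each output as a convolution of shared latent processes. A generic statement of this model, with notation assumed here following the convolution-process literature [6, 2] rather than copied from this paper, is:

```latex
f_q(\mathbf{x}) \;=\; \sum_{r=1}^{R} \int k_{qr}(\mathbf{x}-\mathbf{z})\, u_r(\mathbf{z})\, \mathrm{d}\mathbf{z},
\qquad q = 1,\dots,Q,
```

where the $u_r$ are latent Gaussian processes and the $k_{qr}$ are smoothing kernels. The outputs become correlated because they share the same latent processes, and the sparse approximations condition on the latent processes at a set of inducing inputs.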
6 Conclusions
We have presented a sparse approximation for multiple-output GPs, capturing the correlated information among outputs and reducing the computational load for prediction and optimization purposes. The reduction in computational complexity for the PITC approximation is from O(N³Q³) to O(N³Q). This matches the computational complexity of modeling with independent GPs. However, as we have seen, the predictive power of independent GPs is lower. The responses of linear dynamical systems can be expressed as a convolution of the impulse response of the system with an input function. This convolution approach is an equivalent way of representing the behavior of the system through a linear differential equation. For systems involving large numbers of coupled differential equations [4], the approach presented here is a reasonable way of obtaining approximate solutions and incorporating prior domain knowledge into the model. One could also optimize with respect to the positions of the values of the latent functions. As the input dimension grows, it might be more difficult to obtain an acceptable response. Some solutions to this problem have already been proposed [14].

Acknowledgments
We thank the authors of [12], who kindly made the sensor network database available.

References
[1] E. V. Bonilla, K. M. Chai, and C. K. I. Williams. Multi-task Gaussian process prediction. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS, volume 20, Cambridge, MA, 2008. MIT Press. In press.
[2] P. Boyle and M. Frean. Dependent Gaussian processes. In L. Saul, Y. Weiss, and L. Bottou, editors, NIPS, volume 17, pages 217–224, Cambridge, MA, 2005. MIT Press.
[3] M. Brookes. The matrix reference manual. Available online, 2005. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/intro.html.
[4] P. Gao, A. Honkela, M. Rattray, and N. D. Lawrence. Gaussian process modelling of latent chemical species: Applications to inferring transcription factor activities.
Bioinformatics, 24(16):i70–i75, 2008.
[5] P. Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, 1997. ISBN 0-19-511538-4.
[6] D. M. Higdon. Space and space-time modelling using process convolutions. In C. Anderson, V. Barnett, P. Chatwin, and A. El-Shaarawi, editors, Quantitative Methods for Current Environmental Issues, pages 37–56. Springer-Verlag, 2002.
[7] N. D. Lawrence. Learning for larger datasets with the Gaussian process latent variable model. In Meila and Shen [9].
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS, volume 15, pages 625–632, Cambridge, MA, 2003. MIT Press.
[9] M. Meila and X. Shen, editors. AISTATS, San Juan, Puerto Rico, 21-24 March 2007. Omnipress.
[10] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006. ISBN 0-262-18253-X.
[12] A. Rogers, M. A. Osborne, S. D. Ramchurn, S. J. Roberts, and N. R. Jennings. Towards real-time information processing of sensor network data using computationally efficient multi-output Gaussian processes. In Proceedings of the International Conference on Information Processing in Sensor Networks (IPSN 2008), 2008. In press.
[13] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Y. Weiss, B. Schölkopf, and J. C. Platt, editors, NIPS, volume 18, Cambridge, MA, 2006. MIT Press.
[14] E. Snelson and Z. Ghahramani. Local and global sparse Gaussian process approximations. In Meila and Shen [9].
[15] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. In R. G. Cowell and Z. Ghahramani, editors, AISTATS 10, pages 333–340, Barbados, 6-8 January 2005. Society for Artificial Intelligence and Statistics.
2008
A “Shape Aware” Model for semi-supervised Learning of Objects and its Context
Abhinav Gupta1, Jianbo Shi2 and Larry S. Davis1
1 Dept. of Computer Science, Univ. of Maryland, College Park
2 Dept. of Computer and Information Sciences, Univ. of Pennsylvania
agupta@cs.umd.edu, jshi@cis.upenn.edu, lsd@cs.umd.edu

Abstract
We present an approach that combines bag-of-words and spatial models to perform semantic and syntactic analysis for recognition of an object based on its internal appearance and its context. We argue that while object recognition requires modeling the relative spatial locations of image features within the object, a bag-of-words model is sufficient for representing context. Learning such a model from weakly labeled data involves labeling features into two classes: foreground (object) or “informative” background (context). We present a “shape-aware” model which utilizes contour information for efficient and accurate labeling of features in the image. Our approach iterates between an MCMC-based labeling and a contour-based labeling of features to integrate co-occurrence of features and shape similarity.

1 Introduction
Understanding the meaning of a sentence involves both syntactic and semantic analysis. A bag-of-words approach applied locally over a sentence would be insufficient to understand its meaning. For example, “Jack hit the bar” and “The bar hit Jack” have different meanings even though the bag-of-words representation is the same for both. In many cases, determining meaning also requires word sense disambiguation using contextual knowledge. For example, does “bar” represent a rod or a place where drinks are served? While a combined semantic and syntactic model could be used for representation and application of context as well, it would be expensive to apply. Syntactic rules are generally not required for extracting knowledge about context; a topic model is generally sufficient for contextual analysis in text [14, 15].
We use analogous reasoning to suggest a similar dichotomy in representing object structure and context in vision. Our approach combines bag-of-words and spatial models to capture semantics and syntactic rules, respectively, that are employed for recognizing an object using its appearance, structure and context. We treat an object and a scene as analogous to a sentence and a document respectively. Similar to documents, object recognition in natural scenes requires modeling the spatial relationships of image features (words) within the object, but for representing context in a scene a bag-of-words approach suffices (see Figure 1 (a) and (b)). Learning such a model from weakly labeled data requires labeling the features in an image as belonging to an object or its context (informative background). Spatial models, such as constellation or star models, compute a sparse representation of objects (with a fixed number of parts) by selecting features which satisfy spatial constraints. Their sparse representation reduces their utility in the presence of occlusion. Approaches for learning a dense bag-of-features model with spatial constraints from weakly labeled data have also been proposed. Such approaches (based on marginalizing over possible locations of the object), however, lead to poor foreground segmentation if the training dataset is small, the images have significant clutter¹, or if some other object in the background has a strong and consistent spatial relationship with the object to be learned throughout the training dataset.

¹ A dataset of less cluttered images would fail to provide enough contextual information to be learned for a model that simultaneously learns an object model and its contextual relationships.

Figure 1: (a) An example of the importance of spatial constraints locally. The red color shows the features on the foreground car. A bag-of-words approach fails to capture spatial structure and thus combines the front and rear of different cars. (b) We use a spatial model of the object and a bag-of-words approach for context representation. (c) Importance of using contour information: objects such as signs become part of the foreground since they occur at a consistent relative location to the car. If shape and contour information is combined with the co-occurrence and spatial structure of image features, then such mis-labelings can be reduced. For example, in the above case, since there are strong intervening contours between the features on the car (foreground) and the features on signs, and there is a lack of strong contours between features on signs and features on trees (background), it is more likely that the features on the signs should be labeled as background.

Problem: Learn the parameters of the object model given the images (I1, .., ID), object labels (O1, .., OD) and object model shape (M).
Approach: Simultaneous localization of the object in training images and estimation of model parameters. This is achieved by integrating cues from image features and contours. The criteria include the following terms:
1. Feature statistics: the image features satisfy the co-occurrence and spatial statistics of the model.
2. Shape similarity: the shape of the foreground object is similar to the shape of the sketch of the object.
3. Separation: the object and background features should be separated by the object boundary contours.
Table 1: Summary of the “Shape Aware” model.

We overcome this problem by applying shape-based constraints while constructing the foreground model. Figure 1(c) shows an example of how contours provide important information for foreground/background labeling. We add two constraints to the labeling problem using the contour information: (a) The first constraint requires the presence of strong intervening contours between foreground and background features.
(b) The second constraint requires the shape of the boundary contours to be similar to the shape of the exemplar/sketch provided with the weakly labeled dataset. This allows us to learn object models from images where there is significant clutter and in which the object does not cover a significant part of the image. We provide an iterative solution to integrate these constraints. Our approach first labels the image features based on co-occurrence and spatial statistics: the features that occur in positive images and exhibit strong spatial relationships are labeled as foreground features. Based on the labels of the image features, object boundaries are identified according to how well they separate foreground and background features. This is followed by a shape matching step which identifies the object boundary contours based on their expected shape. This step prunes many contours and provides a better estimate of the object boundaries. These boundaries are then used to relabel the features in the image. This provides an initialization point for the next iteration of Gibbs sampling. Figure 2 shows the system flow of our “Shape Aware” approach.

1.1 Related Work
Many graphical models for object recognition [11] have been inspired by models of text documents such as LDA [6] and pLSA [7]. These models are computationally efficient because they ignore the spatial relationships amongst image features (or parts) and use a dense object representation. However, ignoring spatial relationships between features leads to problems (see Figure 1(a)). In contrast, approaches that model spatial relationships [9, 5] between object parts/features are computationally expensive and therefore employ only a sparse feature representation. These approaches fail under occlusion due to their sparse representation and their stringent requirement of a one-to-one correspondence between image and object features.

Figure 2: Shape-Aware Learning (overview): We first compute feature labels using the Gibbs sampling approach on the spatial author-topic model. The features labeled foreground and background are drawn in red and yellow respectively. This is followed by object boundary extraction. The object boundaries are identified based on how well they separate foreground and background features. Likely object boundary contours are then matched to the sketch using a voting-based approach, and the contours consistent with the shape of the sketch are identified. These contours are then used to relabel the features using the same separation principle. The new labels and topics from the previous time step are used as a new initialization point for the next iteration.

There has been recent work in applying spatial constraints to topic models which enforce neighboring features to belong to similar topics [10, 2] for the purpose of segmentation. Our work is more related to classification-based approaches [8, 3] that model spatial locations of detected features based on a reference location in the image. Sudderth et al. [3] presented such a model that can be learned in a supervised manner. Fergus et al. [8] proposed an approach to learn the model from weakly labeled data. This was achieved by marginalizing over object locations and scale. Each object location hypothesis provides a foreground segmentation which can be used for learning the model. Such an approach, however, is expensive unless the training images are only lightly cluttered. Additionally, such models are subject to modeling errors if the object of interest is small in the training images. Our goal is to simultaneously learn an object model and its context model from weakly labeled images. To learn context we require real-world scenes of objects and their natural surrounding environment (high clutter and small objects). We present a “shape aware” feature-based model for recognizing objects.
Our approach resolves the foreground/background labeling ambiguities by requiring that the shape of the foreground object across the training images be similar to a sketch exemplar. Shape-based models [1] have been used previously for object recognition. However, contour matching is an expensive (exponential) problem due to the need to select the best subset of contours, from the set of all edges, that matches the shape model. Approximate approaches such as MCMC are not applicable since matching is very closely coupled with selection. We propose an efficient approach that iterates between a co-occurrence-based labeling and a contour-based labeling of features.

2 Our Approach - Integrating feature and contour based cues
We assume the availability of a database of weakly labeled images which specify the presence of an object, but not its location. Similar to previous approaches based on document models, we vector-quantize the space of image features into visual words to generate a discrete image representation. Each visual word is analogous to a word, and an image is treated as analogous to a document. Each word is associated with a topic and an author (the object). The topic distribution depends on the associated author, and the word distribution depends on the assigned topic (Section 2.1). We start with random assignments of words to topics and authors. This is followed by a Gibbs sampling step which simultaneously estimates the hidden variables (topic and author) and also the parameters of the generative model that maximize the likelihood (Section 2.2). These assignments are then used to obtain a set of likely object boundary contours in each image. These contours are subsequently analyzed to identify the object “centers” and the final object contours by matching with the shape exemplar (Section 2.3). Using the new set of boundary contours, the authors corresponding to each word are reassigned and the model is retrained using the new assignment.
2.1 Generative Model - Syntax and Semantics
Author-Topic Model: Our model is motivated by the author-topic model [13] and the model presented in [4]. We first provide a brief description of the author-topic model, shown in Figure 3(a). The author-topic model is used to model documents for which a set of authors is given. For each word in the document, an author (xi) is chosen uniformly at random from the set of authors (ad). A topic (zi) is chosen from a distribution of topics specific to the selected author, and a word (wi) is generated from that topic. The distribution of topics (θ) for each author is chosen from a symmetric Dirichlet(α) prior, and the distribution of words (φ) for a topic is chosen from a symmetric Dirichlet(β) prior.

Figure 3: (a) Author-topic model. (b) Our model (spatial author-topic model). Our model extends the author-topic model by including the spatial (syntactical) relationships between features.

Spatial Author-Topic Model: Our model is shown in Figure 3(b). Our goal is not only to model the distribution of feature types but also to model the distribution of spatial locations of the subset of these features that are associated with the foreground object. We model this as follows. A feature in the image is described by its type wi and location li. Each feature (wi, li) is ‘authored’ by an author xi, which is described by its type oi (see footnote 2) and its location ri. For each feature, the author xi is chosen from a distribution, η, which can be either uniform or generated using available priors from other sources. A topic zi for each word is chosen from a distribution of topics specific to the type of object oi, and a word wi is generated from that topic. The distribution of topics (θ) for each object type is chosen from a symmetric Dirichlet(α) distribution (see footnote 3). The distribution of words for each topic is chosen from a symmetric Dirichlet(β) prior.
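One pass of this generative story (choose an author, then a topic from that author's distribution, then a word from that topic) can be sketched as follows; the distributions below are toy values for illustration, not learned ones:

```python
import numpy as np

def generate_feature(rng, authors, theta, phi):
    """Draw (author, topic, word) following the author-topic story:
    the author is uniform over the image's author set, the topic is
    drawn from theta[author], and the word from phi[topic]."""
    x = rng.choice(authors)                 # author, chosen uniformly
    z = rng.choice(len(phi), p=theta[x])    # topic given author
    w = rng.choice(phi.shape[1], p=phi[z])  # word given topic
    return x, z, w

rng = np.random.default_rng(0)
theta = np.array([[0.9, 0.1],    # author 0 prefers topic 0
                  [0.2, 0.8]])   # author 1 prefers topic 1
phi = np.array([[0.7, 0.2, 0.1], # word distribution for topic 0
                [0.1, 0.2, 0.7]])# word distribution for topic 1
x, z, w = generate_feature(rng, [0, 1], theta, phi)
```

The spatial model described next extends this draw with a reference location and a location term for foreground topics; the sketch above covers only the type component.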
The location of each feature, li, is sampled from the distribution p(li | oi, zi, ri) given by

$$p(l_i \mid o_i, z_i, r_i) = \exp\left(-\frac{\lVert l_i - r_i\rVert^2}{\sigma_s^2}\right)\,\zeta^{r_i}_{o_i,z_i}(l_i) \qquad (1)$$

The first term ensures that each feature has a higher probability of being generated by nearby reference locations. The second term enforces spatial constraints on the location of the feature generated by topic zi. We enforce these spatial constraints by a binning approach. Each feature in the foreground can lie in B possible bins with respect to the reference location. The distribution of the spatial location of a feature is specific to the topic zi and the type of object oi. This distribution is chosen from a symmetric Dirichlet(γ) prior. Since we do not want to enforce spatial constraints on the locations of the features generated by topics from context, we set ζ to a constant when oi corresponds to the context of some object.

2 For an image with the label car, the possible object types are car and context of car. The differentiation between “informative” and “non-informative” background is captured by the probability distributions.
3 The Dirichlet distribution is an attractive choice: it belongs to the exponential family and is conjugate to the multinomial distribution.

2.2 Gibbs Sampling
We use Gibbs sampling to estimate zi and xi for each feature. Given the features (w, l), the author assignments x, the other topic assignments z−i and the hyperparameters, each zi is drawn from

$$P(z_i \mid w, l, x, z_{-i}) \propto P(w_i \mid w_{-i}, z)\, P(z_i \mid z_{-i}, o_i)\, P(l_i \mid x_i, l_{-i}, x_{-i}, z_i) \propto \frac{n^{z_i}_{w_i} + \beta}{n^{z_i} + W\beta} \cdot \frac{n^{o_i}_{z_i} + \alpha}{n^{o_i} + T\alpha} \cdot \frac{n^{o_i,z_i}_{B_i} + \gamma}{n^{o_i,z_i} + B\gamma} \qquad (2)$$

where n^{z_i}_{w_i} represents the number of features of type wi in the dataset assigned to topic zi, and n^{z_i} represents the total number of features assigned to topic zi. Similarly, n^{o_i}_{z_i} represents the number of features assigned to topic zi and an author of type oi, and n^{o_i} represents the total number of features assigned to author oi.
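A single draw of z_i from the product of smoothed count ratios in Eq. (2) can be sketched as follows; the count arrays and hyperparameter values here are toy stand-ins, not values from the experiments:

```python
import numpy as np

def topic_posterior(n_zw, n_z, n_oz, n_o, n_ozB, n_ozTot,
                    wi, oi, Bi, alpha, beta, gamma, W, T, B):
    """Topic probabilities from the three smoothed count ratios
    of Eq. (2), normalized to sum to one."""
    p = ((n_zw[:, wi] + beta) / (n_z + W * beta)
         * (n_oz[oi] + alpha) / (n_o[oi] + T * alpha)
         * (n_ozB[oi, :, Bi] + gamma) / (n_ozTot[oi] + B * gamma))
    return p / p.sum()

# Toy counts: word wi has only ever been assigned to topic 0,
# so topic 0 should dominate the posterior.
n_zw = np.array([[10., 0.], [0., 0.]])    # topic x word counts
n_z = np.array([10., 0.])                 # per-topic totals
n_oz = np.array([[5., 5.]])               # object x topic counts
n_o = np.array([10.])                     # per-object totals
n_ozB = np.array([[[3., 2.], [3., 2.]]])  # object x topic x bin
n_ozTot = np.array([[5., 5.]])            # object x topic totals
p = topic_posterior(n_zw, n_z, n_oz, n_o, n_ozB, n_ozTot,
                    wi=0, oi=0, Bi=0, alpha=1.0, beta=0.01,
                    gamma=0.01, W=2, T=2, B=2)
z = np.random.default_rng(0).choice(2, p=p)
```

In a full sampler the counts would exclude the current feature's own assignment before each draw and be updated after it.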
Bi denotes the spatial bin in which feature i lies when the reference is ri; n^{o_i,z_i}_{B_i} represents the number of features from object type oi and topic zi which lie in bin Bi, and n^{o_i,z_i} represents the total number of features from object type oi and topic zi. W is the number of word types and T is the number of topic types. Similarly, given the features (w, l), the topic assignments z, the other author assignments x−i and the hyperparameters, each xi is drawn from

$$P(x_i \mid w, l, z, x_{-i}) \propto P(l_i \mid x_i, l_{-i}, x_{-i}, z_i)\, P(z_i \mid o_i, z_{-i}, x_{-i})\, P(r_i \mid o_i, z_{-i}, x_{-i}) \propto \exp\left(-\frac{\lVert l_i - r_i\rVert^2}{\sigma_s^2}\right) \frac{n^{o_i,z_i}_{B_i} + \gamma}{n^{o_i,z_i} + B\gamma} \cdot \frac{n^{o_i}_{z_i} + \alpha}{n^{o_i} + T\alpha} \cdot \frac{n^{o_i}_{r_i} + \delta}{n^{o_i} + R\delta} \qquad (3)$$

where n^{o_i}_{r_i} represents the number of features from object type oi that have ri as their reference location, and n^{o_i} represents the total number of features from object oi. In case oi is of type context, the second term is replaced by a constant. R represents the number of possible reference locations.

2.3 “Shape Aware” Model
The generative model presented in Section 2.1 can be learned using the Gibbs sampling approach explained above. However, this approach has some shortcomings: (a) if there are features in the background that exhibit a strong spatial relationship with the object, they can be labeled as foreground; (b) in clutter, the labeling performance diminishes because the discriminability of the object is lower. The labeling performance can, however, be improved if contour cues are utilized. We do this by requiring that the shape of the object boundary contours extracted from the feature labeling be similar to a sketch of the object provided with the dataset. Thus, the labeling of features into foreground and background is governed not only by co-occurrence and structural information, but also by shape similarity. We refer to this as a “shape aware” model.
Shape matching using contours has, in the worst case, exponential complexity, since it requires selecting the subset of contours that best constitutes the foreground boundary. We avoid this computationally expensive step by solving the selection problem based on the labels of the features extracted using Gibbs sampling. The spatial author-topic model is used to attend to the contours which are likely to be object boundaries. Our shape matching module has three steps: (a) extracting object boundaries based on the labels obtained from the spatial author-topic model; (b) extracting boundaries consistent with the shape model by matching; (c) using the new boundaries to determine new labels for the features.

Figure 4: Extraction of object boundaries consistent with the shape of the exemplar. The first step is the extraction of contours which separate foreground and background features. This is followed by a voting process: each contour in the image is matched to every contour in the model to extract the center of the object. The votes are then traced back to identify the contours consistent with the shape model.

Extracting Object Boundary Contours from Feature Labels: We first detect edges and group them into contours using the approach presented in [16]. Each contour cj is a collection of 2D points (pj1, pj2, ...). Our goal is to extract the boundary contours of the object using the feature labels. Since the boundary contours separate foreground and background features, an estimate of the number of foreground and background features on each side of an image contour provides evidence as to whether that contour is part of the object boundary. For each contour, we measure the number of foreground and background features that lie on each side of the contour within some fixed distance of it.
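These side counts can be turned into a smoothed separation score, and the traced-back contour votes into a feature-level foreground probability, following Eq. (4) and the weighted-vote formula defined next; a direct sketch with made-up counts:

```python
def boundary_prob(n_fg_s1, n_s1, n_bg_s2, n_s2, tau=0.1):
    """Smoothed evidence that a contour bounds the object with
    side S1 as interior: the fraction of foreground features on
    S1 times the fraction of background features on S2."""
    return ((n_fg_s1 + tau) / (n_s1 + 2 * tau)
            * (n_bg_s2 + tau) / (n_s2 + 2 * tau))

def foreground_prob(omega, nu):
    """Weighted vote of boundary contours on one feature:
    omega[j] is contour j's boundary probability, nu[j] is 1 if
    the feature lies on the object-center side of contour j."""
    return sum(w * v for w, v in zip(omega, nu)) / sum(omega)

strong = boundary_prob(9, 10, 8, 10)   # clean separation
mixed = boundary_prob(5, 10, 5, 10)    # no separation
p_fg = foreground_prob([0.9, 0.8, 0.1], [1, 1, 0])
```

A contour with mostly foreground on one side and mostly background on the other scores higher than a mixed one, which is exactly the evidence used to select likely boundaries.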
The probability that a contour cj is a boundary contour (clj = 1) of the object, with side S1 being the interior of the object, is given by

$$P_{S_1}(cl_j = 1 \mid x) = \frac{n^{S_1}_f + \tau}{n^{S_1} + 2\tau} \cdot \frac{n^{S_2}_b + \tau}{n^{S_2} + 2\tau} \qquad (4)$$

where n^{S_1}_f is the total number of features with a foreground label on side S1 of the contour and n^{S_1} is the total number of features on side S1 (and analogously for the background counts on side S2).

Shape Matching: Given the probabilities of each contour being a part of the object boundary, we estimate the object center using a voting-based approach [18]. Each contour votes for the center of the object, where the weight of the vote is determined by how well the contour matches the sketch. Non-maximal suppression is then used to estimate the candidate object locations. Once a candidate location of the center of the object is selected, we trace back the votes to estimate the new boundary of the object. Figure 4 shows an example of the voting process and the boundary contours extracted using this approach.

Extracting New Labels: These boundaries are then used to relabel the image features into foreground and background. We use the same separation principle to compute the new labels. Each boundary contour votes as to whether a feature should be labeled foreground or background: if the feature lies on the same side as the object center, the contour votes for the feature as foreground. Votes are weighted by the probability of the contour being an object boundary. Therefore, the probability that feature i is labeled as foreground is given by $\sum_j \omega_j \nu_{ij} / \sum_j \omega_j$, where ω_j is the probability that contour j is on the object boundary and ν_{ij} is 1 if the object center and the feature are on the same side of contour cj, and 0 if the center is on the opposite side. The new labels are then used as an initialization point for the Gibbs sampling based learning of the feature model.

3 Experimental Results
We tested our “shape-aware” model on images of cars obtained from the LabelMe dataset [17].
We randomly selected 45 images for training the model from the LabelMe dataset. A potential concern is the number of iterations required for convergence of our iterative approach. However, it was empirically observed that in most cases the system stabilizes after only two iterations. It should also be noted that each iteration between contour and feature labelings is performed after 200 iterations of Gibbs sampling. The advantages of the iterative approach are shown in Figure 5. We compared the performance of our system against the author-topic model and the author-topic model with spatial constraints. We evaluated the performance of the algorithm by measuring the labeling performance on the training and test datasets. Better labeling in training is required for better model learning. Figure 6 shows some of the cases where both the author-topic model and the author-topic model with spatial constraints fail due to high clutter or the foreground object being too small in the training dataset. The “shape aware” model, however, shows better localization performance compared to the other two.

Figure 5: Advantages of the iterative approach. At each iteration the author-topic distribution changes, which requires retraining the model using Gibbs sampling. This can help in two ways: (A) more focused attention: the feature labeling gets refined; (B) change of focus: a new reference point gets chosen by the new distribution.

Figure 6: Two examples of how the “shape aware” model provides better localization compared to the spatial author-topic model. The odd columns show the results of the author-topic model (the initialization point of the iterative approach). The even columns show the labeling provided by our algorithm after 2 iterations.
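The quantitative comparisons that follow score feature labelings by precision (fraction of predicted foreground features that truly belong to the object) and recall (fraction of true foreground features recovered); computing them is a matter of counting. The label arrays below are made up for illustration:

```python
def precision_recall(pred_fg, true_fg):
    """pred_fg / true_fg: boolean lists marking features predicted
    as foreground / actually belonging to the foreground object."""
    tp = sum(p and t for p, t in zip(pred_fg, true_fg))
    n_pred = sum(pred_fg)
    n_true = sum(true_fg)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_true if n_true else 0.0
    return precision, recall

pred = [True, True, True, False, False]
true = [True, True, False, True, False]
p, r = precision_recall(pred, true)  # p = 2/3, r = 2/3
```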
Figure 7: Quantitative comparison of the author-topic, spatial author-topic and “shape aware” models, based on 40 randomly selected images each from the training and test datasets (approximately 17000 features each): (a) labeling (training); (b) labeling (test). The parameter values used are T = 50, α = 50/T, β = 0.01, γ = 0.01, B = 8 and τ = 0.1.

Figure 7 shows a quantitative comparison of the “shape aware” model to the author-topic and spatial author-topic models. Recall is defined as the ratio of features labeled as foreground to the total number of foreground features. Precision is defined as the ratio of features correctly labeled as foreground to the total number of features labeled as foreground. In the case of labeling on the training data, our approach outperforms both the author-topic and the spatial author-topic model. On the test dataset, the author-topic model has higher recall but very low precision. The low precision of the author-topic and spatial author-topic models can be attributed to the fact that, in many cases, the context is similar and at the same relative locations across images. This leads to modeling errors: these features are learned to be part of the object. In the case of the “shape aware” model, the shape of the objects helps in pruning these features and therefore leads to much higher precision.

Figure 8: Example of the performance of the three models on a test image. The “shape aware” model shows high precision in label prediction due to the pruning provided by shape matching. The author-topic model shows high recall because of the high similarity in context across images.

Figure 9: A few examples of labeling in the test dataset.

The low recall rates of our model and the spatial author-topic model arise because some foreground features do not satisfy the spatial
constraints and hence are falsely labeled as background features. Figure 9 shows some examples of the performance of the “shape aware” model on the test dataset.

Acknowledgements
This research was funded by the US Government’s VACE program and the NSF-IIS-04-47953 (CAREER) award. The authors would also like to thank Qihui Zhu for providing the code for extracting contours.

References
[1] G. Elidan, G. Heitz and D. Koller, Learning Object Shape: From Drawings to Images, IEEE CVPR 2006.
[2] X. Wang and E. Grimson, Spatial Latent Dirichlet Allocation, NIPS 2007.
[3] E. Sudderth, A. Torralba, W. T. Freeman and A. S. Willsky, Learning Hierarchical Models of Scenes, Objects and Parts, ICCV 2005.
[4] T. L. Griffiths, M. Steyvers, D. M. Blei and J. B. Tenenbaum, Integrating Topics and Syntax, NIPS 2005.
[5] D. J. Crandall and D. P. Huttenlocher, Weakly Supervised Learning of Part-Based Spatial Models for Visual Object Recognition, ECCV 2006.
[6] D. Blei, A. Ng and M. Jordan, Latent Dirichlet Allocation, Journal of Machine Learning Research, 2003.
[7] T. Hofmann, Unsupervised Learning by Probabilistic Latent Semantic Analysis, Machine Learning, 2001.
[8] R. Fergus, L. Fei-Fei, P. Perona and A. Zisserman, Learning Object Categories from Google’s Image Search, ICCV 2005.
[9] R. Fergus, P. Perona and A. Zisserman, Object Class Recognition by Unsupervised Scale-Invariant Learning, CVPR 2003.
[10] L. Cao and L. Fei-Fei, Spatially Coherent Latent Topic Model for Concurrent Object Segmentation and Classification, ICCV 2007.
[11] B. Russell, A. Efros, J. Sivic, W. Freeman and A. Zisserman, Using Multiple Segmentations to Discover Objects and their Extent in Image Collections, CVPR 2006.
[12] T. L. Griffiths and M. Steyvers, Finding Scientific Topics, PNAS 2004.
[13] M. Rosen-Zvi, T. Griffiths, M. Steyvers and P. Smyth, The Author-Topic Model for Authors and Documents, UAI 2004.
[14] M. Lesk, Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone, SIGDOC 1986.
[15] D. Yarowsky, Word Sense Disambiguation Using Statistical Models of Roget’s Categories trained on Large Corpora, COLING 1992. [16] Q. Zhi, G. Song and J. Shi, Untangling Cycles for Contour Grouping, ICCV 2007. [17] B. C. Russell, A. Torralba, K. P. Murphy, W. T. Freeman, LabelMe: a Database and Web-based Tool for Image Annotation, IJCV 2008. [18] B. Leibe, A. Leonardis and B. Schiele,Combined Object Categorization and Segmentationwith an Implicit Shape Model, ECCV workshop on Statistical Learning in Vision, 2006.
2008
Multi-task Gaussian Process Learning of Robot Inverse Dynamics Kian Ming A. Chai Christopher K. I. Williams Stefan Klanke Sethu Vijayakumar School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, UK {k.m.a.chai, c.k.i.williams, s.klanke, sethu.vijayakumar}@ed.ac.uk Abstract The inverse dynamics problem for a robotic manipulator is to compute the torques needed at the joints to drive it along a given trajectory; it is beneficial to be able to learn this function for adaptive control. A robotic manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-task learning problem. By placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-task similarity depends on the underlying inertial parameters. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single tasks or pooling the data over all tasks. 1 Introduction The inverse dynamics problem for a robotic manipulator is to compute the torques $\tau$ needed at the joints to drive it along a given trajectory, i.e. the motion specified by the joint angles $q(t)$, velocities $\dot q(t)$ and accelerations $\ddot q(t)$, through time $t$. Analytical models for the inverse dynamics $\tau(q, \dot q, \ddot q)$ are often infeasible, for example due to uncertainty in the physical parameters of the robot, or the difficulty of modelling friction. This leads to the need to learn the inverse dynamics. A given robotic manipulator will often need to be controlled while holding different loads in its end effector. We refer to different loadings as different contexts. The inverse dynamics functions depend on the different contexts.
A simple approach is to learn a different mapping for each context, but it is more attractive if one can exploit commonality in these related tasks to improve performance, i.e. to carry out multi-task learning (MTL) [1, 2]. The aim of this paper is to show how this can be carried out for the inverse dynamics problem using a multi-task Gaussian process (GP) framework. In §2 we discuss the relevant theory for the problem. Details of how we optimize the hyperparameters of the multi-task GP are given in §3, and model selection is described in §4. Relationships to other work are discussed in §5, and the experimental setup and results are given in §6. 2 Theory We first describe the relationship of inverse dynamics functions among contexts in §2.1. In §2.2 we review the multi-task GP regression model proposed in [3], and in §2.3 we describe how to derive a multi-task GP model for the inverse-dynamics problem. 2.1 Linear relationship of inverse dynamics between contexts Suppose we have a robotic manipulator consisting of $J$ joints, and a set of $M$ loads. Figure 1 illustrates a six-jointed manipulator, with joint $j$ connecting links $j-1$ and $j$.
Figure 1: Schematic of the PUMA 560 without the end-effector (to be connected to joint 6).
Figure 2: A schematic diagram of how the different functions are related. A plate repeats its contents over the specified range.
We wish to learn the inverse dynamics model of the manipulator for the $m$th context, i.e. when it handles the $m$th load in its end-effector connected to the last link. We denote this by $\tau^m(x) \in \mathbb{R}^J$, with $x \stackrel{\mathrm{def}}{=} (q^T, \dot q^T, \ddot q^T)^T \in \mathbb{R}^{3J}$.
It can be shown that the required torque for the $j$th joint can be written as [4]
$$\tau_j^m(x) = \sum_{j'=j}^{J} y_{jj'}^T(x)\,\pi_{j'}^m, \qquad y_{jj'}: \mathbb{R}^{3J} \mapsto \mathbb{R}^{10}, \qquad (1)$$
where the $y_{jj'}$s are vector-valued functions of $x$, and $\pi_{j'}^m \in \mathbb{R}^{10}$ is the vector of inertial parameters [footnote 1] of the $j'$th joint when manipulating the $m$th load. The inertial parameters for a joint depend on the physical characteristics of its corresponding link (e.g. mass) and are independent of $x$. When, as in our case, the loads are rigidly attached to the end effector, each load may be considered as part of the last link, and thus modifies the inertial parameters of the last link only [5]. The parameters for the other links remain unchanged, since the parameters are local to the links and their frames. Denoting the common inertial parameters of the $j'$th link by $\pi^\bullet_{j'}$, we can write
$$\tau_j^m(x) = h_j(x) + y_{jJ}^T(x)\,\pi_J^m, \quad \text{where } h_j(x) \stackrel{\mathrm{def}}{=} \sum_{j'=j}^{J-1} y_{jj'}^T(x)\,\pi^\bullet_{j'}. \qquad (2)$$
Define $\tilde y_j(x) \stackrel{\mathrm{def}}{=} (h_j(x), (y_{jJ}(x))^T)^T$ and $\tilde\pi^m \stackrel{\mathrm{def}}{=} (1, (\pi_J^m)^T)^T$; then $\tau_j^m(x) = \tilde y_j(x)^T \tilde\pi^m$. Note that the $\tilde y_j$s are shared among the contexts, while the $\tilde\pi^m$s are shared among the $J$ links, as illustrated in Figure 2. This decomposition is not unique: given a non-singular $11 \times 11$ matrix $A_j$, setting $z_j(x) \stackrel{\mathrm{def}}{=} A_j^{-T}\tilde y_j(x)$ and $\rho_j^m \stackrel{\mathrm{def}}{=} A_j\tilde\pi^m$, we also have
$$\tau_j^m(x) = \tilde y_j(x)^T A_j^{-1} A_j \tilde\pi^m = z_j(x)^T \rho_j^m. \qquad (3)$$
Hence the vector of parameters $\tilde\pi^m$ is identifiable only up to a linear combination. Note that in general the matrix $A_j$ may vary across the joints. 2.2 Multi-task GP regression model We give a brief summary of the multi-task Gaussian process (GP) regression model described in [3]. This model learns $M$ related functions $\{f^m\}_{m=1}^M$ by placing a zero-mean GP prior which directly induces correlations between tasks. Let $t^m$ be the observation of the $m$th function at $x$.
Then the model is given by
$$\langle f^m(x) f^{m'}(x') \rangle \stackrel{\mathrm{def}}{=} K^f_{mm'}\, k^x(x, x'), \qquad t^m \sim \mathcal{N}(f^m(x), \sigma_m^2), \qquad (4)$$
where $k^x$ is a covariance function over inputs, $K^f$ is a positive semi-definite (p.s.d.) matrix of inter-task similarities, and $\sigma_m^2$ is the noise variance for the $m$th task. 2.3 Multi-task GP model for multiple contexts We now show that the multi-task GP model can be used for inferring inverse dynamics for multiple contexts. We begin by placing independent zero-mean GP priors on all the component functions of $z_1(\cdot), \ldots, z_J(\cdot)$. Let $\alpha$ be an index into the elements of the vector function $z_j(\cdot)$; then our prior is
$$\langle z_{j\alpha}(x) z_{j'\alpha'}(x') \rangle = \delta_{jj'}\,\delta_{\alpha\alpha'}\, k^x_j(x, x'). \qquad (5)$$
(Footnote 1: We may also formulate our model using the more general vector of dynamic parameters, which also includes the friction parameters, motor inertia, etc. However, these additional parameters are independent of the load, and so can be absorbed into the function $h_j$ in eq. 2.)
In addition to the independence specified by the Kronecker delta functions $\delta_{\cdot\cdot}$, this model also imposes the constraint that all component functions for a given joint $j$ share the same covariance function $k^x_j(\cdot, \cdot)$. With this prior over the $z_j$s, the Gaussian process prior for $\tau_j^m(\cdot)$ is given by
$$\langle \tau_j^m(x)\, \tau_{j'}^{m'}(x') \rangle = \delta_{jj'}\, (K^\rho_j)_{mm'}\, k^x_j(x, x'), \qquad (6)$$
where we have set $P_j \stackrel{\mathrm{def}}{=} (\rho_j^1 \mid \cdots \mid \rho_j^M)$ and $K^\rho_j \stackrel{\mathrm{def}}{=} P_j^T P_j$, so that $(\rho_j^m)^T \rho_j^{m'} = (K^\rho_j)_{mm'}$, the $(m, m')$th entry of the positive semi-definite matrix $K^\rho_j$. Notice that $K^\rho_j$ defines the similarity between different contexts. The rank of $K^\rho_j$ is the rank of $P_j$, and is upper bounded by $\min(M, 11)$, reflecting the fact that there are at most 11 underlying latent functions (see Figure 2). Let $t_j^m(x)$ be the observed value of $\tau_j^m(x)$. The deviations from $\tau_j^m(x)$ may be modelled with $t_j^m(x) \sim \mathcal{N}(\tau_j^m(x), (\sigma_j^m)^2)$, though in practice we let $\sigma_j \stackrel{\mathrm{def}}{=} \sigma_j^1 \equiv \sigma_j^2 \equiv \cdots \equiv \sigma_j^M$, sharing the variance parameters among the contexts. This completes the correspondence with the multi-task GP model in eq. 4.
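To make eq. 6 concrete, here is a minimal numpy sketch (our own illustrative code, not the authors') that assembles the Gram matrix for one joint from a context-similarity matrix $K^\rho = P^T P$ and an input kernel $k^x$; a squared-exponential kernel stands in for the paper's richer covariance function:

```python
import numpy as np

def kx(X1, X2, ell=1.0):
    # Stand-in input covariance k^x: squared-exponential kernel.
    d2 = ((X1[:, None, :] - X2[None, :, :]) / ell) ** 2
    return np.exp(-0.5 * d2.sum(-1))

def multitask_cov(X, m, P, noise=1e-2):
    # X: (n, 3J) inputs; m: (n,) context index per observation;
    # P: (11, M) with columns rho^1..rho^M, so K^rho = P^T P is p.s.d.
    Krho = P.T @ P
    K = Krho[np.ix_(m, m)] * kx(X, X)   # (K^rho)_{m m'} * k^x(x, x')
    return K + noise * np.eye(len(X))   # shared noise variance sigma^2

rng = np.random.default_rng(0)
M, n = 3, 12
X = rng.normal(size=(n, 6))
m = rng.integers(0, M, size=n)
P = rng.normal(size=(11, M))
K = multitask_cov(X, m, P)
# A valid GP covariance: symmetric and positive definite.
assert np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() > 0
```

Because the elementwise product of two p.s.d. matrices is p.s.d. (Schur product theorem), this construction is a valid covariance for any choice of $P$.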
Note, however, that in this case we have $J$ multi-task GP models, one for each joint. This model is a simple and convenient one in which the prior, likelihood and posterior factorize over joints. Hence inference and hyperparameter learning can be done separately for each joint. Making predictions As in [3], inference in our model can be done by using the standard GP formulae for the mean and variance of the predictive distribution, with the covariance function given in eq. 6 together with the normal noise model. The observations over all contexts for a given joint $j$ are used to make the predictions. For the case of complete data (where there are observations at the same set of $x$-values for all contexts) one can exploit the Kronecker-product structure [3, eq. 2]. 2.3.1 The relationship among task similarity matrices Let $\tilde\Pi \stackrel{\mathrm{def}}{=} (\tilde\pi^1 \mid \cdots \mid \tilde\pi^M)$. Recall that $\tilde\pi^m$ is an 11-dimensional vector. However, if the different loads in the end effector do not explore the full space (e.g. if some of the inertial parameters are constant over all loads), then it can happen that $s \stackrel{\mathrm{def}}{=} \mathrm{rank}(\tilde\Pi) < \min(M, 11)$. It is worthwhile to investigate the relationship between $K^\rho_j$ and $K^\rho_{j'}$, $j \ne j'$. Recall from eq. 3 that $\rho_j^m \stackrel{\mathrm{def}}{=} A_j \tilde\pi^m$, where $A_j$ is a full-rank square matrix. This gives $P_j = A_j \tilde\Pi$ and $K^\rho_j = \tilde\Pi^T A_j^T A_j \tilde\Pi$, so that $\mathrm{rank}(K^\rho_j) = \mathrm{rank}(\tilde\Pi)$. Therefore the $K^\rho_j$s have the same rank for all joints, although their exact values may differ. This observation will be useful for model selection in §4. 3 Learning the hyperparameters — a staged optimization heuristic In this section we drop the joint index $j$ for the sake of brevity and clarity; the following applies separately for each joint. Let $t^m$ be the vector of $n_m$ observed torques at the joint for context $m$, and $X_m$ the corresponding $3J \times n_m$ design matrix. Further, let $X$ be the $3J \times N$ design matrix of distinct $x$-configurations observed over all $M$ contexts.
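As an aside, the rank equality from §2.3.1, $\mathrm{rank}(K^\rho_j) = \mathrm{rank}(\tilde\Pi)$ for every joint, is easy to sanity-check numerically (illustrative code, our own):

```python
import numpy as np

rng = np.random.default_rng(1)
M, s = 15, 4
# Pi_tilde: an 11 x M matrix of parameter vectors, constructed to have rank s.
Pi = rng.normal(size=(11, s)) @ rng.normal(size=(s, M))
assert np.linalg.matrix_rank(Pi) == s

for _ in range(3):                       # three "joints", each with its own A_j
    A = rng.normal(size=(11, 11))        # full rank with probability 1
    Krho = Pi.T @ A.T @ A @ Pi           # K^rho_j = Pi^T A_j^T A_j Pi
    assert np.linalg.matrix_rank(Krho) == s
```

Each joint's $K^\rho_j$ differs in its entries (because $A_j$ differs) but not in its rank.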
Given this data, we wish to optimize the marginal likelihood $L(\theta^x, K^\rho, \sigma^2) \stackrel{\mathrm{def}}{=} p(\{t^m\}_{m=1}^M \mid X, \theta^x, K^\rho, \sigma^2)$, where $\theta^x$ are the parameters of $k^x$. As pointed out in [3], one may approach this either using general gradient-based optimization or using expectation-maximization; in this paper the former is used. In general, the objective function $L(\theta^x, K^\rho, \sigma^2)$ will have multiple modes, and it is a difficult problem to locate the best mode. We propose a staged strategy during optimization to help localize the search region. This is outlined below, with details given in the subsections that follow.
Require: starting positions $\theta^x_0$, $K^\rho_0$, $\sigma^2_0$, and rank $r$. (All arg max operations are understood to find only a local maximum.)
1: Starting from $\theta^x_0$ and $\sigma^2_0$, find $(\theta^x_1, \sigma^2_1) = \arg\max_{\theta^x, \sigma^2} L(\theta^x, K^\rho_0, \sigma^2)$.
2: Calculate $K^\rho_1$ based on the details in §3.2.
3: Starting from $\theta^x_1$, $K^\rho_1$, and $\sigma^2_0$, find $(\theta^x_{\mathrm{ans}}, K^\rho_{\mathrm{ans}}, \sigma^2_{\mathrm{ans}}) = \arg\max_{\theta^x, K^\rho, \sigma^2} L(\theta^x, K^\rho, \sigma^2)$.
The optimization order reflects the relative importance of the different constituents of the model. The most important is $k^x$, hence the estimation of $\theta^x$ begins in step 1; the least important is $\sigma^2$, hence its estimation from the initial value $\sigma^2_0$ is in step 3. For our application, we find that this strategy works better than one which simultaneously optimizes all the parameters. 3.1 The initial choice of $K^\rho$ The choice of $K^\rho_0$ is important, since it affects the search very early on. Reasonable values that admit ready interpretations are the matrix of ones $\mathbf{1}\mathbf{1}^T$ and the identity matrix $I$. For $K^\rho_0 = \mathbf{1}\mathbf{1}^T$, we initially assume the contexts to be indistinguishable from each other; for $K^\rho_0 = I$, we initially assume the contexts to be independent given the kernel parameters, a multi-task learning model that has been previously explored, e.g. [6]. These two are at opposite extremes of the spectrum of inter-context/task correlation, and we believe the merit of each will be application dependent.
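A sketch of how the two initializations could be scored against each other (our own illustrative code; `Kx` is a precomputed input Gram matrix, and the zero-mean GP marginal likelihood is the standard Cholesky-based formula):

```python
import numpy as np

def log_marglik(K, t):
    # Zero-mean GP log marginal likelihood: t ~ N(0, K).
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, t))
    return -0.5 * t @ a - np.log(np.diag(L)).sum() - 0.5 * len(t) * np.log(2 * np.pi)

def choose_Krho0(Kx, t, m, M, noise=1e-2):
    # Score K^rho_0 = 11^T ("ones") vs. I ("identity"); keep the better one.
    cands = {"ones": np.ones((M, M)), "identity": np.eye(M)}
    scores = {name: log_marglik(Krho[np.ix_(m, m)] * Kx + noise * np.eye(len(t)), t)
              for name, Krho in cands.items()}
    return max(scores, key=scores.get), scores

# Toy usage: two contexts observing one shared smooth function.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 4, size=30))
Kx = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
m = np.arange(30) % 2
t = np.linalg.cholesky(Kx + 1e-6 * np.eye(30)) @ rng.normal(size=30)
best, scores = choose_Krho0(Kx, t, m, M=2)
```

Since both candidates have the same number of free parameters, comparing their likelihoods directly (no complexity penalty) is a fair selection rule.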
Since these two models have the same number of free parameters, we select the one with the higher likelihood as the starting point for the search in step 2. However, we note that in some applications there may be reasons to prefer one over the other. 3.2 Computation of $K^\rho_1$ in step 2 Given estimates $\theta^x_1$ and $\sigma^2_1$, we wish to estimate a $K^\rho_1$ from which the likelihood can be optimized in step 3. Here we give the sequence of considerations that leads to a formula for computing $K^\rho_1$. Let $K^x_1$ be the covariance matrix for all pairs in $X$, using $\theta^x_1$ for $k^x$. Let $T$ be an $N \times M$ matrix which corresponds to the true values of the torque function $\tau^m(x_i)$ for $m = 1, \ldots, M$ and $i = 1, \ldots, N$. Then, as per the EM step discussed in [3, eq. 4], we have
$$K^\rho_{\mathrm{EM}} = N^{-1} \left\langle T^T (K^x_1)^{-1} T \right\rangle_{\tilde\theta_0} \simeq N^{-1} \langle T \rangle_{\tilde\theta_0}^T (K^x_1)^{-1} \langle T \rangle_{\tilde\theta_0}, \qquad (7)$$
where the expectations are taken w.r.t. a GP with parameters $\tilde\theta_0 = (\theta^x_1, K^\rho_0, \sigma^2_1)$, and the $(i, m)$th entry of $\langle T \rangle_{\tilde\theta_0}$ is the mean of $\tau^m(x_i)$ under this GP. The approximation neglects the GP's variance; this is justifiable since the current aim is only to obtain a starting estimate of $K^\rho$ for a search procedure. There are two weaknesses of eq. 7 that we shall address. The first is that the rank of $\langle T \rangle_{\tilde\theta_0}$ is upper bounded by that of $K^\rho_0$, so that the rank of $K^\rho_{\mathrm{EM}}$ is similarly upper bounded [footnote 2]. This property is undesirable, particularly when $K^\rho_0 = \mathbf{1}\mathbf{1}^T$. We ameliorate this by replacing $\langle \tau^m(x_i) \rangle_{\tilde\theta_0}$ with the corresponding observed value $t^m(x_i)$ wherever it is available, and call the resultant matrix $T_{\mathrm{aug}}$. The second weakness is that, with the commonly used covariance functions, $K^x_1$ will typically have rapidly decaying eigenvalues [7, §4.3.1]. To overcome this, we regularize its inversion by adding $\eta^2 I$ to the diagonal of $K^x_1$, giving $K^\rho_{\mathrm{aug}} = N^{-1} T_{\mathrm{aug}}^T (K^x_1 + \eta^2 I)^{-1} T_{\mathrm{aug}}$. We set $\eta^2 = \mathrm{tr}(T_{\mathrm{aug}}^T T_{\mathrm{aug}})/(MN)$, so that $\mathrm{tr}(K^\rho_{\mathrm{aug}}) = M$ if $K^x_1$ were the zero matrix. Finally, the required $K^\rho_1$ is obtained from $K^\rho_{\mathrm{aug}}$ by constraining it to have rank $r$.
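A compact sketch of this computation (our own illustrative code; `T_aug` and `Kx1` as defined in the text, with the rank-r constraint imposed by keeping the top eigenpairs):

```python
import numpy as np

def estimate_Krho1(T_aug, Kx1, r):
    # K^rho_aug = N^{-1} T_aug^T (K^x_1 + eta^2 I)^{-1} T_aug,
    # with eta^2 = tr(T_aug^T T_aug) / (M N).
    N, M = T_aug.shape
    eta2 = np.trace(T_aug.T @ T_aug) / (M * N)
    Kaug = T_aug.T @ np.linalg.solve(Kx1 + eta2 * np.eye(N), T_aug) / N
    # Project onto p.s.d. matrices of rank <= r via the top-r eigenpairs.
    w, V = np.linalg.eigh(Kaug)          # ascending eigenvalues
    w, V = w[-r:], V[:, -r:]
    return (V * np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(3)
N, M, r = 40, 15, 4
x = rng.normal(size=N)
Kx1 = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
T_aug = rng.normal(size=(N, M))
Krho1 = estimate_Krho1(T_aug, Kx1, r)
assert np.linalg.matrix_rank(Krho1) <= r
```

The eigendecomposition route used here is interchangeable with an incomplete Cholesky decomposition of the same rank.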
This is currently achieved by computing the eigen-decomposition of $K^\rho_{\mathrm{aug}}$ and keeping only the top $r$ eigenvectors/values; it could also be implemented using an incomplete Cholesky decomposition. 3.3 Incorporating a novel task Above we have assumed that data from all contexts are available at training time. However, we may encounter a new context for which we have not seen much data. In this case we fix $\theta^x$ and $\sigma^2$ while extending $K^\rho$ by an extra row and column for the new context, and it is only this new border which needs to be learned by maximising the marginal likelihood. Note that as $K^\rho$ is p.s.d., this means learning only at most $M$ new parameters, or fewer if we exploit the rank-constraint property of $K^\rho$. 4 Model selection The choice of the rank $r$ of $K^\rho_j$ in the model is important, since it reflects the rank $s$ of $\tilde\Pi$. In our model, $r$ is not a hyperparameter to be optimized; to infer its value we rely on an information criterion to select the most parsimonious correct model. Here we use the Bayesian Information Criterion (BIC), but the use of the Akaike or Hannan-Quinn criteria would be similar. Let $L_{jr}$ be the likelihood for each joint at optimized hyperparameters $\theta^x_j$, $K^\rho_j$ and $\sigma^2_j$, when $K^\rho_j$ is constrained to have rank $r$; let $n_j^m$ be the number of observations for the $j$th joint in the $m$th context, and $n \stackrel{\mathrm{def}}{=} \sum_{j,m} n_j^m$ the total number of observations; and let $d_j$ be the dimensionality of $\theta^x_j$. (Footnote 2: This is not due to our approximation; indeed, it can be shown that the rank of $K^\rho_{\mathrm{EM}}$ is upper bounded by that of $K^\rho_0$ even if the exact EM update in eq. 7 has been used.) Since the likelihood of the model factorizes over joints, we have
$$\mathrm{BIC}(r) = -2 \sum_{j=1}^J \log L_{jr} + \Big( \sum_{j=1}^J d_j + \tfrac{J}{2}\, r(2M + 1 - r) + J \Big) \log n, \qquad (8)$$
where $r(2M + 1 - r)/2$ is the number of parameters needed to define an incomplete Cholesky decomposition of rank $r$ for an $M \times M$ matrix. To select the appropriate rank of the $K^\rho_j$s, we compute and compare $\mathrm{BIC}(r)$ for different values of $r$.
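Eq. 8 is cheap to evaluate once the per-joint likelihoods are in hand; a sketch with purely hypothetical numbers (illustrative only, not results from the paper):

```python
import numpy as np

def bic(logliks, d, M, n, r):
    # Eq. 8: J joints; r(2M+1-r)/2 parameters for a rank-r incomplete
    # Cholesky of each M x M matrix K^rho_j, plus one noise variance per joint.
    J = len(logliks)
    n_params = sum(d) + J / 2 * r * (2 * M + 1 - r) + J
    return -2 * sum(logliks) + n_params * np.log(n)

# Hypothetical per-joint log likelihoods for three candidate ranks:
# going from r=2 to r=4 buys a large likelihood gain, r=4 to r=6 almost none.
M, n, d = 15, 24000, [20] * 6
logliks = {2: [-1200.0] * 6, 4: [-900.0] * 6, 6: [-895.0] * 6}
best_r = min(logliks, key=lambda r: bic(logliks[r], d, M, n, r))
```

With these numbers the penalty term outweighs the marginal improvement beyond rank 4, so the criterion settles on the parsimonious model.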
5 Relationships to other work We consider related work first with regard to the inverse dynamics problem, and then with regard to multi-task learning with Gaussian processes. Learning methods for the single-context inverse dynamics problem can be found in e.g. [8], where the locally weighted projection regression (LWPR) method is used. Gaussian process methods for the same problem have also been shown to be effective [7, §2.5; 9]. The LWPR method has been extended to the multi-context situation by Petkos and Vijayakumar [5]. If the inertial parameters $\pi_J^m$ are known for at least 11 contexts, then the estimated torque functions can be used to estimate the underlying $y_{jj'}$s using linear regression, and prediction in a novel context (with limited training data) will depend on estimating the inertial parameters for that context. Assuming the original estimated torque functions are imperfect, having more than 11 models for distinct known inertial parameters will improve load estimation. If the inertial parameters are unknown, the novel torque function can still be represented as a linear combination of a set of 11 linearly independent torque functions, and so one can estimate the inverse dynamics in a novel context by linear regression on those estimated functions. In contrast to the known case, however, no more than 11 models can be used [5, §V]. Another difference between known and unknown parameters is that in the former case the resulting $\pi_J^m$s are interpretable, while in the latter there is ambiguity due to the $A_j$s in eq. 3. Comparing our approach with [5], we note that: (a) their approach does not exploit the knowledge that the torque functions for the different contexts are known to share latent functions as in eq. 2, and thus it may be useful to learn the $M$ inverse dynamics models jointly.
This is expected to be particularly advantageous when the data for each task explores rather different portions of $x$-space; (b) rather than relying on least-squares methods (which assume equal error variances everywhere), our fully probabilistic model will propagate uncertainties (co-variances for jointly Gaussian models) automatically; and (c) eq. 6 shows that we need not be limited to exactly 11 reference contexts; either fewer or more than 11 can be used. On the other hand, the LWPR methods will generally give rise to better computational scaling for large data-sets (although see approximate GP methods in [7, ch. 8]), and are perhaps less complex than the method in this paper. Earlier work on multiple model learning, such as Multiple Model Switching and Tuning (MMST) [10], uses an inverse dynamics model and a controller for each context, switching among the models to the one producing the most accurate predictions. The models are linear in the parameters with known non-linear regressor functions of $x$, and the number of models is assumed known. MMST involves very little dynamics learning, estimating only the linear parameters of the models. A closely related approach is Modular Selection and Identification for Control (MOSAIC) [11], which uses inverse dynamics models for control and forward dynamics models for context identification. However, MOSAIC was developed and tested on linear dynamics models, without the insights into how eq. 1 may be used across contexts for more efficient and robust learning and control. Early references to general multi-task learning are [1] and [2]. There has been much work in recent years on MTL with e.g. neural networks, Dirichlet processes, Gaussian processes and support vector machines. Some previous models using GPs are summarized in [3].
An important related work is the semiparametric latent factor model [12], which has a number of latent processes that are linearly combined to produce observable functions, as in eq. 3. However, in our model all the latent functions share a common covariance function, which reduces the number of free parameters and should thus help to reduce over-fitting. Also, we note that the regression experiments of Teh et al. [12, §4] used a forward dynamics problem on a four-jointed robot arm for a single context, with an artificial linear mixing of the four target joint accelerations to produce six response variables. In contrast, we have shown how linear mixing arises naturally in a multi-context inverse dynamics situation.
Figure 3: The four paths p1, p2, p3, p4 (axes x, y, z in metres). The robot base is located at (0, 0, 0).
Table 1: The trajectories at which the training samples for each load are acquired. All loads have training samples from the common trajectory (p2, s3). For the multiple-contexts setting, c15, and hence (p4, s4), is not used for training.
        s1    s2    s3            s4
  p1    c1    c7    c13           c14
  p2    c6    c12   c1 ... c15    c5
  p3    c11   c3    c4            c10
  p4    c2    c8    c9            c15*
Table 2: The average nMSEs of the predictions by LR and sGP, for joint 3 and for both kinds of test sets; training-set sizes are given in the header row. The nMSEs are averaged over loads c1 ... c15.
         average nMSE, interpm sets              average nMSE, extrapm sets
         20        170       1004      4000      20        170       1004      4000
  LR     1×10^−1   7×10^−4   6×10^−4   6×10^−4   5×10^−1   2×10^−1   2×10^−1   2×10^−1
  sGP    1×10^−2   2×10^−7   2×10^−8   3×10^−9   1×10^−1   3×10^−2   4×10^−3   3×10^−3
In relation to the work of Bonilla et al. [3] described in §2.2, we note that the factorization between the inter-task similarity $K^f$ and a common covariance function $k^x$ is an assumption there, whereas we have shown that such a decomposition is inherent in our application.
6 Experiments Data We investigate the effectiveness of our model with the Puma 560 (Figure 1), which has $J = 6$ degrees of freedom. We learn the inverse dynamics models of this robot manipulating $M = 15$ different loads c1, . . . , c15 through four different figure-of-eight paths at four different speeds. The data for our experiments is obtained using a realistic simulation package [13], which models both Coulomb and viscous frictional forces. Figure 3 shows the paths p1, . . . , p4, which are placed at 0.35m, 0.45m, 0.55m and 0.65m along the x-axis, at 0.36m, 0.40m, 0.44m and 0.48m along the z-axis, and rotated about the z-axis by −10°, 0°, 10° and 20°. There are four speeds s1, . . . , s4, finishing a path in 20s, 15s, 10s and 5s respectively. In general, loads can have very different physical characteristics; in our case, this is achieved by representing each load as a cuboid with differing dimensions and mass, and attaching each load rigidly to a random point at the end-effector. The masses range evenly from 0.2kg for c1 to 3.0kg for c15; details of the other parameters are omitted due to space constraints. For each load cm, 4000 data points are sampled at regular intervals along the path for each path-speed (trajectory) combination (p·, s·). Each sample is the pair (t, x), where $t \in \mathbb{R}^J$ are the observed torques at the joints, and $x \in \mathbb{R}^{3J}$ are the joint angles, velocities and accelerations. This set of data is partitioned into train and test sets in the manner described below. Acquiring training data combinatorially by sampling for every possible load-trajectory pair may be prohibitively expensive. One may imagine, however, that training data for the handling of a load can be obtained along a fixed reference trajectory Tr for calibration purposes, and also along a trajectory typical for that load, say Tm for the mth load.
Thus, for each load, 2000 random training samples are acquired at a common reference trajectory Tr = (p2, s3), and an additional 2000 random training samples are acquired at a trajectory unique to each load; Table 1 gives the combinations. Therefore each load has a training set of 4000 samples, but acquired on only two different trajectories. Following [14], two kinds of test sets are used to assess our models for (a) control along a repeated trajectory (which is of practical interest in industry), and (b) control along arbitrary trajectories (which is of general interest to roboticists). The test for (a) assesses the accuracy of torque predictions for staying within the trajectories that were used for training. In this case, the test set for load cm, denoted by interpm for interpolation, consists of the rest of the samples from Tr and Tm that are not used for training. The test for (b) assesses the accuracy also for extrapolation to trajectories not sampled for training. The test set for this, denoted by extrapm, consists of all the samples that are not training samples for cm. In addition, we consider a data-poor scenario, and investigate the quality of the models using randomly selected subsets of the training data. The sizes of these subsets range from 20 to 4000. Results comparing GP with linear regression We first compare learning the inverse dynamics with Bayesian linear regression (LR) to learning with single-task Gaussian processes (sGP). For each context and each joint, we train an LR model and an sGP model with the corresponding training data separately. For LR, the covariates are $(x, \mathrm{sgn}(\dot q), 1)$, where $\mathrm{sgn}(\cdot)$ is the component-wise signum of its arguments; the regression coefficients $\beta$ and noise variance $\sigma^2$ are given a broad normal-inverse-gamma prior $p(\beta, \sigma^2) \equiv \mathcal{N}(\beta \mid 0, \sigma^2 \cdot 10^8 I)\,\mathrm{IG}(\sigma^2 \mid 1, 1)$, though note that the mean predictions do not depend on the parameters of the inverse-gamma prior on $\sigma^2$.
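With such a broad prior, the LR baseline's mean predictions reduce to ridge regression with a vanishing penalty; a sketch of the feature map and posterior-mean coefficients (our own illustrative code, not the authors'):

```python
import numpy as np

def lr_features(q, qdot, qddot):
    # Covariates (x, sgn(qdot), 1) with x = (q, qdot, qddot).
    x = np.concatenate([q, qdot, qddot], axis=1)
    return np.concatenate([x, np.sign(qdot), np.ones((len(x), 1))], axis=1)

def lr_coefs(Phi, t, v0=1e8):
    # Posterior mean of beta under the N(0, sigma^2 * v0 * I) slab:
    # equivalent to ridge regression with penalty 1 / v0.
    return np.linalg.solve(Phi.T @ Phi + np.eye(Phi.shape[1]) / v0, Phi.T @ t)

rng = np.random.default_rng(4)
n, J = 200, 2
q, qdot, qddot = (rng.normal(size=(n, J)) for _ in range(3))
Phi = lr_features(q, qdot, qddot)        # shape (n, 3J + J + 1)
beta_true = rng.normal(size=Phi.shape[1])
beta = lr_coefs(Phi, Phi @ beta_true)    # noise-free data recovers beta_true
assert np.allclose(beta, beta_true, atol=1e-5)
```

The $\mathrm{sgn}(\dot q)$ columns give the linear model a crude handle on Coulomb friction, which switches sign with the direction of joint motion.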
The covariance function of each sGP model is a sum of an inhomogeneous linear kernel on $(x, \mathrm{sgn}(\dot q))$, a squared-exponential kernel on $x$, and an independent noise component [7, §4.2], with the first two using the automatic relevance determination parameterization [7, §5.1]. The hyperparameters of sGP are initialized by giving equal weightings among the covariates and among the components of the covariance function, and then learnt by optimizing the marginal likelihood independently for each context and each joint. The trained LR and sGP models are used to predict torques for the interpm and extrapm data sets. For each test set, the normalized mean square error (nMSE) of the predictions is computed by dividing the MSE by the variance of the test data. The nMSEs are then averaged over the 15 contexts for the interpm and extrapm tests. Table 2 shows how the averages for joint 3 vary with the number of training samples. Similar relative results are obtained for the other joints. The results show that sGP outperforms LR for both test cases. As one would expect, the errors of LR level off early, at around 200 training samples, while the quality of predictions by sGP continues to improve with training sample size, especially so for the interpm sets. Both sGP and LR do reasonably well on the interpm sets, but not so well on the extrapm sets. This suggests that learning from multiple contexts which have training data from different parts of the trajectory space will be advantageous. Results for multi-task GP We now investigate the merit of using MTL, using the training data tabulated in Table 1 for loads c1, . . . , c14. We use $n$ to denote the number of observed torques for each joint totalled across the 14 contexts. Note that trajectory (p4, s4) is entirely unobserved during learning, but is included in the extrapm sets.
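For reference, the nMSE used throughout is just the MSE scaled by the variance of the test targets, so a model that predicts the test-set mean scores exactly 1.0 (illustrative code, ours):

```python
import numpy as np

def nmse(t_true, t_pred):
    # Normalized mean squared error: MSE / variance of the test targets.
    return np.mean((t_true - t_pred) ** 2) / np.var(t_true)

t = np.array([1.0, 2.0, 3.0, 4.0])
assert nmse(t, t) == 0.0                                  # perfect predictions
assert abs(nmse(t, np.full_like(t, t.mean())) - 1.0) < 1e-12   # mean baseline
```

This normalization is what makes the per-joint errors in Table 2 and Figure 4 comparable across test sets with very different torque scales.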
We learn the hyperparameters of a multi-task GP model (mGP) for each joint by optimizing the marginal likelihood for all training data (accumulated across contexts) for that joint, as discussed in §3, using the same kernel and parameterization as for the sGP. This is done for ranks 2, 4, 5, 6, 8 and 10. Finally, a common rank $r$ for all the joints is chosen using the selection criterion given in §4. We denote the selected set of mGP models by mGP-BIC. In addition to comparing with sGP, we also compare mGP-BIC with two other naïve schemes: (a) iGP, a collection of independent GPs for the contexts, but sharing the kernel parameters of $k^x_j$ among the contexts; and (b) pGP, a single GP for each joint that learns by pooling all training data from all the contexts. The iGP and pGP models can be seen as restrictions of the multi-task GP model, restricting $K^\rho_j$ to the identity matrix $I$ and the matrix of ones $\mathbf{1}\mathbf{1}^T$ respectively. As discussed in §3, the hyperparameters for the mGPs are initialized to either those of pGP or those of iGP during optimization, choosing the one with the higher marginal likelihood. For our data, we find that the choice is mostly iGP; pGP is only chosen for the case of joint 1 and $n < 532$. In addition, the rank chosen by the BIC is $r = 4$ for all cases of $n$, except for $n = 476$ and $n = 1820$, when $r = 5$ is selected instead. Figure 4 gives results of sGP, iGP, pGP and mGP-BIC for both the interpm and extrapm test sets, and for joints 1 and 4. Plots for the other joints are omitted due to space constraints, but they are qualitatively similar to the plots for joint 4. The plots show the average nMSEs over the 14 contexts against $n$. The vertical scales of the plots indicate that extrapolation is at least an order of magnitude harder than interpolation. Since the training data are subsets selected independently for the different values of $n$, the plots reflect the underlying variability in sampling.
Nevertheless, we can see that mGP-BIC performs favorably in almost all cases, and especially so for the extrapolation task. For joint 1, we see a close match between the predictive performances of mGP-BIC and pGP, with mGP-BIC slightly better than pGP for the interpolation task. This is due to the limited variation among observed torques for this joint across the different contexts for the range of end-effector movements investigated here.
Figure 4: Average nMSEs of sGP, iGP, pGP and mGP-BIC against n (on log2 scale), for (a) joint 1, interpm tests; (b) joint 1, extrapm tests; (c) joint 4, interpm tests; (d) joint 4, extrapm tests. Ticks on the x-axes represent specified values of n. The vertical scales of the plots vary; a value above the upper limit of its vertical range is plotted at a nominal value near the top instead.
Therefore it is not surprising that pGP produces good predictions for joint 1. For the other joints, iGP is usually the next best after mGP-BIC. In particular, iGP is better than sGP, showing that (in this case) combining all the data to estimate the parameters of a single common covariance function is better than separating the data to estimate the parameters of 14 covariance functions. 7 Summary We have shown how the structure of the multiple-context inverse dynamics problem maps onto a multi-task GP prior as given in eq. 6, how the corresponding marginal likelihood can be optimized effectively, and how the rank of the $K^\rho_j$s can be chosen. We have demonstrated experimentally that the results of the multi-task GP method (mGP) are generally superior to those of sGP, iGP and pGP. It is therefore advantageous to learn inverse dynamics models jointly using mGP-BIC, especially when each context/task explores different portions of the data space, a common case in dynamics learning.
In future work we would like to investigate whether coupling learning over joints is beneficial. Acknowledgments We thank Sam Roweis for suggesting pGP as a baseline. This work is supported in part by the EU PASCAL2 ICT Programme, and in part by the EU FP6 SENSOPAC project grant to SV and SK. KMAC would also like to thank DSO NL for financial support. References [1] R. Caruana. Multitask Learning. Machine Learning, 28(1), July 1997. [2] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer Academic Publishers, 1998. [3] E. Bonilla, K. M. A. Chai, and C. K. I. Williams. Multi-task Gaussian Process Prediction. NIPS 20, 2008. [4] L. Sciavicco and B. Siciliano. Modelling and Control of Robot Manipulators. Springer, 2000. [5] G. Petkos and S. Vijayakumar. Load estimation and control using learned dynamics models. IROS, 2007. [6] T. P. Minka and R. W. Picard. Learning How to Learn is Learning with Point Sets, 1997. URL http://research.microsoft.com/~minka/papers/point-sets.html. Revised 1999. [7] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. [8] S. Vijayakumar and S. Schaal. LWPR: An O(n) Algorithm for Incremental Real Time Learning in High Dimensional Space. ICML 2000, 2000. [9] D. Nguyen-Tuong, J. Peters, and M. Seeger. Computed torque control with nonparametric regression models. ACC 2008, 2008. [10] M. Kemal Cılız and K. S. Narendra. Adaptive control of robotic manipulators using multiple models and switching. Int. J. Rob. Res., 15(6):592–610, 1996. [11] M. Haruno, D. M. Wolpert, and M. Kawato. MOSAIC Model for Sensorimotor Learning and Control. Neural Comp., 13(10):2201–2220, 2001. [12] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. 10th AISTATS, 2005. [13] P. I. Corke. A robotics toolbox for MATLAB. IEEE Rob. and Auto. Magazine, 3(1):24–32, 1996. [14] E. Burdet and A. Codourey. Evaluation of parametric and nonparametric nonlinear adaptive controllers. Robotica, 16(1):59–73, 1998.
Efficient Inference in Phylogenetic InDel Trees Alexandre Bouchard-Côté† Michael I. Jordan†‡ Dan Klein† Computer Science Division†, Department of Statistics‡ University of California at Berkeley Berkeley, CA 94720 {bouchard,jordan,klein}@cs.berkeley.edu Abstract Accurate and efficient inference in evolutionary trees is a central problem in computational biology. While classical treatments have made unrealistic site independence assumptions, ignoring insertions and deletions, realistic approaches require tracking insertions and deletions along the phylogenetic tree: a challenging and unsolved computational problem. We propose a new ancestry resampling procedure for inference in evolutionary trees. We evaluate our method in two problem domains, multiple sequence alignment and reconstruction of ancestral sequences, and show substantial improvement over the current state of the art. 1 Introduction Phylogenetic analysis plays a significant role in modern biological applications such as ancestral sequence reconstruction and multiple sequence alignment [1, 2, 3]. While insertions and deletions (InDels) of nucleotides or amino acids are an important aspect of phylogenetic inference, they pose formidable computational challenges and they are usually handled with heuristics [4, 5, 6]. Routine application of approximate inference techniques fails because of the intricate nature of the combinatorial space underlying InDel models. Concretely, the models considered in the phylogenetic literature take the form of a tree-shaped graphical model where nodes are string-valued random variables representing a fragment of DNA, RNA or protein of a species. Edges denote evolution from one species to another, with conditional probabilities derived from the stochastic model described in Sec. 2. Usually, only the terminal nodes are observed, while the internal nodes are hidden.
The interpretation is that the sequence at the root is the common ancestor of those at the terminal nodes, and that it subsequently evolved in a branching process following the topology of the tree. We will concentrate on the problem of computing the posterior of these hidden nodes rather than the problem of selecting the topology of the tree; hence we will assume the tree is known or estimated with some other algorithm (a guide tree assumption). This graphical model can be misleading. It only encodes one type of independence relation, those between generations. There is another important structure that can be exploited. Informally, InDel events that operate at the beginning of the sequences should not affect, for instance, those at the end. However, because alignments between the sequences are unknown in practice, it is difficult to exploit this structure in a principled way. In many previous works [4, 5, 6], the following heuristic approach is taken to perform inference on the hidden nodes (refer to Fig. 1): First, a guide tree (d) and a multiple sequence alignment (a) (a transitive alignment between the characters in the sequences of the modern species) are computed using heuristics [7, 8]. Second, the problem is cast into several easy subproblems as follows. For each equivalence class in the multiple sequence alignment (called a site, corresponding to a column in Fig. 1(b)), a new graphical model is created with the same tree structure as the original problem, but where there is exactly one character in each node rather than a string. For nodes with a character in the current equivalence class, the node in this new tree is observed, and the rest of the nodes are considered as unobserved data (Fig. 1(c)).
Figure 1: Comparison of different approaches to phylogenetic modeling: (a,b,c,d) heuristics based on site independence; (e) Single Sequence Resampling; (f) Ancestry Resampling. The boxes denote the structures that can be sampled or integrated out in one step by each method.
Note that the question marks are not the gaps commonly seen in linearized representations of multiple alignments, but rather phantom characters. Finally, each site is assumed independent of the others, so the subproblems can be solved efficiently by running the forward-backward algorithm on each site. This heuristic has several problems, the most important being that it does not allow explicit modeling of insertions and deletions (InDels), which are frequent in real biological data and play an important role in evolution [9]. If InDels are included in the probabilistic model, there is no longer a deterministic notion of site on which independence assumptions can be made. This complicates inference substantially. For instance, in the standard TKF91 model [10], the fastest known algorithm for computing exact posteriors takes time O(2^F N^F), where F is the number of leaves and N is the geometric mean sequence length [11]. Holmes et al. [2] developed an approximate Markov chain Monte Carlo (MCMC) inference procedure for the TKF91 model. Their algorithm proceeds by sampling the entire sequence corresponding to a single species, conditioning on its parent and children (Fig. 1(e)). We will call this type of kernel a Single Sequence Resampling (SSR) move. Unfortunately, chains based exclusively on SSR have performance problems. There are two factors behind these problems.
The first factor is a random walk behavior that arises in tall chains found in large or unbalanced trees [2, 12]: initially, the InDel events resampled at the top of the tree are independent of all the observations. It takes time for the information from the observations to propagate up the tree. The second factor is the computational cost of each SSR move, which is O(N^3) with the TKF91 model and binary trees. For long sequences, this becomes prohibitive, so it is common to use a "maximum deviation pruning strategy" (i.e., putting a bound on the relative positions of characters that mutate from one to the other) to speed things up [12]. We observed that this pruning can substantially hurt the quality of the estimated posterior (see Sec. 4). In this paper, we present a novel MCMC procedure for phylogenetic InDel models that we refer to as Ancestry Resampling (AR). AR addresses both the efficiency and the accuracy problems that arise for SSR. The intuition behind the AR approach is to use an MCMC kernel that combines the advantages of the two approaches described above: like the forward-backward algorithm in the site-independent case, AR always directly conditions on some part of the observed data, but, like SSR, it is capable of resampling the InDel history. This is illustrated in Fig. 1(f). 2 Model For concreteness, we describe the algorithms in the context of the standard TKF91 model [10], but in Sec. 5 we discuss how the ideas extend to other models. We assume that a phylogenetic directed tree topology τ = (V, E) is fixed, where nodes in this tree are string-valued random variables from an alphabet of K characters; K is four for nucleotide sequences and about twenty for amino-acid sequences. Also known is a positive time length t_e associated with each edge e ∈ E. We start the description of the model in the simple case of a single branch of known length t, with a string x at the root and a string y at the leaf.
The model, TKF91, is a string-valued Continuous-Time Markov Chain (CTMC). There is one rate µ for deletion (death in the original TKF terminology) and one rate λ for insertions, which can occur either to the right of one of the existing characters (birth) or to the left of the sequence (immigration). Additionally, there is an independent CTMC substitution process on each character. Fortunately, the TKF91 model has a closed-form solution for the conditional distribution over strings y at the leaf given the string x at the root. The derivation of this conditional distribution is presented in [10] and its form is:
P(a character in x survived and has n descendants in y) = α β^(n−1) (1 − β), for n = 1, 2, ...
P(a character in x died and has n descendants in y) = (1 − α)(1 − γ) for n = 0, and (1 − α) γ β^(n−1) (1 − β) for n = 1, 2, ...
P(immigrants inserted at the left have n descendants in y) = β^n (1 − β), for n = 0, 1, ...
In defining descendants, we count the character itself, its children, grandchildren, etc. Here α, β, γ are functions of t, µ, λ; see [2] for the details. Since we only work with these conditionals, note that the situation resembles that of a standard weighted edit process with a specific, branch-length-dependent structure over insertions and deletions. To go from a single branch to a tree, we simply compose this process. The full generative process works as follows: starting at the root, we generate the first string according to the stationary distribution of TKF91. Then, for each outgoing edge e, we use the known time t_e and the equations above to generate a child string. We continue in preorder recursively. 2.1 Auxiliary variables We now define some auxiliary variables that will be useful in the next section. Between each pair of nodes a, b ∈ V connected by an edge and with respective strings x, y, we define an alignment random variable: its values are bipartite matchings between the characters of the strings x and y.
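These conditionals are (shifted) geometric distributions in the number of descendants. As a quick sanity check, the following sketch (our own illustration, not code from the paper) encodes the three probability mass functions directly and verifies that, for fixed α, β, γ, the survival and death cases together define a proper distribution:

```python
def pmf_survived(n, alpha, beta):
    """P(a character in x survived and has n >= 1 descendants in y)."""
    return alpha * beta ** (n - 1) * (1.0 - beta) if n >= 1 else 0.0

def pmf_died(n, alpha, beta, gamma):
    """P(a character in x died and has n >= 0 descendants in y)."""
    if n == 0:
        return (1.0 - alpha) * (1.0 - gamma)
    return (1.0 - alpha) * gamma * beta ** (n - 1) * (1.0 - beta)

def pmf_immigrants(n, beta):
    """P(immigrants inserted at the left have n >= 0 descendants in y)."""
    return beta ** n * (1.0 - beta)
```

Summing the survived case over n ≥ 1 gives α, and the died case gives (1 − α)(1 − γ) + (1 − α)γ = 1 − α, so the two cases together normalize to 1, as do the immigrant probabilities.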
Links in this alignment denote survival of a character (allowing zero or more substitutions). Note that this alignment is monotonic: if character i in x is linked to character j in y, then a character i′ > i in x can only be unlinked or linked to a character with index j′ > j in y. The random variable that consists of the alignments and the strings for all the edges and nodes in the phylogenetic tree τ will be called a derivation. Note also that a derivation D defines another graph that we will call a derivation graph. Its nodes are the characters of all the strings in the tree. We put an edge between two characters x, y in this graph iff two properties hold. Let a, b ∈ V be the nodes whose strings contain x and y respectively. Then x and y are joined by an edge iff (1) there is an edge between a and b in E and (2) there is a link between x and y in the alignment of the corresponding strings. Examples of derivation graphs are shown in Fig. 2. 3 Efficient inference The approximate inference algorithm we propose, Ancestry Resampling (AR), is based on the Metropolis-Hastings (MH) framework. While the SSR kernel resamples the whole sequence corresponding to a single node, AR works around the difficulties of SSR by jointly resampling a "thin vertical slice" (Fig. 1(f)) of the tree that is composed of a short substring in every node. As we will see, with the right definition of vertical slice, this yields a valid and efficient MH algorithm. 3.1 Ancestry Resampling We will call one of these "thin slices" an ancestry A, and we now discuss what its definition should be. Some care will be needed to ensure irreducibility and reversibility of the sampler.
Figure 2: (a): the simple guide tree used in this example (left) and the corresponding sequences and alignments (right).
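The monotonicity constraint on alignments is easy to state operationally. A minimal sketch (our own illustration, with a hypothetical representation of links as (i, j) index pairs):

```python
def is_monotonic(links):
    """Check that links (pairs (i, j) of positions in parent string x and
    child string y) form a monotonic bipartite matching: each position is
    linked at most once and links never cross."""
    prev_i = prev_j = -1
    for i, j in sorted(links):
        if i <= prev_i or j <= prev_j:
            return False
        prev_i, prev_j = i, j
    return True
```

Sorting by the parent index and requiring both indices to increase strictly rules out crossing links and doubly linked positions in one pass.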
(a,b,c): the definitions of A_0, A_∞, A respectively are shaded (the "selected characters"). (d,e): An example showing the non-reversibility problem with A_∞. We first augment the state of the AR sampler to include the derivation auxiliary variable described in Sec. 2.1. Let D be the current derivation and let x be a substring of one of the terminal nodes, say at node e. We will call x an anchor. The ancestry will depend on both a derivation and an anchor. The overall MH sampler is a mixture of proposal distributions indexed by a set of anchors covering all the characters in the terminal strings. Each proposal resamples a new value of A(D, x) given the terminal nodes, keeping the complement A(D, x)^c frozen. We first let A_0(D, x) be the set of characters connected to some character in x in the derivation graph of D (see Fig. 2(a)). This set A_0(D, x) is not a suitable definition of a vertical slice, but it will be useful to construct the correct one. It is unsuitable for two reasons. First, it does not yield an irreducible chain, as illustrated in the same figure, where nine of the characters of this sample (those inside the dashed curve) will never be resampled, no matter which substring of the terminal node is selected as anchor. Secondly, we would like the vertical slices to be contiguous substrings rather than general subsequences, to ease implementation. We therefore modify the definition recursively as follows (see Fig. 2(b) for an illustration). For i > 0, we say that a character token y is in A_i(D, x) if one of the following conditions holds:
1. y is connected to A_{i−1}(D, x);
2. y appears in a string · · · y′ · · · y · · · y′′ · · · such that both y′ and y′′ are in A_{i−1}(D, x);
3. y appears in a string · · · y′ · · · y · · · such that y′ is in A_{i−1}(D, x) and x is a suffix;
4. y appears in a string · · · y · · · y′ · · · such that y′ is in A_{i−1}(D, x) and x is a prefix.
Then, we define A_∞(D, x) := ∪_{i=0}^{∞} A_i(D, x).
In words, a symbol is in A_∞(D, x) if it is linked to an anchored character through the alignments, or if it is "squeezed" between previously connected characters. Cases 3 and 4 handle the boundaries of strings. With this property, irreducibility could be established under some conditions on the anchors, but it turns out that this definition is still not quite right. With A_∞, the main problem arises when one tries to establish reversibility of the chain. This is illustrated in Fig. 2(d). In this example, the chain first transitions to a new state by altering the circled link. One can see that, with the definition of A_∞(D, x) given above, from the state in Fig. 2(e) the state in Fig. 2(d) is unreachable by the same resampling operator, the reason being that the substring labeled z in the figure belongs to the frozen part of the state if the transition is visited backwards. While there exist MCMC methods that are not based on reversible chains [13], we prefer to take a simpler approach: a variation on our definition solves the issue, informally by taking vertical slices A(D, x) to be the "complement of the ancestry taken on the complement of the anchor." More precisely, if the string at the anchor node e decomposes as x′ x x′′ around the anchor x, we let the resampled section be A(D, x) := (A_∞(D, x′) ∪ A_∞(D, x′′))^c. This creates slightly thicker slices (Fig. 2(c)) but solves the reversibility problem. We will call A(D, x) the ancestry of the anchor x. With this definition, the proposal distribution can be made reversible using an MH acceptance ratio; it is also irreducible. The problem of resampling a single slice decomposes along the tree structure τ, but an unbounded number of InDels could a priori occur inside the thin slice. It may seem at first glance that we are back at our initial problem: sampling from a tree-structured directed graphical model where the support of the space of the nodes is a countably infinite space.
But in fact, we have made progress: the distribution is now concentrated on very short sequences. Indeed, the anchors x can be taken relatively small (we used anchors of length 3 to 5 in our experiments). Another important property to notice is that, given an assignment of the random variable A(D, x), it is possible to compute an unnormalized probability for this assignment efficiently and exactly. The summation over the possible alignments can be done using a standard quadratic dynamic program, known in its max version as the Needleman-Wunsch algorithm [14]. 3.2 Cylindric proposal We now introduce the second idea that makes efficient inference possible: when resampling an ancestry given its complement, rather than allowing all possible strings for the resampled value of A(D, x), we restrict the choices to the set of substitutes that are close to its current value. We formalize closeness as follows: let a_1, a_2 be two values for the ancestry A(D, x). We define the cylindric distance as the maximum over all nodes e of the Levenshtein edit distance between the substrings in a_1 and a_2 at node e. Fix some positive integer m. The proposal distribution considers the substitute ancestries that lie within a ball of radius m centered at the current state in the cylindric metric. The value m = 1 worked well in practice. Here the number of states in the tree-structured dynamic program at each node is polynomial in the lengths of the strings in the current ancestry. A sample can therefore be obtained easily using the observation made above that an unnormalized probability can be computed.¹ Next, we compute the acceptance ratio, i.e.,
min{ 1, [P(a_p) Q(a_c | a_p)] / [P(a_c) Q(a_p | a_c)] },
where a_c, a_p are the current and proposed ancestry values, and Q(a_2 | a_1) is the transition probability of the MH kernel, proportional to P(·) but with support restricted to the cylindric ball centered at a_1.
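A sketch of the cylindric distance and the resulting acceptance decision, under the assumption that an ancestry value is represented as a mapping from tree nodes to substrings (the representation and names are ours, not the authors' implementation):

```python
import random

def levenshtein(a, b):
    """Classic O(|a||b|) edit distance dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def cylindric_distance(a1, a2):
    """Max over tree nodes of the edit distance between the substrings
    the two ancestry values assign to that node (dicts: node -> substring)."""
    return max(levenshtein(a1[v], a2[v]) for v in a1)

def mh_accept(p_cur, p_prop, q_cur_given_prop, q_prop_given_cur, rng=random):
    """Metropolis-Hastings acceptance decision from unnormalized target
    probabilities P and proposal probabilities Q."""
    ratio = (p_prop * q_cur_given_prop) / (p_cur * q_prop_given_cur)
    return rng.random() < min(1.0, ratio)
```

Restricting the proposal to the cylindric ball of radius m = 1 means every substring in the proposed ancestry differs from its current value by at most one edit.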
4 Experiments We consider two tasks: reconstruction of ancestral sequences and prediction of alignments between multiple genetically related proteins. We are interested in comparing the ancestry sampling method (AR) presented in this paper with the Markov kernel used in the previous literature (SSR). 4.1 Reconstruction of ancestral sequences Given a set of genetically related sequences, the reconstruction task is to infer properties of the common ancestor of these modern species. This task has important scientific applications: for instance, in [1], the ratio of G+C nucleotide content of ribosomal RNA sequences was estimated to assess the environmental temperature of the common ancestor of all life forms (this ratio is strongly correlated with the optimal growth temperature of prokaryotes). Just as in the task of topology reconstruction, there are no gold ancestral sequences available to evaluate ancestral sequence reconstruction. For this reason, we take the same approach as in topology reconstruction and perform comparisons on synthetic data [15]. We generated a root node from the DNA alphabet and evolved it down a binary tree of seven nodes. Only the leaves were given to the algorithms (a total of 124010 nucleotides); the hidden nodes were held out. Since our goal in this experiment is to compare inference algorithms rather than methods of estimation, we gave both algorithms the true parameters, i.e., those that were used to generate the data. The task is to predict the sequences at the root node, with error measured using the Levenshtein edit distance l. For both algorithms, we used a standard approximation to minimum Bayes risk decoding to produce the final reconstruction: if s_1, s_2, . . . , s_I are the samples collected up to iteration I, we return the sample s_i minimizing Σ_{j ∈ 1...I} l(s_i, s_j). Fig. 3 (left) shows the error as a function of time for the two algorithms, both implemented efficiently in Java. Although the computational cost of one pass through the data was higher with AR, the AR method proved to be dramatically more effective: after only one pass through the data (345 s), AR already performed better than running SSR for nine hours. Moreover, AR steadily improved its performance as more samples were collected, keeping its error at each iteration to less than half of that of the competitor. Fig. 3 (right) shows the detrimental effect of the maximum deviation heuristic. This experiment was performed under the same setup described in this section. While the maximum deviation heuristic is necessary for SSR to be able to handle the long sequences found in biological datasets, it is not necessary for AR samplers.
¹ What we are using here are actually nested dynamic programs, meaning that the computation of a probability in the outer dynamic program (DP) requires the computation of an inner, simpler DP. While this may seem prohibitive, it is made feasible by designing the sampling kernels so that the inner DP is executed most of the time on small problem instances. We also cached the small-DP cost matrices.
Figure 3: Left: Single Sequence Resampling versus Ancestry Resampling on the sequence reconstruction task. Right: Detrimental effect of a maximum deviation heuristic, which is not needed with AR samplers.
4.2 Protein multiple sequence alignment We also performed experiments on the task of protein multiple sequence alignment, for which the BAliBASE [16] dataset provides a standard benchmark. BAliBASE contains annotations created by biologists using secondary structure elements and other biological cues. Note first that we can get a multiple sequence alignment from an InDel evolutionary model. For a set S of sequences to align, construct a phylogenetic tree such that its terminal leaves coincide with S.
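The minimum Bayes risk approximation can be sketched directly: among the collected samples, return the one minimizing the total edit distance to all samples. This is our own illustration with a textbook Levenshtein dynamic program, not the authors' Java code:

```python
def levenshtein(a, b):
    """Edit distance l(a, b) via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def mbr_decode(samples):
    """Return the collected sample s_i minimizing sum_j l(s_i, s_j)."""
    return min(samples, key=lambda s: sum(levenshtein(s, t) for t in samples))
```

This costs O(I²) distance computations for I samples, which is affordable since only a modest number of samples is retained.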
A multiple sequence alignment can be extracted from the inferred derivation D as follows: deem the amino acids x, y ∈ S aligned iff y ∈ A_0(D, x). The state of the art for multiple sequence alignment systems based on an evolutionary model is Handel [2]. It is based on TKF91 and produces a multiple sequence alignment as described above. The key difference with our approach is that their inference algorithm is based on SSR rather than the AR move that we advocate in this paper. While other heuristic approaches are known to perform better than Handel on this dataset [8, 17], they are not based on explicit evolutionary models. They perform better because they leverage more sophisticated features such as affine gap penalties and hydrophobic core modeling. While these features can be incorporated in our model, we leave this for future work since the topic of this paper is inference.
Figure 4: Left: performance on the ref1 directory of BAliBASE (SSR (Handel): CS 0.63, SP 0.77; AR (this paper): CS 0.77, SP 0.86). Center, right: Column-Score (CS) and Sum-of-Pairs score (SP) as a function of the depth of the generating trees.
We built evolutionary trees using weighbor [7]. We ran each system for the same time on the sequences in the ref1 directory of BAliBASE v.1. Decoding for this experiment was done by picking the sample with the highest likelihood. We report in Fig. 4 (left) the CS and SP scores, the two standard metrics for this task. Both are recall measures on the subset of the alignments that were labeled, called the core blocks; see, e.g., [17] for details. For both metrics, our approach performs better.
In order to investigate where the advantage comes from, we did another multiple alignment experiment, plotting performance as a function of the depth of the trees. If the random walk argument presented in the introduction holds, we would expect the advantage of AR over SSR to increase as the tree gets taller. This prediction is confirmed as illustrated in Fig. 4 (center, right). For short trees, the two algorithms perform equally, SSR beating AR slightly for trees with three nodes, which is not surprising since SSR actually performs exact inference in this tiny topology. However, as the trees get taller, the task becomes more difficult, and only AR maintains good performance. 5 Conclusion We have described a principled inference procedure for InDel trees. We have evaluated its performance against a state-of-the-art statistical alignment procedure and shown its clear superiority. In contrast to heuristics such as Clustalw [8], it can be used both for reconstruction of ancestral sequences and multiple alignment. While our algorithm was described in the context of TKF91, it can be extended to more sophisticated models. Incorporating affine gap penalties and hydrophobic core modeling is of particular interest as they are known to dramatically improve multiple alignment performance [2]. These models typically do not have closed forms for the conditional probabilities, but this could be alleviated by using a discretization of longer branches. This creates tall trees, but as we have seen, AR still performs very well in this setting. References [1] N. Galtier, N. Tourasse, and M. Gouy. A nonhyperthermophilic common ancestor to extant life forms. Science, 283:220–221, 1999. [2] I. Holmes and W. J. Bruno. Evolutionary HMM: a Bayesian approach to multiple alignment. Bioinformatics, 17:803–820, 2001. [3] J. Felsenstein. Inferring Phylogenies. Sinauer Associates, 2003. [4] Z. Yang and B. Rannala. 
Bayesian phylogenetic inference using DNA sequences: A Markov chain Monte Carlo method. Molecular Biology and Evolution, 14:717–724, 1997. [5] B. Mau and M. A. Newton. Phylogenetic inference for binary data on dendrograms using Markov chain Monte Carlo. Journal of Computational and Graphical Statistics, 6:122–131, 1997. [6] S. Li, D. K. Pearl, and H. Doss. Phylogenetic tree construction using Markov chain Monte Carlo. Journal of the American Statistical Association, 95:493–508, 2000. [7] W. J. Bruno, N. D. Socci, and A. L. Halpern. Weighted neighbor joining: A likelihood-based approach to distance-based phylogeny reconstruction. Molecular Biology and Evolution, 17:189–197, 2000. [8] D. G. Higgins and P. M. Sharp. CLUSTAL: a package for performing multiple sequence alignment on a microcomputer. Gene, 73:237–244, 1988. [9] J. L. Thorne, H. Kishino, and J. Felsenstein. Inching toward reality: an improved likelihood model of sequence evolution. Journal of Molecular Evolution, 34:3–16, 1992. [10] J. L. Thorne, H. Kishino, and J. Felsenstein. An evolutionary model for maximum likelihood alignment of DNA sequences. Journal of Molecular Evolution, 33:114–124, 1991. [11] G. A. Lunter, I. Miklós, Y. S. Song, and J. Hein. An efficient algorithm for statistical multiple alignment on arbitrary phylogenetic trees. Journal of Computational Biology, 10:869–889, 2003. [12] A. Bouchard-Côté, P. Liang, D. Klein, and T. L. Griffiths. A probabilistic approach to diachronic phonology. In Proceedings of EMNLP 2007, 2007. [13] P. Diaconis, S. Holmes, and R. M. Neal. Analysis of a non-reversible Markov chain sampler. Technical report, Cornell University, 1997. [14] S. Needleman and C. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48:443–453, 1970. [15] K. St. John, T. Warnow, B. M. E. Moret, and L. Vawter.
Performance study of phylogenetic methods: (unweighted) quartet methods and neighbor-joining. Journal of Algorithms, 48:173–193, 2003. [16] J. Thompson, F. Plewniak, and O. Poch. BAliBASE: A benchmark alignments database for the evaluation of multiple sequence alignment programs. Bioinformatics, 15:87–88, 1999. [17] C. B. Do, M. S. P. Mahabhashyam, M. Brudno, and S. Batzoglou. PROBCONS: Probabilistic consistency-based multiple sequence alignment. Genome Research, 15:330–340, 2005.
Local Gaussian Process Regression for Real Time Online Model Learning and Control Duy Nguyen-Tuong Jan Peters Matthias Seeger Max Planck Institute for Biological Cybernetics Spemannstraße 38, 72076 Tübingen, Germany {duy,jan.peters,matthias.seeger}@tuebingen.mpg.de Abstract Learning in real-time applications, e.g., online approximation of the inverse dynamics model for model-based robot control, requires fast online regression techniques. Inspired by local learning, we propose a method to speed up standard Gaussian process regression (GPR) with local GP models (LGP). The training data is partitioned into local regions, and an individual GP model is trained for each. The prediction for a query point is performed by weighted estimation using nearby local models. Unlike other GP approximations, such as mixtures of experts, we use a distance-based measure for partitioning the data and for weighted prediction. The proposed method achieves online learning and prediction in real time. Comparisons with other non-parametric regression methods show that LGP has higher accuracy than LWPR and performance close to that of standard GPR and ν-SVR. 1 Introduction Precise models of technical systems can be crucial in technical applications. Especially in robot tracking control, only a well-estimated inverse dynamics model can allow both high accuracy and compliant control. For complex robots such as humanoids or light-weight arms, it is often hard to model the system sufficiently well and, thus, modern regression methods offer a viable alternative [7,8]. For most real-time applications, online model learning poses a difficult regression problem due to three constraints: firstly, the learning and prediction process should be very fast (e.g., learning needs to take place at 20-200 Hz and prediction at 200 Hz to 1000 Hz).
Secondly, the learning system needs to be capable of dealing with large amounts of data (i.e., with data arriving at 200 Hz, less than ten minutes of runtime will result in more than a million data points). And, thirdly, the data arrives as a continuous stream, thus the model has to be continuously adapted to new training examples over time. These problems have been addressed by real-time learning methods such as locally weighted projection regression (LWPR) [7,8]. Here, the true function is approximated with local linear functions covering the relevant state space, and online learning becomes computationally feasible due to the low computational demands of local projection regression, which can be performed in real time. The major drawback of LWPR is the required manual tuning of many highly data-dependent metaparameters [15]. Furthermore, for complex data, large numbers of linear models are necessary in order to achieve a competitive approximation. A powerful alternative for accurate function approximation in high-dimensional spaces is Gaussian process regression (GPR) [1]. Since the hyperparameters of a GP model can be adjusted by maximizing the marginal likelihood, GPR requires little effort and is easy and flexible to use. However, the main limitation of GPR is that the computational complexity scales cubically with the number of training examples n. This drawback prevents the use of GPR in applications which need large amounts of training data and require fast computation, e.g., online learning of an inverse dynamics model for model-based robot control. Many attempts have been made to alleviate this problem, for example, (i) sparse Gaussian processes (SGP) [2], and (ii) mixtures of experts (ME) [3, 4]. In SGP, the training data is approximated by a smaller set of so-called inducing inputs [2,5]. Here, the difficulty is to choose an appropriate set of inducing inputs, essentially replacing the full data set [2].
In contrast to SGP, ME divides the input space into smaller subspaces using a gating network, within each of which a Gaussian process expert, i.e., a local Gaussian model, is trained [4,6]. The computational cost is then significantly reduced due to the much smaller number of training examples within a local model. The performance of ME depends largely on how the training data is partitioned and on the choice of an optimal number of local models for a particular data set [4]. In this paper, we combine the basic ideas behind both approaches, i.e., LWPR and GPR, attempting to get as close as possible to the speed of local learning while achieving an accuracy comparable to Gaussian process regression. This results in an approach inspired by [6,8] that uses many local GPs in order to obtain a significant reduction of the computational cost during both the prediction and learning steps, allowing the application to online learning. For partitioning the training data, we use a distance-based measure, where the corresponding hyperparameters are optimized by maximizing the marginal likelihood. The remainder of the paper is organized as follows: first, we give a short review of standard GPR in Section 2. Subsequently, we describe our local Gaussian process models (LGP) approach in Section 3 and discuss how it inherits the advantages of both GPR and LWPR. Furthermore, the learning accuracy and performance of our LGP approach are compared with other important standard methods in Section 4, i.e., LWPR [8], standard GPR [1], sparse online Gaussian process regression (OGP) [5] and ν-support vector regression (ν-SVR) [11]. Finally, our LGP method is evaluated for online learning of the inverse dynamics models of real robots for accurate tracking control in Section 5. Here, online learning is demonstrated by rank-one updates of the local GP models [9]. The tracking task is performed in real time using model-based control [10].
To the best of our knowledge, this is the first time that GPR has been successfully used for high-speed online model learning in real-time control on a physical robot. We present results on a version of the Barrett WAM showing that, with the model learned online using LGP, the tracking accuracy is superior to state-of-the-art model-based methods [10] while remaining fully compliant.

2 Regression with standard GPR

Given a set of n training data points {x_i, y_i}_{i=1}^n, we would like to learn a function f(x_i) transforming the input vector x_i into the target value y_i, given by y_i = f(x_i) + ε_i, where ε_i is Gaussian noise with zero mean and variance σ_n^2 [1]. As a result, the observed targets can also be described by y ~ N(0, K(X, X) + σ_n^2 I), where K(X, X) denotes the covariance matrix. As covariance function, a Gaussian kernel is frequently used [1]:

k(x_p, x_q) = σ_s^2 exp(−(1/2)(x_p − x_q)^T W (x_p − x_q)), (1)

where σ_s^2 denotes the signal variance and W the widths of the Gaussian kernel. The joint distribution of the observed target values and the predicted value for a query point x_* is given by

[y; f(x_*)] ~ N(0, [K(X, X) + σ_n^2 I, k(X, x_*); k(x_*, X), k(x_*, x_*)]). (2)

The conditional distribution yields the predicted mean value f(x_*) with the corresponding variance V(x_*) [1]:

f(x_*) = k_*^T (K + σ_n^2 I)^{−1} y = k_*^T α,  V(x_*) = k(x_*, x_*) − k_*^T (K + σ_n^2 I)^{−1} k_*, (3)

with k_* = k(X, x_*), K = K(X, X), and α denoting the so-called prediction vector. The hyperparameters of a Gaussian process with Gaussian kernel are θ = [σ_n^2, σ_s^2, W], and their optimal values for a particular data set can be derived by maximizing the log marginal likelihood using common optimization procedures, e.g., Quasi-Newton methods [1].

Input: new data point {x, y}.
for k = 1 to number of local models do
    Compute distance to the k-th local model: w_k = exp(−0.5 (x − c_k)^T W (x − c_k))
end for
Take the nearest local model: v = max_k(w_k)
if v > w_gen then
    Insert {x, y} into the nearest local model: X_new = [X, x], y_new = [y, y]
    Update the corresponding center: c_new = mean(X_new)
    Compute the inverse covariance matrix and the prediction vector of the local model:
        K_new = K(X_new, X_new), α_new = (K_new + σ^2 I)^{−1} y_new
else
    Create a new model: c_{k+1} = x, X_{k+1} = [x], y_{k+1} = [y]
    Initialize the new inverse covariance matrix and the new prediction vector.
end if
Algorithm 1: Partitioning of training data and model learning.

Input: query data point x, M.
Determine the M local models nearest to x.
for k = 1 to M do
    Compute distance to the k-th local model: w_k = exp(−0.5 (x − c_k)^T W (x − c_k))
    Compute the local mean using the k-th local model: ȳ_k = k_k^T α_k
end for
Compute the weighted prediction using the M local models: ŷ = Σ_{k=1}^M w_k ȳ_k / Σ_{k=1}^M w_k.
Algorithm 2: Prediction for a query point.

Figure 1: Robot arms used for data generation and evaluation: (a) SARCOS arm, (b) Barrett WAM.

3 Approximation using Local GP Models

The major limitation of GPR is the expensive computation of the inverse matrix (K + σ_n^2 I)^{−1}, which yields a cost of O(n^3). To reduce this computational cost, we cluster the training data into local regions and, subsequently, train the corresponding GP models on these local clusters. The mean prediction for a query point is then made by weighted prediction using the nearby local models. Thus, the algorithm consists of two stages: (i) localization of data, i.e., allocation of new input points and learning of the corresponding local models, and (ii) prediction for a query point.

3.1 Partitioning and Training of Local Models

Clustering of the input data is performed efficiently by considering a distance measure from the input point x to the centers of all local models.
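A minimal runnable sketch of the insertion step of Algorithm 1 is given below. The data structures (a list of dicts with keys 'c', 'X', 'y', 'alpha') are illustrative, and the signal variance is fixed to 1 for brevity; this is a simplified reading of the algorithm, not the authors' implementation.

```python
import numpy as np

def assign_or_create(x, y, models, w_gen, W, sigma_n2):
    """One step of Algorithm 1: insert (x, y) into the nearest local model if
    its activation exceeds w_gen, otherwise create a new model centered at x.
    Illustrative sketch with the signal variance fixed to 1."""
    if models:
        ws = [np.exp(-0.5 * np.sum(W * (x - m['c']) ** 2)) for m in models]
        k = int(np.argmax(ws))
    if not models or ws[k] <= w_gen:
        # all activations too low: create a new local model around x
        models.append({'c': x.copy(), 'X': [x], 'y': [y]})
        k = len(models) - 1
    else:
        # insert into the nearest model and update its center
        m = models[k]
        m['X'].append(x)
        m['y'].append(y)
        m['c'] = np.mean(m['X'], axis=0)
    # recompute the prediction vector alpha of the touched model
    m = models[k]
    X, yv = np.array(m['X']), np.array(m['y'])
    d = X[:, None, :] - X[None, :, :]
    K = np.exp(-0.5 * np.einsum('ijk,k,ijk->ij', d, W, d))
    m['alpha'] = np.linalg.solve(K + sigma_n2 * np.eye(len(X)), yv)
    return models
```

Points far from every existing center (activation below w_gen) spawn a new model, so the number of models grows with the complexity of the visited state space, as described in the text.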
The distance measure w_k is given by the kernel used to learn the local GP models, e.g., the Gaussian kernel

w_k = exp(−(1/2)(x − c_k)^T W (x − c_k)), (4)

where c_k denotes the center of the k-th local model and W is a diagonal matrix representing the kernel widths. It should be noted that we use the same kernel width for computing w_k as for the training of all local GP models as given in Section 2. The kernel width W is obtained by maximizing the log likelihood on a subset of the training data. To do so, we subsample the training data and, subsequently, perform an optimization procedure. During the localization process, a new model with center c_{k+1} is created if all distance measures w_k fall below a limit value w_gen. The new data point x is then set as the new center c_{k+1}. Thus, the number of local models is allowed to increase as the trajectories become more complex. Otherwise, if a new point is assigned to a particular k-th model, the center c_k is updated as the mean of the corresponding local data points. With the newly assigned input point, the inverse covariance matrix of the corresponding local model can be updated. The localization procedure is summarized in Algorithm 1. The main computational cost of this algorithm is O(N^3) for inverting the local covariance matrix, where N denotes the number of data points in a local model. Furthermore, we can control the complexity by limiting the number of data points in a local model. Since the number of local data points increases continuously over time, we comply with this limit by deleting old data points as new ones are included. Insertion and deletion of data points can be decided by evaluating the information gain of the operation. The cost for inverting the local covariance matrix can be further reduced, since once the full inverse matrix has been computed, we only need to update it. The update can be performed efficiently and in a stable manner using a rank-one update [9], which has a complexity of O(N^2).
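The paper performs this update via a rank-one Cholesky update [9]. As a related illustration (not necessarily the exact scheme of [9]), the inverse (K + σ_n^2 I)^{−1} can be grown by one data point in O(N^2) using the standard block-inverse identity:

```python
import numpy as np

def grow_inverse(Ainv, b, c):
    """Given Ainv = (K + sigma_n^2 I)^{-1} for N points, return the inverse of
    the (N+1)x(N+1) matrix obtained by appending kernel column b and diagonal
    entry c (already including the noise term), via the block-inverse identity.
    Costs O(N^2) instead of O(N^3) for a full re-inversion."""
    v = Ainv @ b
    s = c - b @ v                      # Schur complement, positive for PD kernels
    N = len(b)
    new = np.empty((N + 1, N + 1))
    new[:-1, :-1] = Ainv + np.outer(v, v) / s
    new[:-1, -1] = -v / s
    new[-1, :-1] = -v / s
    new[-1, -1] = 1.0 / s
    return new
```

The prediction vector α of the enlarged model is then just the new inverse applied to the extended target vector.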
3.2 Prediction using Local Models

The prediction of the mean value ŷ is performed using weighted averaging over M local predictions ȳ_k for a query point x [8]. The weighted prediction ŷ is then given by ŷ = E{ȳ_k | x} = Σ_{k=1}^M ȳ_k p(k|x). By Bayes' theorem, the probability of model k given x can be expressed as p(k|x) = p(k, x)/Σ_{k=1}^M p(k, x) = w_k/Σ_{k=1}^M w_k. Hence, we have

ŷ = Σ_{k=1}^M w_k ȳ_k / Σ_{k=1}^M w_k. (5)

The probability p(k|x) can be interpreted as a normalized distance of the query point x to the local model k, where the distance measure w_k is given in Equation (4). Thus, each local prediction ȳ_k, determined using Equation (3), is additionally weighted by the distance w_k between the corresponding center c_k and the query point x. The search for the M local models can be done quickly by evaluating the distances between the query point x and all model centers c_k. The prediction procedure is summarized in Algorithm 2.

4 Learning Inverse Dynamics

We have evaluated our algorithm using high-dimensional robot data taken from real robots, e.g., the 7 degree-of-freedom (DoF) anthropomorphic SARCOS master arm and the 7-DoF Barrett whole arm manipulator shown in Figure 1, as well as a physically realistic SL simulation [12]. We compare the learning performance of LGP with the state of the art in non-parametric regression, i.e., LWPR, ν-SVR, OGP and standard GPR, in the context of approximating inverse robot dynamics. For evaluating ν-SVR and GPR, we have employed the libraries [13] and [14].

4.1 Dynamics Learning Accuracy Comparison

For comparing the accuracy of our method in the setting of learning inverse dynamics, we use three data sets: (i) SL simulation data (SARCOS model) as described in [15] (14094 training points, 5560 test points), (ii) data from the SARCOS master arm (13622 training points, 5500 test points) [8], (iii) a data set generated from our Barrett arm (13572 training points, 5000 test points).
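The weighted prediction of Section 3.2 (Algorithm 2, Eq. (5)) can be sketched as follows. Each local model is assumed here to be a dict with center 'c', training inputs 'X' and prediction vector 'alpha'; this structure is illustrative, not the authors' implementation.

```python
import numpy as np

def lgp_predict(x, models, W, M=3):
    """Weighted prediction (Eq. 5): yhat = sum_k w_k*ybar_k / sum_k w_k,
    using the M most activated local models for the query point x."""
    ws = np.array([np.exp(-0.5 * np.sum(W * (x - m['c']) ** 2)) for m in models])
    nearest = np.argsort(ws)[::-1][:M]        # M models closest to x
    num = den = 0.0
    for k in nearest:
        m = models[k]
        X = np.array(m['X'])
        d = x[None, :] - X
        kvec = np.exp(-0.5 * np.einsum('ij,j,ij->i', d, W, d))
        ybar = kvec @ m['alpha']              # local GP mean, Eq. (3)
        num += ws[k] * ybar
        den += ws[k]
    return num / den
```

Because only M small models contribute, the cost per query is O(NM) rather than the O(n) of a full GP, matching the complexity discussion later in the paper.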
Given samples x = [q, q̇, q̈] as input, where q, q̇, q̈ denote the joint angles, velocities and accelerations, and using the corresponding joint torques y = [u] as targets, we have a proper regression problem. For the considered 7-DoF robot arms, we thus have data with 21 input dimensions (for each joint, an angle, a velocity and an acceleration) and 7 targets (a torque for each joint). We learn the robot dynamics model in this 21-dimensional space for each DoF separately, employing LWPR, ν-SVR, GPR, OGP and LGP, respectively. Partitioning of the training examples for LGP can be performed either in the same input space (where the model is learned) or in another space, which has to be physically consistent with the approximated function. In the following, we localize the data depending on the position of the robot. Thus, the partitioning of the training data is performed in a 7-dimensional space (the 7 joint angles). After determining w_k for all k local models in the partitioning space, the input point is assigned to the nearest local model, i.e., the local model with the maximal value of the distance measure w_k.

Figure 2: Approximation error as nMSE for each DoF: (a) approximation error using SL data (SARCOS model), (b) approximation error using SARCOS data, (c) approximation error using Barrett WAM data. The error is computed after prediction on the test sets with simulated data from the SL SARCOS model and real robot data from the Barrett and SARCOS master arms, respectively. In most cases, LGP outperforms LWPR and OGP in learning accuracy while being competitive with ν-SVR and standard GPR. It should be noted that the nMSE depends on the target variances.
Due to the smaller variances in the Barrett data, the corresponding nMSE is on a larger scale compared to SARCOS. Figure 2 shows the normalized mean squared error (nMSE) of the evaluation on the test set for each of the three evaluated scenarios, i.e., the simulated SARCOS arm in (a), the real SARCOS arm in (b) and the Barrett arm in (c). Here, the normalized mean squared error is defined as nMSE = mean squared error / variance of the target. During prediction on the test set using LGP, we take the most activated local models, i.e., the ones closest to the query point.

Figure 3: Average time in milliseconds needed for the prediction of one query point. The computation time is plotted logarithmically with respect to the number of training examples. The time stated is the time required for the prediction of all 7 DoFs. Here, LWPR is the fastest method due to its simple regression models. Compared to global regression methods such as standard GPR and ν-SVR, local GP achieves a significant improvement in terms of prediction time.

It should be noted that the choice of the limit value w_gen during the partitioning step is crucial for the performance of LGP and, unfortunately, is an open parameter. If w_gen is too small, many local models with small numbers of training points will be generated. It turns out that such small local models generalize poorly to unknown data. If w_gen is large, the local models also become large, which increases the computational complexity. Here, the training data is clustered into about 30 local regions, ensuring that each local model has a sufficient number of data points for high accuracy (in practice, roughly a hundred data points per local model suffice) while having sufficiently few that the solution remains feasible in real time (on our current hardware, a Core Duo at 2 GHz, that means fewer than 1000 data points).
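The nMSE used throughout Figure 2 is simply the mean squared error divided by the target variance, which makes errors comparable across targets of different scale:

```python
import numpy as np

def nmse(y_true, y_pred):
    # normalized mean squared error: MSE divided by the variance of the target
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```

A trivial predictor that always outputs the target mean scores nMSE = 1, which gives the values in Figure 2 a natural reference point.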
On average, each local model has approximately 500 training examples. This small number of training inputs enables fast training for each local model, i.e., fast matrix inversion. For estimating the hyperparameters by likelihood optimization, we subsample the training data, which results in a subset of about 1000 data points. Considering the approximation error on the test sets shown in Figure 2(a-c), it can be seen that LGP generalizes well using only a few local models for prediction. In all cases, LGP outperforms LWPR and OGP while being close in learning accuracy to global methods such as GPR and ν-SVR. The mean prediction for GPR is determined according to Equation (3), where we precompute the prediction vector α from the training data. When a query point appears, the kernel vector k_*^T is evaluated for this particular point. The mean prediction then has a cost of O(n) for standard GPR (similarly for ν-SVR) and O(NM) for LGP, where n denotes the total number of training points, M the number of local models and N the number of data points in a local model.

4.2 Comparison of Computation Speed for Prediction

Besides the reduction of the training time (i.e., matrix inversion), the prediction time is also reduced significantly compared to GPR and ν-SVR due to the fact that only a small number of local models in the vicinity of the query point is needed during prediction for LGP. Thus, the prediction time can be controlled by the number of local models. A large number of local models may provide a smooth prediction but, on the other hand, increases the time complexity. The comparison of prediction speed is shown in Figure 3. Here, we train LWPR, ν-SVR, GPR and LGP on 5 different data sets with increasing numbers of training examples (1065, 3726, 7452, 10646 and 14904 data points, respectively). Subsequently, using the trained models, we compute the average time needed to make a prediction for a query point for all 7 DoFs.
For LGP, we take a limited number of local models in the vicinity for prediction, e.g., M = 3. Since our control system requires a minimal prediction rate of 100 Hz (10 ms) in order to ensure system stability, data sets with more than 15000 points are not applicable for standard GPR or ν-SVR due to the high computational demands for prediction. The results show that the computation time requirements of ν-SVR and GPR rise very fast with the size of the training data set, as expected. LWPR remains the best method in terms of computational complexity, increasing only at a very low rate. However, as shown in Figure 3, the cost for LGP is significantly lower than that of ν-SVR and GPR and increases at a much lower rate. In practice, we can also curb the computational demands of single models by deleting old data points as new ones are assigned to the model. As an approach to deleting and inserting data points, we can use the information gain of the corresponding local model as a principled measure. It can be seen from the results that LGP represents a compromise between learning accuracy and computational complexity. For large data sets (e.g., more than 5000 training examples), LGP reduces the prediction cost considerably while keeping good learning performance.

5 Application in Model-based Robot Control

Figure 4: Schematic showing model-based robot control. The learned dynamics model can be updated online using LGP.

In this section, we first use the inverse dynamics models learned in Section 4.1 for a model-based tracking control task [10] in the setting shown in Figure 4. Here, the learned model of the robot is applied for an online prediction of the feedforward torques u_FF given the desired trajectory [q_d, q̇_d, q̈_d]. Subsequently, the model approximated by LGP is used to demonstrate online learning performance. For this demonstration, the local GP models are adapted in real time using rank-one updates.
As shown in Figure 4, the controller command u consists of the feedforward part u_FF and the feedback part u_FB = K_p e + K_v ė, where e = q_d − q denotes the tracking error and K_p, K_v the position and velocity gains, respectively. During the control experiment, we set the gains to very low values in keeping with the aim of compliant control. As a result, the learned model has a stronger effect on computing the predicted torque u_FF and, hence, a better learning performance of each method results in a lower tracking error. For comparison with the learned models, we also compute the feedforward torque using the rigid-body (RB) formulation, which is a common approach in robot control [10].

Figure 5: (a) Tracking error as RMSE on the test trajectory for each DoF with the Barrett WAM, without online learning (RBD, LWPR, ν-SVR, GPR, LGP offline). (b) Tracking error after LGP online learning on the Barrett WAM (LGP offline, GPR, LGP online). The model uncertainty is reduced with online learning using LGP. With online learning, LGP is able to outperform offline-learned models using standard GPR on test trajectories.

The control task is performed in real time on the Barrett WAM, as shown in Figure 1. As the desired trajectory, we generate a test trajectory similar to the one used for learning the inverse dynamics models in Section 4.1. Figure 5(a) shows the tracking errors on the test trajectory for all 7 DoFs, where the error is computed as the root mean squared error (RMSE). Here, LGP provides a competitive control performance compared to GPR while being superior to LWPR and the state-of-the-art rigid-body model. It can be seen that for several DoFs the tracking errors are large, for example for the 5th, 6th and 7th DoF.
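The control law of Figure 4 can be written compactly as follows (an illustrative sketch; gain vectors are assumed diagonal, one entry per DoF):

```python
import numpy as np

def control_torque(q, qdot, q_des, qdot_des, u_ff, Kp, Kv):
    """u = u_FF + K_p e + K_v e_dot with e = q_d - q (Figure 4).
    q, qdot: measured joint angles/velocities; q_des, qdot_des: desired ones;
    u_ff: feedforward torque predicted by the learned inverse dynamics model."""
    e = q_des - q
    e_dot = qdot_des - qdot
    return u_ff + Kp * e + Kv * e_dot
```

With low gains Kp, Kv, the tracking quality is dominated by the feedforward term u_ff, which is why a more accurate learned model directly shows up as a lower RMSE in Figure 5.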
The reason is that for these DoFs the unknown nonlinearities are time-dependent, e.g., the gear drive of the 7th DoF, which cannot be approximated well using just one offline-learned model. Since it is not possible to learn the complete state space from a single data set, online learning is necessary.

5.1 Online Learning of Inverse Dynamics Models with LGP

The ability to adapt the learned inverse dynamics models online with LGP rests on the rank-one update of the local models, which has a complexity of O(N^2) [9]. Since the number of training examples in each local model is limited (500 points on average), the update procedure is fast enough for real-time application. For online learning, the models are updated as shown in Figure 4. To do so, we regularly sample the joint torques u and the corresponding robot trajectories [q, q̇, q̈] online. For the time being, when a new point is inserted, we randomly delete another data point from the local model if the maximal number of data points is reached. The process of inserting and deleting data points can be further improved by considering the information gain (and information loss) of the operation. Figure 5(b) shows the tracking error after online learning with LGP. It can be seen that the errors for each DoF are significantly reduced with online LGP compared to those of the offline-learned models. With online learning, LGP is also able to outperform standard GPR.

6 Conclusion

With LGP, we combine the fast computation of local regression with more accurate regression methods while requiring little tuning effort. LGP achieves higher learning accuracy compared to locally linear methods such as LWPR while having lower computational cost compared to GPR and ν-SVR. The reduced cost allows online model learning with LGP, which is necessary in order to generalize the model to all trajectories.
Model-based tracking control using an online-learned model achieves superior control performance compared to the state-of-the-art method as well as to offline-learned models on unknown trajectories.

References

[1] C. E. Rasmussen and C. K. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press, 2006.
[2] J. Q. Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," Journal of Machine Learning Research, 2005.
[3] V. Tresp, "Mixtures of Gaussian processes," Advances in Neural Information Processing Systems, 2001.
[4] C. E. Rasmussen and Z. Ghahramani, "Infinite mixtures of Gaussian process experts," Advances in Neural Information Processing Systems, 2002.
[5] L. Csato and M. Opper, "Sparse online Gaussian processes," Neural Computation, 2002.
[6] E. Snelson and Z. Ghahramani, "Local and global sparse Gaussian process approximations," Artificial Intelligence and Statistics, 2007.
[7] S. Schaal, C. G. Atkeson, and S. Vijayakumar, "Scalable techniques from nonparametric statistics for real-time robot learning," Applied Intelligence, pp. 49-60, 2002.
[8] S. Vijayakumar, A. D'Souza, and S. Schaal, "Incremental online learning in high dimensions," Neural Computation, 2005.
[9] M. Seeger, "Low rank update for the Cholesky decomposition," Tech. Rep., 2007. [Online]. Available: http://www.kyb.tuebingen.mpg.de/bs/people/seeger/
[10] J. J. Craig, Introduction to Robotics: Mechanics and Control, 3rd ed. Prentice Hall, 2004.
[11] B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. Cambridge, MA: MIT Press, 2002.
[12] S. Schaal, "The SL simulation and real-time control software package," Tech. Rep., 2006. [Online]. Available: http://www-clmc.usc.edu/publications/S/schaal-TRSL.pdf
[13] C.-C. Chang and C.-J. Lin, LIBSVM: a library for support vector machines, 2001, http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[14] M. Seeger, LHOTSE: Toolbox for Adaptive Statistical Models, 2007, http://www.kyb.tuebingen.mpg.de/bs/people/seeger/lhotse/.
[15] D. Nguyen-Tuong, J. Peters, and M. Seeger, "Computed torque control with nonparametric regression models," Proceedings of the 2008 American Control Conference (ACC 2008), 2008.
Estimating vector fields using sparse basis field expansions Stefan Haufe1,2,* Vadim V. Nikulin3,4 Andreas Ziehe1,2 Klaus-Robert Müller1,2,4 Guido Nolte2 1TU Berlin, Dept. of Computer Science, Machine Learning Laboratory, Berlin, Germany 2Fraunhofer Institute FIRST (IDA), Berlin, Germany 3Charité University Medicine, Dept. of Neurology, Campus Benjamin Franklin, Berlin, Germany 4Bernstein Center for Computational Neuroscience, Berlin, Germany * haufe@cs.tu-berlin.de

Abstract

We introduce a novel framework for estimating vector fields using sparse basis field expansions (S-FLEX). The notion of basis fields, which are an extension of scalar basis functions, arises naturally in our framework from a rotational invariance requirement. We consider a regression setting as well as inverse problems. All variants discussed lead to second-order cone programming formulations. While our framework is generally applicable to any type of vector field, in this paper we focus on applying it to the EEG/MEG inverse problem. It is shown that our method yields significantly more precise and neurophysiologically more plausible location and shape estimates of cerebral current sources from EEG/MEG measurements than the state of the art.

1 Introduction

Current machine learning is frequently concerned with the estimation of functions with multivariate output. While in many cases the outputs can be treated as mere collections of scalars (e.g., different color channels in image processing), in some contexts there is a deeper interpretation of them as spatial vectors with a direction and a magnitude. Such "truly" vectorial functions are called vector fields and become manifest, for example, in optical flow fields, electromagnetic fields and wind fields in meteorology. Vector field estimators have to take into account that the numerical representation of a vector depends on the coordinate system it is measured in.
That is, the estimate should be invariant with respect to a rotation of the coordinate system. Let v : R^P → R^Q be a vector field. Mathematically speaking, we seek to approximate v by a field v̂ using empirical measurements. Here we consider two types of measurements. The first type are direct samples (x_n, y_n), x_n ∈ R^P, y_n ∈ R^Q, n = 1, ..., N of v, leading to a regression problem. The second case occurs if only indirect measurements z_m ∈ R, m = 1, ..., M are available, which we assume to be generated by a known linear transformation of the vector field outputs y_n belonging to the nodes x_n, n = 1, ..., N (if the true relation is nonlinear, it is assumed here to be linearized). This kind of estimation problem is known as an inverse problem. Let z = (z_1, ..., z_M)^T denote the vector of indirect measurements, Y = (y_1^T, ..., y_N^T)^T the N × Q matrix of vector field outputs, and vec(Y) a column vector containing the stacked transposed rows of Y. The linear relationship between Y and z can be written as z = F vec(Y) using the forward model F ∈ R^{M×NQ}. As an example of an inverse problem, consider the way humans localize acoustic sources. Here z comprises the signals arriving at the ears, v is the spatial distribution of the sound sources, and F is given by the physical equations of sound propagation. Using information from two ears, humans already do very well at estimating the direction of incoming sounds. By further incorporating prior knowledge, e.g., on the loudness of the sources, v can usually be well approximated. The use of prior knowledge (a.k.a. regularization) is indeed the most effective strategy for solving inverse problems [13], which are inherently ambiguous. Hence, the same mechanisms used to avoid overfitting in, e.g., regression may be applied to cope with the ambiguity of inverse problems. For the estimation of scalar functions, methods that utilize sparse linear combinations of basis functions have gained considerable attention recently (e.g.
the "lasso" [14]). Apart from the computational tractability that comes with the sparsity of the learned model, the possibility of interpreting the estimates in terms of their basis functions is a particularly appealing feature of these methods. While sparse expansions are also desirable in vector field estimation, the lasso and similar methods cannot be used for that purpose, as they break rotational invariance in the output space R^Q. This is easily seen, as sparse methods tend to select different basis functions in each of the Q dimensions. Only few attempts have been made at rotation-invariant sparse vector field expansions so far. In [8] a dense expansion is discussed, which could be modified into a sparse version maintaining rotational invariance. Unfortunately, that method is restricted to approximating curl-free fields. In contrast, we here propose a method that can be used to decompose any vector field. We derive the general framework in Section 2. In Section 3 we apply the (appropriately customized) method to solving the EEG/MEG inverse problem. Finally, we draw a brief conclusion in Section 4.

2 Method

Our model is based on the assumption that v can be well approximated by a linear combination of some basis fields. A basis field is defined here (unlike in [8]) as a vector field in which all output vectors point in the same direction, while the magnitudes are proportional to a scalar (basis) function b : R^P → R. As demonstrated in Fig. 1, this model has an expressive power comparable to a basis function expansion of scalar functions. Given a set (dictionary) of basis functions b_l(x), l = 1, ..., L, the basis field expansion is written as

v(x) = Σ_{l=1}^L c_l b_l(x), (1)

with coefficients c_l ∈ R^Q, l = 1, ..., L to be estimated.
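As a minimal illustration of Eq. (1), a basis field expansion can be evaluated as follows. The Gaussian choice of basis functions anticipates Section 3; all names are illustrative.

```python
import numpy as np

def eval_field(x, centers, sigma, C):
    """Evaluate v(x) = sum_l c_l * b_l(x) for isotropic Gaussian basis
    functions b_l centered at `centers`; C holds one coefficient vector
    c_l in R^Q per row, encoding direction and magnitude of each field."""
    b = np.exp(-0.5 * np.sum((x - centers) ** 2, axis=1) / sigma ** 2)  # (L,)
    return b @ C                                                        # (Q,)
```

Each row of C fixes a single direction for the whole l-th basis field, which is exactly the defining property of a basis field as opposed to a generic multivariate expansion.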
Note that by including one coefficient for each output dimension, both orientations and proportionality factors are learned in this model (the term "basis field" thus refers to a basis function with learned coefficients). In order to select a small set of fields, most of the coefficient vectors c_l have to vanish. This can be accomplished by solving a least-squares problem with an additional lasso-like ℓ1-norm penalty on the coefficients. However, care has to be taken in order to maintain rotational invariance of the solution. We here propose to use a regularizer that imposes sparsity and is invariant with respect to rotations, namely the ℓ1-norm of the magnitudes of the coefficient vectors. Let C = (c_1, ..., c_L)^T ∈ R^{L×Q} contain the coefficients and

B = [b_1(x_1) ... b_L(x_1); ... ; b_1(x_N) ... b_L(x_N)] ∈ R^{N×L} (2)

the basis functions evaluated at the x_n. The parameters are estimated using

Ĉ = arg min_C L(C) + λ R(C), (3)

where R(C) = ||C||_{1,2} = Σ_{l=1}^L ||c_l||_2 is the regularizer (the so-called ℓ_{1,2}-norm of the matrix C), L(C) is the quadratic loss function, defined by L(C) = ||vec(Y − BC)||_2^2 in the regression case and L(C) = ||z − F vec(BC)||_2^2 in the inverse reconstruction case, and λ is a positive constant. In the statistics literature, ℓ_{1,2}-norm regularization is already known as a general mechanism for achieving sparsity of grouped predictors [18]. Besides vector field estimation, this concept has natural applications in, e.g., multiple kernel learning [1] and channel selection for brain-computer interfacing [15]. It has also recently been considered in the general multiple-output setting [17].

Figure 1: A complicated vector field (SUM) as a sum of three basis fields (1-3).

2.1 Rotational Invariance

Rotational invariance, in the sense that the estimates after rotation of the coordinate axes are equal to the rotated estimates, is a desirable property of an estimator. One has to distinguish invariance in input space from invariance in output space.
The former requirement may arise in many estimation settings and can be fulfilled by the choice of appropriate basis functions b_l(x). The latter is specific to vector field estimation and has to be ensured by formulating a rotationally invariant cost function. Our proposed estimator, Eq. (3), is rotationally invariant. This is due to the use of the ℓ2-norm in the output space R^Q, which does not change under rotation, i.e., for an orthogonal matrix R ∈ R^{Q×Q}, R^T R = I:

Σ_{l=1}^L ||R c_l||_2 = Σ_{l=1}^L sqrt(tr(c_l^T R^T R c_l)) = Σ_{l=1}^L ||c_l||_2. (4)

By the same argument, additional regularizers R_*(C) = ||vec(D_* C)||_2^2 (the well-known Tikhonov regularizer) or R_+(C) = ||D_+ C||_{1,2} (promoting sparsity of the linearly transformed vectors) may be introduced without breaking the rotational invariance in R^Q.

2.2 Optimization

Eq. (3) is a convex problem, composed of the quadratic term L(C) and the convex non-differentiable term R(C). It is equivalent to the following program:

Ĉ = arg min_{C,u} Σ_{l=1}^L u_l  s.t.  ||c_l||_2 ≤ u_l, l = 1, ..., L,  and  L(C) ≤ ε, (5)

in which a linear function of the variables is minimized subject to quadratic and second-order cone constraints [6]. The latter constraints are obtained by introducing auxiliary variables u_l ∈ R, l = 1, ..., L, encoding upper bounds on the magnitudes of the coefficient vectors. Problem Eq. (5) is an instance of second-order cone programming (SOCP), a standard class of convex programs for which efficient interior-point based solvers are available. The problem stays inside the SOCP class even if the original formulation is modified in any of the following ways:
• Additional regularizers R_+(C) or R_*(C) are used.
• The quadratic loss function is replaced by a more robust ℓ1-norm based loss (e.g. hinge loss). In the regression case, this loss should be defined based on the magnitude of the residual vector, which leads to a formulation involving the ℓ_{1,2}-norm (and thus additional SOCP constraints).
• Complex basis functions (e.g.
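The paper solves Eq. (5) with interior-point SOCP solvers. Purely as an illustrative alternative (not the authors' method), the regression variant of Eq. (3) can also be attacked by proximal gradient descent, since the ℓ_{1,2} penalty has a closed-form proximal operator (row-wise soft thresholding), which makes the group-sparsity mechanism explicit:

```python
import numpy as np

def prox_l12(C, t):
    """Proximal operator of t * sum_l ||c_l||_2: shrink each row of C toward
    zero by t in Euclidean norm; rows with ||c_l||_2 <= t vanish entirely."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * C

def sflex_regression(B, Y, lam, n_iter=500):
    """Proximal gradient (ISTA) sketch for
    min_C ||Y - B C||_F^2 + lam * ||C||_{1,2} (regression case of Eq. 3)."""
    L = 2.0 * np.linalg.norm(B, 2) ** 2        # Lipschitz constant of the gradient
    C = np.zeros((B.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = 2.0 * B.T @ (B @ C - Y)
        C = prox_l12(C - grad / L, lam / L)
    return C
```

Because the prox shrinks whole rows by their Euclidean norm, entire coefficient vectors c_l are set to zero at once, and the update commutes with any rotation applied to the columns of Y and C, mirroring the invariance argument of Eq. (4).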
Fourier bases or Morlet wavelets) are used. This approach also requires complex coefficients, by which it is then possible not only to optimally scale the basis functions, but also to optimally shift their phase. Similarly, it is possible to reconstruct complex vector fields from complex measurements using real-valued basis functions.

3 Application to the EEG/MEG inverse problem

Vector fields occur, for example, in the form of electrical currents in the brain, which are produced by postsynaptic neuronal processes. Knowledge of the electrical fields during a certain experimental condition allows one to draw conclusions about the locations in which the cognitive processing takes place and is thus of high value for research and medical diagnosis. Invasive measurements allow very local assessment of neuronal activations, but such a procedure in humans is only possible when electrodes are implanted for treatment or diagnosis of neurological diseases, e.g., epilepsy. In the majority of cases, recordings of cortical activity are performed with non-invasive measures such as electro- and magnetoencephalography (EEG and MEG, respectively). The reconstruction of the current density from such measurements is an inverse problem.

3.1 Method specification

In the following, the task is to infer the generating cerebral current density given an EEG measurement $z \in \mathbb{R}^M$. The current density is a vector field $v: \mathbb{R}^3 \to \mathbb{R}^3$ assigning a vectorial current source to each location in the brain. We obtained a realistic head model from high-resolution MRI (magnetic resonance imaging) slices of a human head [4]. Inside the brain, we arranged 2142 nodes in a regular grid of 1 cm distance. The forward mapping $F \in \mathbb{R}^{M \times 2142 \cdot 3}$ from these nodes to the electrodes was constructed according to [9], taking into account the realistic geometry and conductive properties of brain, skull and skin.

Dictionary In most applications the "true" sources are expected to be small in number and spatial extent.
However, many commonly used methods estimate sources that almost cover the whole brain (e.g. [11]). Another group of methods delivers source estimates that are spatially sparse, but usually not rotationally invariant (e.g. [7]); here, often too many sources, scattered around the true sources, are estimated. Both the very smooth and the very sparse estimates are unrealistic from a physiological point of view. Only very recently have approaches capable of achieving a compromise between these two extremes been outlined [16, 3]. To achieve a similar effect, we here propose a sparse basis field expansion using radial basis functions. More specifically, we consider spherical Gaussians

$$b_{n,s}(x) = (2\pi\sigma_s)^{-\frac{3}{2}} \exp\left(-\frac{1}{2}\,\|x - x_n\|_2^2\,\sigma_s^{-2}\right), \quad (6)$$

$s = 1, \ldots, 4$, having spatial standard deviations $\sigma_1 = 0.5$ cm, $\sigma_2 = 1$ cm, $\sigma_3 = 1.5$ cm, $\sigma_4 = 2$ cm and being centered at nodes $x_n$, $n = 1, \ldots, N$ (see Fig. 2 for examples). Using this redundant dictionary, our expectation is that sources of different spatial extent can be reconstructed by selecting the appropriate basis functions. Unlike the approaches taken in [16, 3], this approach does not require an additional hyperparameter for controlling the tradeoff between sparsity and smoothness.

Figure 2: Gaussian basis functions with fixed center and standard deviations 0.5 cm to 2 cm.

Normalization Our $\ell_{1,2}$-norm based regularization is a heuristic for selecting the smallest possible number of basis fields necessary to explain the measurement. Using this approach, however, not only the number of nonzero coefficient vectors but also their magnitudes enter the cost function. It is therefore important to normalize the basis functions in order not to a priori prefer some of them. Let $B_s$ be the $N \times N$ matrix containing the basis functions with standard deviation $\sigma_s$. The large matrix $B = (B_1/\|\mathrm{vec}(B_1)\|_1, \ldots, B_4/\|\mathrm{vec}(B_4)\|_1) \in \mathbb{R}^{N \times 4N}$ is then constructed using the normalized $B_s$. By this means, no length scale is artificially preferred.
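Under these definitions, building the normalized redundant dictionary is a short computation. The sketch below is ours (the helper name `gaussian_dictionary` is not from the paper); it follows Eq. 6 with one basis function per (node, width) pair and the per-width $\ell_1$-normalization just described:

```python
import numpy as np

def gaussian_dictionary(nodes, sigmas):
    """Build B = (B1/||vec(B1)||_1, ...) from spherical Gaussians (Eq. 6),
    one basis function per (node, width); each width block is l1-normalized."""
    nodes = np.asarray(nodes, dtype=float)                        # (N, 3)
    d2 = np.sum((nodes[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    blocks = []
    for s in sigmas:
        Bs = (2 * np.pi * s) ** -1.5 * np.exp(-0.5 * d2 / s ** 2)
        blocks.append(Bs / np.abs(Bs).sum())                      # /||vec(Bs)||_1
    return np.hstack(blocks)                                      # (N, len(sigmas)*N)
```

For the paper's setting one would pass the 2142 grid nodes and `sigmas = [0.5, 1.0, 1.5, 2.0]` (in cm), giving an $N \times 4N$ dictionary.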
An estimation bias is also introduced by the location of the sources. Due to volume conduction, the signal captured at the sensors is much stronger for superficial sources than for deep sources. In [10] the variance estimate $\hat{S} = \bar{F}^T (\bar{F}\bar{F}^T)^{-1} \bar{F} \in \mathbb{R}^{3N \times 3N}$ is derived for the (least-squares) estimated sources, where $\bar{F} = HF$ and $H = I - \mathbf{1}\mathbf{1}^T/\mathbf{1}^T\mathbf{1} \in \mathbb{R}^{M \times M}$. We found that $\hat{S}$ can be used for removing the location bias. This can be done either by penalizing activity at locations with high variance or by penalizing basis functions with high variance in the center. We here employ the former approach, as the latter may be problematic for basis functions with large extent. Using this approach, evaluation of $\hat{v}(x)$ requires knowledge of the forward model for $x$. Therefore, we restrict ourselves here to the nodes $x_n$, $n = 1, \ldots, N$. Let $W_n \in \mathbb{R}^{3 \times 3}$ denote the inverse matrix square root of the part of $\hat{S}$ belonging to node $x_n$. Defining

$$W = \begin{pmatrix} W_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & W_N \end{pmatrix} \in \mathbb{R}^{3N \times 3N}, \quad (7)$$

the coefficients are estimated using $\hat{C} = \arg\min_C \|C\|_{1,2}$ s.t. $\|z - FW\,\mathrm{vec}(BC)\|_2^2 < \varepsilon$. The estimated current density at node $x_n$ is $\hat{v}(x_n) = W_n \sum_{l=1}^{L} \hat{c}_l b_l(x_n)$.

3.2 Experiments

Validation of methods for inverse reconstruction is generally difficult due to the lack of a "ground truth". The measurements $z$ cannot be used in this respect, as the main goal is not to predict the EEG/MEG measurements, but the vector field $v(x)$, as accurately as possible. Therefore, the only way to evaluate inverse methods is to assess their ability to reconstruct known functions. We do this by reconstructing a) simulated current sources and b) sources of real EEG data that are already well localized by other studies. For each EEG measurement, simulated or not, we conduct a 5 × 5 crossvalidation, i.e. we perform 25 inverse reconstructions based on different training sets containing 80% of the electrodes. In each crossvalidation run, we evaluate two criteria.
Most important is the reconstruction error, defined as $C_y = \|\mathrm{vec}(Y)/\|\mathrm{vec}(Y)\|_2 - \mathrm{vec}(\hat{Y}^{tr})/\|\mathrm{vec}(\hat{Y}^{tr})\|_2\|_2$, where $\hat{Y}^{tr}$ are the vector field outputs at nodes $x_n$, $n = 1, \ldots, N$, estimated using only the training set. This criterion can only be evaluated for the simulated data. For real and simulated data, we also evaluate the generalization error, i.e. the error in the prediction of the remaining 20% (the test set) of the EEG measurements. This is defined as $C_z = \|z^{te} - F^{te}\,\mathrm{vec}(\hat{Y}^{tr})\|_2^2$, where $z^{te}$ and $F^{te}$ are the parts of $z$ and $F$ belonging to the test set. We compared the sparse basis field expansion (S-FLEX) approach using Gaussian basis functions (see Section 3.1) to the commonly used approaches of LORETA [11] and the Minimum Current Estimate (MCE) [7], and to the recently proposed Focal Vectorfield Reconstruction (FVR) technique [3]. All three competitors correspond to using unit impulses as basis functions while employing different regularizers. The LORETA solution, e.g., is a Tikhonov-regularized least-squares estimate, while MCE is equivalent to applying the lasso to each dimension separately, yielding current vectors that are biased towards being axes-parallel. We here used a variant of MCE in which the original depth compensation approach was replaced by the approach outlined in Section 3.1. Interestingly, FVR can be interpreted as a special case of S-FLEX employing the rotation-invariant regularizer $R_+(C)$ to enforce both sparsity and smoothness. The tradeoff parameter $\alpha$ of this method was chosen as suggested in [3]. All methods were formulated such that the fitness of the solution was ensured by the constraint $\|z - F\,\mathrm{vec}(\hat{Y}^{tr})\|_2^2 < \varepsilon$. The optimization was carried out using freely available packages for convex programming [12, 2].

Simulated data We simulated current densities in the following way. First, we sampled outputs $y_n$, $n = 1, \ldots, N$, from a multivariate standard normal distribution.
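The two criteria $C_y$ and $C_z$ defined above are one-liners in practice. A minimal sketch follows (function names ours; we use row-major flattening, which must match the column ordering assumed by $F$ when applied to real data):

```python
import numpy as np

def reconstruction_error(Y_true, Y_hat):
    """C_y: distance between the unit-normalized vectorized fields."""
    a = Y_true.ravel() / np.linalg.norm(Y_true)
    b = Y_hat.ravel() / np.linalg.norm(Y_hat)
    return np.linalg.norm(a - b)

def generalization_error(z_test, F_test, Y_hat):
    """C_z: squared error in predicting the held-out electrodes."""
    return np.sum((z_test - F_test @ Y_hat.ravel()) ** 2)
```

Note that $C_y$ compares unit-normalized fields, so it is insensitive to a global rescaling of the estimate and measures shape rather than amplitude agreement.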
The function $(x_n, y_n)$ was then spatially smoothed using a Gaussian lowpass filter with standard deviation 2.5 cm. Finally, each $y_n$ was shortened by the 90th percentile of the magnitudes of all $y_n$, leaving only 10% of the current vectors active. Current densities obtained by this procedure usually feature 2-3 active patches (sources) with small to medium extent and smoothly varying magnitude and orientation (see Fig. 3 for an example). This behaviour was considered consistent with the general belief about the sources. We simulated five densities and computed respective pseudo-measurements for 118 channels using the forward model $F$. As no noise was injected into the system, $\varepsilon$ was set to zero in the following reconstruction.

Real data We recorded 113-channel EEG of one healthy subject (male, 26 years) during electrical median nerve stimulation. The EEG electrodes were positioned according to the international 10-20 system. The exact positions were obtained using a 3D digitizer and mapped onto the surface of the head model. EEG data were recorded with a sampling frequency of 2500 Hz and digitally bandpass-filtered between 15 Hz and 450 Hz. Left and right median nerves were stimulated in separate blocks by applying constant square 0.2 ms current pulses to the respective thenars. Current pulses had intensities above the motor threshold (approx. 9 mA), inducing unintended twitches of the thumbs. The interstimulus interval varied randomly between 500 ms and 700 ms. About 1100 trials were recorded for each hand. Artifactual trials as well as artifactual electrodes were excluded from the analysis. For the remaining data, baseline correction was done based on the mean amplitude in the prestimulus interval (-100 ms to -10 ms). Finally, a single measurement vector was constructed by averaging the EEG amplitudes at 21 ms across 1946 trials (50% left hand, 50% right hand).
By this means, the EEG response to somatosensory input at the hands was captured with a high signal-to-noise ratio (SNR). Based on that, the brain areas representing the left and right hands were to be reconstructed, with $\varepsilon$ set according to the estimated SNR.

3.3 Results

Fig. 3 shows a simulated current density along with reconstructions according to LORETA, MCE, FVR and S-FLEX. From the figure it becomes apparent that LORETA and MCE do not approximate the true current density very well. While the LORETA solution is rather blurry, merging the two true sources, the MCE solution exhibits many spikes, which could easily be misinterpreted as different sources. Note that the strong orientation bias of MCE cannot be seen in Fig. 3, as only dipole amplitudes are plotted. The estimates of FVR and S-FLEX approximately recover the shape of the sources. S-FLEX comes closest to the true shape, as its estimates are less focal than those of FVR. However, S-FLEX still slightly underestimates the extent of the sources. The localization results of the left and right N20 generators are shown in Fig. 4. The solutions of FVR and S-FLEX are almost indistinguishable. Both show activity concentrated in two major patches, one in each contralateral somatosensory cortex. This is in good agreement with the localization of the hand areas reported in the literature (e.g. [5]). LORETA estimates only one large active region over the whole central area, with the maximum lying exactly in between the hand areas. The MCE solution consists of eight spikes scattered across the whole somatosensory area. Tab. 1 shows that S-FLEX generalizes better than its competitors, although not significantly so. More importantly, S-FLEX outperforms its peers in terms of reconstruction accuracy. The distance to the runner-up FVR is, however, larger than expected from Fig. 3.
This is due to the fact that the parameter of FVR controlling the tradeoff between sparsity and smoothness was fixed here to a value promoting "maximally sparse sources which are still smooth". While this might be a good assumption in practice, it was not rewarded in our validation setting: we here explicitly required reconstruction rather than shrinkage of the sources.

            Cy SIM          Cz SIM          Cz REAL
LORETA      1.00 ± 0.01     2.87 ± 0.78     8.18 ± 1.38
FVR         0.955 ± 0.02    1.21 ± 1.00     8.01 ± 1.79
S-FLEX      0.71 ± 0.04     0.952 ± 0.28    7.95 ± 1.84
MCE         1.21 ± 0.01     1.86 ± 0.57     8.13 ± 1.60

Table 1: Ability of LORETA, FVR, S-FLEX and MCE to reconstruct simulated currents (Cy SIM) and generalization performance with respect to the EEG measurements (Cz SIM/REAL). Winning entries (reaching significance) are shown in bold face.

Figure 3: Simulated current density (SIM) and reconstruction according to LORETA, FVR, S-FLEX and MCE. Color encodes current magnitude.

Figure 4: Localization of somatosensory evoked N20 generators according to LORETA, FVR, S-FLEX and MCE. Color encodes current magnitude.

4 Conclusion and Outlook

This paper contributes a novel and general methodology for obtaining sparse decompositions of vector fields. An important ingredient of our framework is the insight that the vector field estimate should be invariant with respect to a rotation of the coordinate system. Interestingly, the latter constraint together with sparsity leads to a second-order cone programming formulation. We have focussed here on solving the EEG/MEG inverse problem, where our proposed S-FLEX approach outperformed the state of the art in approximating the true shape of the current sources. However, other fields might benefit from the use of S-FLEX as well: in meteorology, for example, an improved decomposition of wind fields into their driving components might provide novel insights that could be useful for better weather forecasting.
Acknowledgments

This work was supported in part by the German BMBF grants BCCNB-A4 (FKZ 01GQ0415), BFNTB-A1 (FKZ 01GQ0850) and FaSor (FKZ 16SV2234). We thank Friederike Hohlefeld and Monika Weber for help in preparing the experiment, and Ryota Tomioka for fruitful discussions.

References

[1] F.R. Bach, G.R.G. Lanckriet, and M.I. Jordan. Multiple kernel learning, conic duality and the SMO algorithm. In Proceedings of the Twenty-first International Conference on Machine Learning, 2004.
[2] M. Grant, S. Boyd, and Y. Ye. CVX: Matlab Software for Disciplined Convex Programming, October 2006. http://www.stanford.edu/~boyd/cvx/, Version 1.0RC.
[3] S. Haufe, V.V. Nikulin, A. Ziehe, K.-R. Müller, and G. Nolte. Combining sparsity and rotational invariance in EEG/MEG source reconstruction. NeuroImage, 42(2):726–738, 2008.
[4] C.J. Holmes, R. Hoge, L. Collins, R. Woods, A.W. Toga, and A.C. Evans. Enhancement of MR images using registration for signal averaging. J. Comput. Assist. Tomogr., 22(2):324–333, 1998.
[5] J. Huttunen, S. Komssi, and L. Lauronen. Spatial dynamics of population activities at S1 after median and ulnar nerve stimulation revisited: An MEG study. NeuroImage, 32:1024–1031, 2006.
[6] M.S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Lin. Alg. Appl., 284:193–228, 1998.
[7] K. Matsuura and Y. Okabe. Selective minimum-norm solution of the biomagnetic inverse problem. IEEE Trans. Biomed. Eng., 42:608–615, 1995.
[8] F.A. Mussa-Ivaldi. From basis functions to basis fields: vector field approximation from sparse data. Biol. Cybern., 67:479–489, 1992.
[9] G. Nolte and G. Dassios. Analytic expansion of the EEG lead field for realistic volume conductors. Phys. Med. Biol., 50:3807–3823, 2005.
[10] R.D. Pascual-Marqui. Standardized low-resolution brain electromagnetic tomography (sLORETA): technical details. Meth. Find. Exp. Clin. Pharmacol., 24(1):5–12, 2002.
[11] R.D. Pascual-Marqui, C.M. Michel, and D. Lehmann.
Low resolution electromagnetic tomography: a new method for localizing electrical activity in the brain. Int. J. Psychophysiol., 18:49–65, 1994.
[12] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Method. Softw., 11–12:625–653, 1999.
[13] A. Tarantola. Inverse Problem Theory and Model Parameter Estimation. SIAM, Philadelphia, 2005.
[14] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Stat. Soc. B Meth., 58(1):267–288, 1996.
[15] R. Tomioka and S. Haufe. Combined classification and channel/basis selection with L1-L2 regularization with application to P300 speller system. In Proceedings of the 4th International Brain-Computer Interface Workshop and Training Course 2008. Verlag der Technischen Universität Graz, 2008.
[16] M. Vega-Hernández, E. Martínez-Montes, J.M. Sánchez-Bornot, A. Lage-Castellanos, and P.A. Valdés-Sosa. Penalized least squares methods for solving the EEG inverse problem. Stat. Sinica, 2008. In press.
[17] D.P. Wipf and B.D. Rao. An empirical bayesian strategy for solving the simultaneous sparse approximation problem. IEEE Trans. Signal Proces., 55(7):3704–3716, 2007.
[18] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Roy. Stat. Soc. B Meth., 68(1):49–67, 2006.
Probabilistic detection of short events, with application to critical care monitoring

Norm Aleks, U.C. Berkeley, norm@cs.berkeley.edu
Stuart Russell, U.C. Berkeley, russell@cs.berkeley.edu
Michael G. Madden, National U. of Ireland, Galway, michael.madden@nuigalway.ie
Diane Morabito, U.C. San Francisco, morabitod@neurosurg.ucsf.edu
Kristan Staudenmayer, Stanford University, kristans@stanford.edu
Mitchell Cohen, U.C. San Francisco, mcohen@sfghsurg.ucsf.edu
Geoffrey Manley, U.C. San Francisco, manleyg@neurosurg.ucsf.edu

Abstract

We describe an application of probabilistic modeling and inference technology to the problem of analyzing sensor data in the setting of an intensive care unit (ICU). In particular, we consider the arterial-line blood pressure sensor, which is subject to frequent data artifacts that cause false alarms in the ICU and make the raw data almost useless for automated decision making. The problem is complicated by the fact that the sensor data are averaged over fixed intervals, whereas the events causing data artifacts may occur at any time and often have durations significantly shorter than the data collection interval. We show that careful modeling of the sensor, combined with a general technique for detecting sub-interval events and estimating their duration, enables detection of artifacts and accurate estimation of the underlying blood pressure values. Our model's performance in identifying artifacts is superior to that of two other classifiers and about as good as a physician's.

1 Introduction

The work we report here falls under the general heading of state estimation, i.e., computing the posterior distribution $P(X_t | e_{1:t})$ for the state variables $X$ of a partially observable stochastic system, given a sequence of observations $e_{1:t}$.
The specific setting for our work at the Center for Biomedical Informatics in Critical Care (C-BICC) is an intensive care unit (ICU) at San Francisco General Hospital (SFGH) specializing in traumatic brain injury, part of a major regional trauma center. In this setting, the state variables $X_t$ include aspects of patient state, while the evidence variables $E_t$ include up to 40 continuous streams of sensor data such as blood pressures (systolic/diastolic/mean, arterial and venous), oxygenation of blood, brain, and other tissues, intracranial pressure and temperature, inspired and expired oxygen and CO2, and many other measurements from the mechanical ventilator. A section of data from these sensors is shown in Figure 1(a). It illustrates a number of artifacts, including, in the top traces, sharp deviations in blood pressure due to external interventions in the arterial line; in the middle traces, ubiquitous drop-outs in the venous oxygen level; and in the lower traces, many jagged spikes in measured lung compliance due to coughing. The artifacts cannot be modeled simply as "noise" in the sensor model; many are extended over time (some for as long as 45 minutes) and most exhibit complex patterns of their own. Simple techniques for "cleaning" such data, such as median filtering, fail.

Figure 1: (a) One day's worth of minute-by-minute monitoring data for an ICU patient. (b) Arterial-line blood pressure measurement (labeled components: input to bedside monitor; flush solution (heparinized saline); pressure bag and gauge; transducer; 3-way stopcock with site for zeroing or blood draw; radial artery catheter).

Instead, we follow the general approach suggested by Russell and Norvig (2003), which involves careful generative modeling of sensor state using dynamic Bayesian networks (DBNs). This paper focuses on the arterial-line blood pressure sensor (Figure 1(b)), a key element of the monitoring system. As we describe in Section 2, this sensor is subject to multiple artifacts, including
artificially low or high values due to zeroing, line flushes, or the drawing of blood samples. These artifacts not only complicate the state estimation and diagnosis task; they also corrupt recorded data and cause a large number of false alarms in the ICU, which lead in turn to true alarms being ignored and alarms being turned off (Tsien & Fackler, 1997). By modeling the artifact-generating processes, we hope to be able to infer the true underlying blood pressure even when artifacts occur. To this point, the task described would be an applied Bayesian modeling problem of medium difficulty. What makes it slightly unusual and perhaps of more general interest is the fact that our sensor data are recorded as averages over each minute (our analysis is off-line, for the purpose of making recorded data useable for biomedical research), whereas the events of interest—in this case, re-zeroings, line flushes, and blood draws—can occur at any time and have durations ranging from under 5 seconds to over 100 seconds. Thus, the natural time step for modeling the sensor state transitions might be one second, whereas the measurement interval is much larger. This brings up the question of how a “slow” (one-minute) model might be constructed and how it relates to a “fast” (one-second) model. This is an instance of a very important issue studied in the dynamical systems and chemical kinetics literatures under the heading of separation of time scales (see, e.g., Rao & Arkin, 2003). Fortunately, in our case the problem has a simple, exact solution: Section 3 shows that a one-minute model can be derived efficiently, offline, from the more natural one-second model and gives exactly the same evidence likelihood. The more general problem of handling multiple time scales within DBNs, noted by Aliferis and Cooper (1996), remains open. Section 4 describes the complete model for blood pressure estimation, including artifact models, and Section 5 then evaluates the model on real patient data. 
We show a number of examples of artifacts, their detection, and inference of the underlying state values. We analyze model performance over more than 300 hours of data from 7 patients, containing 228 artifacts. Our results show very high precision and recall rates for event detection; we are able to eliminate over 90% of false alarms for blood pressure while missing fewer than 1% of the true alarms. Our work is not the first to consider the probabilistic analysis of intensive care data. Indeed, one of the best known early Bayes net applications was the ALARM model for patient monitoring under ventilation (Beinlich et al., 1989), although this model had no temporal element. The work most closely related to ours is that of Williams, Quinn, and McIntosh (2005), who apply factorial switching Kalman filters, a particular class of DBNs, to artifact detection in neonatal ICU data. Their (one-second) model is roughly analogous to the models described by Russell and Norvig, using Boolean state variables to represent events that block normal sensor readings. Sieben and Gather (2007) have applied discriminative models (decision forests and, more recently, SVMs) to correction of one-second-resolution heart-rate data. Another important line of work is the MIMIC project, which, like ours, aims to apply model-based methods to the interpretation of ICU data (Heldt et al., 2006).

Figure 2: 1-second (top) and 1-minute-average (bottom) data for systolic/mean/diastolic pressures. On the left, a blood draw and line flush in quick succession. On the right, a zeroing.

2 Blood Pressure Monitoring

Blood pressure informs much of medical thinking and is typically measured continuously in the ICU. The most common ICU blood pressure measurement device is the arterial line, illustrated in Figure 1(b); a catheter placed into one of the patient's small arteries is connected to a pressure transducer whose output is displayed on a bedside monitor.
Because blood flow varies during the cardiac cycle, blood pressure is pulsatile. In medical records, including our data set, blood pressure measurements are summarized in two or three values: systolic blood pressure, which is the maximum reached during the cardiac cycle; diastolic, which is the corresponding minimum; and sometimes the mean. We consider the three common artifact types illustrated in Figure 2: 1) in a blood draw, sensed pressure gradually climbs toward that of the pressure bag, then suddenly returns to the blood pressure when the stopcock is closed, seconds or minutes later; 2) in a line flush, the transducer is exposed to bag pressure for a few seconds; 3) during zeroing, the transducer is exposed to atmospheric pressure (defined as zero). We refer to blood draws and line flushes collectively as "bag events." Figure 2 (top) shows the artifacts using data collected at one-second intervals. However, the data we work with are the one-minute means of the one-second data, as shown in Figure 2 (bottom). A fairly accurate simplification is that each second's reading reflects either the true blood pressure or an artifactual pressure; thus our model for the effect of averaging is that each recorded one-minute datum is a linear function of the true pressure, the artifactual pressure(s), and the fraction of the minute occupied by artifact(s). Using systolic pressure $s$ as an example, for an artifact of length $p$ (as a fraction of the averaging interval) and mean artifact pressure $x$, the apparent pressure is $\bar{s} = px + (1-p)s$. Our DBN model in Section 4 includes summary variables and equations relating the one-minute readings to the true underlying pressures, artifacts' durations, bag and atmospheric pressure, etc.; it can therefore estimate the duration and other characteristics of artifacts that have corrupted the data.
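The mixing equation above admits a direct sketch, and, given a known artifact pressure, it can also be inverted to estimate the fraction of the minute the artifact occupied (function names are ours):

```python
def apparent_pressure(true_p, artifact_p, fraction):
    """One-minute reading when `fraction` of the minute is artifact:
    s_bar = p*x + (1 - p)*s."""
    return fraction * artifact_p + (1.0 - fraction) * true_p

def artifact_fraction(apparent, true_p, artifact_p):
    """Invert the mixing equation to estimate the artifact's duration
    as a fraction of the averaging interval."""
    return (apparent - true_p) / (artifact_p - true_p)
```

For example, a true systolic pressure of 120 mmHg with a 300 mmHg bag event occupying a quarter of the minute yields an apparent reading of 165 mmHg, and the inversion recovers the 0.25 fraction.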
Patterns produced by artifacts in the one-minute data are highly varied, but it turns out (see Section 5) that the detailed modeling pays off in revealing the characteristic relationships that follow from the nature of the corrupting events.

3 Modeling Sub-Interval Events

The data we work with are generated by a combination of physiological processes that vary over timescales of several minutes and artifactual events lasting perhaps only a few seconds. A natural choice would be a "fast" time step for the DBN model, e.g., 1 second: on this timescale, the sensor state variables indicate whether or not an artifactual event is currently in progress. The transition model for these variables indicates the probability at each second that a new event begins and the probability that an event already in progress continues. Assuming for now that there is only one event type, and given a memoryless (geometric) distribution of durations such as we see in Section 5, only two parameters are necessary: $p = P(f_i = 1 \mid f_{i-1} = 1)$ and $q = P(f_i = 1 \mid f_{i-1} = 0)$. Both can be estimated simply by measuring event frequencies and durations.

Figure 3: (left) DBN model showing relationships among the fast event variables $f_i$, interval count variables $G_{Nj}$, and measurement variables $E_{Nj}$. (right) A reduced model with the same distributions for $G_0, G_N, \ldots, G_{Nt}$.

The main drawback of using a fast time step is computational: inference must be carried out over 60 time steps for every one measurement that arrives. Furthermore, much of this inference is pointless given the lack of evidence at all the intervening time steps. We could instead build a model with a "slow" time step of one minute, so that evidence arrives at each time step. The problem here is to determine the structure and parameters of such a model. First, to explain the evidence, we'll need a count variable saying how many seconds of the minute were occupied by events.
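The transition parameters $p$ and $q$ of the fast model can be estimated from labeled one-second data by counting transitions of the binary event indicator; a minimal sketch (the function name is ours):

```python
import numpy as np

def estimate_transitions(f):
    """Estimate p = P(f_i = 1 | f_{i-1} = 1) and q = P(f_i = 1 | f_{i-1} = 0)
    from a binary event-indicator sequence (one entry per fast time step)."""
    f = np.asarray(f)
    prev, curr = f[:-1], f[1:]
    p = curr[prev == 1].mean() if (prev == 1).any() else 0.0
    q = curr[prev == 0].mean() if (prev == 0).any() else 0.0
    return p, q
```

Equivalently, $p$ follows from the mean event duration (a geometric duration has mean $1/(1-p)$ fast steps) and $q$ from how often events begin, which is the "frequencies and durations" route mentioned above.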
It is easy to see that this variable must depend on the corresponding variable one minute earlier: for example, if the preceding minute was fully occupied by a blood draw event, then the blood draw was in progress at the beginning of the current minute, so the current minute is likely to be at least partially occupied by the event. (If there are multiple mutually exclusive event types, then each count variable depends on all the preceding variables.) Each count variable can take on 61 values, which leads to huge conditional distributions summarizing how the preceding 60 seconds could be divided among the various event types. Estimating these seems hopeless. However, as we will now see, CPTs for the slow model need not be estimated or guessed; they can be derived from the fast model. This is the typical situation with separation of time scales: slow-time-scale models are computationally more tractable but can only be constructed by deriving them from a fast-time-scale model. Consider a fast model as shown in Figure 3(a). Let the fast time step be $\Delta$ and a measurement interval be $N\Delta$ (where $N = 60$ in our domain). $f_i = 1$ iff an event is occurring at time $i\Delta$; $G_{Nj} \triangleq \sum_{i=N(j-1)}^{Nj-1} f_i$ counts the number of fast time steps within the $j$th measurement interval during which an event is occurring. The $j$th observed measurement $E_{Nj}$ is determined entirely by $G_{Nj}$; therefore, it suffices to consider the joint distribution over $G_0, G_N, \ldots, G_{Nt}$. To obtain a model containing only variables at the slow intervals, we simply need to sum out the $f_i$ variables other than the ones at interval boundaries. We can do this topologically by a series of arc reversal and node removal operations (Shachter, 1986); a simple proof by induction (omitted) shows that, regardless of the number of fast steps per slow step, we obtain the reduced structure in Figure 3(b). By construction, this model gives the same joint distribution for $G_0, G_N, \ldots, G_{Nt}$.
Importantly, neither $f_{Nj}$ nor $G_{Nj}$ depends on $G_{N(j-1)}$. (Intuitively, the distribution over $G_{Nj}$ is determined by the value of $f$ at the beginning of the interval, independent of $G_{N(j-1)}$, whereas $f_{Nj}$ depends on the count $G_{Nj}$ for the preceding interval because, for example, a high count implies that an event is likely to be in progress at the end of the interval.) To complete the reduced model, we need the conditional distributions $P(G_{Nj} \mid f_{N(j-1)})$ and $P(f_{Nj} \mid f_{N(j-1)}, G_{Nj})$. That is, how many "ones" do we expect in an interval, given the event status at the beginning of the interval, and what is the probability that an event is occurring at the beginning of the next interval, given also the number of ones in the current interval? Given the fast model's parameters $p$ and $q$, these quantities can be calculated offline using dynamic programming: a table is constructed for the variables $f_i$ and $C_i$ for $i$ from 1 up to $N$, where $C_i$ is the number of ones up to $i-1$ and $C_0 = 0$. The recurrences for $f_i$ and $C_i$ are as follows:

$$P(C_i, f_i = 1 \mid f_0) = p\,P(C_{i-1} = C_i - 1, f_{i-1} = 1 \mid f_0) + q\,P(C_{i-1} = C_i, f_{i-1} = 0 \mid f_0) \quad (1)$$
$$P(C_i, f_i = 0 \mid f_0) = (1-p)\,P(C_{i-1} = C_i - 1, f_{i-1} = 1 \mid f_0) + (1-q)\,P(C_{i-1} = C_i, f_{i-1} = 0 \mid f_0) \quad (2)$$

Extracting the required conditional probabilities from the table is straightforward. The table is of size $O(N^2)$, so the total time to compute the table is negligible for any plausible value of $N$. Now we have the following result:

Theorem 1 Given the conditional distributions computed by Equations 1 and 2, the reduced model in Figure 3(b) yields the same distribution for the count sequence $G_0, G_N, \ldots, G_{Nt}$ as the fine-grained model in Figure 3(a).

The conditional distributions that we obtain by dynamic programming have some interesting limit cases. In particular, when events are short compared to measurement intervals and occur frequently, we expect the dependence on $f_{N(j-1)}$ to disappear and the distribution for $G_{Nj}$ to be approximately Gaussian with mean $Nq/(1 - p + q)$.
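Recurrences (1) and (2) are straightforward to implement; in the sketch below (function name ours) the table T[c, f] holds $P(C_N = c, f_N = f \mid f_0)$ and is rolled forward one fast step at a time:

```python
import numpy as np

def count_table(p, q, N, f0):
    """DP table T[c, f] = P(C_N = c, f_N = f | f_0) obtained by rolling
    recurrences (1) and (2) forward N fast steps."""
    T = np.zeros((N + 1, 2))
    T[0, f0] = 1.0                         # C_0 = 0, f_0 given
    for _ in range(N):
        nxt = np.zeros_like(T)
        nxt[1:, 1] += p * T[:-1, 1]        # event continues: count grows
        nxt[:, 1] += q * T[:, 0]           # event starts: count unchanged
        nxt[1:, 0] += (1 - p) * T[:-1, 1]  # event ends: count grows
        nxt[:, 0] += (1 - q) * T[:, 0]     # still no event
        T = nxt
    return T
```

Summing the table over $f$ gives $P(G \mid f_0)$, and normalizing each row gives $P(f_N \mid f_0, G)$, which are exactly the two conditional distributions the reduced model needs.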
When p = q, the fi's become i.i.d. and GNj is exactly binomial: the recurrences compute the binomial coefficients via Pascal's rule. Generalizing the analysis to the case of multiple disjoint event types (i.e., fi takes on more than two values) is mathematically straightforward and the details are omitted. There is, however, a complexity problem as the number of event types increases. The count variables GNj, HNj, and so on at time Nj are all dependent on each other given fN(j−1), and fNj depends on all of them; thus, using the approach given above, the precomputed tables will scale exponentially with the number of event types. This is not a problem in our application, where we do not expect sensors to have more than a few distinct types of "error" state. Furthermore, if each event type occurs independently of the others (except for the mutual exclusion constraint), then the conditional distribution for the count variable of each depends not on the combination of counts for the other types but on the sum of those counts, leading to low-order polynomial growth in the table sizes. The preceding analysis covers only the case in which fi depends just on fi−1, leading to independently occurring events with a geometric length distribution. Constructing models with other length distributions is a well-studied problem in statistics, and most cases can be well approximated with a modest increase in the size of the dynamic programming table. Handling non-independent event occurrence is often more important; for example, blood draws may occur in clusters if multiple samples are required. Such dependencies can be handled by augmenting the state with timer variables, again at modest cost. Before we move on to describe the complete model, it is important to note that a model with a finer time scale than the measurement frequency can provide useful extra information.
By analogy with sub-pixel localization in computer vision, such a model can estimate the time of occurrence of an event within a measurement interval.

4 Combined model

The complete model for blood pressure measurements is shown in Figure 4. It has the same basic structure as the reduced model in Figure 3(b) but extends it in various ways. The evidence variables ENj are just the three reported blood pressure values ObservedDiaBP, ObservedSysBP, and ObservedMeanBP. These reflect, with some Gaussian noise, idealized Apparent values, determined in turn by
• the true time-averaged pressures: TrueDiaBP, TrueSysBP, and TrueMeanBP;
• the total duration of artifacts within the preceding minute (i.e., the GNj variables): BagTime and ZeroTime;
• the average induced pressure to which the transducer is exposed during each event type: BagPressure and ZeroPressure (these have their own slowly varying dynamics).

Figure 4: The blood pressure artifact detection DBN. Gray edges connect nodes within a time slice; black edges are between time slices. "Nodes" without surrounding ovals are deterministic functions included for clarity.

The Apparent variables are deterministic functions of their parents. For example, we have

ApparentDiaBP = (1/N) [ BagTime · BagPressure + ZeroTime · ZeroPressure + (N − BagTime − ZeroTime) · TrueDiaBP ].

The physiological state variables in this model are TrueSystolicFraction (the average portion of each heartbeat spent ejecting blood), TruePulseBP (the peak-to-trough size of the pressure wave generated by each heartbeat), and TrueMeanBP. For simplicity, basic physiologic factors are modeled with random walks weighted toward physiologically sensible values.² The key event variable in the model, corresponding to fNj in Figure 3(b), is EndingValveState. This has three values for the three possible stopcock positions at the one-minute boundary: open to patient, open to bag, or open to air.
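The deterministic Apparent computation is just a time-weighted mixture of the three pressure sources; a minimal sketch (function name and argument order are ours, not the paper's):

```python
def apparent_pressure(true_bp, bag_time, bag_pressure, zero_time, zero_pressure, N=60):
    """Apparent (pre-noise) pressure over one N-second interval: each second
    contributes the bag, zero, or true pressure according to how the interval
    was divided among artifact states."""
    normal_time = N - bag_time - zero_time
    return (bag_time * bag_pressure
            + zero_time * zero_pressure
            + normal_time * true_bp) / N
```

With no artifact time, the apparent value equals the true pressure; a minute fully occupied by a bag flush reports the bag pressure.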
The CPTs for this variable and for its children (at the next time step) BagTime and ZeroTime are the ones computed by the method of Section 3. The CPT for EndingValveState has 3 × 3 × 61 × 61 = 33,489 entries.

5 Experimental Results

To estimate the CPT parameters (P(ft+1 = 1 | ft = 0) and P(ft+1 = 1 | ft = 1)) for the one-second model, and to evaluate the one-minute model's performance, we first needed ground truth for event occurrence and length. By special arrangement we were able to obtain 300 hours of 1 Hz data, in which the artifacts we describe here are obvious to the human eye; one of us (a physician) then tagged each of those data points for artifact presence and type, giving the ground truth. (There were a total of 228 events of various lengths in the 300 hours' data.) With half the annotated data we verified that event durations were indeed approximately geometrically distributed, and estimated the one-second CPT parameters; from those, as described in Section 3, we calculated corresponding one-minute-interval CPTs. Using averaging equivalent to that used by the regular system, we transformed the other half of the high-resolution data into 1-minute average blood pressures with associated artifact-time ground truth. We then used standard particle filtering (Gordon et al., 1993) with 8000 particles to derive posteriors for true blood pressure and the presence and length of each type of artifact at each minute.
For comparison, we also evaluated three other artifact detectors:
• a support vector machine (SVM) using blood pressures at times t, t−1, t−2, and t−3 as its features;
• a deterministic model-based detector, based on the linear-combination model of Section 2, which calculates three estimates of artifact pressure and length, pairwise among the current measured systolic, diastolic, and mean pressures, to explain the current measurements given the assumption that the true blood pressure is that recorded at the most recent minute during which no artifact was detected; it predicts artifact presence if the sum of the estimates' squared distances from their mean is below some threshold. (Because this model's prediction for any particular minute depends on its prediction at the previous minute, its sensitivity and specificity do not vary monotonically with changes in the threshold; the ROC curve shown is of only the undominated points.)
• a physician working only with the one-minute-average data.

²More accurate modeling of the physiology actually improves the accuracy of artifact detection, but this point is explored in a separate paper.

Figure 5: ROC curves for the DBN's performance detecting bag events (left) and zeroing events (right), as compared with an SVM, a deterministic model-based detector, and a physician.

Figure 6: Two days' blood pressure data for one patient, with the hypertension threshold overlaid. Raw data are on the left; on the right are filtering results showing elimination (here) of false declarations of hypertension.

Figure 5 (left) shows results for the detection of bag events. The DBN achieves a true positive rate of 80% with almost no false positives, or a TPR of 90% with 10% false positives. It does less well with zeroing events, as shown in Figure 5 (right), achieving a TPR of nearly 70% with minimal false positives, but beyond that having unacceptable false positive levels.
The physician had an even lower false positive rate for each artifact type, but with a true positive rate of only about 50%; the SVM and deterministic model-based detector both had better-than-chance performance but were clinically useless due to high false positive rates. The model's accuracy in tracking true blood pressure is harder to evaluate because we have no minute-by-minute gold standard. (Arterial blood pressure measurements as we've described them, despite their artifacts, are the gold standard in the ICU. Placing a second arterial line, besides being subject to the same artifacts, also exposes patients to unnecessary infection risk.) However, on a more qualitative level, four physicians in our group have examined many hours of measured and inferred blood pressure traces, a typical example of which is shown in Figure 7, and have nearly always agreed with the inference results. Where the system's inferences are questionable, examining other sensors often helps to reveal whether a pressure change was real or artifactual.

Figure 7: Sensed blood pressure (dark lines) and inferred true blood pressure (lighter bands, representing mean ± 1 SD) across an observed blood draw with following zeroing. The lowest two lines show the inferred fraction of each minute occupied by bag or zero artifact.

6 Conclusions and Further Work

We have applied dynamic Bayesian network modeling to the problem of handling aggregated data with sub-interval artifacts. In preliminary experiments, this model of a typical blood pressure sensor appears quite successful at tracking true blood pressure and identifying and classifying artifacts. Our approach has reduced the need for learning (as distinct from modeling and inference) to the small but crucial role of determining the distribution of event durations. It is interesting that the more straightforward learning approach, the SVM described above, had performance markedly inferior to the generative model's.
Modified to run at 1 Hz, this model could run on-line at the bedside, helping to reduce false alarms. We are currently extending the model to include more sensors and physiological state variables and anticipate further improvements in detection accuracy as a result of combining multiple sensors.

References

Aliferis, C., & Cooper, G. (1996). A structurally and temporally extended Bayesian belief network model: Definitions, properties, and modeling techniques. Proc. Uncertainty in Artificial Intelligence (pp. 28–39).
Beinlich, I., Suermondt, H., Chavez, R., & Cooper, G. (1989). The ALARM monitoring system. Proc. Second European Conference on Artificial Intelligence in Medicine (pp. 247–256).
Gordon, N. J., Salmond, D., & Smith, A. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. Radar and Signal Processing, IEE Proceedings–F, 140, 107–113.
Heldt, T., Long, W., Verghese, G., Szolovits, P., & Mark, R. (2006). Integrating data, models, and reasoning in critical care. Proceedings of the 28th IEEE EMBS International Conference (pp. 350–353).
Rao, C. V., & Arkin, A. P. (2003). Stochastic chemical kinetics and the quasi-steady-state assumption: Application to the Gillespie algorithm. Journal of Chemical Physics, 118.
Russell, S. J., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle River, New Jersey: Prentice-Hall, 2nd edition.
Shachter, R. D. (1986). Evaluating influence diagrams. Operations Research, 34, 871–882.
Sieben, W., & Gather, U. (2007). Classifying alarms in intensive care—analogy to hypothesis testing. Lecture Notes in Computer Science, 130–138.
Tsien, C. L., & Fackler, J. C. (1997). Poor prognosis for existing monitors in the intensive care unit. Critical Care Medicine, 25, 614–619.
Williams, C. K. I., Quinn, J., & McIntosh, N. (2005). Factorial switching Kalman filters for condition monitoring in neonatal intensive care. NIPS. Vancouver, Canada.
Spectral Clustering with Perturbed Data

Ling Huang (Intel Research) ling.huang@intel.com
Donghui Yan (UC Berkeley) dhyan@stat.berkeley.edu
Michael I. Jordan (UC Berkeley) jordan@cs.berkeley.edu
Nina Taft (Intel Research) nina.taft@intel.com

Abstract

Spectral clustering is useful for a wide-ranging set of applications in areas such as biological data analysis, image processing and data mining. However, the computational and/or communication resources required by the method in processing large-scale data are often prohibitively high, and practitioners are often required to perturb the original data in various ways (quantization, downsampling, etc.) before invoking a spectral algorithm. In this paper, we use stochastic perturbation theory to study the effects of data perturbation on the performance of spectral clustering. We show that the error under perturbation of spectral clustering is closely related to the perturbation of the eigenvectors of the Laplacian matrix. From this result we derive approximate upper bounds on the clustering error. We show that this bound is tight empirically across a wide range of problems, suggesting that it can be used in practical settings to determine the amount of data reduction allowed in order to meet a specification of permitted loss in clustering performance.

1 Introduction

A critical problem in machine learning is that of scaling: Algorithms should be effective computationally and statistically as various dimensions of a problem are scaled. One general tool for approaching large-scale problems is that of clustering or partitioning, in essence an appeal to the principle of divide-and-conquer.
However, while the output of a clustering algorithm may yield a set of smaller-scale problems that may be easier to tackle, clustering algorithms can themselves be complex, and large-scale clustering often requires the kinds of preprocessing steps that are invoked for other machine learning algorithms [1], including proto-clustering steps such as quantization, downsampling and compression. Such preprocessing steps also arise in the distributed sensing and distributed computing setting, where communication and storage limitations may preclude transmitting the original data to centralized processors. A number of recent works have begun to tackle the issue of determining the tradeoffs that arise under various “perturbations” of data, including quantization and downsampling [2, 3, 4]. Most of these analyses have been undertaken in the context of well-studied domains such as classification, regression and density estimation, for which there are existing statistical analyses of the effect of noise on performance. Although extrinsic noise differs conceptually from perturbations to data imposed by a data analyst to cope with resource limitations, the mathematical issues arising in the two cases are similar and the analyses of noise have provided a basis for the study of the tradeoffs arising from perturbations. In this paper we focus on spectral clustering, a class of clustering methods that are based on eigendecompositions of affinity, dissimilarity or kernel matrices [5, 6, 7, 8]. These algorithms often outperform traditional clustering algorithms such as the K-means algorithm or hierarchical clustering. To date, however, their impact on real-world, large-scale problems has been limited; in particular, a distributed or “in-network” version of spectral clustering has not yet appeared. Moreover, there has been little work on the statistical analysis of spectral clustering, and thus there is little theory to guide the design of distributed algorithms. 
There is an existing literature on numerical techniques for scaling spectral clustering (including downsampling [9, 10] and the relaxation of precision requirements for the eigenvector computation [7]), but this literature does not provide end-to-end, practical bounds on error rates as a function of data perturbations.

Procedure SpectralClustering(x1, . . . , xn)
Input: n data samples {xi}, i = 1, . . . , n, xi ∈ Rd
Output: Bipartition S and S̄ of the input data
1. Compute the similarity matrix K: Kij = exp(−∥xi − xj∥2 / 2σk2) for all xi, xj
2. Compute the diagonal degree matrix D: Di = Σj Kij
3. Compute the normalized Laplacian matrix: L = I − D−1K
4. Find the second eigenvector v2 of L
5. Obtain the two partitions using v2:
6. S = {[i] : v2i > 0}, S̄ = {[i] : v2i ≤ 0}

Figure 1: A spectral bipartitioning algorithm.

Figure 2: Perturbation analysis: from clustering error to data perturbation error. (The diagram traces the error propagation: the data error σ induces, via Assumption A and Lemma 3 or 4, the similarity matrix error dK; via Lemma 2 and Eqns. (7)–(13), the Laplacian matrix error dL; via Eqns. (5), (6), the eigenvector error ∥˜v2 − v2∥2; and via Proposition 1, the mis-clustering rate η.)

In this paper we present the first end-to-end analysis of the effect of data perturbations on spectral clustering. Our focus is quantization, but our analysis is general and can be used to treat other kinds of data perturbation. Indeed, given that our approach is based on treating perturbations as random variables, we believe that our methods will also prove useful in developing statistical analyses of spectral clustering (although that is not our focus in this paper). The paper is organized as follows. In Section 2, we provide a brief introduction to spectral clustering. Section 3 contains the main results of the paper; specifically we introduce the mis-clustering rate η, and present upper bounds on η due to data perturbations. In Section 4, we present an empirical evaluation of our analyses. Finally, in Section 5 we present our conclusions.
2 Spectral clustering and data perturbation

2.1 Background on spectral clustering algorithms

Given a set of data points {xi}, i = 1, . . . , n, xi ∈ R1×d, and some notion of similarity between all pairs of data points xi and xj, spectral clustering attempts to divide the data points into groups such that points in the same group are similar and points in different groups are dissimilar. The point of departure of a spectral clustering algorithm is a weighted similarity graph G(V, E), where the vertices correspond to data points and the weights correspond to the pairwise similarities. Based on this weighted graph, spectral clustering algorithms form the graph Laplacian and compute an eigendecomposition of this Laplacian [5, 6, 7]. While some algorithms use multiple eigenvectors and find a k-way clustering directly, the most widely studied algorithms form a bipartitioning of the data by thresholding the second eigenvector of the Laplacian (the eigenvector with the second smallest eigenvalue). Larger numbers of clusters are found by applying the bipartitioning algorithm recursively. We present a specific example of a spectral bipartitioning algorithm in Fig. 1.

2.2 Input data perturbation

Let the data matrix X ∈ Rn×d be formed by stacking n data samples in rows. To this data matrix we assume that a perturbation W is applied, such that we obtain a perturbed version ˜X of the original data X. We assume that a spectral clustering algorithm is applied to ˜X, and we wish to compare the results of this clustering with the spectral clustering of X. This analysis captures a number of data perturbation methods, including data filtering, quantization, lossy compression and synopsis-based data approximation [11]. The multi-scale clustering algorithms that use "representative" samples to approximate the original data can be treated using our analysis as well [12].
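The bipartitioning procedure of Fig. 1 is short enough to sketch directly; the following is an illustrative NumPy version (not the authors' implementation), with the kernel bandwidth σk supplied by hand:

```python
import numpy as np

def spectral_bipartition(X, sigma_k=1.0):
    """Spectral bipartitioning as in Fig. 1: Gaussian similarity matrix,
    normalized Laplacian L = I - D^{-1} K, threshold the second eigenvector."""
    # Step 1: K_ij = exp(-||xi - xj||^2 / (2 sigma_k^2)).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma_k ** 2))
    # Steps 2-3: degree matrix and normalized (random-walk) Laplacian.
    D = K.sum(axis=1)
    L = np.eye(len(X)) - K / D[:, None]
    # Step 4: eigenvector with the second-smallest eigenvalue
    # (L is not symmetric, so use the general eigensolver).
    w, V = np.linalg.eig(L)
    order = np.argsort(w.real)
    v2 = V[:, order[1]].real
    # Steps 5-6: threshold at zero.
    return v2 > 0
```

On well-separated data the second eigenvector is close to piecewise constant with opposite signs on the two clusters, so thresholding at zero recovers the partition.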
3 Mis-clustering rate and effects of data perturbation

Let K and L be the similarity and Laplacian matrices on the original data X, and let ˜K and ˜L be those on the perturbed data. We define the mis-clustering rate η as the proportion of samples that have different cluster memberships when computed on the two different versions of the data, X and ˜X. We wish to bound η in terms of the "magnitude" of the error matrix W = ˜X − X, which we now define. We make the following general stochastic assumption on the error matrix W:

A. All elements of the error matrix W are i.i.d. random variables with zero mean, bounded variance σ2 and bounded fourth central moment µ4, and are independent of X.

Remark. (i) Note that we do not make i.i.d. assumptions on the elements of the similarity matrix; rather, our assumption refers to the input data only. (ii) This assumption is distribution free, and captures a wide variety of practical data collection and quantization schemes. (iii) Certain data perturbation schemes may not satisfy the independence assumption. We have not yet conducted an analysis of the robustness of our bounds to lack of independence, but in our empirical work we have found that the bounds are robust to relatively small amounts of correlation.

We aim to produce practically useful bounds on η in terms of σ and the data matrix X. The bounds should be reasonably tight, so that in practice they could be used to determine the degree of perturbation σ given a desired level of clustering performance, or to provide a clustering error guarantee on the original data even though we have access only to its approximate version. Fig. 2 outlines the steps in our theoretical analysis. Briefly, when we perturb the input data (e.g., by filtering, quantization or compression), we introduce a perturbation W to the data which is quantified by σ2. This induces an error dK := ˜K − K in the similarity matrix, and in turn an error dL := ˜L − L in the Laplacian matrix.
This further yields an error in the second eigenvector of the Laplacian matrix, which results in mis-clustering error. Overall, we establish an analytical relationship between the mis-clustering rate η and the data perturbation error σ2, where η is usually monotonically increasing with σ2. Our goal is to allow practitioners to specify a mis-clustering rate η∗ and, by inverting this relationship, to determine the right magnitude of the perturbation σ∗ allowed. That is, our work can provide a practical method to determine the tradeoff between data perturbation and the loss of clustering accuracy due to the use of ˜X instead of X. When the data perturbation can be related to computational or communications savings, then our analysis yields a practical characterization of the overall resource/accuracy tradeoff.

Practical Applications. Consider in particular a clustering task in a distributed networking system that allows an application to specify a desired clustering error C∗ on the distributed data (which is not available to the coordinator). Through a communication protocol similar to that in [4], the coordinator (e.g., network operation center) gets access to the perturbed data ˜X for spectral clustering. The coordinator can compute a clustering error bound C using our method. By setting C ≤ C∗, it determines the tolerable data perturbation error σ∗ and instructs distributed devices to use appropriate numbers of bits to quantize their data. Thus we can provide guarantees on the achieved error, C ≤ C∗, with respect to the original distributed data even with access only to the perturbed data.

3.1 Upper bounding the mis-clustering rate

Little is currently known about the connection between clustering error and perturbations to the Laplacian matrix in the spectral clustering setting. [5] presented an upper bound for the clustering error; however, this bound is usually quite loose and is not viable for practical applications.
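The inversion just described can be sketched generically: given any monotone mapping from perturbation magnitude to a clustering-error bound (here `eta_bound`, a hypothetical placeholder for the bound developed in this section), a binary search recovers the largest admissible σ∗:

```python
def max_perturbation(eta_star, eta_bound, sigma_hi=1.0, tol=1e-6):
    """Largest perturbation sigma whose predicted mis-clustering bound stays
    below eta_star. `eta_bound` is any monotonically increasing function
    sigma -> eta (a stand-in for the analytical bound of this paper)."""
    lo, hi = 0.0, sigma_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eta_bound(mid) <= eta_star:
            lo = mid   # bound satisfied: we can afford more perturbation
        else:
            hi = mid
    return lo
```

The coordinator scenario above would call this with its computed bound and then broadcast the corresponding number of quantization bits.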
In this section we propose a new approach based on a water-filling argument that yields a tighter, practical bound. Let v2 and ˜v2 be the unit-length second eigenvectors of L and ˜L, respectively. We derive a relationship between the mis-clustering rate η and δ2 := ∥˜v2 − v2∥2. The intuition behind our derivation is suggested in Fig. 3. Let a and b denote the sets of components in v2 corresponding to clusters of size k1 and k2, respectively, and similarly for a′ and b′ in the case of ˜v2. If v2 is changed to ˜v2 due to the perturbation, an incorrect clustering happens whenever a component of v2 in set a jumps to set b′, denoted as a → b′, or a component in set b jumps to set a′, denoted as b → a′. The key observation is that each flipping of cluster membership in either a → b′ or b → a′ contributes a fairly large amount to the value of δ2, compared to the short-range drifts in a → a′ or b → b′. Given a fixed value of δ2, the maximum possible number of flippings (i.e., mis-clusterings) is therefore constrained, and this translates into an upper bound for η.

Figure 3: The second eigenvector v2 and its perturbed counterpart ˜v2 (denoted by dashed lines).

Figure 4: An example of the tightness of the upper bound for η in Eq. (1) (Wisconsin Breast Cancer data, across noise perturbations σ; our upper bound is compared with the upper bound of [5] and the observed mis-clustering rate).

We make the following assumptions on the data X and its perturbation:
B1. The components of v2 form two clusters (with respect to the spectral bipartitioning algorithm in Fig. 1). The size of each cluster is comparable to n.
B2. The perturbation is small, with the total number of mis-clusterings m < min(k1, k2), and the components of ˜v2 form two clusters. The size of each cluster is comparable to n.
B3.
The perturbation of individual components of v2 in each set of a → a′, a → b′, b → a′ and b → b′ have identical (not necessarily independent) distributions with bounded second moments, respectively, and they are uncorrelated with the components in v2.

Our perturbation bound can now be stated as follows:

Proposition 1. Under assumptions B1, B2 and B3, the mis-clustering rate η of the spectral bipartitioning algorithm under the perturbation satisfies η ≤ δ2 = ∥˜v2 − v2∥2. If we further assume that all components of ˜v2 − v2 are independent, then

η ≤ (1 + op(1)) E∥˜v2 − v2∥2.   (1)

The proof of the proposition is provided in the Appendix.

Remarks. (i) Assumption B3 was motivated by our empirical work. Although it is difficult to establish general necessary and sufficient conditions for B3 to hold, in the Appendix we present some special cases that allow B3 to be verified a priori. It is also worth noting that B3 appears to hold (approximately) across a range of experiments presented in Section 4. (ii) If we assume piecewise constancy for v2, then we can relax the uncorrelatedness assumption in B3. (iii) Our bound has a different flavor than that obtained in [5]. Although the bound in Theorem 4.3 in [5] works for k-way clustering, it assumes a block-diagonal Laplacian matrix and requires the gap between the kth and (k + 1)th eigenvalues to be greater than 1/2, which is unrealistic in many data sets. In the setting of 2-way spectral clustering and a small perturbation, our bound is much tighter than that derived in [5]; see Fig. 4 in particular.

3.2 Perturbation on the second eigenvector of the Laplacian matrix

We now turn to the relationship between the perturbation of the eigenvectors and that of the matrix itself.
One approach is to simply draw on the classical domain of matrix perturbation theory; in particular, applying Theorem V.2.8 from [13], we have the following bound on the (small) perturbation of the second eigenvector:

∥˜v2 − v2∥ ≤ 4∥dL∥F / (ν − √2 ∥dL∥F),   (2)

where ν is the gap between the second and the third eigenvalue. However, in our experimental evaluation we found that ν can be quite small in some data sets, and in these cases the right-hand side of (2) can be quite large even for a small perturbation. Thus the bound given by (2) is often not useful in practical applications.

Figure 5: Experimental examples of the fidelity of the approximation in Eq. (5), on the Wisconsin Breast Cancer, Waveform and Pen-digits data. We add i.i.d. zero-mean Gaussian noise to the input data with different σ, and we see that the right-hand side (RHS) of (5) approximately upper bounds the left-hand side (LHS).

To derive a more practically useful bound, we begin with a well-known first-order Taylor expansion to compute the perturbation on the second eigenvector of a Laplacian matrix as follows:

˜v2 − v2 = Σ_{j≠2} [(vjᵀ dL v2)/(λ2 − λj)] vj + O(dL2)
        ≈ Σ_{j≠2} [vj/(λ2 − λj)] Σ_{p=1}^n Σ_{q=1}^n vpj vq2 dLpq
        = Σ_{p=1}^n (Σ_{q=1}^n vq2 dLpq) · (Σ_{j≠2} vpj vj/(λ2 − λj))
        = Σ_{p=1}^n βp up,   (3)

where βp = Σ_{q=1}^n vq2 dLpq is a random variable determined by the effect of the perturbation on the Laplacian matrix L, and the vector up = Σ_{j≠2} vpj vj/(λ2 − λj) is a constant determined by the eigendecomposition of the Laplacian matrix L. Then we have

E∥˜v2 − v2∥2 ≈ E∥Σ_{p=1}^n βp up∥2 = Σ_{p=1}^n E∥βp up∥2 + 2 Σ_{i=1}^n Σ_{j=i+1}^n E[(βi ui)ᵀ(βj uj)].
(4)

In our experimental work we have found that for i ≠ j, βi ui is either very weakly correlated with βj uj (i.e., the total sum of all cross terms is typically one or two orders of magnitude less than that of the squared terms), or negatively correlated with βj uj (i.e., the total sum of all cross terms is less than zero). This empirical evidence suggests the following approximate bound:

E∥˜v2 − v2∥2 ≲ Σ_{p=1}^n E(βp2) · ∥up∥2.   (5)

Examples of the fidelity of this approximation for particular data sets are shown in Fig. 5. Finally, E(βp2) is related to dLpq, and can be upper bounded by

E(βp2) = E(Σ_{q=1}^n vq2 dLpq)2 ≤ Σ_{i=1}^n Σ_{j=1}^n [vi2 vj2 · E(dLpi) E(dLpj) + |vi2 vj2| σpi σpj],   (6)

where σpi is the standard deviation of dLpi.

Remark. Through Eqs. (5) and (6), we can bound the squared norm of the perturbation on the second eigenvector in expectation, which in turn bounds the mis-clustering rate. To compute the bound, we need to estimate the first two moments of dL, which we discuss next.

3.3 Perturbation on the Laplacian matrix

Let D be the diagonal matrix with Di = Σj Kij. We define the normalized Laplacian matrix as L = I − D−1K. Letting ∆ = ˜D − D and dK = ˜K − K, we have the following approximation for dL = ˜L − L:

Lemma 2. If the perturbation dK is small compared to K, then

dL = (1 + o(1)) (∆D−2K − D−1dK).   (7)

Then, element-wise, the first two moments of dL can be estimated as

E(dL) ≈ E(∆)D−2K − D−1E(dK),   (8)
E(dL2) ≈ E(∆D−2K ◦ ∆D−2K − 2D−1dK ◦ ∆D−2K + D−1dK ◦ D−1dK)
       = E(∆2)D−4K2 + D−2E(dK2) − 2E(∆dK)D−3 ◦ K,   (9)

where ◦ denotes the element-wise product. The quantities needed to estimate E(dL) and E(dL2) can be obtained from moments and correlations among the elements of the similarity matrix ˜Kij.
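Lemma 2's first-order approximation is easy to sanity-check numerically; the sketch below (using an arbitrary well-conditioned similarity matrix of our own, not one of the paper's data sets) compares Eq. (7) against the exact Laplacian difference:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# An arbitrary symmetric similarity-like matrix with comfortably large degrees,
# and a small symmetric perturbation dK.
K = rng.random((n, n)); K = (K + K.T) / 2 + n * np.eye(n)
dK = 1e-5 * rng.standard_normal((n, n)); dK = (dK + dK.T) / 2

def laplacian(K):
    """Normalized Laplacian L = I - D^{-1} K with D = diag of row sums."""
    D = K.sum(axis=1)
    return np.eye(len(K)) - K / D[:, None]

dL_exact = laplacian(K + dK) - laplacian(K)

# Eq. (7): dL ~ Delta D^{-2} K - D^{-1} dK, where Delta_i = sum_j dK_ij.
D = K.sum(axis=1)
Delta = dK.sum(axis=1)
dL_approx = (Delta / D**2)[:, None] * K - dK / D[:, None]

err = np.abs(dL_exact - dL_approx).max()
```

The residual `err` is second order in dK, so for a perturbation of size 1e-5 it should be many orders of magnitude below the entries of dL itself.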
In particular, we have

E(dKij) = E(˜Kij) − Kij,   E(dKij)2 = E(˜Kij2) − 2Kij E(˜Kij) + Kij2,   (10)
E(∆i) = E(˜Di) − Di,   E(˜Di) = Σ_{j=1}^n E(˜Kij),   E(∆i2) = E(˜Di2) − 2Di·E(˜Di) + Di2,   (11)
E(˜Di2) = E(Σ_{j=1}^n ˜Kij)2 = Σ_{j=1}^n E(˜Kij2) + 2 Σ_{j=1}^n Σ_{q=j+1}^n [E(˜Kij)E(˜Kiq) + ρk ijq σk ij σk iq],   (12)
E(∆dK)ij = E[(˜Di − Di)(˜Kij − Kij)] = E(˜Di ˜Kij) − Di E(˜Kij) − Kij E(∆i)
         = E(˜Kij2) + Σ_{q≠j} [E(˜Kij)E(˜Kiq) + ρk ijq σk ij σk iq] − Di E(˜Kij) − Kij E(∆i),   (13)

where σk ij is the standard deviation of ˜Kij and −1 ≤ ρk ijq ≤ 1 is the correlation coefficient between ˜Kij and ˜Kiq. Estimating all the ρk ijq's would require an intensive effort. For simplicity, we could set ρk ijq to 1 in Eq. (12) and to −1 in Eq. (13), and obtain an upper bound for E(dL2). This bound could optionally be tightened by using a simulation method to estimate the values of ρk ijq. However, in our experimental work we have found that our results are insensitive to the values of ρk ijq, and setting ρk ijq = 0.5 usually achieves good results.

Remark. Eqs. (8)–(13) allow us to estimate (i.e., to upper bound) the first two moments of dL using those of dK, which are computed using Eq. (15) or (16) in Section 3.4.

3.4 Perturbation on the similarity matrix

The similarity matrix ˜K on the perturbed data ˜X is

˜Kij = exp(−∥xi − xj + ǫi − ǫj∥2 / 2σk2),   (14)

where σk is the kernel bandwidth. Then, given data X, the first two moments of dKij = ˜Kij − Kij, the error in the similarity matrix, can be determined by one of the following lemmas.

Lemma 3. Given X, if all components of ǫi and ǫj are i.i.d. Gaussian N(0, σ2), then

E(˜Kij) = Mij(−σ2/σk2),   E(˜Kij2) = Mij(−2σ2/σk2),   (15)

where Mij(t) = exp(λij t/(1 − 2t)) / (1 − 2t)d/2 and λij = ∥xi − xj∥2 / 2σ2.

Figure 6: Synthetic data sets illustrated in two dimensions: (a) Gaussian data, (b) Sin-sep data, (c) Concentric data.
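Lemma 3's closed form (the moment generating function of a noncentral chi-square variable) can be checked by Monte Carlo; a small sketch with arbitrary illustrative values for d, σ and σk (ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, sigma_k = 3, 0.2, 1.0
xi = np.array([0.5, -0.3, 0.1])
xj = np.zeros(d)

# Lemma 3: E[K~_ij] = M_ij(-sigma^2/sigma_k^2), E[K~_ij^2] = M_ij(-2 sigma^2/sigma_k^2),
# with M_ij(t) = exp(lam t / (1 - 2t)) / (1 - 2t)^(d/2), lam = ||xi - xj||^2 / (2 sigma^2).
lam = np.sum((xi - xj) ** 2) / (2 * sigma ** 2)

def M(t):
    return np.exp(lam * t / (1 - 2 * t)) / (1 - 2 * t) ** (d / 2)

# Monte Carlo estimate under i.i.d. Gaussian noise on BOTH xi and xj.
m = 200_000
eps_i = sigma * rng.standard_normal((m, d))
eps_j = sigma * rng.standard_normal((m, d))
Kt = np.exp(-np.sum((xi - xj + eps_i - eps_j) ** 2, axis=1) / (2 * sigma_k ** 2))
mc_mean = Kt.mean()
pred = M(-sigma ** 2 / sigma_k ** 2)
```

With 200,000 samples the Monte Carlo standard error is far below the tolerance used in the check, so agreement between `mc_mean` and `pred` exercises both the noncentrality parameter and the (1 − 2t) factor.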
Lemma 4. Under Assumption A, given X and for large values of the dimension d, the first two moments of ˜Kij can be computed approximately as follows:

E(˜Kij) = Mij(−1/2σk2),   E(˜Kij2) = Mij(−1/σk2),   (16)

where Mij(t) = exp[(λij + 2dσ2)t + (dµ4 + dσ4 + 4σ2λij)t2] and λij = ∥xi − xj∥2.

Remark. (i) Given the data perturbation error σ, kernel bandwidth σk and data X, the first two moments of dKij can be estimated directly using (15) or (16). (ii) Through Eqs. (1)–(16), we have established a relationship between the mis-clustering rate η and the data perturbation magnitude σ. By inverting this relationship (e.g., using binary search), we can determine a σ∗ for a given η∗.

4 Evaluation

In this section we present an empirical evaluation of our analysis on 3 synthetic data sets (see Fig. 6) and 6 real data sets from the UCI repository [14]. The data domains are diverse, including image, medicine, agriculture, etc., and the different data sets impose different difficulty levels on the underlying spectral clustering algorithm, demonstrating the wide applicability of our analysis. In the experiments, we use data quantization as the perturbation scheme to evaluate the upper bound provided by our analysis on the clustering error. Fig. 7 plots the mis-clustering rate and the upper bound for data sets subject to varying degrees of quantization. As expected, the mis-clustering rate increases as one decreases the number of quantization bits. We find that the error bounds are remarkably tight, which validates the assumptions we make in the analysis. It is also interesting to note that even when using as few as 3–4 bits, the clustering degrades very little, both in real error and as assessed by our bound. The effectiveness of our bound should allow the practitioner to determine the right amount of quantization given a permitted loss in clustering performance.
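In such quantization experiments, the perturbation variance σ2 entering Assumption A comes from the quantizer's step size: for a uniform b-bit quantizer the error is roughly uniform over one step, giving σ2 ≈ step2/12. A sketch of this accounting (our illustration, not the paper's exact protocol):

```python
import numpy as np

def quantize(X, bits):
    """Uniform b-bit quantization of X over its observed range. Returns the
    quantized data and the predicted perturbation variance step^2 / 12
    (the variance of a uniform error over one quantization step)."""
    lo, hi = X.min(), X.max()
    step = (hi - lo) / (2 ** bits - 1)
    Xq = lo + np.round((X - lo) / step) * step
    return Xq, step ** 2 / 12.0

rng = np.random.default_rng(1)
X = rng.random((1000, 4))
Xq, var_pred = quantize(X, 6)
var_emp = np.mean((Xq - X) ** 2)
```

Feeding `var_pred` into the bound as σ2, and sweeping `bits`, reproduces the kind of bits-versus-error tradeoff curves reported in Fig. 7.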
5 Conclusion

In this paper, we proposed a theoretical analysis of the clustering error for spectral clustering in the face of stochastic perturbations. Our experimental evaluation has provided support for the assumptions made in the analysis, showing that the bound is tight under conditions of practical interest. We believe that our work, which provides an analytical relationship between the mis-clustering rate and the variance of the perturbation, constitutes a critical step towards enabling a large class of applications that seek to perform clustering of objects, machines, data, etc., in a distributed environment. Many networks are bandwidth-constrained, and our methods can guide the process of data thinning so as to limit the amount of data transmitted through the network for the purpose of clustering.

References

[1] L. Bottou and O. Bousquet, "The tradeoffs of large scale learning," in Advances in Neural Information Processing Systems 20, 2007.
[2] A. Silberstein, G. P. A. Gelfand, K. Munagala, and J. Yang, "Suppression and failures in sensor networks: A Bayesian approach," in Proceedings of VLDB, 2007.
[3] X. Nguyen, M. J. Wainwright, and M. I. Jordan, "Nonparametric decentralized detection using kernel methods," IEEE Transactions on Signal Processing, vol. 53, no. 11, pp. 4053–4066, 2005.
Figure 7: Upper bounds of clustering error on approximate data obtained from quantization as a function of the number of bits. Panels: (a) Sin-sep data; (b) Concentric Circle data; (c) Gaussian data; (d) Image Segmentation data; (e) Pen-digits data; (f) Wine data; (g) Iris data; (h) Wisconsin Breast Cancer data; (i) Waveform data. Each panel plots the mis-clustering rate (upper bound and measured test value) against the number of quantization bits.
(a–c) Simulated data sets (1000 sample size; 2, 2, and 10 features, respectively); (d) Statlog image segmentation data (2310 sample size, 19 features); (e) Handwritten digits data (10992 sample size, 16 features); (f) Wine data (178 sample size, 13 features); (g) Iris data (150 sample size, 4 features); (h) Wisconsin breast cancer data (569 sample size, 30 features); (i) Waveform data (5000 sample size, 21 features). The x-axis shows the number of quantization bits and (above the axis) the corresponding data perturbation error σ. Error bars are derived from 25 replications. In the experiments, all data values are normalized to the range [0, 1]. For data sets with more than two clusters, we choose two of them for the experiments.

[4] L. Huang, X. Nguyen, M. Garofalakis, A. D. Joseph, M. I. Jordan, and N. Taft, "In-network PCA and anomaly detection," in Advances in Neural Information Processing Systems (NIPS), 2006.
[5] R. Kannan, S. Vempala, and A. Vetta, "On clusterings: Good, bad and spectral," Journal of the ACM, vol. 51, no. 3, pp. 497–515, 2004.
[6] A. Y. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in Advances in Neural Information Processing Systems (NIPS), 2002.
[7] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888–905, 2000.
[8] U. von Luxburg, M. Belkin, and O. Bousquet, "Consistency of spectral clustering," Annals of Statistics, vol. 36, no. 2, pp. 555–586, 2008.
[9] P. Drineas and M. W. Mahoney, "On the Nyström method for approximating a Gram matrix for improved kernel-based learning," in Proceedings of COLT, 2005, pp. 323–337.
[10] C. Fowlkes, S. Belongie, F. Chung, and J. Malik, "Spectral grouping using the Nyström method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, 2004.
[11] G. Cormode and M. Garofalakis, "Sketching streams through the net: Distributed approximate query tracking," in Proceedings of VLDB, 2005, pp. 13–24.
[12] D. Kushnir, M. Galun, and A. Brandt, "Fast multiscale clustering and manifold identification," Pattern Recognition, vol. 39, no. 10, pp. 1876–1891, 2006.
[13] G. W. Stewart and J.-G. Sun, Matrix Perturbation Theory. Academic Press, 1990.
[14] A. Asuncion and D. Newman, "UCI Machine Learning Repository, Department of Information and Computer Science," 2007, http://www.ics.uci.edu/~mlearn/MLRepository.html.
Estimating the Location and Orientation of Complex, Correlated Neural Activity using MEG D.P. Wipf, J.P. Owen, H.T. Attias, K. Sekihara, and S.S. Nagarajan Biomagnetic Imaging Laboratory University of California, San Francisco Abstract The synchronous brain activity measured via MEG (or EEG) can be interpreted as arising from a collection (possibly large) of current dipoles or sources located throughout the cortex. Estimating the number, location, and orientation of these sources remains a challenging task, one that is significantly compounded by the effects of source correlations and the presence of interference from spontaneous brain activity, sensor noise, and other artifacts. This paper derives an empirical Bayesian method for addressing each of these issues in a principled fashion. The resulting algorithm guarantees descent of a cost function uniquely designed to handle unknown orientations and arbitrary correlations. Robust interference suppression is also easily incorporated. In a restricted setting, the proposed method is shown to have theoretically zero bias estimating both the location and orientation of multi-component dipoles even in the presence of correlations, unlike a variety of existing Bayesian localization methods or common signal processing techniques such as beamforming and sLORETA. Empirical results on both simulated and real data sets verify the efficacy of this approach. 1 Introduction Magnetoencephalography (MEG) and related electroencephalography (EEG) use an array of sensors to take electromagnetic field (or voltage potential) measurements from on or near the scalp surface with excellent temporal resolution. In both cases, the observed field is generated by the same synchronous, compact current sources located within the brain. Although useful for research and clinical purposes, accurately determining the spatial distribution of these unknown sources is an open problem. 
The relevant estimation problem can be posed as follows. The measured electromagnetic signal is $B \in \mathbb{R}^{d_b \times d_t}$, where $d_b$ equals the number of sensors and $d_t$ is the number of time points at which measurements are made. Each unknown source $S_i \in \mathbb{R}^{d_c \times d_t}$ is a $d_c$-dimensional neural current dipole, at $d_t$ timepoints, projecting from the $i$-th (discretized) voxel or candidate location distributed throughout the cortex. These candidate locations can be obtained by segmenting a structural MR scan of a human subject and tesselating the gray matter surface with a set of vertices. $B$ and each $S_i$ are related by the likelihood model

$$B = \sum_{i=1}^{d_s} L_i S_i + E, \qquad (1)$$

where $d_s$ is the number of voxels under consideration and $L_i \in \mathbb{R}^{d_b \times d_c}$ is the so-called lead-field matrix for the $i$-th voxel. The $k$-th column of $L_i$ represents the signal vector that would be observed at the scalp given a unit current source/dipole at the $i$-th vertex with a fixed orientation in the $k$-th direction. It is common to assume $d_c = 2$ (for MEG) or $d_c = 3$ (for EEG), which allows flexible source orientations to be estimated in 2D or 3D space. Multiple methods based on the physical properties of the brain and Maxwell's equations are available for the computation of each $L_i$ [7]. Finally, $E$ is a noise-plus-interference term where we assume, for simplicity, that columns are drawn independently from $N(0, \Sigma_\epsilon)$. However, temporal correlations can easily be incorporated if desired using a simple transformation outlined in [3]. To obtain reasonable spatial resolution, the number of candidate source locations will necessarily be much larger than the number of sensors ($d_s \gg d_b$). The salient inverse problem then becomes the ill-posed estimation of regions with significant brain activity, which are reflected by voxels $i$ such that $\|S_i\| > 0$; we refer to these as active dipoles or sources. Because the inverse model is severely underdetermined (the mapping from source activity configuration $S \triangleq [S_1, \ldots, S_{d_s}]^T$ to sensor measurement $B$ is many-to-one), all efforts at source reconstruction are heavily dependent on prior assumptions, which in a Bayesian framework are embedded in the distribution $p(S)$. Such a prior is often considered to be fixed and known, as in the case of minimum current estimation (MCE) [10], minimum variance adaptive beamforming (MVAB) [9], and sLORETA [5]. Alternatively, a number of empirical Bayesian approaches have been proposed that attempt a form of model selection by using the data, whether implicitly or explicitly, to guide the search for an appropriate prior. Examples include variational Bayesian methods and hierarchical covariance component models [3, 6, 8, 12, 13]. While advantageous in many respects, all of these methods retain substantial weaknesses when estimating complex, correlated source configurations with unknown orientation in the presence of background interference (e.g., spontaneous brain activity, sensor noise, etc.). There are two types of correlations that can potentially disrupt the source localization process. First, there are correlations within dipole components (meaning the individual rows of $S_i$ are correlated), which always exist to a high degree in real data with unknown orientation (i.e., $d_c > 1$). Second, there are correlations between different dipoles that are simultaneously active (meaning rows of $S_i$ are correlated with rows of $S_j$ for some voxels $i \neq j$). These correlations are more application-specific and may or may not exist. The larger the number of active sources, the greater the chance that both types of correlation can disrupt the estimation process. This issue can be problematic for two reasons. First, failure to accurately account for unknown orientations or correlations can severely disrupt the localization process, leading to a very misleading impression of which brain areas are active. Second, the orientations and correlations themselves may have clinical significance.
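The generative model of Eq. (1) is straightforward to simulate. The sketch below is our own illustration (all dimensions and variable names are our choices, not the paper's): it stacks per-voxel lead fields and produces sensor data from a few active dipoles.

```python
import numpy as np

rng = np.random.default_rng(1)
db, dt, ds, dc = 20, 50, 100, 2   # sensors, time points, voxels, components

# One d_b x d_c lead-field matrix per candidate voxel (random stand-ins here;
# in practice these come from a physical forward model)
L = [rng.normal(size=(db, dc)) for _ in range(ds)]

# Sparse source configuration: only a few active dipoles (||S_i|| > 0)
S = [np.zeros((dc, dt)) for _ in range(ds)]
for i in (3, 42, 77):
    S[i] = rng.normal(size=(dc, dt))

# B = sum_i L_i S_i + E, with E columns drawn i.i.d. from N(0, Sigma_eps)
noise_std = 0.1
E = noise_std * rng.normal(size=(db, dt))
B = sum(Li @ Si for Li, Si in zip(L, S)) + E
```

The inverse problem is then to recover which of the 100 candidate voxels carry nonzero $S_i$, given only $B$ and the lead fields.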
In this paper, we present an alternative empirical Bayesian scheme that attempts to improve upon existing methods in terms of source reconstruction accuracy and/or computational robustness and efficiency. Section 2 presents the basic generative model which underlies the proposed method and describes the associated inference problem. Section 3 derives a robust algorithm for estimating the sources using this model and proves that each iteration is guaranteed to reduce the associated cost function. It also describes how interference suppression can be naturally incorporated. Section 4 then provides a theoretical analysis of the bias involved in estimating both the location and orientation of active sources, demonstrating that the proposed method has substantial advantages over existing approaches. Finally, Section 5 contains experimental results using our algorithm on both simulated and real data, followed by a brief discussion in Section 6.

2 Modeling Assumptions

To begin we invoke the noise model from (1), which fully defines the assumed likelihood

$$p(B|S) \propto \exp\Big(-\frac{1}{2}\Big\|B - \sum_{i=1}^{d_s} L_i S_i\Big\|^2_{\Sigma_\epsilon^{-1}}\Big), \qquad (2)$$

where $\|X\|_W$ denotes the weighted matrix norm $\sqrt{\mathrm{trace}[X^T W X]}$. The unknown noise covariance $\Sigma_\epsilon$ will be estimated from the data using a variational Bayesian factor analysis (VBFA) model as discussed in Section 3.2 below; for now we will consider that it is fixed and known. Next we adopt the following source prior for $S$:

$$p(S|\Gamma) \propto \exp\Big(-\frac{1}{2}\mathrm{trace}\Big[\sum_{i=1}^{d_s} S_i^T \Gamma_i^{-1} S_i\Big]\Big). \qquad (3)$$

This is equivalent to applying independently, at each time point, a zero-mean Gaussian distribution with covariance $\Gamma_i$ to each source $S_i$. We define $\Gamma$ to be the $d_s d_c \times d_s d_c$ block-diagonal matrix formed by ordering each $\Gamma_i$ along the diagonal of an otherwise zero-valued matrix. This implies, equivalently, that $p(S|\Gamma) \propto \exp\big(-\frac{1}{2}\mathrm{trace}\big[S^T \Gamma^{-1} S\big]\big)$.
If $\Gamma$ were somehow known, then the conditional distribution $p(S|B,\Gamma) \propto p(B|S)\,p(S|\Gamma)$ is a fully specified Gaussian distribution with mean and covariance given by

$$E_{p(S|B,\Gamma)}[S] = \Gamma L^T\big(\Sigma_\epsilon + L\Gamma L^T\big)^{-1}B \qquad (4)$$

$$\mathrm{Cov}_{p(s_j|B,\Gamma)}[s_j] = \Gamma - \Gamma L^T\big(\Sigma_\epsilon + L\Gamma L^T\big)^{-1}L\Gamma, \quad \forall j, \qquad (5)$$

where $s_j$ denotes the $j$-th column of $S$ and individual columns are uncorrelated. However, since $\Gamma$ is actually not known, a suitable approximation $\hat\Gamma \approx \Gamma$ must first be found. One principled way to accomplish this is to integrate out the sources $S$ and then maximize

$$p(B|\Gamma) = \int p(B|S)\,p(S|\Gamma)\,dS \propto \exp\Big(-\frac{1}{2}B^T\Sigma_b^{-1}B\Big), \qquad \Sigma_b \triangleq \Sigma_\epsilon + L\Gamma L^T. \qquad (6)$$

This is equivalent to minimizing the cost function

$$L(\Gamma) \triangleq -2\log p(B|\Gamma)\,p(\Gamma) \equiv \mathrm{trace}\big(C_b\Sigma_b^{-1}\big) + \log|\Sigma_b|, \qquad (7)$$

where $C_b \triangleq n^{-1}BB^T$ is the empirical covariance; this procedure is sometimes referred to as type-II maximum likelihood, evidence maximization, or empirical Bayes [1]. The first term of (7) is a measure of the dissimilarity between the empirical data covariance $C_b$ and the model data covariance $\Sigma_b$; in general, this factor encourages $\Gamma$ to be large. The second term provides a regularizing or sparsifying effect, penalizing a measure of the volume formed by the model covariance $\Sigma_b$.¹ Since the volume of any high-dimensional space is more effectively reduced by collapsing individual dimensions as close to zero as possible (as opposed to incrementally reducing all dimensions isometrically), this penalty term promotes a model covariance that is maximally degenerate (or non-spherical), which pushes elements of $\Gamma$ to exactly zero. This intuition is supported theoretically by the results in Section 4. Given some type-II ML estimate $\hat\Gamma$, we obtain the attendant empirical prior $p(S|\hat\Gamma)$. To the extent that this 'learned' prior is realistic, the resulting posterior $p(S|B,\hat\Gamma)$ quantifies regions of significant current density, and point estimates for the unknown source dipoles $S_i$ can be obtained by evaluating the posterior mean computed using (4).
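The posterior mean (4) can be checked numerically: by the matrix inversion lemma, $\Gamma L^T(\Sigma_\epsilon + L\Gamma L^T)^{-1}B$ equals the regularized least-squares solution $(L^T\Sigma_\epsilon^{-1}L + \Gamma^{-1})^{-1}L^T\Sigma_\epsilon^{-1}B$ whenever $\Gamma$ is invertible. A small sketch of our own (dimensions and names are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
db, n_src, dt = 10, 6, 4          # sensors, total source dims (d_s * d_c), time

L = rng.normal(size=(db, n_src))             # stacked lead field
G = np.diag(rng.uniform(0.5, 2.0, n_src))    # block-diagonal Gamma (diagonal here)
Se = 0.1 * np.eye(db)                        # noise covariance Sigma_eps
B = rng.normal(size=(db, dt))

# Posterior mean via Eq. (4)
Sb = Se + L @ G @ L.T
S_hat = G @ L.T @ np.linalg.solve(Sb, B)

# Equivalent form via the matrix inversion lemma
A = L.T @ np.linalg.solve(Se, L) + np.linalg.inv(G)
S_alt = np.linalg.solve(A, L.T @ np.linalg.solve(Se, B))
```

The first form only inverts a $d_b \times d_b$ matrix, which is why it is preferred when the number of source dimensions is far larger than the number of sensors.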
If a given $\hat\Gamma_i \to 0$ as described above, then the associated $\hat S_i$ computed using (4) also becomes zero. It is this pruning mechanism that naturally chooses the number of active dipoles.

3 Algorithm Derivation

Given $\Sigma_\epsilon$ and $\Gamma$, computing the posterior on $S$ is trivial. Consequently, determining these unknown quantities is the primary estimation task. We will first derive an algorithm for computing $\Gamma$ assuming $\Sigma_\epsilon$ is known. Later, in Section 3.2, we will describe a powerful procedure for learning $\Sigma_\epsilon$.

3.1 Learning the Hyperparameters Γ

The primary objective of this section is to minimize (7) with respect to $\Gamma$. Of course, one option is to treat the problem as a general nonlinear optimization task and perform gradient descent or some other generic procedure. Related methods in the MEG literature rely, either directly or indirectly, on a form of the EM algorithm [3, 8]. However, these algorithms are exceedingly slow when $d_s$ is large, and they have not been extended to handle flexible orientations. Consequently, here we derive an alternative optimization procedure that expands upon ideas from [8, 12], handles arbitrary/unknown dipole orientations, and converges quickly. To begin, we note that $L(\Gamma)$ only depends on the data $B$ through the $d_b \times d_b$ sample correlation matrix $C_b$. Therefore, to reduce the computational burden, we replace $B$ with a matrix $\tilde B \in \mathbb{R}^{d_b \times \mathrm{rank}(B)}$ such that $\tilde B\tilde B^T = C_b$. This removes any per-iteration dependency on $d_t$, which can potentially be large, without altering the actual cost function. It also implies that, for purposes of computing $\Gamma$, the number of columns of $S$ is reduced to match $\mathrm{rank}(B)$. We now re-express the cost function $L(\Gamma)$ in an alternative form leading to convenient update rules and, by construction, a proof that $L(\Gamma^{(k+1)}) \le L(\Gamma^{(k)})$ at each iteration.

¹The determinant of a matrix is equal to the product of its eigenvalues, a well-known volumetric measure.
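The replacement of $B$ by a thinner matrix $\tilde B$ with $\tilde B\tilde B^T = C_b$ can be implemented with an eigendecomposition of $C_b$, keeping only the eigenvectors with non-negligible eigenvalues. A sketch under our own naming (not the paper's code):

```python
import numpy as np

def thin_factor(C, tol=1e-10):
    # Factor a symmetric PSD matrix C as Bt @ Bt.T, using only rank(C)
    # columns, via the eigendecomposition of C.
    w, V = np.linalg.eigh(C)
    keep = w > tol * w.max()          # discard numerically zero eigenvalues
    return V[:, keep] * np.sqrt(w[keep])

rng = np.random.default_rng(3)
B = rng.normal(size=(8, 3))           # d_b = 8 sensors, rank-3 data
C = B @ B.T                           # (unnormalized) data covariance
Bt = thin_factor(C)                   # 8 x 3 instead of 8 x d_t columns
```

When $d_t$ is in the thousands, this shrinks every per-iteration matrix product from $d_b \times d_t$ to at most $d_b \times d_b$.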
First, the data-fit term can be expressed as

$$\mathrm{trace}\big(C_b\Sigma_b^{-1}\big) = \min_X\Big[\Big\|\tilde B - \sum_{i=1}^{d_s} L_i X_i\Big\|^2_{\Sigma_\epsilon^{-1}} + \sum_{i=1}^{d_s}\|X_i\|^2_{\Gamma_i^{-1}}\Big], \qquad (8)$$

where $X \triangleq [X_1^T, \ldots, X_{d_s}^T]^T$ is a matrix of auxiliary variables. Likewise, because the log-determinant term of $L(\Gamma)$ is concave in $\Gamma$, it can be expressed as a minimum over upper-bounding hyperplanes via

$$\log|\Sigma_b| = \min_Z\Big[\sum_{i=1}^{d_s}\mathrm{trace}\big(Z_i^T\Gamma_i\big) - h^*(Z)\Big], \qquad (9)$$

where $Z \triangleq [Z_1^T, \ldots, Z_{d_s}^T]^T$ and $h^*(Z)$ is the concave conjugate of $\log|\Sigma_b|$. For our purposes below, we will never actually have to compute $h^*(Z)$. Dropping the minimizations and combining terms from (8) and (9) leads to the modified cost function

$$L(\Gamma, X, Z) = \Big\|\tilde B - \sum_{i=1}^{d_s} L_i X_i\Big\|^2_{\Sigma_\epsilon^{-1}} + \sum_{i=1}^{d_s}\Big[\|X_i\|^2_{\Gamma_i^{-1}} + \mathrm{trace}\big(Z_i^T\Gamma_i\big)\Big] - h^*(Z), \qquad (10)$$

where by construction $L(\Gamma) = \min_X\min_Z L(\Gamma, X, Z)$. It is straightforward to show that if $\{\hat\Gamma, \hat X, \hat Z\}$ is a local (global) minimum of $L(\Gamma, X, Z)$, then $\hat\Gamma$ is a local (global) minimum of $L(\Gamma)$. Since direct optimization of $L(\Gamma)$ may be difficult, we can instead iteratively optimize $L(\Gamma, X, Z)$ via coordinate descent over $\Gamma$, $X$, and $Z$. In each case, when two are held fixed, the third can be globally minimized in closed form. This ensures that each cycle will reduce $L(\Gamma, X, Z)$, but more importantly, will reduce $L(\Gamma)$ (or leave it unchanged if a fixed point or limit cycle is reached). The associated update rules from this process are as follows. The optimal $X$ (with $\Gamma$ and $Z$ fixed) is just the standard weighted minimum-norm solution given by

$$X_i^{new} \to \Gamma_i L_i^T \Sigma_b^{-1}\tilde B \qquad (11)$$

for each $i$. The minimizing $Z$ equals the slope at the current $\Gamma$ of $\log|\Sigma_b|$. As such, we have

$$Z_i^{new} \to \nabla_{\Gamma_i}\log|\Sigma_b| = L_i^T\Sigma_b^{-1}L_i. \qquad (12)$$

With $Z$ and $X$ fixed, computing the minimizing $\Gamma$ is a bit more difficult because of the constraint $\Gamma_i \in \mathcal{H}^+$ for all $i$, where $\mathcal{H}^+$ is the set of positive-semidefinite, symmetric $d_c \times d_c$ covariance matrices.
To obtain each $\Gamma_i$, we must solve

$$\Gamma_i^{new} \to \arg\min_{\Gamma_i \in \mathcal{H}^+}\Big[\|X_i\|^2_{\Gamma_i^{-1}} + \mathrm{trace}\big(Z_i^T\Gamma_i\big)\Big]. \qquad (13)$$

An unconstrained solution will satisfy

$$\nabla_{\Gamma_i} L(\Gamma_i, X_i, Z_i) = 0, \qquad (14)$$

which, after computing the necessary derivatives and rearranging terms, gives the equivalent condition

$$X_iX_i^T = \Gamma_i Z_i \Gamma_i. \qquad (15)$$

There are multiple (unconstrained) solutions to this equation; we will choose the unique one that satisfies the constraint $\Gamma_i \in \mathcal{H}^+$. This can be found using

$$X_iX_i^T = Z_i^{-1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)Z_i^{-1/2} = Z_i^{-1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)^{1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)^{1/2}Z_i^{-1/2}$$
$$= \Big[Z_i^{-1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)^{1/2}Z_i^{-1/2}\Big]\,Z_i\,\Big[Z_i^{-1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)^{1/2}Z_i^{-1/2}\Big]. \qquad (16)$$

This indicates the solution (or update equation)

$$\Gamma_i^{new} \to Z_i^{-1/2}\big(Z_i^{1/2}X_iX_i^TZ_i^{1/2}\big)^{1/2}Z_i^{-1/2}, \qquad (17)$$

which satisfies the constraint. And since we are minimizing a convex function of $\Gamma_i$ (over the constraint set), we know that this is indeed a minimizing solution. In summary then, to estimate $\Gamma$, we simply iterate (11), (12), and (17), and with each pass we are guaranteed to reduce (or leave unchanged) $L(\Gamma)$. The per-iteration cost is linear in the number of voxels $d_s$, so the computational cost is relatively modest (it is quadratic in $d_b$ and cubic in $d_c$, but these quantities are relatively small). The convergence rate is orders of magnitude faster than EM-based algorithms such as those in [3, 8] (see Figure 1, right).

3.2 Learning the Interference Σϵ

The learning procedure described in the previous section boils down to fitting a structured maximum-likelihood covariance estimate $\Sigma_b = \Sigma_\epsilon + F\Gamma F^T$ to the data covariance $C_b$. The idea here is that $F\Gamma F^T$ will reflect the brain signals of interest while $\Sigma_\epsilon$ will capture all interfering factors, e.g., spontaneous brain activity, sensor noise, muscle artifacts, etc. Since $\Sigma_\epsilon$ is unknown, it must somehow be estimated or otherwise accounted for.
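The Γ-update (17) from Section 3.1 can be verified numerically: for symmetric positive-definite $Z_i$, plugging (17) into $\Gamma_i Z_i \Gamma_i$ recovers $X_iX_i^T$, and the result is symmetric PSD. A sketch of our own (the matrix-square-root helper and all names are ours):

```python
import numpy as np

def psd_sqrt(A):
    # Symmetric square root of a symmetric PSD matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

rng = np.random.default_rng(4)
dc, r = 2, 5
X = rng.normal(size=(dc, r))                 # auxiliary variable X_i
Rz = rng.normal(size=(dc, dc))
Z = Rz @ Rz.T + np.eye(dc)                   # symmetric positive-definite Z_i

Zh = psd_sqrt(Z)                             # Z^{1/2}
Zhinv = np.linalg.inv(Zh)                    # Z^{-1/2}
G = Zhinv @ psd_sqrt(Zh @ X @ X.T @ Zh) @ Zhinv   # update (17)
```

Expanding $G Z G$ collapses the inner $Z^{-1/2} Z Z^{-1/2}$ to the identity and the two half-power factors to $Z^{1/2}XX^TZ^{1/2}$, so condition (15) holds exactly.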
Given access to pre-stimulus data (i.e., data assumed to have no signal/sources of interest), stimulus evoked factor analysis (SEFA) provides a powerful means of decomposing a data covariance matrix $C_b$ into signal and interference components. While details can be found in [4], SEFA computes the approximation

$$C_b \approx \Lambda + EE^T + AA^T, \qquad (18)$$

where $E$ represents a matrix of learned interference factors, $\Lambda$ is a diagonal noise matrix, and $A$ is a matrix of signal factors. There are two ways to utilize this decomposition (more details can be found in [11]). First, we can simply set $\Sigma_\epsilon \to \Lambda + EE^T$ and proceed as in Section 3.1. Alternatively, we can set $\Sigma_\epsilon \to 0$ and then substitute $AA^T$ for $C_b$, i.e., run the same algorithm on a de-noised signal covariance. For technical reasons beyond the scope of this paper, it appears that algorithm performance may be superior when the latter paradigm is adopted.

4 Analysis of Theoretical Localization/Orientation Bias

Theoretical support for the proposed algorithm is possible in the context of estimation bias, assuming simplified source configurations. For example, substantial effort has been devoted to quantifying localization bias when estimating a single dipolar source. Recently it has been shown, both empirically and theoretically [5, 9], that the MVAB and sLORETA algorithms have zero location bias under this condition at high SNR. This has been extended to include certain empirical Bayesian methods [8, 12]. However, these results assume a single dipole with fixed, known orientation (or alternatively, that $d_c = 1$), and therefore do not formally handle source correlations or multi-component dipoles. The methods from [6, 13] also purport to address these issues, but no formal analyses are presented. In contrast, despite $L(\Gamma)$ being a complex, non-convex function, we now demonstrate that it has very attractive bias properties regarding both localization and orientation. We will assume that the full lead-field $L \triangleq [L_1^T, \ldots, L_{d_s}^T]^T$ represents a sufficiently high sampling of the source space such that any active dipole component aligns with some lead-field columns. Unbiasedness can also be shown in the continuous case, but the discrete scenario is more straightforward and of course more relevant to any practical task. Some preliminary definitions are required to proceed. We define the empirical intra-dipole correlation matrix at the $i$-th voxel as $C_{ii} \triangleq \frac{1}{d_t}S_i^TS_i$; non-zero off-diagonal elements imply that correlations are present. Except in highly contrived situations, this type of correlation will always exist. The empirical inter-dipole correlation matrix between voxels $i$ and $j$ is $C_{ij} \triangleq \frac{1}{d_t}S_i^TS_j$; any nonzero element implies the existence of a correlation. In practice, this form of correlation may or may not be present. With regard to the lead-field $L$, the spark is defined as the smallest number of linearly dependent columns [2]. By definition then, $2 \le \mathrm{spark}(L) \le d_b + 1$. Finally, $d_a$ denotes the number of active sources, i.e., the number of voxels whereby $\|S_i\| > 0$.

Theorem 1. In the limit as $\Sigma_\epsilon \to 0$ (high SNR) and assuming $d_a d_c < \mathrm{spark}(L) - 1$, the cost function $L(\Gamma)$ maintains the following two properties:

1. For arbitrary $C_{ii}$ and $C_{ij}$, the unique global minimum $\Gamma^*$ produces a source estimate $S^* = E_{p(S|B,\Gamma^*)}[S]$ computed using (4) that equals the generating source matrix $S$, i.e., it is unbiased in both location and orientation for all active dipoles and correctly zeros out the inactive ones.

2. If $C_{ij} = 0$ for all active dipoles (although $C_{ii}$ is still arbitrary), then there are no local minima, i.e., the cost function is unimodal.

The proof has been deferred to [11]. In words, this theorem says that intra-dipole correlations do not disrupt the estimation process by creating local minima, and that the global minimum is always unbiased. In contrast, inter-dipole correlations can potentially create local minima, but they do not affect the global minimum.
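For intuition, the spark quantity used in Theorem 1 can be computed by brute force on tiny matrices (the search is exponential in the number of columns, so this is purely illustrative; the function name is ours):

```python
import itertools
import numpy as np

def spark(L, tol=1e-10):
    # Smallest number of linearly dependent columns of L; returns
    # n_cols + 1 if every column subset is linearly independent.
    n = L.shape[1]
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(L[:, cols], tol=tol) < k:
                return k
    return n + 1

# Toy lead field with d_b = 2: column 3 equals column 1 + column 2,
# and no two columns are dependent, so the spark is 3 (= d_b + 1).
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
```

This matches the stated bound $2 \le \mathrm{spark}(L) \le d_b + 1$; actually computing spark for a realistic lead field is intractable, which is why it appears only in the theoretical condition.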
Empirically, we will demonstrate that the algorithm derived in Section 3 is effective at avoiding these local minima (see Section 5). With added assumptions these results can be extended somewhat to handle the inclusion of noise. The cost functions from [8, 12] bear the closest resemblance to L(Γ); however, neither possesses the second attribute from Theorem 1. This is a very significant failing because, as mentioned previously, intra-dipole correlations are always present in each active dipole. Consequently, localization and orientation bias can occur because of convergence to a local minimum. The iterative Bayesian scheme from [13], while very different in structure, also directly attempts to estimate flexible orientations and handle, to some extent, source correlations. While details are omitted for brevity, we can prove that the full model upon which this algorithm is based fails to satisfy the first property of the theorem, so the corresponding global minimum can be biased. In contrast, beamformers and sLORETA are basically linear methods with no issue of global or local minima. However, the popular sLORETA and MVAB solutions will in general display a bias for multi-component dipoles (dc > 1) or when multiple dipoles (da > 1) are present, regardless of correlations. 5 Empirical Evaluation In this section we test the performance of our algorithm on both simulated and real data sets. We focus here on localization accuracy assuming strong source correlations and unknown orientations. While orientation estimates themselves are not shown for space considerations, accurate localization implicitly indicates that this confound has been adequately handled. More comprehensive experiments, including comparisons with additional algorithms, are forthcoming [11]. Simulated Data: We first conducted tests using simulated data with realistic source configurations. 
The brain volume was segmented into 5mm voxels and a two-orientation ($d_c = 2$) forward lead-field was calculated using a spherical-shell model [7]. The data time course was partitioned into pre- and post-stimulus periods. In the pre-stimulus period (263 samples) there is only noise and interfering brain activity, while in the post-stimulus period (437 samples) there are statistically identical noise and interference factors plus the source activity of interest. We used two noise conditions: Gaussian noise and real-brain noise. In the former case, we seeded voxels with Gaussian noise in each orientation and then projected the activity to the sensors using the lead-field, producing colored Gaussian noise at the sensors. To this activity, we added additional Gaussian sensor noise. For the real-brain noise case, we used resting-state data collected from a human subject that is presumed to have ongoing and spontaneous activity and sensor noise. In both the Gaussian and real-brain noise cases, the pre-stimulus activity was ongoing and continued into the post-stimulus period, where the simulated source signals were added. Sources were seeded at locations in the brain as damped sinusoids and this voxel activity was projected to the sensors. We could adjust both the signal-to-noise-plus-interference ratio (SNIR) and the correlations between the different voxel time courses to examine the algorithm performance on correlated sources and unknown dipole orientations. We ran 100 simulations of three randomly seeded sources at different SNIR levels (-5, 0, 5, 10 dB). The sources in these simulations always had an inter-dipole correlation coefficient of 0.5; intra-dipole correlations were present as well. We ran the simulation with both Gaussian noise and real-brain noise using an MVAB and our proposed method. In order to evaluate performance, we used the following test for a hit or miss. We drew spheres around each seeded source location and obtained the maximum voxel value in each sphere.
Then we calculated the maximum voxel activation outside the three spheres. If the maximum inside each sphere was greater than the maximum outside all of the spheres, it was counted as a hit (in this way, we are implicitly accounting somewhat for false alarms). Each simulation could get a score of 0, 1, 2, or 3, with 3 being the best. Figure 1 (left) displays comparative results averaged over 100 trials with standard errors. Our method quite significantly outperforms the MVAB, which is designed to handle unknown orientations but has difficulty with source correlations. Figure 1 (middle) shows a sample reconstruction on a much more complex source configuration composed of 10 dipolar sources. Finally, Figure 1 (right) gives an example of the relative convergence improvement afforded by our method relative to an EM implementation analogous to [3, 8]. We also wanted to test the performance on perfectly correlated sources with unknown orientations and compare it to other state-of-the-art Bayesian methods. An example using three such sources and 5 dB SNIR is given in Figure 2.

Figure 1: Left: Aggregate localization results for MVAB and the proposed method recovering three correlated sources with unknown orientations. Middle: Example reconstruction of 10 relatively shallow sources (green circles) using the proposed method (MVAB performs poorly on this task). Right: Convergence rate of the proposed method relative to a conventional EM implementation based on [3, 8].
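The hit/miss criterion described above is straightforward to implement. A sketch of our own (the radius, the toy data, and all names are our choices, not the paper's):

```python
import numpy as np

def count_hits(voxel_xyz, voxel_val, seed_xyz, radius=10.0):
    # For each seeded source, take the max voxel value inside a sphere
    # around it; count a hit when that max exceeds the max value found
    # outside ALL spheres (implicitly penalizing false alarms).
    d = np.linalg.norm(voxel_xyz[:, None, :] - seed_xyz[None, :, :], axis=2)
    inside_any = (d < radius).any(axis=1)
    outside_max = voxel_val[~inside_any].max()
    hits = 0
    for j in range(seed_xyz.shape[0]):
        if voxel_val[d[:, j] < radius].max() > outside_max:
            hits += 1
    return hits  # score in {0, ..., number of seeds}

# Toy illustration: 4 voxels on a line, 2 seeded sources, radius 10 mm.
voxels = np.array([[0.0, 0, 0], [20, 0, 0], [40, 0, 0], [60, 0, 0]])
values = np.array([5.0, 4.0, 1.0, 2.0])
seeds = np.array([[0.0, 0, 0], [20, 0, 0]])
score = count_hits(voxels, values, seeds)   # both seeded spheres beat the outside max
```

In the toy data, the voxel maxima inside both spheres (5 and 4) exceed the best voxel outside any sphere (2), so both seeds count as hits.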
Figure 2: Reconstructions of three perfectly correlated dipoles (green circles) with unknown orientations using: Left: MVAB; Middle: the variational Bayesian method from [13]; Right: the proposed method.

Real Data: Two stimulus-evoked data sets were collected from normal, healthy research subjects on a 275-channel CTF System MEG device. The first data set was a sensory evoked field (SEF) paradigm, where the subject's right index finger was tapped for a total of 256 trials. A peak is typically seen 50ms after stimulation in the contralateral (in this case, the left) somatosensory cortical area for the hand, i.e., the dorsal region of the postcentral gyrus. The proposed algorithm was able to localize this activation to the correct area of somatosensory cortex, as seen in Figure 3 (left), and the estimated time course shows the typical 50ms peak (data not shown). The second data set analyzed was an auditory evoked field (AEF) paradigm. In this paradigm the subject is presented tones binaurally for a total of 120 trials. There are two typical peaks seen after the presentation of an auditory stimulus, one at 50ms and one at 100ms, called the M50 and M100 respectively. The auditory processing of tones is bilateral at early auditory areas and the activations are correlated. The algorithm was able to localize activity in both primary auditory cortices, and the time courses for these two activations reveal the M50 and M100. Figure 3 (middle) and (right) display these results. The analysis of simple auditory paradigms is problematic because many source localization algorithms, such as the MVAB, do not handle the bilateral correlated sources well. We also ran MVAB on the AEF data and it localized activity to the center of the head between the two auditory cortices (data not shown).
6 Discussion This paper derives a novel empirical Bayesian algorithm for MEG source reconstruction that readily handles multiple correlated sources with unknown orientations, a situation that commonly arises even with simple imaging tasks. Based on a principled cost function and fast, convergent update rules, this procedure displays significant theoretical and empirical advantages over many existing methods. Figure 3: Real-world example. Left: Somatosensory reconstruction. Middle: Bilateral auditory reconstruction. Right: Recovered timecourse (Time (ms) vs. Normalized Intensity, 0–1000) from left auditory cortex (right auditory cortex, not shown, is similar). We have restricted most of our exposition and analyses to MEG; however, preliminary work with EEG is also promising. For example, on a real-world passive visual task where subjects viewed flashing foreground/background textured images, our method correctly localizes activity to the lateral occipital cortex while two state-of-the-art beamformers fail. This remains an active area of research. References [1] J.O. Berger, Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York, 2nd edition, 1985. [2] D.L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197–2202, March 2003. [3] K. Friston, L. Harrison, J. Daunizeau, S. Kiebel, C. Phillips, N. Trujillo-Barreto, R. Henson, G. Flandin, and J. Mattout, “Multiple sparse priors for the MEG/EEG inverse problem,” NeuroImage, 2008 (in press). [4] S.S. Nagarajan, H.T. Attias, K.E. Hild, and K. Sekihara, “A probabilistic algorithm for robust interference suppression in bioelectromagnetic sensor data,” Stat. Med., vol. 26, no. 21, pp. 3886–3910, Sept. 2007. [5] R.D.
Pascual-Marqui, “Standardized low resolution brain electromagnetic tomography (sLORETA): Technical details,” Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, no. Suppl D, pp. 5–12, 2002. [6] M. Sahani and S.S. Nagarajan, “Reconstructing MEG sources with unknown correlations,” Advances in Neural Information Processing Systems 16, 2004. [7] J. Sarvas, “Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem,” Phys. Med. Biol., vol. 32, pp. 11–22, 1987. [8] M. Sato, T. Yoshioka, S. Kajihara, K. Toyama, N. Goda, K. Doya, and M. Kawato, “Hierarchical Bayesian estimation for MEG inverse problem,” NeuroImage, vol. 23, pp. 806–826, 2004. [9] K. Sekihara, M. Sahani, and S.S. Nagarajan, “Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction,” NeuroImage, vol. 25, pp. 1056–1067, 2005. [10] K. Uutela, M. Hamalainen, and E. Somersalo, “Visualization of magnetoencephalographic data using minimum current estimates,” NeuroImage, vol. 10, pp. 173–180, 1999. [11] D.P. Wipf, J.P. Owen, H.T. Attias, K. Sekihara, and S.S. Nagarajan, “Robust Bayesian Estimation of the Location, Orientation, and Timecourse of Multiple Correlated Neural Sources using MEG,” submitted, 2009. [12] D.P. Wipf, R.R. Ramírez, J.A. Palmer, S. Makeig, and B.D. Rao, “Analysis of empirical Bayesian methods for neuroelectromagnetic source localization,” Advances in Neural Information Processing Systems 19, 2007. [13] J.M. Zumer, H.T. Attias, K. Sekihara, and S.S. Nagarajan, “A probabilistic algorithm for interference suppression and source reconstruction from MEG/EEG data,” Advances in Neural Information Processing Systems 19, 2007.
2008
43
3,530
Multi-stage Convex Relaxation for Learning with Sparse Regularization Tong Zhang Statistics Department Rutgers University, NJ tzhang@stat.rutgers.edu Abstract We study learning formulations with non-convex regularization that are natural for sparse linear models. There are two approaches to this problem: • Heuristic methods such as gradient descent that only find a local minimum. A drawback of this approach is the lack of a theoretical guarantee showing that the local minimum gives a good solution. • Convex relaxation such as L1-regularization, which solves the problem under some conditions. However, it often leads to sub-optimal sparsity in reality. This paper tries to remedy the above gap between theory and practice. In particular, we investigate a multi-stage convex relaxation scheme for solving problems with non-convex regularization. Theoretically, we analyze the behavior of a resulting two-stage relaxation scheme for the capped-L1 regularization. Our performance bound shows that the procedure is superior to the standard L1 convex relaxation for learning sparse targets. Experiments confirm the effectiveness of this method on some simulation and real data. 1 Introduction Consider a set of input vectors x_1, . . . , x_n ∈ R^d, with corresponding desired output variables y_1, . . . , y_n. The task of supervised learning is to estimate the functional relationship y ≈ f(x) between the input x and the output variable y from the training examples {(x_1, y_1), . . . , (x_n, y_n)}. The quality of prediction is often measured through a loss function φ(f(x), y). We assume that φ(f, y) is convex in f throughout the paper. In this paper, we consider the linear prediction model f(x) = w^T x. As in boosting or kernel methods, nonlinearity can be introduced by including nonlinear features in x. We are mainly interested in the scenario where d ≫ n; that is, there are many more features than the number of samples.
In this case, unconstrained empirical risk minimization is inadequate because the solution overfits the data. The standard remedy for this problem is to impose a constraint on w to obtain a regularized problem. An important target constraint is sparsity, which corresponds to the (non-convex) L0 regularization, defined as \|w\|_0 = |\{j : w_j \ne 0\}| = k. If we know the sparsity parameter k for the target vector, then a good learning method is L0 regularization:

\hat{w} = \arg\min_{w \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n \phi(w^T x_i, y_i) \quad \text{subject to} \quad \|w\|_0 \le k. \tag{1}

If k is not known, then one may regard k as a tuning parameter, which can be selected through cross-validation. This method is often referred to as subset selection in the literature. Sparse learning is an essential topic in machine learning, which has attracted considerable interest recently. It can be shown that the solution of the L0 regularization problem in (1) achieves good prediction accuracy if the target function can be approximated by a sparse \bar{w}. However, a fundamental difficulty with this method is the computational cost, because the number of subsets of {1, . . . , d} of cardinality k (corresponding to the nonzero components of w) is exponential in k. Due to this computational difficulty, in practice it is necessary to replace (1) by an easier-to-solve formulation of the form

\hat{w} = \arg\min_{w \in \mathbb{R}^d} \frac{1}{n} \sum_{i=1}^n \phi(w^T x_i, y_i) + \lambda g(w), \tag{2}

where λ > 0 is an appropriately chosen regularization parameter. We obtain a formulation equivalent to (1) by choosing the regularization function as g(w) = \|w\|_0. However, this function is discontinuous. For computational reasons, it is helpful to consider a continuous approximation with g(w) = \|w\|_p, where p > 0. If p ≥ 1, the resulting formulation is convex. In particular, by choosing the closest approximation with p = 1, one obtains Lasso, which is the standard convex relaxation formulation for sparse learning. With p ∈ (0, 1), the Lp regularization \|w\|_p is non-convex but continuous.
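To make the progression from the L1 penalty down to the L0 count concrete, the sketch below (plain NumPy, with an illustrative vector of our own choosing) evaluates the separable penalty \sum_j |w_j|^p and shows that it approaches the nonzero count as p → 0.

```python
import numpy as np

def lp_penalty(w, p):
    """Separable penalty sum_j |w_j|^p: convex for p >= 1, non-convex for p in (0, 1)."""
    return float(np.sum(np.abs(w) ** p))

w = np.array([3.0, -2.0, 0.0, 0.0, 0.5])  # a sparse vector with three nonzeros

l0 = np.count_nonzero(w)          # ||w||_0 = 3
l1 = lp_penalty(w, 1.0)           # ||w||_1 = 5.5 (Lasso penalty)
lhalf = lp_penalty(w, 0.5)        # non-convex intermediate case
l0_approx = lp_penalty(w, 0.01)   # close to the nonzero count
```

With p = 0.01 the penalty evaluates to roughly 3.01, essentially the support size, which is exactly why small p is a tempting but non-convex surrogate for subset selection.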
In this paper, we are also interested in the following capped-L1 approximation of \|w\|_0, with

g(w) = \sum_{j=1}^d \min(|w_j|, \alpha).

This is a good approximation to L0 because, as α → 0, \sum_j \min(|w_j|, \alpha)/\alpha \to \|w\|_0. Therefore, as α → 0, this regularization condition is equivalent to the sparse L0 regularization up to a rescaling of λ. Note that the capped-L1 regularization is also non-convex. It is related to the so-called SCAD regularization in statistics, which is a smoother version. We use the simpler capped-L1 regularization because the extra smoothness does not affect our algorithm or theory. For a non-convex regularization condition such as capped-L1 or Lp with p ∈ (0, 1), standard numerical techniques such as gradient descent lead to a local minimum solution. Unfortunately, it is difficult to find the global optimum, and it is also difficult to analyze the quality of the local minimum. Although in practice such a local minimum solution may outperform the Lasso solution, the lack of theoretical (and practical) performance guarantees prevents the more wide-spread application of such algorithms. As a matter of fact, results with non-convex regularization are difficult to reproduce because different numerical optimization procedures can lead to different local minima. Therefore the quality of the solution depends heavily on the numerical procedure used. The situation is very different for a convex relaxation formulation such as L1-regularization (Lasso). The global optimum can be easily computed using standard convex programming techniques. It is known that in practice, 1-norm regularization often leads to sparse solutions (although often suboptimal). Moreover, its performance has been theoretically analyzed recently. For example, it is known from the compressed sensing literature that under certain conditions, the solution of L1 relaxation may be equivalent to L0 regularization asymptotically even when noise is present (e.g.
[3] and references therein). If the target is truly sparse, then it was shown in [9] that under some restrictive conditions referred to as irrepresentable conditions, 1-norm regularization solves the feature selection problem. The prediction performance of this method has been considered in [4, 8, 1]. Despite its success, L1-regularization often leads to suboptimal solutions because it is not a good approximation to L0 regularization. Statistically, this means that even though it converges to the true sparse target as n → ∞ (consistency), the rate of convergence can be suboptimal. The only way to fix this problem is to employ a non-convex regularization condition that is closer to L0 regularization, such as the capped-L1 regularization. The superiority of capped-L1 is formally proved later in this paper. Because of the above gap between practice and theory, it is important to study direct solutions of non-convex regularization beyond the standard L1 relaxation. Our goal is to design a numerical procedure that leads to a reproducible solution with better theoretical behavior than L1-regularization. This paper shows how this can be done. Specifically, we consider a general multi-stage convex relaxation method for solving learning formulations with non-convex regularization. In this scheme, concave duality is used to construct a sequence of convex relaxations that give better and better approximations to the original non-convex problem. Moreover, using the capped-L1 regularization, we show that after only two stages, the solution gives better statistical performance than standard Lasso when the target is approximately sparse. In essence, this paper establishes a performance guarantee for non-convex formulations using a multi-stage convex relaxation approach that is more sophisticated than the standard one-stage convex relaxation (which is the approach commonly studied in the current literature).
Experiments confirm the effectiveness of the multi-stage approach. 2 Concave Duality Given a continuous regularization function g(w) in (2), which may be non-convex, we are interested in rewriting it using concave duality. Let h(w) : \mathbb{R}^d \to \Omega \subset \mathbb{R}^d be a map with range Ω. It may not be a one-to-one map. However, we assume that there exists a function \bar{g}_h(u) defined on Ω such that g(w) = \bar{g}_h(h(w)) holds. We assume that we can find h so that the function \bar{g}_h(u) is a concave function of u on Ω. Under this assumption, we can rewrite the regularization function g(w) as

g(w) = \inf_{v \in \mathbb{R}^d} \left[ v^T h(w) + g^*_h(v) \right] \tag{3}

using concave duality [6]. In this case, g^*_h(v) is the concave dual of \bar{g}_h(u), given below:

g^*_h(v) = \inf_{u \in \Omega} \left[ -v^T u + \bar{g}_h(u) \right].

Moreover, it is well known that the minimum of the right-hand side of (3) is achieved at

\hat{v} = \nabla_u \bar{g}_h(u) \big|_{u = h(w)}. \tag{4}

This is a very general framework. For illustration, we include two example non-convex sparse regularization conditions discussed in the introduction. Lp regularization. We consider the regularization condition g(w) = \sum_{j=1}^d |w_j|^p for some p ∈ (0, 1). Given any q > p, (3) holds with h(w) = [|w_1|^q, . . . , |w_d|^q] and g^*_h(v) = c(p, q) \sum_j v_j^{p/(p-q)} defined on the domain \{v : v_j \ge 0\}, where c(p, q) = (q - p) p^{p/(q-p)} q^{q/(p-q)}. In this case, \bar{g}_h(u) = \sum_{j=1}^d u_j^{p/q} on Ω = \{u : u_j \ge 0\}. The solution in (4) is given by \hat{v}_j = (p/q) |w_j|^{p-q}. Capped-L1 regularization. We consider the regularization condition g(w) = \sum_{j=1}^d \min(|w_j|, \alpha). In this case, (3) holds with h(w) = [|w_1|, . . . , |w_d|] and g^*_h(v) = \sum_{j=1}^d \alpha (1 - v_j) I(v_j \in [0, 1]) defined on the domain \{v : v_j \ge 0\}, where I(·) is the set indicator function. The solution in (4) is given by \hat{v}_j = I(|w_j| \le \alpha). 3 Multi-stage Convex Relaxation We consider a general procedure for solving (2) with a convex loss and non-convex regularization g(w).
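The capped-L1 formulas above can be sanity-checked numerically. The sketch below is an illustration only: it minimizes the per-coordinate dual objective v_j |w_j| + α(1 − v_j) over a grid of v_j ∈ [0, 1] (a stand-in for the closed-form minimization, with an example vector of our own choosing) and confirms that the minimum reproduces g(w) and that the minimizer matches \hat{v}_j = I(|w_j| ≤ α).

```python
import numpy as np

alpha = 1.0
w = np.array([2.0, 0.3, -0.8, 5.0])
h = np.abs(w)  # h(w) = [|w_1|, ..., |w_d|]

# Direct evaluation of the capped-L1 penalty g(w).
g_direct = float(np.sum(np.minimum(h, alpha)))

# Dual representation: per-coordinate objective v*|w_j| + alpha*(1 - v)
# minimized over a grid of v in [0, 1].
vgrid = np.linspace(0.0, 1.0, 101)
per_coord = vgrid[None, :] * h[:, None] + alpha * (1.0 - vgrid[None, :])
g_dual = float(per_coord.min(axis=1).sum())
v_hat = vgrid[per_coord.argmin(axis=1)]   # grid minimizer per coordinate
```

For this w and α = 1, both evaluations give 3.1, and the minimizer is v̂ = [0, 1, 1, 0], i.e., the indicator of the coordinates with |w_j| ≤ α.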
Let h(w) = \sum_j h_j(w) be a convex relaxation of g(w) that dominates g(w) (for example, it can be the smallest convex upper bound, i.e., the inf over all convex upper bounds, of g(w)). A simple convex relaxation of (2) becomes

\hat{w} = \arg\min_{w \in \mathbb{R}^d} \left[ \frac{1}{n} \sum_{i=1}^n \phi(w^T x_i, y_i) + \lambda \sum_{j=1}^d h_j(w) \right]. \tag{5}

This simple relaxation can yield a solution that is not close to the solution of (2). However, if h satisfies the condition of Section 2, then it is possible to write g(w) as (3). Now, with this new representation, we can rewrite (2) as

[\hat{w}, \hat{v}] = \arg\min_{w, v \in \mathbb{R}^d} \left[ \frac{1}{n} \sum_{i=1}^n \phi(w^T x_i, y_i) + \lambda v^T h(w) + \lambda g^*_h(v) \right]. \tag{6}

This is clearly equivalent to (2) because of (3). If we can find a good approximation of \hat{v} that improves upon the initial value \hat{v} = [1, . . . , 1], then the above formulation can lead to a refined convex problem in w that is a better convex relaxation than (5). Our numerical procedure exploits the above fact: it tries to improve the estimate of v_j over the initial choice v_j = 1 in (5) using an iterative algorithm. This can be done using an alternating optimization procedure, which repeatedly applies the following two steps: • First we optimize w with v fixed: this is a convex problem in w with appropriately chosen h(w). • Second we optimize v with w fixed: although non-convex, it has a closed-form solution that is given by (4). The general procedure is presented in Figure 1. It can be regarded as a generalization of CCCP (concave-convex programming) [7], which takes h(w) = w. By repeatedly refining the parameter v, we can potentially obtain better and better convex relaxations, leading to a solution superior to that of the initial convex relaxation. Note that using the Lp and capped-L1 regularization conditions in Section 2, this procedure leads to more specific multi-stage convex relaxation algorithms. We skip the details due to the space limitation. Tuning parameters: λ Input: training data (x1, y1), . . .
, (xn, yn) Output: weight vector \hat{w} Initialize \hat{v}_j = 1. Repeat the following two steps until convergence: • Let \hat{w} = \arg\min_{w \in \mathbb{R}^d} \left[ \frac{1}{n} \sum_{i=1}^n \phi(w^T x_i, y_i) + \lambda \hat{v}^T h(w) \right] (∗) • Let \hat{v} = \nabla_u \bar{g}_h(u) \big|_{u = h(\hat{w})} Figure 1: Multi-stage Convex Relaxation Method 4 Theory of Two-stage Convex Relaxation for Capped-L1 Regularization Although the reasoning in Section 3 is appealing, it is only a heuristic argument without any formal theoretical guarantee. In contrast, the simple one-stage L1 relaxation is known to perform reasonably well under certain assumptions. Therefore, unless we can develop a theory to show the effectiveness of the multi-stage procedure in Figure 1, our proposal is merely yet another local-minimum-finding scheme that may get stuck in a bad local solution. This section tries to address this issue. Although we have not yet developed a complete theory for the general procedure, we are able to obtain a learning bound for the capped-L1 regularization. In particular, if the target function is sparse, then the performance of the solution after merely two stages of our procedure is superior to that of Lasso. This demonstrates the effectiveness of the multi-stage approach. Since the analysis is rather complicated, we focus on the least squares loss only, and only on the solution after two stages of the algorithm. For a complete theory, the following questions are worth asking: • Under what conditions is the global solution with a non-convex penalty statistically better than the (one-stage) convex relaxation solution? That is, when does it lead to better prediction accuracy or generalization error? • Under what conditions is there only one local minimum close to the solution of the initial convex relaxation, and is it also the global optimum? Moreover, does multi-stage convex relaxation find this solution? The first question answers whether it is beneficial to use a non-convex penalty function.
The second question answers whether we can effectively solve the resulting non-convex problem using multi-stage convex relaxation. The combination of the two questions leads to a satisfactory theoretical answer regarding the effectiveness of the multi-stage procedure. A general theory along this line will be developed in the full paper. In the following, instead of trying to answer the above questions separately, we provide a unified finite-sample analysis for the procedure that directly addresses the combined effect of the two questions. The result is adapted from [8], and it justifies the multi-stage convex relaxation approach by showing that the two-stage procedure using capped-L1 regularization can lead to better generalization than the standard one-stage L1 regularization. The procedure we shall analyze, which is a special case of the multi-stage algorithm in Figure 1 with capped-L1 regularization and only two stages, is described in Figure 2. It is related to the adaptive Lasso method [10]. The result is reproducible when the solution of the first stage is unique, because it involves two well-defined convex programming problems. Note that it is described with the least squares loss only because our analysis assumes the least squares loss; a more general analysis for other loss functions is possible but would lead to extra complications that are not central to our interests. Tuning parameters: λ, α Input: training data (x1, y1), . . . , (xn, yn) Output: weight vector \hat{w}' Stage 1: Compute \hat{w} by solving the L1 penalization problem:

\hat{w} = \arg\min_{w \in \mathbb{R}^d} \left[ \frac{1}{n} \sum_{i=1}^n (w^T x_i - y_i)^2 + \lambda \|w\|_1 \right].

Stage 2: Solve the following selective L1 penalization problem:

\hat{w}' = \arg\min_{w \in \mathbb{R}^d} \left[ \frac{1}{n} \sum_{i=1}^n (w^T x_i - y_i)^2 + \lambda \sum_{j : |\hat{w}_j| \le \alpha} |w_j| \right].

Figure 2: Two-stage capped-L1 Regularization This particular two-stage procedure also has an intuitive interpretation (besides treating it as a special case of multi-stage convex relaxation).
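Both stages in Figure 2 are weighted Lasso problems, so a single convex solver suffices. The sketch below uses plain ISTA (proximal gradient) as that solver; the solver choice, step size, iteration count, and problem sizes are our own illustrative assumptions, not part of the paper's procedure.

```python
import numpy as np

def weighted_lasso_ista(X, y, lam, weights, iters=2000):
    """ISTA for (1/n)||Xw - y||^2 + lam * sum_j weights[j] * |w_j|."""
    n, d = X.shape
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(iters):
        grad = 2.0 / n * X.T @ (X @ w - y)
        z = w - grad / L
        thr = lam * weights / L
        w = np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)  # soft-thresholding prox
    return w

def two_stage_capped_l1(X, y, lam, alpha):
    d = X.shape[1]
    w1 = weighted_lasso_ista(X, y, lam, np.ones(d))       # Stage 1: plain Lasso
    weights = (np.abs(w1) <= alpha).astype(float)         # do not penalize large coefs
    return weighted_lasso_ista(X, y, lam, weights)        # Stage 2: selective L1

# Small synthetic check: 3 large true coefficients out of 20.
rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [5.0, -4.0, 3.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)
w1 = weighted_lasso_ista(X, y, lam=0.5, weights=np.ones(d))
w2 = two_stage_capped_l1(X, y, lam=0.5, alpha=1.0)
```

Stage 2 removes the shrinkage bias on the large coefficients identified by stage 1, which is exactly the mechanism the analysis in this section quantifies.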
We shall refer to the feature components corresponding to the large weights as relevant features, and the feature components smaller than the cut-off threshold α as irrelevant features. We observe that, as an estimation method, L1 regularization has two important properties: it shrinks estimated weights corresponding to irrelevant features toward zero, and it also shrinks estimated weights corresponding to relevant features toward zero. While the first effect is desirable, the second effect is not. In fact, we should avoid shrinking the weights corresponding to the relevant features if we can identify these features. This is why the standard L1 regularization may have suboptimal performance. However, after the first stage of L1 regularization, we can identify the relevant features by picking the components corresponding to the largest weights; in the second stage of L1 regularization, we do not have to penalize the features selected in the first stage, as in Figure 2. A related method, called relaxed Lasso, was proposed recently by Meinshausen [5]; it is similar to a two-stage Dantzig selector in [2]. Their idea differs from our proposal in that in the second stage, the weight coefficients w'_j are forced to be zero when j ∉ supp_0(\hat{w}). It was pointed out in [5] that if supp_0(\hat{w}) exactly identifies all non-zero components of the target vector, then in the second stage the relaxed Lasso can asymptotically remove the bias of the first-stage Lasso. However, it is not clear what theoretical result can be stated when Lasso cannot exactly identify all relevant features. In the general case, it is not easy to ensure that relaxed Lasso does not degrade the performance when some relevant coefficients become zero in the first stage. On the contrary, the two-stage penalization procedure in Figure 2, which is based on the capped-L1 regularization, does not require that all relevant features are identified.
Consequently, we are able to prove a result for Figure 2 with no counterpart for relaxed Lasso. Definition 4.1 Let w = [w_1, . . . , w_d] ∈ \mathbb{R}^d and α ≥ 0. We define the set of relevant features with threshold α as supp_\alpha(w) = \{j : |w_j| > \alpha\}. Moreover, if |w_{i_1}| \ge \cdots \ge |w_{i_d}| are in descending order, then define \delta_k(w) = \left( \sum_{j > k} |w_{i_j}|^2 \right)^{1/2}, the 2-norm of w with its largest k components (in absolute value) removed. For simplicity, we assume sub-Gaussian noise as follows. Assumption 4.1 Assume that \{y_i\}_{i=1,...,n} are independent (but not necessarily identically distributed) sub-Gaussians: there exists σ ≥ 0 such that for all i and all t ∈ \mathbb{R}, \mathbf{E}_{y_i} e^{t(y_i - \mathbf{E} y_i)} \le e^{\sigma^2 t^2/2}. Both Gaussian and bounded random variables are sub-Gaussian under this definition. For example, if a random variable ξ ∈ [a, b], then \mathbf{E}_\xi e^{t(\xi - \mathbf{E}\xi)} \le e^{(b-a)^2 t^2/8}. If a random variable is Gaussian, ξ ∼ N(0, σ^2), then \mathbf{E}_\xi e^{t\xi} \le e^{\sigma^2 t^2/2}. Theorem 4.1 Let Assumption 4.1 hold. Let \hat{A} = \frac{1}{n} \sum_{i=1}^n x_i x_i^T, define M_{\hat{A}} = \sup_{i \ne j} |\hat{A}_{i,j}|, and assume that \hat{A}_{j,j} = 1 for all j. Consider any target vector \bar{w} such that \mathbf{E} y = \bar{w}^T x, and assume that \bar{w} contains only s non-zeros, where s ≤ d/3, and assume that M_{\hat{A}} s \le 1/6. Let k = |supp_\lambda(\bar{w})|. Consider the two-stage method in Figure 2. Given η ∈ (0, 0.5), with probability larger than 1 − 2η: if \alpha/48 \ge \lambda \ge 12\sigma \sqrt{2 \ln(2d/\eta)/n}, then

\|\hat{w}' - \bar{w}\|_2 \le 24 \sqrt{k - q}\, \lambda + 24\sigma \left( 1 + \sqrt{\frac{20q}{n} \ln(1/\eta)} \right) + 168\, \delta_k(\bar{w}),

where q = |supp_{1.5\alpha}(\bar{w})|. The proof of this theorem can be found in [8]. Note that the theorem allows the situation d ≫ n, which is what we are interested in. The condition M_{\hat{A}} s \le 1/6, often referred to as mutual coherence, is also quite standard in the analysis of L1 regularization, e.g., in [1, 3]. Although the condition is idealized, the theorem nevertheless yields important insights into the behavior of the two-stage algorithm. This theorem leads to a bound for Lasso with α = ∞ or q = 0. The bound has the form \|\hat{w}' - \bar{w}\|_2 = O(\delta_k(\bar{w}) + \sqrt{k}\, \lambda).
This bound is tight for Lasso, in the sense that the right-hand side cannot be improved except for the constants. In particular, the factor O(\sqrt{k}\,\lambda) cannot be removed using Lasso; this can be easily verified with an orthogonal design matrix. It is known that in order for Lasso to be effective, one has to pick λ no smaller than the order \sigma\sqrt{\ln d / n}. Therefore, the generalization error of standard Lasso is of the order \delta_k(\bar{w}) + \sigma\sqrt{k \ln d / n}, which cannot be improved. Similar results appear in [1, 4]. Now, with a small α, the bound in Theorem 4.1 can be significantly better than the standard Lasso result if the sparse target satisfies \delta_k(\bar{w}) \ll \sqrt{k}\,\lambda and k − q ≪ k. The latter condition is true when |supp_{1.5\alpha}(\bar{w})| \approx |supp_\lambda(\bar{w})|. These conditions are satisfied when most non-zero coefficients of \bar{w} in supp_\lambda(\bar{w}) are relatively large in magnitude and the rest are small in 2-norm. That is, they hold when the target \bar{w} can be decomposed as a sparse vector with large coefficients plus another (less sparse) vector with small coefficients. In the extreme case when q = k = |supp_0(\bar{w})| (that is, all nonzero components of \bar{w} are large), we obtain \|\hat{w}' - \bar{w}\|_2 = O(\sqrt{k \ln(1/\eta)/n}) for the two-stage procedure, which is superior to the standard one-stage Lasso bound \|\hat{w} - \bar{w}\|_2 = O(\sqrt{k \ln(d/\eta)/n}). Again, this bound cannot be improved for Lasso, and the difference can be significant when d is large. 5 Experiments In the following, we show with synthetic and real data that our multi-stage approach improves on the standard Lasso in practice. In order to avoid clutter, we only study results for the two-stage procedure of Figure 2, which corresponds to the capped-L1 regularization. We shall also compare it to the two-stage Lp regularization method with p = 0.5, which corresponds to the adaptive Lasso approach [10].
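The quantities supp_α(w̄) and δ_k(w̄) that drive the comparison above are easy to compute directly. A minimal sketch (the example vector is our own illustrative choice):

```python
import numpy as np

def supp(w, alpha):
    """Relevant features with threshold alpha: {j : |w_j| > alpha} (Definition 4.1)."""
    return set(np.flatnonzero(np.abs(w) > alpha))

def delta_k(w, k):
    """2-norm of w after its k largest-magnitude components are removed."""
    tail = np.sort(np.abs(w))[::-1][k:]
    return float(np.sqrt(np.sum(tail ** 2)))

# A target that is "approximately sparse": two big coefficients plus a small tail.
w_bar = np.array([3.0, -0.4, 0.0, 2.0, 0.3])
```

For this w̄, supp(w̄, 1.0) = {0, 3} and δ₂(w̄) = √(0.4² + 0.3²) = 0.5, i.e., a few large relevant coefficients and a small residual tail, which is the favorable regime for Theorem 4.1.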
Note that instead of tuning the α parameter in Figure 2, in these experiments we tune the number of features q in \hat{w} that are larger than the threshold α (i.e., q = |\{j : |\hat{w}_j| > \alpha\}| is the number of features that are not regularized in stage 2). This is clearly more convenient than tuning α. The standard Lasso corresponds to q = 0. In the first experiment, we generate an n × d random matrix with its column j corresponding to [x_{1,j}, . . . , x_{n,j}], where each element of the matrix is an independent standard Gaussian N(0, 1). We then normalize its columns so that \sum_{i=1}^n x_{i,j}^2 = n. A truly sparse target \bar{\beta} is generated with k nonzero elements that are uniformly distributed on [−10, 10]. The observations are y_i = \bar{\beta}^T x_i + \epsilon_i, where each \epsilon_i \sim N(0, \sigma^2). In this experiment, we take n = 25, d = 100, k = 5, σ = 1, and repeat the experiment 100 times. The average training error and 2-norm parameter estimation error are reported in Figure 3. We compare the performance of the two-stage method with different q versus the regularization parameter λ. As expected, the training error becomes smaller when q increases. Compared to the standard Lasso (which corresponds to q = 0), substantially smaller estimation error is achieved with q = 3 for capped-L1 regularization and with p = 0.5 for Lp regularization. This shows that the multi-stage convex relaxation approach is effective. Figure 3: Performance of multi-stage convex relaxation on simulation data. Left: average training squared error versus λ; Right: parameter estimation error versus λ. In the second experiment, we use real data to illustrate the effectiveness of the multi-stage approach.
Due to the space limitation, we only report the performance on a single dataset, Boston Housing. This is the housing data for 506 census tracts of Boston from the 1970 census, available from the UCI Machine Learning Database Repository: http://archive.ics.uci.edu/ml/. Each census tract is a datapoint, with 13 features (we add a constant offset of one as the 14th feature), and the desired output is the housing price. In the experiment, we randomly partition the data into 20 training plus 456 test points. We perform the experiments 100 times, and report training and test squared error versus the regularization parameter λ for different q. The results are plotted in Figure 4. In this case, q = 1 achieves the best performance. This means one feature can be reliably identified in this example. In comparison, adaptive Lasso is not effective. Note that this dataset contains only a small number (d = 14) of features, which is not the regime where we can expect significant benefit from the multi-stage approach (most other UCI datasets similarly contain only a small number of features). In order to illustrate the advantage of the two-stage method more clearly, we also consider a modified Boston Housing dataset, where we append 20 random features (similar to the simulation experiments) to the original Boston Housing data and rerun the experiments. The results are shown in Figure 5. As expected from Theorem 4.1 and the discussion thereafter, since d becomes large, the multi-stage convex relaxation approach with capped-L1 regularization (q > 0) has a significant advantage over the standard Lasso (q = 0). References [1] Florentina Bunea, Alexandre Tsybakov, and Marten H. Wegkamp. Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 1:169–194, 2007. [2] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 2007. [3] David L. Donoho, Michael Elad, and Vladimir N. Temlyakov.
Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Info. Theory, 52(1):6–18, 2006. Figure 4: Performance of multi-stage convex relaxation on the original Boston Housing data. Left: average training squared error versus λ; Right: test squared error versus λ. Figure 5: Performance of multi-stage convex relaxation on the modified Boston Housing data. Left: average training squared error versus λ; Right: test squared error versus λ. [4] Vladimir Koltchinskii. Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, 2008. [5] Nicolai Meinshausen. Lasso with relaxation. ETH Research Report, 2005. [6] R. Tyrrell Rockafellar. Convex Analysis. Princeton University Press, Princeton, NJ, 1970. [7] Alan L. Yuille and Anand Rangarajan. The concave-convex procedure. Neural Computation, 15:915–936, 2003. [8] Tong Zhang. Some sharp performance bounds for least squares regression with L1 regularization. The Annals of Statistics, 2009, to appear. [9] Peng Zhao and Bin Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006. [10] Hui Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429, 2006.
2008
44
3,531
Nonparametric sparse hierarchical models describe V1 fMRI responses to natural images Pradeep Ravikumar, Vincent Q. Vu and Bin Yu Department of Statistics University of California, Berkeley Berkeley, CA 94720-3860 Thomas Naselaris, Kendrick N. Kay and Jack L. Gallant Department of Psychology University of California, Berkeley Berkeley, CA Abstract We propose a novel hierarchical, nonlinear model that predicts brain activity in area V1 evoked by natural images. In the study reported here, brain activity was measured by means of functional magnetic resonance imaging (fMRI), a noninvasive technique that provides an indirect measure of neural activity pooled over a small volume (≈ 2 mm cube) of brain tissue. Our model, which we call the V-SPAM model, is based on the reasonable assumption that fMRI measurements reflect the (possibly nonlinearly) pooled, rectified output of a large population of simple and complex cells in V1. It has a hierarchical filtering stage that consists of three layers: model simple cells, model complex cells, and a third layer in which the complex cells are linearly pooled (called “pooled-complex” cells). The pooling stage then obtains the measured fMRI signals as a sparse additive model (SpAM), in which a sparse nonparametric (nonlinear) combination of model complex cell and model pooled-complex cell outputs is summed. Our results show that the V-SPAM model predicts fMRI responses evoked by natural images better than a benchmark model that only provides linear pooling of model complex cells. Furthermore, the spatial receptive fields, frequency tuning and orientation tuning curves of the V-SPAM model estimated for each voxel appear to be consistent with the known properties of V1, and with previous analyses of this data set. A visualization procedure applied to the V-SPAM model shows that most of the nonlinear pooling consists of simple compressive or saturating nonlinearities.
1 Introduction An important step toward understanding the neural basis of vision is to develop computational models that describe how complex visual stimuli are mapped onto evoked neuronal responses. This task is made challenging in part by the inherent difficulty of obtaining neurophysiological recordings from single neurons in vivo. An alternative approach is to base models on brain activity measured by means of functional magnetic resonance imaging (fMRI). fMRI measures changes in blood oxygenation and flow throughout the brain that occur as a consequence of metabolic demands. Although the relationship between measured fMRI activity and the spiking activity of neurons is rather complex, as a first-order approximation the fMRI signal can be considered to be monotonically related to the pooled activity of the underlying neural population. In this paper we consider the task of predicting fMRI brain activity evoked by a series of grayscale natural images. Natural images are a useful stimulus set for efficiently probing the visual system, because they are likely to evoke responses from both early visual areas and from more central, highly nonlinear visual areas. The fMRI scanner provides a three-dimensional image of the brain with a spatial resolution of a few cubic millimeters and fairly low temporal resolution (about 0.5–1 Hz). After pre-processing, the fMRI signals are represented as a vector of three-dimensional volume elements called voxels. Here we restrict our analysis to voxels sampled from visual area V1, the primary visual area in humans. There are two problems that make predicting evoked responses of fMRI voxels difficult. First, fMRI signals are noisy and non-stationary in time. Second, each voxel reflects the combined influence of hundreds of thousands of neurons [4].
fMRI scans of a single voxel in human V1 likely reflect the nonlinearly-pooled, rectified outputs of two functionally distinct classes of neurons: simple cells that are sensitive to spatial phase, and phase-invariant complex cells [2]. Even if an accurate predictive model is obtained, there remains the issue of interpretability. It is not sufficient to construct a model that provides good predictions but whose function remains opaque (i.e., a black box). In order for a predictive model to advance our understanding of the brain, its function must be conceptually interpretable. In this paper we propose a new model that aims to overcome some of these problems. Our V-SPAM model is a hierarchical and sparse nonparametric additive model. It combines a biologically-inspired hierarchical filtering scheme with a nonlinear (nonparametric) pooling of the outputs from various levels of the hierarchical filtering stage. The model is estimated separately for each recorded fMRI voxel using a fit data set, and then its predictions are evaluated against an entirely separate data set reserved for this purpose. The filtering component of the model consists of three distinct layers: simple cells, complex cells, and linear combinations of the complex cells (here called pooled-complex cells). The fMRI response is then modeled as a sparse additive combination of nonlinear (nonparametric) functions of the complex and pooled-complex cell model outputs. This last step automatically learns the optimal combinatorial output nonlinearity of the hierarchical filtering stage, and so permits us to model nonlinear V1 responses not captured by the simple and complex cell model components alone [6]. The fMRI dataset used in this paper was collected as part of an earlier study by [5]. That study also used a filtering model to describe the relationship between natural images and evoked fMRI signals, and used the estimated models in turn to decode (identify) images.
However, the earlier study only provided linear pooling of model complex cell filters. Our results show that the V-SPAM model predicts fMRI responses evoked by natural images better than does the earlier linear pooling model. Furthermore, the spatial receptive fields, frequency tuning and orientation tuning curves of the V-SPAM model estimated for each voxel appear to be consistent with the known properties of V1, and with the previous results [5]. 2 Background 2.1 Sparse Additive Models The regression task consists of estimating the regression function E(Y|X) for a real-valued response Y ∈ R and a predictor vector X = (X_1, . . . , X_p) ∈ R^p from data {(X_i, Y_i), i = 1, . . . , n}. In the nonparametric regression model, the response Y_i = m(X_i) + ϵ_i, where m is a general smooth function. Estimating this function (i.e., smoothing) becomes challenging when the number of predictors p is large. Even estimating linear models of the form Y_i = β⊤X_i + ϵ_i is challenging in these high-dimensional settings. For linear models, however, when the vector β is sparse, Tibshirani [8] and others have shown that the ℓ1-penalized estimator (also called the Lasso), β̂ = argmin_β Σ_i (Y_i − β⊤X_i)² + λ Σ_{j=1}^{p} |β_j|, can estimate a sparse model and has strong theoretical properties. The sparse additive model (SpAM) framework of Ravikumar et al. [7] extends these sparse linear models to the nonparametric domain. In additive models, introduced by Hastie and Tibshirani [3], the response Y is an additive combination of functions of the predictors, Y = Σ_{j=1}^{p} f_j(X_j) + ϵ. Here the functions {f_j} are constrained to lie in a class of smooth functions, such as the space of functions with square-integrable second derivatives (i.e., the Sobolev space of order two). A sparse additive model then imposes a sparsity constraint on the set J = {j : f_j ≢ 0} of functions f_j that are nonzero.
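The ℓ1-penalized estimator above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it solves the Lasso objective Σ_i (Y_i − β⊤X_i)² + λ Σ_j |β_j| by coordinate descent, where each coordinate update is a soft-thresholding step (the same operator that reappears in the SpAM fitting procedure of Section 2.2). Function names are ours.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for the Lasso objective
    sum_i (y_i - x_i^T beta)^2 + lam * sum_j |beta_j|."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)          # squared norm of each column
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            # exact minimizer in coordinate j is a soft-thresholded projection
            beta[j] = soft_threshold(X[:, j] @ r_j, lam / 2.0) / col_sq[j]
    return beta
```

With λ large enough every coefficient is thresholded to zero, which is exactly the sparsity the text describes.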
2.2 Fitting Algorithm for Sparse Additive Models The paper [7] proposes a fitting procedure for sparse additive models that has good statistical properties even in the large-p, small-n regime. Their SpAM fitting algorithm is summarized in Figure 1. It performs a coordinate descent (in the L²(Pₙ) space, with Pₙ the sample distribution). At each step the algorithm performs a nonparametric regression of the current residual onto a single predictor, and then applies a soft threshold.
Input: Data (X_i, Y_i), regularization parameter λ.
Initialize f_j = f_j^(0), for j = 1, . . . , p.
Iterate until convergence. For each j = 1, . . . , p:
  Compute the residual: R_j = Y − Σ_{k≠j} f_k(X_k);
  Estimate the conditional expectation P_j = E[R_j | X_j] by smoothing: P̂_j = S_j R_j;
  Set ŝ_j² = n⁻¹ Σ_{i=1}^{n} P̂_j(i)²;
  Soft-threshold: f_j = [1 − λ/ŝ_j]₊ P̂_j;
  Center: f_j ← f_j − mean(f_j).
Output: Component functions f_j and estimator m̂(X_i) = Σ_j f_j(X_ij).
Figure 1: THE SPAM BACKFITTING ALGORITHM
3 A model for pooled neural activity of voxels Our V-SPAM model combines a biologically-inspired filtering scheme and a novel algorithm that permits nonlinear pooling of the outputs of the filtering stage. The filtering stage itself consists of three distinct layers, arranged hierarchically: simple cells, complex cells, and linear combinations of the complex cells (here called pooled-complex cells). The output of this filtering operation is then fed to an algorithm that estimates a nonlinear pooling function that optimizes predictive power. 3.1 Simple Cell Model The first stage of the hierarchical filter is inspired by simple cells that are known to exist in area V1. The receptive fields of V1 simple cells are known to be generally consistent with a Gabor wavelet model [6]. Most importantly, they are spatially localized, oriented, spatial-frequency band-pass and phase selective. (See Figure 2.) Figure 2: Gabor wavelets.
Each row shows a family of Gabor wavelets that share a common spatial location and frequency, but differ in orientation. This is only a small fraction of all of the wavelets in the pyramid. In our model the simple cell filter bank was implemented as a Gabor wavelet pyramid, as follows. Let I denote an image, and d the number of pixels. It can thus be represented as a pixel vector in R^d. Denote by ψ_j a Gabor wavelet sampled on a grid the size of the image, so that it too can be represented as a vector in R^d. Then our simple cell model, for the activation given the image I as stimulus, is given by X_j(I) = [⟨ψ_j, I⟩]₊, where ⟨·,·⟩ is the Euclidean inner product, and [·]₊ is a non-negative rectification. (See Figure 3.) Correspondingly, X̄_j(I) = [⟨−ψ_j, I⟩]₊ gives the activation of the 180° spatial phase counterpart. Figure 3: Simple cell model. The activation of a model simple cell given an image is the inner product of the image with a Gabor wavelet, followed by a non-negative rectification. 3.2 Complex Cell Model The second stage of the hierarchical filter is inspired by complex cells that are also known to exist in area V1. Complex cells are similar to simple cells, except they are not sensitive to spatial phase. In our model the complex cell filter bank was implemented by taking the sum of squares of the outputs of four simple cells (corresponding to the wavelet pairs that are identical up to phase), followed by a fixed output nonlinearity. The activation of the model complex cell given an image I is given by X_j(I) = log(1 + [⟨ψ_j, I⟩]₊² + [⟨−ψ_j, I⟩]₊² + [⟨ψ̃_j, I⟩]₊² + [⟨−ψ̃_j, I⟩]₊²) (1) = log(1 + ⟨ψ_j, I⟩² + ⟨ψ̃_j, I⟩²), (2) where ψ_j and ψ̃_j are Gabor wavelets identical up to phase (also called a quadrature pair; see Figure 4). Figure 4: Complex cell model.
The activation of a model complex cell given an image is the sum of squares of the inner products of the image with a quadrature pair of Gabor wavelets, followed by a nonlinearity. This is equivalently modeled by summing the squares of 4 simple cell model outputs, followed by a nonlinearity. 3.3 Pooled-complex Cell Model The hierarchical filtering component of our model also includes a third filtering stage, linear pooling of complex cells sharing a common spatial location and frequency. This stage has no direct biological interpretation in terms of area V1, but has been included to improve the representational power of the model: a linear combination of complex cells (the pooled-complex cell), followed by a nonlinearity, cannot be expressed as an additive combination of nonlinear functions of individual complex cells. Note that this element might be particularly useful for modeling responses in higher visual areas beyond V1. If {X_{j1}, . . . , X_{jk}} correspond to complex cells with the same spatial location and frequency, then the corresponding pooled-complex cell (which thus sums over different orientations) is given by Z_{j1...jk} = Σ_{l=1}^{k} X_{jl}. (See Figure 5.) Figure 5: Pooled-complex cell model. Subsets of complex cells that share a common spatial location and frequency are summed. 3.4 V-SPAM model Finally, the predicted fMRI response Y is obtained as a sparse additive combination of complex cell and pooled-complex cell outputs. Denote the complex cell outputs by {X_1, . . . , X_p}, and the pooled-complex cell outputs by {Z_1, . . . , Z_q}. Then the fMRI response Y is modeled as a sparse additive (nonparametric) model, Y = Σ_{j=1}^{p} f_j(X_j) + Σ_{l=1}^{q} g_l(Z_l) + ϵ. Figure 6 summarizes the entire V-SPAM model, including both filtering and pooling components. Figure 6: V-SPAM model.
The fMRI voxel response is modeled as the summation of nonlinear functions of complex and pooled-complex cell outputs. The connections and components in the dashed region are to be estimated from the data under the assumption that many of them are null. 4 Experiments 4.1 Data description The data set analyzed in this paper consists of a total of 1,294 voxels recorded from area V1 of one human observer. A 4T Varian MRI scanner provided voxels of size 2mm x 2mm x 2.5mm at a frequency of 1Hz. The visual stimuli used in the experiment consisted of 1,750 20-by-20 degree grayscale natural images, masked by a circular aperture. A two-stage procedure was used for data collection. In the first stage, 1,750 natural images were presented to the subject 2 times each. This data set was used to fit the model. In the second stage, 120 additional natural images were presented 13 times each. This data set was used for model validation. (Note that the images used for estimation and validation were distinct.) In all cases images were flashed briefly 3 times during a 1 second display period, and there was a blank period of 3 seconds between successive images. After acquisition the fMRI signals were pre-processed to reduce temporal non-stationarity and increase signal-to-noise [5]. Complete details of the fMRI experiment can be found in [5]. 4.2 V-SPAM model fitting The V-SPAM model was fitted separately for each of the 1,294 voxels using the training set of 1,750 images and the evoked fMRI responses. The fitting procedure can be conceptualized in four successive stages that roughly parallel the hierarchical layers of the model itself. In the first stage, the model complex cell outputs are computed according to equation (2) using a pyramid (or family) of Gabor wavelets sampled on a grid of 128 x 128 pixels. The pyramid includes six spatial frequencies (or scales): 1, 2, 4, 8, 16, and 32 cycles/field of view.
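As a concrete sketch of this first fitting stage, the complex cell output of equation (2) can be computed for a single quadrature pair as follows. This is a minimal numpy illustration, not the paper's code: the Gabor parameterization (envelope width, frequency, orientation) is ours, and the check only confirms that the rectified form (1) and the energy form (2) agree, since [a]₊² + [−a]₊² = a².

```python
import numpy as np

def gabor_quadrature_pair(size=128, cycles=4, theta=0.3, sigma=12.0):
    """A quadrature pair of Gabor wavelets (cosine and sine phase) sampled on
    a size x size grid. Parameter choices are illustrative, not the paper's."""
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    u = x * np.cos(theta) + y * np.sin(theta)            # axis along the grating
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))  # Gaussian envelope
    phase = 2 * np.pi * cycles * u / size
    return env * np.cos(phase), env * np.sin(phase)

def complex_cell_eq1(I, psi, psi_q):
    """Equation (1): four rectified simple cells (psi, psi_q, and their negatives)."""
    relu = lambda z: max(z, 0.0)
    a, b = float((psi * I).sum()), float((psi_q * I).sum())
    return np.log(1 + relu(a) ** 2 + relu(-a) ** 2 + relu(b) ** 2 + relu(-b) ** 2)

def complex_cell_eq2(I, psi, psi_q):
    """Equation (2): the equivalent phase-invariant energy form."""
    a, b = float((psi * I).sum()), float((psi_q * I).sum())
    return np.log(1 + a ** 2 + b ** 2)
```

The two forms coincide for any image, which is why the model can be described either as four rectified simple cells or as an energy mechanism.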
At each spatial frequency ω the wavelets are positioned evenly on a ω × ω grid covering the image. All combinations of 8 orientations and 2 phases occur at each of the ω × ω positions. In total, the pyramid consists of 10,920 quadrature pairs plus 1 constant wavelet (corresponding to mean luminance). In the second stage, the model complex cell outputs are pre-screened in order to eliminate complex cell outputs that are unrelated to a voxel’s response, and to reduce the computational complexity of successive stages of fitting. This is accomplished by considering the squared-correlation of the response of each complex cell with the evoked voxel response, using the 1,750 images in the training set. Only the top k complex cells are retained. In pilot studies we found empirically that k = 100 was enough to give good statistical and computational performance (data not shown). In the third stage, pooled-complex cells (see Section 3) are formed from the complex cell outputs that passed the pre-screening in fitting stage 2. In the fourth and final stage, the complex and pooled-complex cell responses to the images in the training set are used as predictors in the SpAM fitting algorithm (see Figure 1), and this is optimized to fit the voxel responses evoked by the same 1,750 images in the training set. The smoothing is done by means of Gaussian kernel regression with plug-in bandwidth, and the regularization parameter is selected by the Akaike information criterion (AIC). 4.3 Model validation For each voxel, we evaluate the fitted V-SPAM models by computing the predictive R2 (squared correlation) of the predicted and actual fMRI responses evoked by each of the 120 images in the validation set. To permit a more complete evaluation of the V-SPAM model, we used the same data to fit a simpler model more directly comparable to the one used in earlier work with this data set [5]. 
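Two small pieces of bookkeeping from this section can be made concrete: the pyramid size (8 orientations at each of the ω × ω positions, for each of the six scales) and the predictive R² used for validation. A short sketch; `predictive_r2` is our name for the squared-correlation measure, not the paper's.

```python
import numpy as np

# Pyramid size: 8 orientations at each of the w x w positions, per scale.
scales = [1, 2, 4, 8, 16, 32]                  # cycles/field of view
quad_pairs = sum(8 * w * w for w in scales)    # = 8 * 1365
assert quad_pairs == 10920                     # plus 1 constant wavelet -> 10,921

def predictive_r2(y_obs, y_pred):
    """Predictive R^2: squared correlation of observed and predicted responses."""
    return float(np.corrcoef(y_obs, y_pred)[0, 1] ** 2)
```

Note that because R² is a squared correlation, any affine transformation of the predictions leaves it unchanged; it measures covariation, not absolute error.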
The sparse linear pooling model aims to predict each voxel’s response as a linear combination of all 10,921 estimated complex cell outputs. This model has the form Y(I) = β₀ + Σ_{j=1}^{p} β_j X_j(I) + ϵ, where the X_j(I) are the complex cell outputs estimated according to (2), with the p = 10,921 Gabor wavelets described in Section 4.2. The coefficients β_j, j = 0, . . . , p, were estimated by L2 Boosting [1] with the stopping criterion determined by 5-fold cross-validation within the same data set. This model is a sparsified version of the one used in [5], and has comparable prediction performance. 5 Results Figure 7 (left) shows a scatterplot comparing the performance of the V-SPAM model with that of the sparse linear pooling model for all 1,294 voxels. The vertical axis gives performance of the V-SPAM model, and the horizontal axis the sparse linear pooling model. Each point corresponds to a single voxel. The inset region contains 429 voxels for which both models had some predictive power (R² ≥ 0.1). For these voxels, the relative improvement of the V-SPAM model over the sparse linear pooling model is shown in the histogram to the right. The predictions of the V-SPAM model were on average 14% better than those of the sparse linear pooling model (standard deviation 17%). 5.1 Estimated receptive fields and tuning curves Figure 8 shows the spatial receptive-fields (RF’s) and joint frequency and orientation tuning curves estimated using the V-SPAM model for 3 voxels. These voxels were chosen because they had high predictive power (R²’s of 0.65, 0.59, and 0.63, respectively from left to right) and so were modeled accurately. The upper row of the figure shows the spatial RF of each voxel.
Figure 7: Predictive R² of the fitted V-SPAM model compared against the fitted sparse linear pooling model. (Left) Each of the 1,294 points in the scatterplot corresponds to a single voxel. (Right) Relative improvement (%) for the 429 voxels contained in the inset region on the left (mean = 14, SD = 17, median = 12, IQR = 17). The intensity at each location in the spatial RF represents the standardized predicted response of the voxel to an image stimulus consisting of a single pixel at that location. The spatial RF’s of these voxels are clearly localized in space, consistent with the known retinotopic organization of V1 and previous fMRI results [9]. The lower row of Figure 8 shows the joint frequency and orientation tuning properties of these same 3 voxels. Here the tuning curves were estimated by computing the predicted response of the fitted voxel model to cosine gratings of varying orientation (degrees) and spatial frequency (cycles/field of view). All of the voxels are tuned to spatial frequencies above about 8 cycles/field of view, while orientation tuning varies from voxel to voxel. The joint spatial frequency and orientation tuning of all 3 voxels appears to be non-separable (i.e. their orientation tuning is not a constant function of frequency).
Figure 8: (upper) Spatial receptive-fields (RF’s) and (lower) joint frequency and orientation tuning curves estimated by the V-SPAM model for 3 voxels with high predictive power (R²’s of 0.65, 0.59, 0.63, left to right). Each location in the spatial RF shows the standardized predicted response of the voxel to an image consisting of a single pixel at that location. The tuning curves show the standardized predicted response of the voxel to cosine gratings of varying orientation (degrees) and spatial frequency (cycles/field of view). 5.2 Nonlinearities One of the potential advantages of the V-SPAM model over other approaches is that it can reveal novel nonlinear tuning and pooling properties, as reflected in the nonlinear summation occurring in the final stage of the V-SPAM model. Figure 9 illustrates some of these functions estimated for a typical voxel with high predictive power (R² of 0.63). These correspond to the nonlinearities appearing in the final stage of the V-SPAM model (see Figure 6). Here the horizontal axis is the input in standard units of the corresponding model complex or pooled-complex cell outputs, and the vertical axis is the output in standard units of predicted responses. For this voxel, these are the 4 largest (ranked by L2 norm) nonlinearities. All 4 of these nonlinearities are compressive. The remaining 75 nonlinearities present in the voxel’s fitted model have similar shapes, but are much smaller and hence contribute less to the predicted response. They are overlaid in the final panel of Figure 9.
Figure 9: Nonlinearities estimated in the V-SPAM model for a voxel with high predictive power (R²: 0.63). The 4 largest (ranked by L2 norm) are shown left to right by the thick lines. The other 75 nonlinearities for this voxel (overlaid in the right panel) are smaller and contribute less to the predicted response. 6 Discussion and conclusions Our V-SPAM model provides better predictions of fMRI activity evoked by natural images than does a sparse linear model similar to that used in an earlier study of this data set [5]. This increased predictive power of the V-SPAM model reflects the fact that it can describe explicitly the nonlinear pooling that likely occurs among the many neurons whose pooled activity contributes to measured fMRI signals. These pooled output nonlinearities are likely a critical component of nonlinear computation across the visual hierarchy. Therefore, the SpAM framework may be particularly useful for modeling neurons or fMRI signals recorded in higher and more nonlinear stages of visual processing beyond V1. References [1] Peter Bühlmann and Bin Yu. Boosting with the L2 loss: Regression and classification. Journal of the American Statistical Association, 98(462):324–339, 2003. [2] R. L. De Valois and K. K. De Valois. Spatial Vision. Oxford University Press, 1990. [3] Trevor Hastie and Robert Tibshirani. Generalized additive models. Chapman & Hall Ltd., 1999. [4] D. J. Heeger, A. C. Huk, W. S. Geisler, and D. G. Albrecht. Spikes versus BOLD: what does neuroimaging tell us about neuronal activity? Nat Neurosci, 3(7):631–633, 2000. [5] Kendrick N. Kay, Thomas Naselaris, Ryan J. Prenger, and Jack L. Gallant. Identifying natural images from human brain activity. Nature, 452(7185):352–355, 2008. [6] Bruno A. Olshausen and David J. Field.
Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, June 1996. [7] Pradeep Ravikumar, Han Liu, John Lafferty, and Larry Wasserman. SpAM: Sparse additive models. Neural Information Processing Systems, 2007. [8] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Statist. Soc. B, 58(1):267–288, 1996. [9] Brian A. Wandell, Serge O. Dumoulin, and Alyssa A. Brewer. Visual field maps in human cortex. Neuron, 56(2):366–383, 2007.
A Scalable Hierarchical Distributed Language Model Andriy Mnih Department of Computer Science University of Toronto amnih@cs.toronto.edu Geoffrey Hinton Department of Computer Science University of Toronto hinton@cs.toronto.edu Abstract Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the nonhierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart in spite of using a word tree created using expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models. 1 Introduction Statistical language modelling is concerned with building probabilistic models of word sequences. Such models can be used to discriminate probable sequences from improbable ones, a task important for performing speech recognition, information retrieval, and machine translation. The vast majority of statistical language models are based on the Markov assumption, which states that the distribution of a word depends only on some fixed number of words that immediately precede it. While this assumption is clearly false, it is very convenient because it reduces the problem of modelling the probability distribution of word sequences of arbitrary length to the problem of modelling the distribution on the next word given some fixed number of preceding words, called the context. We will denote this distribution by P(wn|w1:n−1), where wn is the next word and w1:n−1 is the context (w1, ..., wn−1). 
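The Markov factorization just described can be made concrete with a toy maximum-likelihood bigram model (context of one word); this is our own illustration, not the paper's code, and it omits the smoothing that practical n-gram models require.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Maximum-likelihood bigram model: estimate P(w_n | w_{n-1}) by counting
    adjacent word pairs and normalizing. (Toy sketch; no smoothing.)"""
    counts = defaultdict(Counter)
    for sent in corpus:
        toks = ["<s>"] + sent.split()          # sentence-start marker
        for prev, nxt in zip(toks, toks[1:]):
            counts[prev][nxt] += 1
    return {ctx: {w: n / sum(nexts.values()) for w, n in nexts.items()}
            for ctx, nexts in counts.items()}

model = train_bigram(["the cat sat", "the cat ran", "a dog sat"])
```

Each context's probabilities sum to one, so the table defines a proper conditional distribution; the data-sparsity problem discussed below is exactly what happens when most n-tuples never occur in training.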
n-gram language models are the most popular statistical language models due to their simplicity and surprisingly good performance. These models are simply conditional probability tables for P(wn|w1:n−1), estimated by counting the n-tuples in the training data and normalizing the counts appropriately. Since the number of n-tuples is exponential in n, smoothing the raw counts is essential for achieving good performance. A large number of smoothing methods are available for n-gram models [4]. In spite of the sophisticated smoothing methods developed for them, n-gram models are unable to take advantage of large contexts since the data sparsity problem becomes extreme. The main reason for this behavior is the fact that classical n-gram models are essentially conditional probability tables where different entries are estimated independently of each other. These models do not take advantage of the fact that similar words occur in similar contexts, because they have no concept of similarity. Class-based n-gram models [3] aim to address this issue by clustering words and/or contexts into classes based on their usage patterns and then using this class information to improve generalization. While it can improve n-gram performance, this approach introduces a very rigid kind of similarity, since each word typically belongs to exactly one class. An alternative and much more flexible approach to counteracting the data sparsity problem is to represent each word using a real-valued feature vector that captures its properties, so that words used in similar contexts will have similar feature vectors. Then the conditional probability of the next word can be modelled as a smooth function of the feature vectors of the context words and the next word. This approach provides automatic smoothing, since for a given context similar words are now guaranteed to be assigned similar probabilities.
Similarly, similar contexts are now likely to have similar representations resulting in similar predictions for the next word. Most models based on this approach use a feed-forward neural network to map the feature vectors of the context words to the distribution for the next word (e.g. [12], [5], [9]). Perhaps the best known model of this type is the Neural Probabilistic Language Model [1], which has been shown to outperform n-gram models on a dataset of about one million words. 2 The hierarchical neural network language model The main drawback of the NPLM and other similar models is that they are very slow to train and test [10]. Since computing the probability of the next word requires explicitly normalizing over all words in the vocabulary, the cost of computing the probability of the given next word and the cost of computing the full distribution over the next word are virtually the same – they take time linear in the vocabulary size. Since computing the exact gradient in such models requires repeatedly computing the probability of the next word given its context and updating the model parameters to increase that probability, training time is also linear in the vocabulary size. Typical natural language datasets have vocabularies containing tens of thousands of words, which means that training NPLM-like models the straightforward way is usually too computationally expensive in practice. One way to speed up the process is to use a specialized importance sampling procedure to approximate the gradients required for learning [2]. However, while this method can speed up training substantially, testing remains computationally expensive. The hierarchical NPLM, introduced in [10], provides an exponential reduction in time complexity of learning and testing as compared to the NPLM. It achieves this reduction by replacing the unstructured vocabulary of the NPLM by a binary tree that represents a hierarchical clustering of words in the vocabulary.
Each word corresponds to a leaf in the tree and can be uniquely specified by the path from the root to that leaf. If N is the number of words in the vocabulary and the tree is balanced, any word can be specified by a sequence of O(log N) binary decisions indicating which of the two children of the current node is to be visited next. This setup replaces one N-way choice by a sequence of O(log N) binary choices. In probabilistic terms, one N-way normalization is replaced by a sequence of O(log N) local (binary) normalizations. As a result, a distribution over words in the vocabulary can be specified by providing the probability of visiting the left child at each of the nodes. In the hierarchical NPLM, these local probabilities are computed by giving a version of the NPLM the feature vectors for the context words as well as a feature vector for the current node as inputs. The probability of the next word is then given by the probability of making a sequence of binary decisions that corresponds to the path to that word. When applied to a dataset of about one million words, this model outperformed class-based trigrams, but performed considerably worse than the NPLM [10]. The hierarchical model however was more than two orders of magnitude faster than the NPLM. The main limitation of this work was the procedure used to construct the tree of words for the model. The tree was obtained by starting with the WordNet IS-A taxonomy and converting it into a binary tree through a combination of manual and data-driven processing. Our goal is to replace this procedure by an automated method for building trees from the training data without requiring expert knowledge of any kind. We will also explore the performance benefits of using trees where each word can occur more than once. 3 The log-bilinear model We will use the log-bilinear language model (LBL) [9] as the foundation of our hierarchical model because of its excellent performance and simplicity. 
Like virtually all neural language models, the LBL model represents each word with a real-valued feature vector. We will denote the feature vector for word w by r_w and refer to the matrix containing all these feature vectors as R. To predict the next word w_n given the context w_{1:n-1}, the model computes the predicted feature vector \hat{r} for the next word by linearly combining the context word feature vectors:

\hat{r} = \sum_{i=1}^{n-1} C_i r_{w_i},    (1)

where C_i is the weight matrix associated with the context position i. Then the similarity between the predicted feature vector and the feature vector for each word in the vocabulary is computed using the inner product. The similarities are then exponentiated and normalized to obtain the distribution over the next word:

P(w_n = w | w_{1:n-1}) = \frac{\exp(\hat{r}^\top r_w + b_w)}{\sum_j \exp(\hat{r}^\top r_j + b_j)}.    (2)

Here b_w is the bias for word w, which is used to capture the context-independent word frequency. Note that the LBL model can be interpreted as a special kind of feed-forward neural network with one linear hidden layer and a softmax output layer. The inputs to the network are the feature vectors for the context words, while the matrix of weights from the hidden layer to the output layer is simply the feature vector matrix R. The vector of activities of the hidden units corresponds to the predicted feature vector for the next word. Unlike the NPLM, the LBL model needs to compute the hidden activities only once per prediction and has no nonlinearities in its hidden layer. In spite of its simplicity the LBL model performs very well, outperforming both the NPLM and the n-gram models on a fairly large dataset [9].

4 The hierarchical log-bilinear model

Our hierarchical language model is based on the hierarchical model from [10]. The distinguishing features of our model are the use of the log-bilinear language model for computing the probabilities at each node and the ability to handle multiple occurrences of each word in the tree.
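As a concrete illustration of the LBL prediction step in Eqs. (1) and (2), here is a minimal numpy sketch; the array shapes and function name are our own assumptions, not the authors' code:

```python
import numpy as np

def lbl_distribution(R, C, b, context):
    """Next-word distribution of the log-bilinear model.
    R: (N, D) word feature vectors; C: list of (D, D) weight matrices,
    one per context position; b: (N,) biases; context: word indices."""
    # Eq. (1): predicted feature vector for the next word.
    r_hat = sum(Ci @ R[w] for Ci, w in zip(C, context))
    # Eq. (2): inner-product similarities, exponentiated and normalized.
    scores = R @ r_hat + b
    scores -= scores.max()  # numerical stability; does not change the result
    p = np.exp(scores)
    return p / p.sum()
```

Note that, as the text points out, the normalization in the last line is the expensive step: it touches every word in the vocabulary.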
Note that the idea of using multiple word occurrences in a tree was proposed in [10], but it was not implemented. The first component of the hierarchical log-bilinear model (HLBL) is a binary tree with words at its leaves. For now, we will assume that each word in the vocabulary is at exactly one leaf. Then each word can be uniquely specified by a path from the root of the tree to the leaf node the word is at. The path itself can be encoded as a binary string d of decisions made at each node, so that d_i = 1 corresponds to the decision to visit the left child of the current node. For example, the string "10" corresponds to a path that starts at the root, visits its left child, and then visits the right child of that child. This allows each word to be represented by a binary string which we will call a code. The second component of the HLBL model is the probabilistic model for making the decisions at each node, which in our case is a modified version of the LBL model. In the HLBL model, just like in its non-hierarchical counterpart, context words are represented using real-valued feature vectors. Each of the non-leaf nodes in the tree also has a feature vector associated with it that is used for discriminating the words in the left subtree from the words in the right subtree of the node. Unlike the context words, the words being predicted are represented using their binary codes, which are determined by the word tree. However, this representation is still quite flexible, since each binary digit in the code encodes a decision made at a node, which depends on that node's feature vector. In the HLBL model, the probability of the next word being w is the probability of making the sequence of binary decisions specified by the word's code, given the context.
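The code extraction above can be sketched as follows; the tuple-based tree representation (a tree is either a word or a (left, right) pair) and the helper name are our own, with digit 1 meaning "visit the left child" as in the text:

```python
def binary_codes(tree, prefix=()):
    """Map each word at a leaf to its list of codes (lists of binary
    digits, 1 = left child). A word appearing at several leaves gets
    several codes."""
    if not isinstance(tree, tuple):  # a leaf holding a single word
        return {tree: [list(prefix)]}
    codes = {}
    for digit, child in ((1, tree[0]), (0, tree[1])):
        for word, cs in binary_codes(child, prefix + (digit,)).items():
            codes.setdefault(word, []).extend(cs)
    return codes
```

With the tree `(('a', 'b'), 'c')`, the word 'b' gets the code [1, 0], matching the paper's example: "10" goes left at the root and then right.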
Since the probability of making a decision at a node depends only on the predicted feature vector, determined by the context, and the feature vector for that node, we can express the probability of the next word as a product of probabilities of the binary decisions:

P(w_n = w | w_{1:n-1}) = \prod_i P(d_i | q_i, w_{1:n-1}),    (3)

where d_i is the ith digit in the code for word w, and q_i is the feature vector for the ith node in the path corresponding to that code. The probability of each decision is given by

P(d_i = 1 | q_i, w_{1:n-1}) = \sigma(\hat{r}^\top q_i + b_i),    (4)

where \sigma(x) is the logistic function and \hat{r} is the predicted feature vector computed using Eq. 1. Here b_i is the node's bias, which captures the context-independent tendency to visit the left child when leaving this node.

The definition of P(w_n = w | w_{1:n-1}) can be extended to multiple codes per word by including a summation over all codes for w as follows:

P(w_n = w | w_{1:n-1}) = \sum_{d \in D(w)} \prod_i P(d_i | q_i, w_{1:n-1}),    (5)

where D(w) is the set of codes corresponding to word w. Allowing multiple codes per word can allow better prediction of words that have multiple senses or multiple usage patterns. Using multiple codes per word also makes it easy to combine several separate word hierarchies into a single one, reflecting the fact that no single hierarchy can express all the relationships between words. Using the LBL model instead of the NPLM for computing the local probabilities allows us to avoid computing the nonlinearities in the hidden layer, which makes our hierarchical model faster at making predictions than the hierarchical NPLM. More importantly, the hierarchical NPLM needs to compute the hidden activities once for each of the O(log N) decisions, while the HLBL model computes the predicted feature vector just once per prediction.
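A minimal sketch of Eqs. (3)-(5), computing a word's probability as a sum over its codes of products of binary-decision probabilities; the node-lookup callables and names are our own assumptions:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hlbl_word_prob(codes, node_vec, node_bias, r_hat):
    """P(w_n = w | context) as in Eqs. (3)-(5).
    codes: list of codes (lists of binary digits) for the word;
    node_vec(path) / node_bias(path): feature vector q_i and bias b_i
    of the node reached by the path prefix; r_hat: predicted vector."""
    total = 0.0
    for code in codes:  # Eq. (5): sum over the word's codes
        p, path = 1.0, ()
        for d in code:  # Eq. (3): product over decisions on the path
            q, b = node_vec(path), node_bias(path)
            # Eq. (4): probability of going left at this node
            p_left = sigmoid(sum(a * c for a, c in zip(r_hat, q)) + b)
            p *= p_left if d == 1 else 1.0 - p_left
            path = path + (d,)
        total += p
    return total
```

Because each node's binary choice is locally normalized, summing this quantity over all words in a single-code tree gives exactly 1, which is the point of the hierarchical decomposition.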
However, the time complexity of computing the probability for a single binary decision in an LBL model is still quadratic in the feature vector dimensionality D, which might make the use of high-dimensional feature vectors too computationally expensive. We make the time complexity linear in D by restricting the weight matrices C_i to be diagonal.¹ Note that for a context of size 1, this restriction does not reduce the representational power of the model, because the context weight matrix C_1 can be absorbed into the word feature vectors. And while this restriction does make models with larger contexts slightly less powerful, we believe that this loss is more than compensated for by the much faster training times, which allow using more complex trees. HLBL models can be trained by maximizing the (penalized) log-likelihood. Since the probability of the next word depends only on the context weights, the feature vectors of the context words, and the feature vectors of the nodes on the paths from the root to the leaves containing the word in question, only a (logarithmically) small fraction of the parameters need to be updated for each training case.

5 Hierarchical clustering of words

The first step in training a hierarchical language model is constructing a binary tree of words for the model to use. This can be done by using expert knowledge, data-driven methods, or a combination of the two. For example, in [10] the tree was constructed from the IS-A taxonomy DAG from WordNet [6]. After preprocessing the taxonomy by hand to ensure that each node had only one parent, data-driven hierarchical binary clustering was performed on the children of the nodes in the taxonomy that had more than two children, resulting in a binary tree. We are interested in using a pure learning approach applicable in situations where expert knowledge is unavailable. It is also not clear that using expert knowledge, even when it is available, will lead to superior performance.
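Returning briefly to the diagonal restriction on the C_i matrices described above (and in footnote 1), a small numpy sketch with our own names makes the cost difference concrete: each diagonal matrix-vector product collapses to an elementwise product, so the per-word cost drops from O(D^2) to O(D):

```python
import numpy as np

def predicted_vector_diagonal(R, c, context):
    """Compute r_hat with diagonal context matrices C_i: each position
    contributes an elementwise product c_i * r_{w_i} (O(D) per word,
    versus O(D^2) for a dense C_i).
    R: (N, D) word feature vectors; c: (n-1, D) context weight vectors."""
    return sum(ci * R[w] for ci, w in zip(c, context))
```

This is numerically identical to using `np.diag(ci)` as a dense matrix, just cheaper.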
Hierarchical binary clustering of words based on their usage statistics is a natural choice for generating binary trees of words automatically. This task is similar to the task of clustering words into classes for training class-based n-gram models, for which a large number of algorithms have been proposed. We considered several of these algorithms before deciding to use our own algorithm, which turned out to be surprisingly effective in spite of its simplicity. However, we will mention two existing algorithms that might be suitable for producing binary word hierarchies. Since we wanted an algorithm that scaled well to large vocabularies, we restricted our attention to top-down hierarchical clustering algorithms, as they tend to scale better than their agglomerative counterparts [7]. The algorithm from [8] produces exactly the kind of binary trees we need, except that its time complexity is cubic in the vocabulary size.² We also considered the distributional clustering algorithm [11] but decided not to use it because of the difficulties involved in using contexts of more than one word for clustering. This problem is shared by most n-gram clustering algorithms, so we will describe it in some detail. Since we would like to cluster words for easy prediction of the next word based on its context, it is natural to describe each word in terms of the contexts that can precede it. For example, for a single-word context one such description is the distribution of words that precede the word of interest in the training data.

¹Thus the feature vector for the next word can now be computed as \hat{r} = \sum_{i=1}^{n-1} c_i \circ r_{w_i}, where c_i is a vector of context weights for position i and \circ denotes the elementwise product of two vectors.
²More precisely, the time complexity of the algorithm is cubic in the number of frequent words, but that is still too slow for our purposes.
The problem becomes apparent when we consider using larger contexts: the number of contexts that can potentially precede a word grows exponentially in the context size. This is the very same data sparsity problem that affects the n-gram models, which is not surprising, since we are trying to describe words in terms of exponentially large (normalized) count vectors. Thus, clustering words based on such large-context representations becomes non-trivial due to the computational cost involved as well as the statistical difficulties caused by the sparsity of the data. We avoid these difficulties by operating on low-dimensional real-valued word representations in our tree-building procedure. Since we need to train a model to obtain word feature vectors, we perform the following bootstrapping procedure: we generate a random binary tree of words, train an HLBL model based on it, and use the distributed representations it learns to represent words when building the word tree. Since each word is represented by a distribution over contexts it appears in, we need a way of compressing such a collection of contexts down to a low-dimensional vector. After training the HLBL model, we summarize each context w1:n−1 with the predicted feature vector produced from it using Eq. 1. Then, we condense the distribution of contexts that precede a given word into a feature vector by computing the expectation of the predicted representation w.r.t. that distribution. Thus, for the purposes of clustering each word is represented by its average predicted feature vector. After computing the low-dimensional real-valued feature vectors for words, we recursively apply a very simple clustering algorithm to them. At each step, we fit a mixture of two Gaussians to the feature vectors and then partition them into two subsets based on the responsibilities of the two mixture components for them. We then partition each of the subsets using the same procedure, and so on. 
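The splitting step just described, fitting a mixture of two spherical Gaussians with EM and partitioning by responsibilities, can be sketched as follows; this is our own minimal implementation of the stated procedure, not the authors' code:

```python
import numpy as np

def gaussian_split(X, steps=10, rng=None):
    """Fit a 2-component spherical Gaussian mixture to the rows of X
    with EM, starting from a random partition, and return the
    responsibilities of component 0 for each row."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Initialize means from a random partition of the feature vectors.
    assign = rng.integers(0, 2, size=n)
    mu = np.stack([X[assign == k].mean(axis=0) for k in (0, 1)])
    var = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(steps):
        # E-step: responsibilities under the spherical Gaussians.
        log_p = np.stack([
            -0.5 * ((X - mu[k]) ** 2).sum(axis=1) / var[k]
            - 0.5 * d * np.log(var[k]) + np.log(pi[k])
            for k in (0, 1)])
        log_p -= log_p.max(axis=0)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=0)
        # M-step: update means, spherical variances, and mixing weights.
        for k in (0, 1):
            w = resp[k]
            mu[k] = (w[:, None] * X).sum(axis=0) / w.sum()
            var[k] = (w * ((X - mu[k]) ** 2).sum(axis=1)).sum() / (d * w.sum())
        pi = resp.sum(axis=1) / n
    return resp[0]
```

The paper's splitting rules then turn these responsibilities into a partition (or overlapping pair of subsets) of the words.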
The recursion stops when the current set contains only two words. We fit the mixtures by running the EM algorithm for 10 steps3. The algorithm updates both the means and the spherical covariances of the components. Since the means of the components are initialized based on a random partitioning of the feature vectors, the algorithm is not deterministic and will produce somewhat different clusterings on different runs. One appealing property of this algorithm is that the running time of each iteration is linear in the vocabulary size, which is a consequence of representing words using feature vectors of fixed dimensionality. In our experiments, the algorithm took only a few minutes to build a hierarchy for a vocabulary of nearly 18000 words based on 100-dimensional feature vectors. The goal of an algorithm for generating trees for hierarchical language models is to produce trees that are well-supported by the data and are reasonably well-balanced so that the resulting models generalize well and are fast to train and test. To explore the trade-off between these two requirements, we tried several splitting rules in our tree-building algorithm. The rules are based on the observation that the responsibility of a component for a datapoint can be used as a measure of confidence about the assignment of the datapoint to the component. Thus, when the responsibilities of both components for a datapoint are close to 0.5, we cannot be sure that the datapoint should be in one component but not the other. Our simplest rule aims to produce a balanced tree at any cost. It sorts the responsibilities and splits the words into two disjoint subsets of equal size based on the sorted order. The second rule makes splits well-supported by the data even if that results in an unbalanced tree. It achieves that by assigning the word to the component with the higher responsibility for the word. 
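The first two splitting rules can be sketched directly from the responsibilities; the helper names are our own:

```python
import numpy as np

def split_balanced(resp):
    """Rule 1 (BALANCED): sort by responsibility and cut the words into
    two disjoint subsets of equal size, guaranteeing a balanced tree."""
    order = np.argsort(resp)
    half = len(resp) // 2
    return order[half:], order[:half]

def split_adaptive(resp):
    """Rule 2 (ADAPTIVE): assign each word to the component with the
    higher responsibility, even if the resulting split is unbalanced."""
    idx = np.arange(len(resp))
    return idx[resp >= 0.5], idx[resp < 0.5]
```

With responsibilities far from 0.5 the two rules agree; they differ on words the mixture is unsure about, which is exactly the trade-off between balance and data support discussed above.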
The third and most sophisticated rule is an extension of the second rule, modified to assign a point to both components whenever both responsibilities are within ε of 0.5, for some pre-specified ε. This rule is designed to produce multiple codes for words that are difficult to cluster. We will refer to the algorithms that use these rules as BALANCED, ADAPTIVE, and ADAPTIVE(ε) respectively. Finally, as a baseline for comparison with the above algorithms, we will use an algorithm that generates random balanced trees. It starts with a random permutation of the words and recursively builds the left subtree based on the first half of the words and the right subtree based on the second half of the words. We will call this algorithm RANDOM.

³Running EM for more than 10 steps did not make a significant difference in the quality of the resulting trees.

Table 1: Trees of words generated by the feature-based algorithm. The mean code length is the sum of lengths of codes associated with a word, averaged over the distribution of the words in the training data. The run-time complexity of the hierarchical model is linear in the mean code length of the tree used. The mean number of codes per word refers to the number of codes per word averaged over the training data distribution. Since each non-leaf node in a tree has its own feature vector, the number of free parameters associated with the tree is linear in this quantity.

Tree label  Generating algorithm  Mean code length  Mean number of codes per word  Number of non-leaf nodes
T1          RANDOM                14.2              1.0                            17963
T2          BALANCED              14.3              1.0                            17963
T3          ADAPTIVE              16.1              1.0                            17963
T4          ADAPTIVE(0.25)        24.2              1.3                            22995
T5          ADAPTIVE(0.4)         29.0              1.7                            30296
T6          ADAPTIVE(0.4) × 2     69.1              3.4                            61014
T7          ADAPTIVE(0.4) × 4     143.2             6.8                            121980

Table 2: The effect of the feature dimensionality and the word tree used on the test set perplexity of the model.
Feature dimensionality  Perplexity using a random tree  Perplexity using a non-random tree  Reduction in perplexity
25                      191.6                           162.4                               29.2
50                      166.4                           141.7                               24.7
75                      156.4                           134.8                               21.6
100                     151.2                           131.3                               19.9

6 Experimental results

We compared the performance of our models on the APNews dataset containing the Associated Press news stories from 1995 and 1996. The dataset consists of a 14 million word training set, a 1 million word validation set, and a 1 million word test set. The vocabulary size for this dataset is 17964. We chose this dataset because it had already been used to compare the performance of neural models to that of n-gram models in [1] and [9], which allowed us to compare our results to the results in those papers. Except where stated otherwise, the models used for the experiments used 100-dimensional feature vectors and a context size of 5. The details of the training procedure we used are given in the appendix. All models were compared based on their perplexity score on the test set. We started by training a model that used a tree generated by the RANDOM algorithm (tree T1 in Table 1). The feature vectors learned by this model were used to build a tree using the BALANCED algorithm (tree T2). We then trained models of various feature vector dimensionality on each of these trees to see whether a highly expressive model can compensate for using a poorly constructed tree. The test scores for the resulting models are given in Table 2. As can be seen from the scores, using a non-random tree results in much better model performance. Though the gap in performance can be reduced by increasing the dimensionality of the feature vectors, using a non-random tree drastically improves performance even for the model with 100-dimensional feature vectors. It should be noted, however, that models that use the random tree are not entirely hopeless. For example, they outperform the unigram model, which achieved a perplexity of 602.0, by a very large margin.
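Perplexity, the comparison metric used throughout these experiments, is the exponential of the average negative log-probability per test word; a minimal sketch:

```python
import math

def perplexity(log_probs):
    """Test-set perplexity from the natural-log probabilities the model
    assigns to each test word: exp of the mean negative log-probability."""
    return math.exp(-sum(log_probs) / len(log_probs))
```

As a sanity check, a model that spreads its probability uniformly over V words has perplexity exactly V, which is why the unigram baseline's perplexity of 602.0 is read as "as confused as a uniform choice among 602 words."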
This suggests that the HLBL architecture is sufficiently flexible to make effective use of a random tree over words. Since increasing the feature dimensionality beyond 100 did not result in a substantial reduction in perplexity, we used 100-dimensional feature vectors for all of our models in the following experiments. Next we explored the effect of the tree-building algorithm on the performance of the resulting HLBL model. To do that, we used the RANDOM, BALANCED, and ADAPTIVE algorithms to generate one tree each. The ADAPTIVE(ε) algorithm was used to generate two trees: one with ε set to 0.25 and the other with ε set to 0.4. We then generated a 2× overcomplete tree by running the ADAPTIVE(ε = 0.4) algorithm twice and creating a tree with a root node that had the two generated trees as its subtrees. Since the ADAPTIVE(ε) algorithm involves some randomization, we tried to improve the model performance by allowing the model to choose dynamically between two possible clusterings. Finally, we generated a 4× overcomplete tree using the same approach. Table 1 lists the generated trees as well as some statistics for them.

Table 3: Test set perplexity results for the hierarchical LBL models. All the distributed models in the comparison used 100-dimensional feature vectors and a context size of 5. LBL is the non-hierarchical log-bilinear model. KNn is a Kneser-Ney n-gram model. The scores for LBL, KN3, and KN5 are from [9]. The timing for LBL is based on our implementation of the model.

Model  Tree used  Tree generating algorithm  Perplexity  Minutes per epoch
HLBL   T1         RANDOM                     151.2       4
HLBL   T2         BALANCED                   131.3       4
HLBL   T3         ADAPTIVE                   127.0       4
HLBL   T4         ADAPTIVE(0.25)             124.4       6
HLBL   T5         ADAPTIVE(0.4)              123.3       7
HLBL   T6         ADAPTIVE(0.4) × 2          115.7       16
HLBL   T7         ADAPTIVE(0.4) × 4          112.1       32
LBL    –          –                          117.0       6420
KN3    –          –                          129.8       –
KN5    –          –                          123.2       –
Note that trees generated by ADAPTIVE(ε) with ε > 0 result in models with more parameters, due to the greater number of tree nodes and thus tree-node feature vectors, compared to trees generated by methods producing one code/leaf per word. Table 3 shows the test set perplexities and time per epoch for the resulting models, along with the perplexities for models from [9]. The results show that the performance of the HLBL models based on non-random trees is comparable to that of the n-gram models. As expected, building word trees adaptively improves model performance. The general trend that emerges is that bigger trees tend to lead to better-performing models. For example, a model based on a single tree produced using the ADAPTIVE(0.4) algorithm performs as well as the 5-gram but not as well as the non-hierarchical LBL model. However, using a 2× overcomplete tree generated using the same algorithm results in a model that outperforms both the n-gram models and the LBL model, and using a 4× overcomplete tree leads to a further reduction in perplexity. The time-per-epoch statistics reported for the neural models in Table 3 show the great speed advantage of the HLBL models over the LBL model. Indeed, the slowest of our HLBL models is over 200 times faster than the LBL model.

7 Discussion and future work

We have demonstrated that a hierarchical neural language model can actually outperform its non-hierarchical counterparts and achieve state-of-the-art performance. The key to making a hierarchical model perform well is using a carefully constructed hierarchy over words. We have presented a simple and fast feature-based algorithm for automatic construction of such hierarchies. Creating hierarchies in which every word occurred more than once was essential to getting the models to perform better. An inspection of trees generated by our adaptive algorithm showed that the words with the largest numbers of codes (i.e.
the words that were replicated the most) were not the words with multiple distinct senses. Instead, the algorithm appeared to replicate the words that occurred relatively infrequently in the data and were therefore difficult to cluster. The failure to use multiple codes for words with several very different senses is probably a consequence of summarizing the distribution over contexts with a single mean feature vector when clustering words. The "sense multimodality" of context distributions would be better captured by using a small set of feature vectors found by clustering the contexts. Finally, since our tree-building algorithm is based on the feature vectors learned by the model, it is possible to periodically interrupt the training of such a model to rebuild the word tree based on the feature vectors provided by the model being trained. This modified training procedure might produce better models by allowing the word hierarchy to adapt to the probabilistic component of the model and vice versa.

Appendix: Details of the training procedure

The models were trained by maximizing the log-likelihood using stochastic gradient ascent. All model parameters other than the biases were initialized by sampling from a Gaussian of small variance. The biases for the tree nodes were initialized so that the distribution produced by the model with all the non-bias parameters set to zero matched the base rates of the words in the training set. Models were trained using a learning rate of 10^-3 until the perplexity on the validation set started to increase. Then the learning rate was reduced to 3 × 10^-5 and training was resumed until the validation perplexity started increasing again. All model parameters were regularized using a small L2 penalty.

Acknowledgments

We thank Martin Szummer for his comments on a draft of this paper. This research was supported by NSERC and CFI. GEH is a fellow of the Canadian Institute for Advanced Research.
References

[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003.
[2] Yoshua Bengio and Jean-Sébastien Senécal. Quick training of probabilistic neural nets by importance sampling. In AISTATS'03, 2003.
[3] P. F. Brown, R. L. Mercer, V. J. Della Pietra, and J. C. Lai. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479, 1992.
[4] Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. In Proceedings of the Thirty-Fourth Annual Meeting of the Association for Computational Linguistics, pages 310–318, San Francisco, 1996.
[5] Ahmad Emami, Peng Xu, and Frederick Jelinek. Using a connectionist model in a syntactical based language model. In Proceedings of ICASSP, volume 1, pages 372–375, 2003.
[6] C. Fellbaum et al. WordNet: An Electronic Lexical Database. Cambridge, Mass: MIT Press, 1998.
[7] J. Goodman. A bit of progress in language modeling. Technical report, Microsoft Research, 2000.
[8] John G. McMahon and Francis J. Smith. Improving statistical language model performance with automatically generated word hierarchies. Computational Linguistics, 22(2):217–247, 1996.
[9] A. Mnih and G. Hinton. Three new graphical models for statistical language modelling. In Proceedings of the 24th International Conference on Machine Learning, pages 641–648, 2007.
[10] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani, editors, AISTATS'05, pages 246–252, 2005.
[11] F. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183–190, 1993.
[12] Holger Schwenk and Jean-Luc Gauvain. Connectionist language modeling for large vocabulary continuous speech recognition.
In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pages 765–768, 2002.
Supervised Exponential Family Principal Component Analysis via Convex Optimization

Yuhong Guo
Computer Sciences Laboratory
Australian National University
yuhongguo.cs@gmail.com

Abstract

Recently, supervised dimensionality reduction has been gaining attention, owing to the realization that data labels are often available and indicate important underlying structure in the data. In this paper, we present a novel convex supervised dimensionality reduction approach based on exponential family PCA, which is able to avoid the local optima of typical EM learning. Moreover, by introducing a sample-based approximation to exponential family models, it overcomes the limitation of the prevailing Gaussian assumptions of standard PCA, and produces a kernelized formulation for nonlinear supervised dimensionality reduction. A training algorithm is then devised based on a subgradient bundle method, whose scalability can be gained using a coordinate descent procedure. The advantage of our global optimization approach is demonstrated by empirical results over both synthetic and real data.

1 Introduction

Principal component analysis (PCA) has been extensively used for data analysis and processing. It provides a closed-form solution for linear unsupervised dimensionality reduction through singular value decomposition (SVD) on the data matrix [8]. Probabilistic interpretations of PCA have also been provided in [9, 16], which formulate PCA using a latent variable model with Gaussian distributions. To generalize PCA to better suit non-Gaussian data, many extensions to PCA have been proposed that relax the assumption of a Gaussian data distribution. Exponential family PCA is the most prominent example, where the underlying dimensionality reduction principle of PCA is extended to the general exponential family [4, 7, 13]. Previous work has shown that improved quality of dimensionality reduction can be obtained by using exponential family models appropriate for the data at hand [4, 13].
Given data from a non-Gaussian distribution these techniques are better able than PCA to capture the intrinsic low dimensional structure. However, most existing non-Gaussian dimensionality reduction methods rely on iterative local optimization procedures and thus suffer from local optima, with the sole exception of [7] which shows a general convex form can be obtained for dimensionality reduction with exponential family models. Recently, supervised dimensionality reduction has begun to receive increased attention. As the goal of dimensionality reduction is to identify the intrinsic structure of a data set in a low dimensional space, there are many reasons why supervised dimensionality reduction is a meaningful topic to study. First, data labels are almost always assigned based on some important intrinsic property of the data. Such information should be helpful to suppress noise and capture the most useful aspects of a compact representation of the data. Moreover, there are many high dimensional data sets with label information available, e.g., face and digit images, and it is unwise to ignore them. A few supervised dimensionality reduction methods based on exponential family models have been proposed in the literature. For example, a supervised probabilistic PCA (SPPCA) model was proposed in [19]. SPPCA extends probabilistic PCA by assuming that both features and labels have Gaussian distributions and are generated independently from the latent low dimensional space through linear transformations. The model is learned by maximizing the marginal likelihood of the observed data using an alternating EM procedure. A more general supervised dimensionality reduction approach with generalized linear models (SDR GLM) was proposed in [12]. 
SDR GLM views both features and labels as exponential family random variables and optimizes a weighted linear combination of their conditional likelihood given latent low dimensional variables using an alternating EM-style procedure with closed-form update rules. SDR GLM is able to deal with different data types by using different exponential family models. Similar to SDR GLM, the linear supervised dimensionality reduction method proposed in [14] also takes advantage of exponential family models to deal with different data types. However, it optimizes the conditional likelihood of labels given observed features within a mixture model framework using an EM-style optimization procedure. Beyond the PCA framework, many other supervised dimensionality reduction methods have been proposed in the literature. Linear (Fisher) discriminant analysis (LDA) is a popular alternative [5], which maximizes between-class variance and minimizes within-class variance. Moreover, a kernelized Fisher discriminant analysis (KDA) has been studied in [10]. Another notable nonlinear supervised dimensionality reduction approach is the colored maximum variance unfolding (MVU) approach proposed in [15], which maximizes the variance aligning with the side information (e.g., label information), while preserving the local distance structures from the data. However, colored MVU has only been evaluated on training data. In this paper, we propose a novel supervised exponential family PCA model (SEPCA). In the SEPCA model, observed data x and its label y are assumed to be generated from the latent variables z via conditional exponential family models; dimensionality reduction is conducted by optimizing the conditional likelihood of the observations (x, y). By exploiting convex duality of the sub-problems and eigenvector properties, a solvable convex formulation of the problem can be derived that preserves solution equivalence to the original.
This convex formulation allows efficient global optimization algorithms to be devised. Moreover, by introducing a sample-based approximation to exponential family models, SEPCA does not suffer from the limitations of implicit Gaussian assumptions and can be conveniently kernelized to achieve nonlinearity. A training algorithm is then devised based on a subgradient bundle method, whose scalability can be gained through a coordinate descent procedure. Finally, we present a simple formulation to project new testing data into the embedded space. This projection can be used for other supervised dimensionality reduction approaches as well. Our experimental results over both synthetic and real data suggest that a more global, principled probabilistic approach, SEPCA, is better able to capture subtle structure in the data, particularly when good label information is present. The remainder of this paper is organized as follows. First, in Section 2 we present the proposed supervised exponential family PCA model and formulate a convex nondifferentiable optimization problem. Then, an efficient global optimization algorithm is presented in Section 3. In Section 4, we present a simple projection method for new testing points. We then present the experimental results in Section 5. Finally, in Section 6 we conclude the paper.

2 Supervised Exponential Family PCA

We assume we are given a t × n data matrix, X, consisting of t observations of n-dimensional feature vectors, X_{i:}, and a t × k indicator matrix, Y, with each row indicating the class label of the corresponding observation X_{i:}; thus \sum_{j=1}^{k} Y_{ij} = 1. For simplicity, we assume the features in X are centered; that is, their empirical means are zeros. We aim to recover a d-dimensional re-representation, a t × d matrix Z, of the data (d < n). This is typically viewed as discovering a latent low dimensional manifold in the high dimensional feature space.
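The problem setup just described (a centered t × n feature matrix and a t × k one-hot indicator matrix with a single 1 per row) can be sketched in numpy; the helper name is our own:

```python
import numpy as np

def make_inputs(X, labels, k):
    """Center the t x n feature matrix X so its empirical column means
    are zero, and build the t x k indicator matrix Y with one 1 per row
    marking each observation's class label."""
    Xc = X - X.mean(axis=0)
    t = len(labels)
    Y = np.zeros((t, k))
    Y[np.arange(t), labels] = 1.0
    return Xc, Y
```

Centering is what lets the setup drop an explicit mean term from the model, and the row-sum-to-one property of Y is exactly the constraint stated above.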
Since the label information Y is exploited in the discovery process, this is called supervised dimensionality reduction. When recovering Z, a key restriction one would like to enforce is that the features used for coding, Z_{:j}, be linearly independent; that is, one would like to enforce the constraint Z⊤Z = I, which ensures that the codes are expressed by orthogonal features in the low-dimensional representation. Given this setup, we attempt to address the problem of supervised dimensionality reduction using a probabilistic latent variable model. Our intuition is that the important intrinsic structure (underlying feature representation) of the data should be able to accurately generate/predict the original data features and labels. In this section, we formulate the low-dimensional principal component discovery problem as a conditional likelihood maximization problem based on exponential family model representations, which can be reformulated into an equivalent nondifferentiable convex optimization problem. We then exploit a sample-based approximation to unify exponential family models for different data types.

2.1 Convex Formulation of Supervised Exponential Family PCA

As with generalized exponential family PCA [4], we attempt to find a low-dimensional representation by maximizing the conditional likelihood of the observation matrices X and Y given the latent matrix Z: log P(X, Y | Z) = log P(X | Z) + log P(Y | Z).
Using the general exponential family representation, a regularized version of this maximization problem can be formulated as

max_{Z: Z⊤Z=I} max_{W,Ω,b}  log P(X|Z, W) − (β/2) tr(WW⊤) + log P(Y|Z, Ω, b) − (β/2)(tr(ΩΩ⊤) + b⊤b)
= max_{Z: Z⊤Z=I} max_{W,Ω,b}  tr(ZWX⊤) − Σ_i (A(Z_i:, W) − log P₀(X_i:)) − (β/2) tr(WW⊤)
    + tr(ZΩY⊤) + 1⊤Y b − Σ_i A(Z_i:, Ω, b) − (β/2)(tr(ΩΩ⊤) + b⊤b)        (1)

where W is a d × n parameter matrix for the conditional model P(X|Z); Ω is a d × k parameter matrix for the conditional model P(Y|Z), and b is a k × 1 bias vector; 1 denotes the vector of all 1s; A(Z_i:, W) and A(Z_i:, Ω, b) are the log normalization functions that ensure valid probability distributions:

A(Z_i:, W) = log ∫ exp(Z_i: W x) P₀(x) dx        (2)
A(Z_i:, Ω, b) = log Σ_{ℓ=1}^{k} exp(Z_i: Ω 1_ℓ + 1_ℓ⊤ b)        (3)

where 1_ℓ denotes a zero vector with a single 1 in the ℓth entry. Note that the class variable y is discrete; thus maximizing log P(Y|Z, Ω, b) is discriminative classification training. In fact, the second part of the objective function in (1) is simply a multi-class logistic regression. That is why we have incorporated an additional bias term b into the model.

Theorem 1 The optimization problem (1) is equivalent to

min_{Uˣ, Uʸ} max_{M: I ⪰ M ⪰ 0, tr(M)=d}  Σ_i (A*(Uˣ_i:) + log P₀(X_i:)) + (1/2β) tr((X − Uˣ)(X − Uˣ)⊤ M)
    + Σ_i A*(Uʸ_i:) + (1/2β) tr((Y − Uʸ)(Y − Uʸ)⊤ (M + E))        (4)

where E is a t × t matrix of all 1s; Uˣ is a t × n matrix; Uʸ is a t × k matrix; A*(Uˣ_i:) and A*(Uʸ_i:) are the Fenchel conjugates of A(Z_i:, W) and A(Z_i:, Ω, b), respectively; M = ZZ⊤, and Z can be recovered by taking the top d eigenvectors of M; the model parameters can be recovered by

W = (1/β) Z⊤(X − Uˣ),  Ω = (1/β) Z⊤(Y − Uʸ),  b = (1/β)(Y − Uʸ)⊤ 1

Proof: The proof is simple and based on standard results. Due to space limitations, we only summarize the key steps here. There are three steps.
The first step is to derive the Fenchel conjugate dual of each log partition function, A(Z, ·), following [18, Section 3.3.3]; this can be used to yield

max_{Z: Z⊤Z=I} min_{Uˣ, Uʸ}  Σ_i (A*(Uˣ_i:) + log P₀(X_i:)) + (1/2β) tr((X − Uˣ)(X − Uˣ)⊤ ZZ⊤)
    + Σ_i A*(Uʸ_i:) + (1/2β) tr((Y − Uʸ)(Y − Uʸ)⊤ (ZZ⊤ + E))        (5)

which is equivalent to the original problem (1). The second step exploits the strong min-max property [2] and the relationship between the constraint sets

{M : M = ZZ⊤ for some Z such that Z⊤Z = I} ⊆ {M : I ⪰ M ⪰ 0, tr(M) = d},

which allows one to further show that the optimization (4) is an upper bound relaxation of (5). The final equivalence proof is based on the result of [11], which shows that substituting the matrix M for ZZ⊤ does not produce a relaxation gap. Note that (4) is a min-max optimization problem. Moreover, for each fixed M, the outer minimization problem is obviously convex, since the Fenchel conjugates, A*(Uˣ_i:) and A*(Uʸ_i:), are convex functions of Uˣ and Uʸ, respectively [2]; that is, the objective function for the outer minimization is a pointwise supremum over an infinite set of convex functions. Thus the overall min-max optimization is convex [3], but not necessarily differentiable. We address the nondifferentiable training issue in Section 3.

2.2 Sample-based Approximation

In the previous section, we formulated supervised exponential family PCA as the convex optimization problem (4). However, before devising a training algorithm to solve it, we have to provide concrete forms for the Fenchel conjugate functions A*(Uˣ_i:) and A*(Uʸ_i:). The Fenchel conjugate A* differs across exponential family models; see [18, Table 2]. For example, since the y variable in our model is a discrete class variable, it takes a multinomial distribution.
Thus the Fenchel conjugate function A*(Uʸ_i:) is given by

A*(Uʸ_i:) = A*(Θʸ_i:) = tr(Θʸ_i: log Θʸ_i:⊤),  where Θʸ ≥ 0, Θʸ1 = 1        (6)

The specific exponential family model is determined by the data type and distribution. PCA and SPPCA use Gaussian models, so their performance might degrade when the data distribution is non-Gaussian. However, it is tedious and sometimes hard to choose the most appropriate exponential family model for each specific application problem. Moreover, the log normalization function A and its Fenchel conjugate A* might not be easily computable. For these reasons, we propose a sample-based approximation to the integral (2), yielding an empirical approximation to the true underlying exponential family model, as follows. If one replaces the integral definition (2) with an empirical definition, A(Z_i:, W) = log Σ_j exp(Z_i: W X_j:⊤)/t, then the conjugate function is given by

A*(Uˣ_i:) = A*(Θˣ_i:) = tr(Θˣ_i: log Θˣ_i:⊤) − log(1/t),  where Θˣ ≥ 0, Θˣ1 = 1        (7)

With this sample-based approximation, problem (4) can be expressed as

min_{Θˣ, Θʸ} max_{M: I ⪰ M ⪰ 0, tr(M)=d}  tr(Θˣ log Θˣ) + (1/2β) tr((I − Θˣ) K (I − Θˣ)⊤ M)
    + tr(Θʸ log Θʸ) + (1/2β) tr((Y − Θʸ)(Y − Θʸ)⊤ (M + E))        (8)
subject to  Θˣ ≥ 0, Θˣ1 = 1;  Θʸ ≥ 0, Θʸ1 = 1        (9)

One benefit of working with this sample-based approximation is that it is automatically kernelized, K = XX⊤, so that nonlinearity can be conveniently introduced.

3 Efficient Global Optimization

The optimization (8) derived in the previous section is a convex-concave min-max problem. The inner maximization of (8) is a well-known problem with a closed-form solution [11]: M* = Z*Z*⊤ with Z* = Qᵈ_max((I − Θˣ) K (I − Θˣ)⊤ + (Y − Θʸ)(Y − Θʸ)⊤), where Qᵈ_max(D) denotes the matrix formed by the top d eigenvectors of D. However, the overall outer minimization problem is nondifferentiable with respect to Θˣ and Θʸ.
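The sample-based objective (8) is straightforward to evaluate numerically. The sketch below does so at a feasible uniform starting point; the 0 log 0 = 0 convention and all sizes are illustrative assumptions on our part:

```python
import numpy as np

def sepca_objective(Theta_x, Theta_y, M, K, Y, beta):
    """Evaluate the objective of (8) at (Theta_x, Theta_y) for a fixed M."""
    t = K.shape[0]
    # negative entropy tr(Theta log Theta), with the convention 0 log 0 = 0
    neg_entropy = lambda T: np.sum(np.where(T > 0, T * np.log(T), 0.0))
    E = np.ones((t, t))
    Rx = np.eye(t) - Theta_x
    Ry = Y - Theta_y
    return (neg_entropy(Theta_x) + np.trace(Rx @ K @ Rx.T @ M) / (2 * beta)
            + neg_entropy(Theta_y) + np.trace(Ry @ Ry.T @ (M + E)) / (2 * beta))

rng = np.random.default_rng(2)
t, n, k, d, beta = 6, 4, 3, 2, 1.0
X = rng.normal(size=(t, n)); X -= X.mean(axis=0)
Y = np.eye(k)[np.arange(t) % k]
K = X @ X.T                              # automatic (linear) kernelization
# feasible uniform Theta (rows are probability vectors) and feasible M
Theta_x, Theta_y = np.full((t, t), 1 / t), np.full((t, k), 1 / k)
M = np.eye(t) * (d / t)                  # satisfies I >= M >= 0, tr(M) = d
val = sepca_objective(Theta_x, Theta_y, M, K, Y, beta)
```

Replacing K with any precomputed Gram matrix gives the kernelized variant mentioned above.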
Thus standard first-order or second-order optimization techniques that rely on ordinary gradients cannot be applied here. In this section, we deploy a bundle method to solve this nondifferentiable min-max optimization.

3.1 Bundle Method for Min-Max Optimization

The bundle method is an efficient subgradient method for nondifferentiable convex optimization; it relies on computing subgradients of the objective function. A vector g is a subgradient of a function f at a point x if f(y) ≥ f(x) + g⊤(y − x) for all y. To adapt standard bundle methods to our specific min-max problem, we first need to address the critical issue of subgradient computation.

Proposition 1 Consider a joint function h(x, y) defined over x ∈ X and y ∈ Y, satisfying: (1) h(·, y) is convex for all y ∈ Y; (2) h(x, ·) is concave for all x ∈ X. Let f(x) = max_y h(x, y) and q(x₀) = arg max_y h(x₀, y). If g is a gradient of h(·, q(x₀)) at x = x₀, then g is a subgradient of f at x = x₀.

Proof:
f(x) = max_y h(x, y) ≥ h(x, q(x₀))
     ≥ h(x₀, q(x₀)) + g⊤(x − x₀)    (since h(·, y) is convex for all y ∈ Y)
     = f(x₀) + g⊤(x − x₀)           (by the definitions of f(x) and q(x₀))

Thus g is a subgradient of f at x = x₀, by the definition of a subgradient.

According to Proposition 1, subgradients of the outer minimization objective f in (8) with respect to Θˣ and Θʸ are given by

∂_{Θˣ} f ∋ log Θˣ + 1 − (1/β) M*(I − Θˣ) K,   ∂_{Θʸ} f ∋ log Θʸ + 1 − (1/β) M*(Y − Θʸ)        (10)

where M* is the optimal inner maximization solution at the current point [Θˣ, Θʸ]. Algorithm 1 illustrates the bundle method we developed to solve the min-max optimization (8); the linear constraints (9) over Θˣ and Θʸ can be conveniently incorporated into the quadratic bound optimization. One important issue in this algorithm is how to manage the size of the set of linear lower-bound constraints formed from the active set B (defined in Algorithm 1), as it grows incrementally as new points are explored.
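The closed-form inner maximization and the subgradient formulas in (10) can be sketched as follows. This is a minimal illustration; keeping Θ strictly positive so that log Θ is defined, and all problem sizes, are our assumptions:

```python
import numpy as np

def inner_max_M(Theta_x, Theta_y, K, Y, d):
    """Closed-form inner maximizer of (8): M* = Z* Z*^T, with Z* the top-d
    eigenvectors of (I - Theta_x) K (I - Theta_x)^T + (Y - Theta_y)(Y - Theta_y)^T."""
    t = K.shape[0]
    Rx, Ry = np.eye(t) - Theta_x, Y - Theta_y
    D = Rx @ K @ Rx.T + Ry @ Ry.T
    _, V = np.linalg.eigh(D)        # eigenvalues in ascending order
    Z = V[:, -d:]                   # top-d eigenvectors
    return Z @ Z.T

def subgradients(Theta_x, Theta_y, M, K, Y, beta):
    """Subgradients of the outer objective, following (10); needs Theta > 0."""
    t = K.shape[0]
    g_x = np.log(Theta_x) + 1 - M @ (np.eye(t) - Theta_x) @ K / beta
    g_y = np.log(Theta_y) + 1 - M @ (Y - Theta_y) / beta
    return g_x, g_y

rng = np.random.default_rng(3)
t, k, d, beta = 6, 3, 2, 1.0
A = rng.normal(size=(t, t)); K = A @ A.T
Y = np.eye(k)[np.arange(t) % k]
Theta_x, Theta_y = np.full((t, t), 1 / t), np.full((t, k), 1 / k)
M = inner_max_M(Theta_x, Theta_y, K, Y, d)
assert np.isclose(np.trace(M), d)   # M = Z Z^T with orthonormal Z, so tr(M) = d
g_x, g_y = subgradients(Theta_x, Theta_y, M, K, Y, beta)
```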
To manage the size of the active set B, we observe that the vector α of Lagrangian dual parameters for the lower-bound constraints, obtained from the quadratic optimization in Step 1, is sparse, indicating that many lower-bound constraints can be turned off. Moreover, a constraint that is turned off will mostly stay off in later steps. Therefore, in our bundle method, whenever the size of B exceeds a given constant b, we keep the points of B corresponding to the b largest α values and drop the remaining ones.

3.2 Coordinate Descent Procedure

An important factor affecting running efficiency is the size of the problem. The convex optimization (8) works in the dual parameter space, where the size of the parameters Θ = {Θˣ, Θʸ}, t × (t + k), depends only on the number of training samples, t, not on the feature size, n. For high-dimensional small data sets (n ≫ t), our dual optimization is certainly a good option. However, as t increases, the problem size grows as O(t²). It may soon become too large for the quadratic optimization step of the bundle method to handle. On the other hand, the optimization problem (8) possesses a nice semi-decomposable structure: each equality constraint in (9) involves only one row of Θ; that is, Θ can be separated into rows without affecting the equality constraints. Based on this observation, we develop a coordinate descent procedure that makes the bundle method scalable to large data sets. Specifically, we add an outer loop around the bundle method. Within each outer-loop iteration, we randomly partition the Θ parameters into m groups, each containing a subset of the rows of Θ; we then use the bundle method to sequentially optimize each subproblem defined on one group of Θ parameters while keeping the remaining rows of Θ fixed.
Although coordinate descent with a nondifferentiable convex objective is not guaranteed to converge to a minimum in general [17], we have found that this procedure performs quite well in practice, as shown in the experimental results.

Algorithm 1 Bundle Method for Min-Max Optimization in (8)
Input: δ̄ > 0, m ∈ (0, 1), b ∈ ℕ, µ ∈ ℝ
Initialize: Find an initial point θ* satisfying the linear constraints in (9); compute f(θ*). Let ℓ = 1, θ_ℓ = θ*; compute g_ℓ ∈ ∂_{θ_ℓ} f by (10); e_ℓ = f(θ*) − f(θ_ℓ) − g_ℓ⊤(θ* − θ_ℓ). Let B = {(e_ℓ, g_ℓ)}, ε̂ = ∞, ĝ = 0; ℓ = ℓ + 1.
repeat
  1. Solve the quadratic minimization for the solution θ̂ and the Lagrangian dual parameters α w.r.t. the lower-bound linear constraints in B [1]:
     θ̂ = arg min_θ ψ_ℓ(θ) + (µ/2)‖θ − θ*‖², subject to the linear constraints in (9),
     where ψ_ℓ(θ) = f(θ*) + max{ −ε̂ + ĝ⊤(θ − θ*), max_{(e_i, g_i) ∈ B} { −e_i + g_i⊤(θ − θ*) } }.
  2. Define δ_ℓ = f(θ*) − [ψ_ℓ(θ̂) + (µ/2)‖θ̂ − θ*‖²] ≥ 0. If δ_ℓ < δ̄, return.
  3. Conduct a line search to minimize f(θ_ℓ) with θ_ℓ = γθ* + (1 − γ)θ̂, for 0 < γ < 1.
  4. Compute g_ℓ ∈ ∂_{θ_ℓ} f by (10); e_ℓ = f(θ*) − f(θ_ℓ) − g_ℓ⊤(θ* − θ_ℓ); update B = B ∪ {(e_ℓ, g_ℓ)}.
  5. If f(θ*) − f(θ_ℓ) ≥ m δ_ℓ, take a serious step: (1) update e_i = e_i + f(θ_ℓ) − f(θ*) + g_i⊤(θ* − θ_ℓ); (2) update the aggregation: ĝ = Σ_i α_i g_i, ε̂ = Σ_i α_i e_i; (3) update the stored solution: θ* = θ_ℓ, f(θ*) = f(θ_ℓ).
  6. If |B| > b, reduce the set B according to α.
  7. ℓ = ℓ + 1.
until the maximum iteration number is reached

4 Projection for Testing Data

One important issue for supervised dimensionality reduction is how to map new testing data into the dimensionality-reduced principal dimensions. We deploy a simple procedure for this purpose. After training, we obtain a low-dimensional representation Z for X, where Z can be viewed as a linear projection of X in some transformed space ψ(X) through a parameter matrix U, such that Z = ψ(X)U = ψ(X)ψ(X)⊤ K⁺ ψ(X)U, where K⁺ denotes the pseudo-inverse of K = ψ(X)ψ(X)⊤.
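A minimal sketch of this kernel pseudo-inverse projection (made explicit for test points in Eq. (11)); the linear-kernel sanity check at the end is our own verification, not from the paper:

```python
import numpy as np

def project_new_points(K_train, Z_train, k_new):
    """Project new points via z* = k(x*, X) K^+ Z (cf. Eq. (11)).

    K_train : t x t training kernel matrix K = psi(X) psi(X)^T
    Z_train : t x d learned low-dimensional representation
    k_new   : s x t kernel values k(x*, X) between new and training points
    """
    return k_new @ np.linalg.pinv(K_train) @ Z_train

# sanity check (linear kernel): if Z = X U lies in the span of psi(X) = X,
# projecting the training points themselves reproduces Z exactly
rng = np.random.default_rng(4)
X = rng.normal(size=(8, 4))
U = rng.normal(size=(4, 3))
K, Z = X @ X.T, X @ U
assert np.allclose(project_new_points(K, Z, K), Z, atol=1e-8)
```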
Then a new testing sample x* can be projected by

z* = ψ(x*)ψ(X)⊤ K⁺ ψ(X)U = k(x*, X) K⁺ Z        (11)

5 Experimental Results

In order to evaluate the performance of the proposed supervised exponential family PCA (SEPCA) approach, we conducted experiments over both synthetic and real data, and compared it to supervised dimensionality reduction with generalized linear models (SDR GLM), supervised probabilistic PCA (SPPCA), linear discriminant analysis (LDA), and colored maximum variance unfolding (MVU). The projection procedure (11) is used for colored MVU as well. In all experiments, we used µ = 1 for Algorithm 1 and α = 0.0001 for SDR GLM, as suggested in [12].

5.1 Experiments on Synthetic Data

Two synthetic experiments were conducted to compare the five approaches under controlled conditions. The first synthetic data set is formed by first generating four Gaussian clusters in a two-dimensional space, each corresponding to one class, and then adding a third dimension to each point by sampling uniformly from a fixed interval. This experiment compares the performance of the five approaches when the data distribution does not satisfy the Gaussian assumption. Figure 1 shows the projection results for each approach in a two-dimensional space for 120 testing points after training on a set of 80 points. In this case, SEPCA and LDA outperform the other three approaches. The second synthetic experiment is designed to test the capability of performing nonlinear dimensionality reduction. The synthetic data are formed by first generating two circles in a two-dimensional space (one circle located inside the other), each circle corresponding to one class, and then sampling the third dimension uniformly from a fixed interval. As SDR GLM does not provide a nonlinear form, we conducted this experiment with only the remaining four approaches. For LDA, we used its kernel variant, KDA.
A Gaussian kernel with σ = 1 was used for SEPCA, SPPCA and KDA. Figure 2 shows the projection results for each approach in a two-dimensional space for 120 testing points after training on a set of 95 points.

[Figure 1: Projection results on test data for synthetic experiment 1 (panels: SEPCA, SDR-GLM, SPPCA, LDA, Colored-MVU). Each color indicates one class.]

[Figure 2: Projection results on test data for synthetic experiment 2 (panels: SEPCA, SPPCA, KDA, Colored-MVU). Each color indicates one class.]

Again, SEPCA and KDA achieve good class separations and outperform the other two approaches.

5.2 Experiments on Real Data

To better characterize the performance of dimensionality reduction in a supervised manner, we conducted experiments on a few high-dimensional multi-class real-world data sets. The left side of Table 1 provides information about these data sets. Our experiments were conducted in the following way. We randomly selected 3∼5 examples from each class to form the training set and used the remaining examples as the test set. For each approach, we first learned the dimensionality reduction model on the training set. We also trained a logistic regression classifier using the projected training set in the reduced low-dimensional space. (Note that for SEPCA, a classifier is trained simultaneously during the dimensionality reduction optimization.)
Then the test data were projected into the low-dimensional space according to each dimensionality reduction model. Finally, the projected test set for each approach was classified using the corresponding logistic regression classifier. The right side of Table 1 shows the classification accuracies on the test set for each approach. To better assess the quality of classification using projected data, we also include the standard classification results, indicated as 'FULL', obtained using the original high-dimensional data. (Note that we were unable to obtain results for SDR GLM on the newsgroup data, as it is inefficient for very high-dimensional data.) The results reported here are averages over 20 repeated runs, with projection dimension d = 10. The proposed SEPCA again achieves the best performance among the compared approaches. Unlike in the synthetic experiments, however, LDA does not work well on these real data sets. The results on both synthetic and real data show that SEPCA outperforms the other four approaches. This might be attributed to its adaptive exponential family model approximation and its global optimization, whereas SDR GLM and SPPCA apparently suffer from local optima.

6 Conclusions

In this paper, we propose a supervised exponential family PCA (SEPCA) approach, which can be solved efficiently to find global solutions. Moreover, SEPCA overcomes the limitation of the Gaussian assumption of PCA and SPPCA by using a data-adaptive approximation for exponential family models. A simple, straightforward projection method for new testing data has also been constructed. An empirical study suggests that SEPCA outperforms other supervised dimensionality reduction approaches, such as SDR GLM, SPPCA, LDA and colored MVU.
Table 1: Data set statistics and test accuracy results (%)

Dataset     #Data   #Dim    #Class  FULL   SEPCA  SDR GLM  SPPCA  LDA    colored MVU
Yale        165     4096    15      65.3   64.4   58.8     51.6   31.0   21.1
YaleB       2414    1024    38      47.0   20.5   19.0     9.8    6.2    2.8
11 Tumor    174     12533   11      77.6   88.9   63.5     63.0   23.7   40.2
Usps3456    120     256     4       82.1   79.7   77.9     78.5   74.3   75.8
Newsgroup   19928   25284   20      32.1   16.9   –        6.9    10.0   10.4

References
[1] A. Belloni. Introduction to bundle methods. Technical report, MIT, 2005.
[2] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2000.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] M. Collins, S. Dasgupta, and R. Schapire. A generalization of principal component analysis to the exponential family. In Advances in Neural Information Processing Systems (NIPS), 2001.
[5] R. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179–188, 1936.
[6] Y. Guo and D. Schuurmans. Convex relaxations of latent variable training. In Advances in Neural Information Processing Systems (NIPS), 2007.
[7] Y. Guo and D. Schuurmans. Efficient global optimization for exponential family PCA and low-rank matrix factorization. In Allerton Conference on Communication, Control, and Computing, 2008.
[8] I. Jolliffe. Principal Component Analysis. Springer Verlag, 2002.
[9] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005.
[10] S. Mika, G. Ratsch, J. Weston, B. Scholkopf, and K. Muller. Fisher discriminant analysis with kernels. In IEEE Neural Networks for Signal Processing Workshop, 1999.
[11] M. Overton and R. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Mathematical Programming, 62:321–357, 1993.
[12] I. Rish, G. Grabarnilk, G. Cecchi, F. Pereira, and G. Gordon. Closed-form supervised dimensionality reduction with generalized linear models.
In Proceedings of the International Conference on Machine Learning (ICML), 2008.
[13] Sajama and A. Orlitsky. Semi-parametric exponential family PCA. In Advances in Neural Information Processing Systems (NIPS), 2004.
[14] Sajama and A. Orlitsky. Supervised dimensionality reduction using mixture models. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[15] L. Song, A. Smola, K. Borgwardt, and A. Gretton. Colored maximum variance unfolding. In Advances in Neural Information Processing Systems (NIPS), 2007.
[16] M. Tipping and C. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[17] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 109:457–494, 2001.
[18] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Technical Report TR-649, UC Berkeley, Dept. Statistics, 2003.
[19] S. Yu, K. Yu, V. Tresp, H. Kriegel, and M. Wu. Supervised probabilistic principal component analysis. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2006.
Stochastic Relational Models for Large-scale Dyadic Data using MCMC

Shenghuo Zhu, Kai Yu, Yihong Gong
NEC Laboratories America, Cupertino, CA 95014, USA
{zsh, kyu, ygong}@sv.nec-labs.com

Abstract

Stochastic relational models (SRMs) [15] provide a rich family of choices for learning and predicting dyadic data between two sets of entities. The models generalize matrix factorization to a supervised learning problem that utilizes attributes of entities in a hierarchical Bayesian framework. Previously, variational Bayes inference was applied to SRMs; it is, however, not scalable when the size of either entity set grows to tens of thousands. In this paper, we introduce a Markov chain Monte Carlo (MCMC) algorithm for equivalent models of SRMs in order to scale the computation to very large dyadic data sets. Both superior scalability and predictive accuracy are demonstrated on a collaborative filtering problem involving tens of thousands of users and half a million items.

1 Stochastic Relational Models

Stochastic relational models (SRMs) [15] are generalizations of Gaussian process (GP) models [11] to the relational domain, where each observation is a dyadic datum indexed by a pair of entities. They model dyadic data by a multiplicative interaction of two Gaussian process priors. Let U be the feature representation (or index) space of a set of entities. A pair-wise similarity in U is given by a kernel (covariance) function Σ : U × U → R. A Gaussian process (GP) defines a random function f : U → R, whose distribution is characterized by a mean function and the covariance function Σ, denoted by f ∼ N∞(0, Σ) (see footnote 1), where, for simplicity, we assume the mean to be the constant zero. A GP complies with the intuition regarding smoothness: if two entities u_i and u_j are similar according to Σ, then f(u_i) and f(u_j) are similar with high probability. A domain of dyadic data must involve another set of entities; let it be represented (or indexed) by V.
In a similar way, this entity set is associated with another kernel function Ω. For example, in a typical collaborative filtering domain, U represents users while V represents items; then Σ measures the similarity between users and Ω measures the similarity between items. Being the relation between a pair of entities from different sets, a dyadic variable y is indexed by the product space U × V. An SRM then models y(u, v) by the following generative process.

Model 1. The generative model of an SRM:
1. Draw kernel functions Σ ∼ IW∞(δ, Σ°) and Ω ∼ IW∞(δ, Ω°);
2. For k = 1, . . . , d: draw random functions f_k ∼ N∞(0, Σ) and g_k ∼ N∞(0, Ω);
3. For each pair (u, v): draw y(u, v) ∼ p(y(u, v) | z(u, v), γ), where

z(u, v) = (1/√d) Σ_{k=1}^{d} f_k(u) g_k(v) + b(u, v).

(Footnote 1: We denote an n-dimensional Gaussian distribution with covariance matrix Σ by N_n(0, Σ). Then N∞(0, Σ) explicitly indicates that a GP follows an "infinite-dimensional" Gaussian distribution.)

In this model, IW∞(δ, Σ°) and IW∞(δ, Ω°) are hyper priors, whose details will be introduced later. p(y|z, γ) is the problem-specific noise model. For example, it can follow a Gaussian noise distribution y ∼ N₁(z, γ) if y is numerical, or a Bernoulli distribution if y is binary. The function b(u, v) is a bias function over U × V; for simplicity, we assume b(u, v) = 0. In the limit d → ∞, the model converges to a special case where f_k and g_k can be analytically marginalized out and z becomes a Gaussian process, z ∼ N∞(0, Σ ⊗ Ω) [15], with the covariance between pairs being the tensor kernel K((u_i, v_s), (u_j, v_t)) = Σ(u_i, u_j) Ω(v_s, v_t). In another special case, if Σ and Ω are both fixed to be Dirac delta functions and U, V are finite sets, it is easy to see that the model reduces to probabilistic matrix factorization.
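Steps 2-3 of Model 1, restricted to finite index sets, can be sketched as follows. We skip the inverted Wishart draws of step 1 (the kernels are taken as given), assume the Gaussian noise model, and set b(u, v) = 0; all sizes are illustrative:

```python
import numpy as np

def sample_srm(Sigma, Omega, d, gamma=0.1, rng=None):
    """Draw Z = (1/sqrt d) F G^T and noisy Y from Model 1 with fixed kernels."""
    rng = rng or np.random.default_rng(0)
    m, n = Sigma.shape[0], Omega.shape[0]
    # jitter on the diagonal keeps the Cholesky factorization stable
    Lu = np.linalg.cholesky(Sigma + 1e-9 * np.eye(m))
    Lv = np.linalg.cholesky(Omega + 1e-9 * np.eye(n))
    F = Lu @ rng.normal(size=(m, d))      # columns f_k ~ N(0, Sigma)
    G = Lv @ rng.normal(size=(n, d))      # columns g_k ~ N(0, Omega)
    Z = F @ G.T / np.sqrt(d)              # z(u,v) = (1/sqrt d) sum_k f_k(u) g_k(v)
    Y = Z + np.sqrt(gamma) * rng.normal(size=Z.shape)  # y ~ N(z, gamma)
    return Z, Y

m, n, d = 5, 7, 3
# identity (Dirac delta) kernels: the matrix-factorization special case
Z, Y = sample_srm(np.eye(m), np.eye(n), d)
assert Z.shape == (m, n) and Y.shape == (m, n)
```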
The hyper prior IW∞(δ, Σ°) is called an inverted Wishart process; it generalizes the finite n-dimensional inverted Wishart distribution [2]

IW_n(Σ | δ, Σ°) ∝ |Σ|^{−(δ+2n)/2} etr(−(1/2) Σ⁻¹ Σ°),

where δ is the degree-of-freedom parameter and Σ° is a positive definite kernel matrix. We note that the above definition is different from the popular formulations [3, 4] in the machine learning community. The advantage of this new notation is demonstrated by the following theorem [2].

Theorem 1. Let A ∼ IW_m(δ, K), with A and K positive definite, and let A and K be partitioned as

A = [A₁₁ A₁₂; A₂₁ A₂₂],  K = [K₁₁ K₁₂; K₂₁ K₂₂],

where A₁₁ and K₁₁ are n × n sub-matrices, n < m. Then A₁₁ ∼ IW_n(δ, K₁₁).

This formulation of the inverted Wishart is thus consistent under marginalization. Therefore, similar to the way GPs are derived from Gaussian distributions, we define a distribution over infinite-dimensional kernel functions, denoted Σ ∼ IW∞(δ, Σ°), such that any sub kernel matrix of size m × m follows Σ ∼ IW_m(δ, Σ°), where both Σ and Σ° are positive definite kernel functions. When U and V are sets of entity indices, SRMs let Σ° and Ω° both be Dirac delta functions, i.e., any of their sub kernel matrices is an identity matrix. Similar to GP regression/classification, the major application of SRMs is supervised prediction based on observed relational values and input features of entities. Formally, let Y_I = {y(u, v) | (u, v) ∈ I} be the set of noisy observations, where I ⊂ U × V; the model aims to predict the noise-free values Z_O = {z(u, v) | (u, v) ∈ O} on O ⊂ U × V. As our computation is always on a finite set containing both I and O, from now on we only consider the finite subset U₀ × V₀, a finite support subset of U × V that contains I ∪ O. Accordingly, we let Σ be the covariance matrix of Σ on U₀, and Ω be the covariance matrix of Ω on V₀.
Previously, a variational Bayesian method was applied to SRMs [15]; it computes the maximum a posteriori estimates of Σ and Ω given Y_I, and then predicts Z_O based on the estimated Σ and Ω. There are two limitations of this empirical Bayesian approach: (1) the variational method is not a fully Bayesian treatment; ideally we wish to integrate out Σ and Ω; (2) more critically, the algorithm has complexity O(m³ + n³), with m = |U₀| and n = |V₀|, and is not scalable to a large relational domain where m or n exceeds several thousand. In this paper we introduce a fully Bayesian inference algorithm using Markov chain Monte Carlo sampling. By deriving equivalent sampling processes, we show that the algorithms can be applied to a data set 10³ times larger than in the previous work [15], and produce excellent accuracy. In the rest of this paper, we present our algorithms for Bayesian inference of SRMs in Section 2. Related work is discussed in Section 3, followed by experimental results for SRMs in Section 4. Section 5 concludes.

2 Bayesian Models and MCMC Inference

In this paper, we tackle the scalability issue with a fully Bayesian paradigm. We estimate the expectation of Z_O directly from Y_I using a Markov chain Monte Carlo (MCMC) algorithm (specifically, Gibbs sampling), instead of evaluating it from estimated Σ or Ω. Our contribution lies in making MCMC inference more efficient for large-scale data. We first introduce some necessary notation. Bold capital letters, e.g. X, indicate matrices. I(m) is an identity matrix of size m × m. N_d, N_{m,d}, IW_m, and χ⁻² are the multivariate normal distribution, the matrix-variate normal distribution, the inverse-Wishart distribution, and the inverse chi-square distribution, respectively.

2.1 Models with Non-informative Priors

Let r = |I|, m = |U₀| and n = |V₀|. It is assumed that d ≪ min(m, n) and that the observed set I is sparse, i.e. r ≪ mn. First, we consider the case Σ° = αI(m) and Ω° = βI(n).
Let {f_k} on U₀ be denoted by the matrix variate F of size m × d, and {g_k} on V₀ by the matrix variate G of size n × d. The generative model is then written as Model 2 and depicted in Figure 1.

[Figure 1: Graphical model of Model 2, with variables Σ, I(d), Ω, F, G, Z, Y, s².]

Model 2. The generative model of a matrix-variate SRM:
1. Draw Σ ∼ IW_m(δ, αI(m)) and Ω ∼ IW_n(δ, βI(n));
2. Draw F | Σ ∼ N_{m,d}(0, Σ ⊗ I(d)) and G | Ω ∼ N_{n,d}(0, Ω ⊗ I(d));
3. Draw s² ∼ χ⁻²(ν, σ²);
4. Draw Y | F, G, s² ∼ N_{m,n}(Z, s² I(m) ⊗ I(n)), where Z = FG⊤.

Here N_{m,d} is the matrix-variate normal distribution of size m × d; α, β, δ, ν and σ² are scalar parameters of the model. A slight difference between this finite model and Model 1 is that the coefficient 1/√d is dropped for simplicity, because it can be absorbed by α or β. As we can explicitly compute Pr(Σ|F), Pr(Ω|G), Pr(F|Y_I, G, Σ, s²), Pr(G|Y_I, F, Ω, s²) and Pr(s²|Y_I, F, G), we can apply a Gibbs sampling algorithm to compute Z_O. However, the computational time complexity is at least O(m³ + n³), which is not practical for large-scale data.

2.2 Gibbs Sampling Method

To overcome the inefficiency of sampling large covariance matrices, we rewrite the sampling process using the property of Theorem 2, taking advantage of d ≪ min(m, n).

[Figure 2: Illustration of Theorem 2, translating (αI(m), Σ, I(d), F) into (αI(d), K, I(m), F).]

Theorem 2. If
1. Σ ∼ IW_m(δ, αI(m)) and F | Σ ∼ N_{m,d}(0, Σ ⊗ I(d)),
2. K ∼ IW_d(δ, αI(d)) and H | K ∼ N_{m,d}(0, I(m) ⊗ K),
then the matrix variates F and H have the same distribution.

Proof sketch. The matrix variate F follows a matrix-variate t distribution, t(δ, 0, αI(m), I(d)), which can be written as

p(F) ∝ |I(m) + (αI(m))⁻¹ F (I(d))⁻¹ F⊤|^{−(δ+m+d−1)/2} = |I(m) + α⁻¹ FF⊤|^{−(δ+m+d−1)/2}.

The matrix variate H follows a matrix-variate t distribution, t(δ, 0, I(m), αI(d)), which can be written as

p(H) ∝ |I(m) + (I(m))⁻¹ H (αI(d))⁻¹ H⊤|^{−(δ+m+d−1)/2} = |I(m) + α⁻¹ HH⊤|^{−(δ+m+d−1)/2}.

Thus the matrix variates F and H have the same distribution.
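The practical payoff of Theorem 2 is that F | K ∼ N_{m,d}(0, I(m) ⊗ K) has i.i.d. rows distributed as N_d(0, K), so only a d × d Cholesky factor is ever needed. The empirical covariance check below is our own verification, not from the paper:

```python
import numpy as np

def sample_F_given_K(K, m, rng):
    """Draw F | K ~ N_{m,d}(0, I(m) ⊗ K): the m rows of F are i.i.d. N_d(0, K).
    Requires only the d x d Cholesky factor of K, never an m x m factor."""
    L = np.linalg.cholesky(K)            # d x d
    return rng.normal(size=(m, K.shape[0])) @ L.T

rng = np.random.default_rng(5)
d, m = 3, 100_000
A = rng.normal(size=(d, d))
K = A @ A.T + np.eye(d)                  # an arbitrary d x d covariance
F = sample_F_given_K(K, m, rng)
emp_cov = F.T @ F / m                    # empirical row covariance -> K
assert np.allclose(emp_cov, K, atol=0.15)
```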
This theorem allows us to sample a smaller covariance matrix K of size d × d on the column side instead of a large covariance matrix Σ of size m × m on the row side. The translation is depicted in Figure 2. The theorem applies to G as well, so we rewrite the model as Model 3 (Figure 3). A similar idea was used in our previous work [16].

[Figure 3: Graphical model of Model 3, with variables K, I(m), R, I(n), F, G, Z, Y, s².]

Model 3. The alternative generative model of a matrix-variate SRM:
1. Draw K ∼ IW_d(δ, αI(d)) and R ∼ IW_d(δ, βI(d));
2. Draw F | K ∼ N_{m,d}(0, I(m) ⊗ K) and G | R ∼ N_{n,d}(0, I(n) ⊗ R);
3. Draw s² ∼ χ⁻²(ν, σ²);
4. Draw Y | F, G, s² ∼ N_{m,n}(Z, s² I(m) ⊗ I(n)), where Z = FG⊤.

Let the column vector f_i be the i-th row of matrix F, and the column vector g_j be the j-th row of matrix G. In Model 3, the {f_i} are independent given K, G and s²; similar independence applies to the {g_j}. The conditional posterior distributions of K, R, {f_i}, {g_j} and s² can be easily computed, so the Gibbs sampler for this SRM is named BSRM (for Bayesian SRM). We use Gibbs sampling to compute the mean of Z_O, which is derived from the samples of FG⊤. Because of the sparsity of I, each iteration of this sampling algorithm can be computed in O(d²r + d³(m + n)) time (see footnote 2), a dramatic reduction from the previous time complexity O(m³ + n³).

2.3 Models with Informative Priors

An important characteristic of SRMs is that they allow the inclusion of certain prior knowledge about entities into the model. Specifically, the prior information is encoded in the prior covariance parameters, Σ° and Ω°. In the general case, it is difficult to run the sampling process due to the size of Σ° and Ω°. We assume that Σ° and Ω° have a special form: Σ° = F°(F°)⊤ + αI(m), where F° is an m × p matrix, and Ω° = G°(G°)⊤ + βI(n), where G° is an n × q matrix, with the magnitudes of p and q about the same as or less than that of d. This prior knowledge can be obtained from additional features of the entities.
Although such an informative Σ◦ prevents us from directly sampling each row of F independently, as we do in Model 3, we can expand the matrix F of size m × d to (F, F◦) of size m × (d + p) and derive an equivalent model in which the rows of F are conditionally independent given F◦. Figure 4 illustrates this transformation.

Figure 4: Illustration of Theorem 3.

Theorem 3. Let δ > p and Σ◦ = F◦(F◦)⊤ + αI_(m), where F◦ is an m × p matrix. If
1. Σ ∼ IW_m(δ, Σ◦) and F|Σ ∼ N_{m,d}(0, Σ ⊗ I_(d)),
2. K = [K_11, K_12; K_21, K_22] ∼ IW_{d+p}(δ − p, αI_(d+p)) and H|K ∼ N_{m,d}(F◦ K_22^{-1} K_21, I_(m) ⊗ K_{11·2}), where K_{11·2} = K_11 − K_12 K_22^{-1} K_21,
then F and H have the same distribution.

Proof sketch. Consider the distribution
(H_1, H_2)|K ∼ N_{m,d+p}(0, I_(m) ⊗ K).   (1)
Because H_1|H_2 ∼ N_{m,d}(H_2 K_22^{-1} K_21, I_(m) ⊗ K_{11·2}), we have p(H) = p(H_1|H_2 = F◦). On the other hand, we have a matrix-variate t distribution, (H_1, H_2) ∼ t_{m,d+p}(δ − p, 0, αI_(m), I_(d+p)). By Theorem 4.3.9 in [4], we have H_1|H_2 ∼ t_{m,d}(δ, 0, αI_(m) + H_2 H_2⊤, I_(d)) = t_{m,d}(δ, 0, Σ◦, I_(d)), which implies p(F) = p(H_1|H_2 = F◦) = p(H).

² |Y − FG⊤|^2_I can be efficiently computed in O(dr) time.

The following corollary allows us to compute the posterior distribution of K efficiently.

Corollary 4. K|H ∼ IW_{d+p}(δ + m, αI_(d+p) + (H, F◦)⊤(H, F◦)).

Proof sketch. Because the normal distribution and the inverse-Wishart distribution are conjugate, we can derive the posterior distribution of K from Eq. (1).

Thus, we can explicitly sample from the conditional posterior distributions, as listed in Algorithm 1 (BSRM/F, for BSRM with features) in the Appendix. We note that when p = q = 0, Algorithm 1 (BSRM/F) reduces to the exact algorithm for BSRM. Each iteration of this sampling algorithm can be computed in O(d^2 r + d^3(m + n) + dpm + dqn) time.
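The conditional distribution used in Theorem 3 rests on the Schur complement K_{11·2}. The following numpy check verifies the standard identity that the Schur complement is the inverse of the (1,1) block of K^{-1}, and computes the conditional mean F◦ K_22^{-1} K_21; all sizes and the random SPD matrix are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, p, m = 4, 2, 6  # illustrative sizes
# Hypothetical SPD (d+p) x (d+p) covariance K, partitioned as in Theorem 3.
A = rng.standard_normal((d + p, d + p))
K = A @ A.T + (d + p) * np.eye(d + p)
K11, K12 = K[:d, :d], K[:d, d:]
K21, K22 = K[d:, :d], K[d:, d:]

# Schur complement K_{11.2}: the conditional column covariance of H1 given H2.
K11_2 = K11 - K12 @ np.linalg.solve(K22, K21)

# Sanity check: the Schur complement equals the inverse of the (1,1) block of K^{-1}.
assert np.allclose(K11_2, np.linalg.inv(np.linalg.inv(K)[:d, :d]))

# Conditional mean of H1 given H2 = F0: each row of F0 is mapped through K22^{-1} K21.
F0 = rng.standard_normal((m, p))
cond_mean = F0 @ np.linalg.solve(K22, K21)  # equals F0 K22^{-1} K21, shape (m, d)
```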
However, blocking Gibbs sampling is not necessarily computationally efficient. To improve the computational efficiency of Algorithm 1, we use unblocked sampling to reduce the major computational cost, which lies in Step 2 and Step 4: we sample each element of F conditionally. The sampling process is written as Step 4 and Step 9 of Algorithm 2, which we call BSRM/F with conditional Gibbs sampling. This reduces the computational cost of each iteration to O(dr + d^2(m + n) + dpm + dqn), which is comparable to other low-rank matrix factorization approaches. Though such conditional sampling increases the sample variance compared to Algorithm 1, we can afford more samples within a given amount of time due to its faster speed. Our experiments show that the overall computational cost of Algorithm 2 is usually less than that of Algorithm 1 for the same accuracy. Additionally, since the {f_i} are independent, we can parallelize the for loops in Step 4 and Step 9 of Algorithm 2.

3 Related Work

SRMs fall into a class of statistical latent-variable relational models that explain relations by latent factors of entities. Recently a number of such models have been proposed, which can be roughly put into two groups, depending on whether the latent factors are continuous or discrete: (1) Discrete latent-state relational models: a large body of research infers latent classes of entities and explains the entity relationship by the probability conditioned on the joint state of participating entities, e.g., [6, 14, 7, 1]; in another work [10], binary latent factors are modeled. (2) Continuous latent-variable relational models: many such models assume relational data underlain by multiplicative effects between latent variables of entities, e.g. [5]. A simple example is matrix factorization, which has recently become very popular in collaborative filtering applications, e.g., [12, 8, 13].
The latest Bayesian probabilistic matrix factorization [13] reported the state-of-the-art accuracy of matrix factorization on Netflix data. Interestingly, that model turns out to be similar to our Model 3 under the non-informative prior. This paper reveals the equivalence between different models and offers a more general Bayesian framework that allows informative priors from entity features to play a role. The framework also generalizes Gaussian processes [11] to a relational domain, where a nonparametric prior for stochastic relational processes is described.

4 Experiments

Synthetic data: We compare BSRM under non-informative priors against two other algorithms: the fast max-margin matrix factorization (fMMMF) in [12] with a squared loss, and SRM using the variational Bayesian approach (SRM-VB) in [15]. We generate a 30 × 20 random matrix (Figure 5(a)), then add Gaussian noise with σ^2 = 0.1 (Figure 5(b)). The root mean squared noise is 0.32. We select 70% of the elements as the observed data and use the rest of the elements for testing. The reconstruction matrices and root mean squared errors (RMSEs) of predictions on the test elements are shown in Figures 5(c)-5(e). BSRM outperforms the variational approach of SRMs and fMMMF. Note that because of the log-determinant penalty of the inverse-Wishart prior, SRM-VB enforces a smaller rank, so the result of SRM-VB looks smoother than that of BSRM.

Figure 5: Experiments on synthetic data; RMSEs are shown in parentheses. (a) Original Matrix; (b) With Noise (0.32); (c) fMMMF (0.27); (d) SRM-VB (0.22); (e) BSRM (0.19).

Table 1: RMSE (root mean squared error) and MAE (mean absolute error) of the experiments on EachMovie data.

        User Mean  Movie Mean  fMMMF [12]  VB [8]
RMSE    1.425      1.387       1.186       1.165
MAE     1.141      1.103       0.943       0.915
All standard errors are 0.001 or less.

EachMovie data: We test the accuracy and the efficiency of our algorithms on EachMovie. The dataset contains 74,424 users' 2,811,718 ratings on 1,648 movies, i.e. about 2.29% of the entries are observed, with ratings on a zero-to-five star scale. We put all the ratings into a matrix, and randomly select 80% as observed data to predict the remaining ratings. The random selection was carried out 10 times independently. We compare our approach against several competing methods: 1) User Mean, predicting ratings by the sample mean of the same user's ratings; 2) Movie Mean, predicting ratings by the sample mean of ratings on the same movie; 3) fMMMF [12]; 4) VB introduced in [8], which is a probabilistic low-rank matrix factorization using variational approximation. Because of the data size, we cannot run the SRM-VB of [15]. We test the algorithms BSRM and BSRM/F, both following Algorithm 2, which run without and with features, respectively. The features used in BSRM/F are generated from the PCA result of the binary indicator matrix that indicates whether the user rates the movie. The top 10 factors of both the user side and the movie side are used as features, i.e. p = 10, q = 10. We run the experiments with different d = 16, 32, 100, 200, 300. The hyperparameters are set to trivial values: δ = p + 1 = 11, α = β = 1, σ^2 = 1, and ν = 1. The results are shown in Tables 1 and 2. We find that the accuracy improves as d is increased. Once d is greater than 100, further improvement is not very significant within a reasonable amount of running time.

Table 2: RMSE (root mean squared error) and MAE (mean absolute error) of experiments on EachMovie data. All standard errors are 0.001 or less.

rank (d)        16      32      100     200     300
BSRM    RMSE    1.0983  1.0924  1.0905  1.0903  1.0902
        MAE     0.8411  0.8321  0.8335  0.8340  0.8393
BSRM/F  RMSE    1.0952  1.0872  1.0848  1.0846  1.0852
        MAE     0.8311  0.8280  0.8289  0.8293  0.8292
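The indicator-PCA features used by BSRM/F are not further specified in the text; a minimal sketch of one plausible construction (top singular vectors of the column-centered rated/not-rated matrix) is below. The sizes and density are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, p = 60, 40, 10  # users, movies, number of factors per side
# Binary indicator: did user i rate movie j?
B = (rng.random((m, n)) < 0.1).astype(float)

# One plausible reading of "PCA of the indicator matrix": truncated SVD of the
# column-centered matrix. Left factors give user-side features F0, right
# factors give movie-side features G0.
Bc = B - B.mean(axis=0, keepdims=True)
U, S, Vt = np.linalg.svd(Bc, full_matrices=False)
F0 = U[:, :p] * S[:p]   # m x p user-side features (p = 10 as in the text)
G0 = Vt[:p].T * S[:p]   # n x q movie-side features (q = 10)
```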
To compare the overall computational efficiency of the two Gibbs sampling procedures, Algorithm 1 and Algorithm 2, we run both algorithms and record the running time and accuracy in RMSE. The dimensionality d is set to 100. We compute the average Z_O and evaluate it after a certain number of iterations. The evaluation results are shown in Figure 6. We run both algorithms for 100 iterations as the burn-in period, so that we can have an independent start sample. After the burn-in period, we restart computing the averaged Z_O and evaluate it; therefore there are abrupt points at 100 iterations in both cases. The results show that the overall accuracy of Algorithm 2 is better at any given time.

Figure 6: Time-accuracy comparison of Algorithm 1 and Algorithm 2 (RMSE vs. running time in seconds; the ends of the burn-in periods are marked).

Netflix data: We also test the algorithms on the large collection of user ratings from netflix.com. The dataset consists of 100,480,507 ratings from 480,189 users on 17,770 movies. In addition, Netflix also provides a set of validation data with 1,408,395 ratings. In order to evaluate the prediction accuracy, there is a test set containing 2,817,131 ratings whose values are withheld and unknown to all the participants. The features used in BSRM/F are generated from the PCA result of a binary matrix that indicates whether or not the user rated the movie. The top 30 user-side factors are used as features, and no movie-side factors are used, i.e. p = 30, q = 0. The hyperparameters are set to trivial values: δ = p + 1 = 31, α = β = 1, σ^2 = 1, and ν = 1. The results on the validation data are shown in Table 3. The submitted result of BSRM/F(400) achieves RMSE 0.8881 on the test set. The running time is around 21 minutes per iteration for 400 latent dimensions on an Intel Xeon 2GHz PC.
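The averaging protocol above (discard a burn-in period, then average the post-burn-in samples of Z_O) can be sketched as follows; `gibbs_sample_Z` is a stand-in for one full Gibbs sweep, and the sizes and iteration counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def gibbs_sample_Z():
    """Stand-in for one full Gibbs sweep returning the current Z = F G^T."""
    return rng.standard_normal((3, 3))

burn_in, n_iter = 100, 500
Z_mean, count = None, 0
for t in range(n_iter):
    Z = gibbs_sample_Z()
    if t < burn_in:
        continue  # discard burn-in sweeps, then restart the running average
    count += 1
    # Incremental running mean avoids storing every sample of Z_O.
    Z_mean = Z.copy() if Z_mean is None else Z_mean + (Z - Z_mean) / count
```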
Table 3: RMSE (root mean squared error) of experiments on Netflix data.

d        BSRM                     BSRM/F                   VB [8]  BPMF [13]
         100     200     400      100     200     400
RMSE     0.9141  0.8920  0.8930   0.8910  0.8895  0.8926   0.8880  0.8874

5 Conclusions

In this paper, we study fully Bayesian inference for stochastic relational models (SRMs), for learning the real-valued relations between entities of two sets. We overcome the scalability issue by transforming SRMs into equivalent models, which can be efficiently sampled. The experiments show that fully Bayesian inference outperforms the previously used variational Bayesian inference on SRMs. In addition, some of the techniques for efficient computation in this paper can be applied to other large-scale Bayesian inference problems, especially for models involving inverse-Wishart distributions.

Acknowledgment: We thank the reviewers and Sarah Tyler for constructive comments.

References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 2008.
[2] A. P. Dawid. Some matrix-variate distribution theory: notational considerations and a Bayesian application. Biometrika, 68:265–274, 1981.
[3] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, New York, 1995.
[4] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 2000.
[5] P. Hoff. Multiplicative latent factor models for description and prediction of social networks. Computational and Mathematical Organization Theory, 2007.
[6] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst., 22(1):89–115, 2004.
[7] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[8] Y. J. Lim and Y. W. Teh. Variational Bayesian approach to movie rating prediction. In Proceedings of KDD Cup and Workshop, 2007.
[9] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[10] E. Meeds, Z. Ghahramani, R. Neal, and S. T. Roweis. Modeling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems 19, 2007.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, 2005.
[13] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In The 25th International Conference on Machine Learning, 2008.
[14] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. In Proceedings of the 22nd International Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
[15] K. Yu, W. Chu, S. Yu, V. Tresp, and Z. Xu. Stochastic relational models for discriminative link prediction. In Advances in Neural Information Processing Systems 19 (NIPS), 2006.
[16] S. Zhu, K. Yu, and Y. Gong. Predictive matrix-variate t models. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, NIPS '07: Advances in Neural Information Processing Systems 20, pages 1721–1728. MIT Press, Cambridge, MA, 2008.

Appendix

Before presenting the algorithms, we introduce the necessary notation. Let I_i = {j|(i, j) ∈ I} and I_j = {i|(i, j) ∈ I}. A matrix with subscripts indicates its submatrix, which consists of its entries at the given indices in the subscripts; for example, X_{I_j,j} is a subvector of the j-th column of X whose row indices are in the set I_j, and X_{·,j} is the j-th column of X (· indicates the full index set). X_{i,j} denotes the (i, j)-th entry of X. |X|^2_I is the sum of squared elements over the set I, i.e. Σ_{(i,j)∈I} X^2_{i,j}.
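The quantity |Y − FG⊤|^2_I only touches the r observed entries, so it can be computed in O(dr) time without forming the dense product FG⊤. A minimal sketch (the sizes and index set are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, d = 6, 5, 2
Y = rng.standard_normal((m, n))
F = rng.standard_normal((m, d))
G = rng.standard_normal((n, d))
I = [(0, 1), (2, 3), (4, 0), (5, 2)]  # observed index set, r = |I|

# |Y - F G^T|_I^2 using only the r observed entries: O(d r) work.
rows = [i for i, _ in I]
cols = [j for _, j in I]
resid = Y[rows, cols] - np.einsum('rd,rd->r', F[rows], G[cols])
sq_err_sparse = float(resid @ resid)

# Naive entry-by-entry check.
sq_err_naive = sum((Y[i, j] - F[i] @ G[j]) ** 2 for i, j in I)
assert np.isclose(sq_err_sparse, sq_err_naive)
```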
We fill the unobserved elements in Y with 0 for simplicity of notation.

Algorithm 1 BSRM/F: Gibbs sampling for SRM with features
1: Draw K ∼ IW_{d+p}(δ + m, αI_(d+p) + (F, F◦)⊤(F, F◦));
2: For each i ∈ U0, draw f_i ∼ N_d(K^(i)(s^{-2} G⊤ Y⊤_{i,·} + K^{-1}_{11·2} K_12 K^{-1}_22 f◦_i), K^(i)), where K^(i) = (s^{-2}(G_{I_i,·})⊤ G_{I_i,·} + K^{-1}_{11·2})^{-1};
3: Draw R ∼ IW_{d+q}(δ + n, βI_(d+q) + (G, G◦)⊤(G, G◦));
4: For each j ∈ V0, draw g_j ∼ N_d(R^(j)(s^{-2} F⊤ Y_{·,j} + R^{-1}_{11·2} R_12 R^{-1}_22 g◦_j), R^(j)), where R^(j) = (s^{-2}(F_{I_j,·})⊤ F_{I_j,·} + R^{-1}_{11·2})^{-1};
5: Draw s^2 ∼ χ^{-2}(ν + r, σ^2 + |Y − FG⊤|^2_I).

Algorithm 2 BSRM/F: Conditional Gibbs sampling for SRM with features
1: Δ_{i,j} ← Y_{i,j} − Σ_k F_{i,k} G_{j,k}, for (i, j) ∈ I;
2: Draw Φ ∼ W_{d+p}(δ + m + d + p − 1, (αI_(d+p) + (F, F◦)⊤(F, F◦))^{-1});
3: for each (i, k) ∈ U0 × {1, ..., d} do
4:   Draw f ∼ N_1(φ^{-1}(s^{-2} Δ_{i,I_i} G_{I_i,k} − F_{i,·} Φ_{·,k}), φ^{-1}), where φ = s^{-2}(G_{I_i,k})⊤ G_{I_i,k} + Φ_{k,k};
5:   Update F_{i,k} ← F_{i,k} + f, and Δ_{i,j} ← Δ_{i,j} − f G_{j,k}, for j ∈ I_i;
6: end for
7: Draw Ψ ∼ W_{d+q}(δ + n + d + q − 1, (βI_(d+q) + (G, G◦)⊤(G, G◦))^{-1});
8: for each (j, k) ∈ V0 × {1, ..., d} do
9:   Draw g ∼ N_1(ψ^{-1}(s^{-2} Δ⊤_{I_j,j} F_{I_j,k} − G_{j,·} Ψ_{·,k}), ψ^{-1}), where ψ = s^{-2}(F_{I_j,k})⊤ F_{I_j,k} + Ψ_{k,k};
10:  Update G_{j,k} ← G_{j,k} + g, and Δ_{i,j} ← Δ_{i,j} − g F_{i,k}, for i ∈ I_j;
11: end for
12: Draw s^2 ∼ χ^{-2}(ν + r, σ^2 + |Δ|^2_I).
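The core trick in Algorithm 2 is the residual maintenance in Step 5 (and Step 10): after perturbing a single entry F_{i,k} by f, the residuals Δ are refreshed in O(|I_i|) work rather than recomputed. A minimal sketch, with a fixed perturbation standing in for the sampled f and arbitrary illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, d = 5, 4, 2
Y = rng.standard_normal((m, n))
F = rng.standard_normal((m, d))
G = rng.standard_normal((n, d))
obs = [(i, j) for i in range(m) for j in range(n) if rng.random() < 0.7]
I_i = {i: [j for (ii, j) in obs if ii == i] for i in range(m)}

# Step 1 of Algorithm 2: residuals on observed entries only.
Delta = {(i, j): Y[i, j] - F[i] @ G[j] for (i, j) in obs}

# Steps 4-5 for one (i, k): perturb F[i, k] by f and refresh Delta in O(|I_i|).
i, k, f = 1, 0, 0.37  # 0.37 stands in for a draw from the conditional posterior
F[i, k] += f
for j in I_i[i]:
    Delta[(i, j)] -= f * G[j, k]

# The incrementally maintained residuals match a full recomputation.
assert all(np.isclose(Delta[(i, j)], Y[i, j] - F[i] @ G[j]) for (i, j) in obs)
```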
Online Models for Content Optimization Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrishnan, Scott Roy, Joe Zachariah Yahoo! Inc. 701 First Avenue Sunnyvale, CA 94089 Abstract We describe a new content publishing system that selects articles to serve to a user, choosing from an editorially programmed pool that is frequently refreshed. It is now deployed on a major Yahoo! portal, and selects articles to serve to hundreds of millions of user visits per day, significantly increasing the number of user clicks over the original manual approach, in which editors periodically selected articles to display. Some of the challenges we face include a dynamic content pool, short article lifetimes, non-stationary click-through rates, and extremely high traffic volumes. The fundamental problem we must solve is to quickly identify which items are popular (perhaps within different user segments), and to exploit them while they remain current. We must also explore the underlying pool constantly to identify promising alternatives, quickly discarding poor performers. Our approach is based on tracking per article performance in near real time through online models. We describe the characteristics and constraints of our application setting, discuss our design choices, and show the importance and effectiveness of coupling online models with a randomization procedure. We discuss the challenges encountered in a production online content-publishing environment and highlight issues that deserve careful attention. Our analysis of this application also suggests a number of future research avenues. 1 Introduction The web has become the central distribution channel for information from traditional sources such as news outlets as well as rapidly growing user-generated content. Developing effective algorithmic approaches to delivering such content when users visit web portals is a fundamental problem that has not received much attention. 
Search engines use automated ranking algorithms to return the most relevant links in response to a user's keyword query; likewise, online ads are targeted using automated algorithms. In contrast, portals that cater to users who browse a site are typically programmed manually. This is because content is harder to assess for relevance, topicality, freshness, and personal preference; there is a wide range in quality; and there are no reliable quality or trust metrics (such as, say, PageRank or Hub/Authority weights for URLs). Manual programming of content ensures high quality and maintains the editorial "voice" (the typical mix of content) that users associate with the site. On the other hand, it is expensive to scale as the number of articles and the number of site pages we wish to program grow. A data-driven machine learning approach can help with the scale issue, and we seek to blend the strengths of the editorial and algorithmic approaches by algorithmically optimizing content programming within high-level constraints set by editors. The system we describe is currently deployed on a major Yahoo! portal, and serves several hundred million user visits per day. The usual machine-learning approach to ranking articles shown to users uses feature-based models, trained using "offline data" (data collected in the past). After a significant feature-engineering effort, looking at user demographics, past activities on the site, various article categories, keywords and entities in articles, etc., we concluded that it is difficult to build good models based solely on offline data in our scenario. Our content pool is small but changing rapidly; article lifetimes are short; and there is wide variability in the performance of articles sharing a common set of feature values. Thus, we take the approach of tracking per-article performance with online models, which are initialized using offline data and updated continuously using real-time data.
This online aspect opens up new modeling challenges in addition to classical feature-based prediction, as we discuss in this paper.

2 Problem Description

We consider the problem of optimizing content displayed in a module that is the focal point on a major Yahoo! portal; the page also provides several other services (e.g., Mail, Weather) and content links. The module is a panel with four slots labelled F1, F2, F3, F4. Slot F1, which accounts for a large fraction of clicks, is prominent, and an article displayed on F1 receives many more clicks than when it is displayed at F2, F3 or F4. The pool of available articles is created by trained human editors, and refreshed continually. At any point in time, there are 16 live articles in the pool. A few new articles programmed by editors get pushed into the system periodically (every few hours) and replace some old articles. The editors keep up with important new stories (e.g., breaking news), eliminate irrelevant and fading stories, and ensure that the pool of articles is consistent with the "voice" of the site (i.e., the desired nature and mix of content). There is no personalization in the editorially programmed system; at a given time, the same articles are seen by all users visiting the page. We consider how to choose the best set of four articles to display on the module to a given user. Since the mix of content in the available pool already incorporates constraints like voice, topicality, etc., we focus on choosing articles to maximize the overall click-through rate (CTR), which is the total number of clicks divided by the total number of views over a time interval. To simplify our presentation, we only describe learning from click feedback obtained from the most important F1 position; our framework (and system) can use information from other positions as well.
2.1 System Challenges Our setting poses many challenges, the most important of which are the following: • Highly dynamic system characteristics: Articles have short lifetimes (6-8 hours), the pool of available articles is constantly changing, the user population is dynamic, and each article has different CTRs at different times of day or when shown in different slots in our module. We found that fast reaction to user feedback through dynamic models based on clicks and page views is crucial for good performance. We discuss an alternate and commonly pursued approach of ranking articles based on offline feature-driven models in Section 2.2. • Scalability: The portal receives thousands of page views per second and serves hundreds of millions of user visits per day. Data collection, model training and article scoring (using the model) are subject to tight latency requirements. For instance, we only get a few milliseconds to decide on the appropriate content to show to a user visiting the portal. A significant effort was required to build a scalable infrastructure that supports near real-time data collection.1 Events (users’ clicks and page views) are collected from a large number of front-end web servers and continuously transferred to data collection clusters, which support event buffering to handle the time lag between the user viewing a page and then clicking articles on the page. The event stream is then fed to the modeler cluster which runs learning algorithms to update the models. Periodically, the front-end web servers pull the updated models and serve content based on the new models. A complete cycle of data collection, model update, and model delivery takes a few minutes. 1The data collected is anonymized, making it impossible to relate the activity to individual users. 
Figure 1: CTR curves of a typical article in two buckets. (a) CTR decay: the article's CTR decay when shown continuously in a bucket at position F1; (b) CTR time-of-day variation: the article's CTR in the random bucket.

2.2 Machine Learning Challenges

A serving scheme is an automated or manual algorithm that decides which article to show at different positions of our module for a given user. Prior to our system, articles were chosen by human editors; we refer to this as the editorial serving scheme. A random sample of the user population is referred to as a bucket. We now discuss the issues that make it tricky to build predictive models in this setting. We tried the usual approach of building offline models based on retrospective data collected while using the editorial serving scheme. User features included Age, Gender, Geo-location and Inferred interests based on user visit patterns. For articles, we used features based on URL, article category (e.g., Sports, Entertainment) and title keywords. However, this approach performed poorly. The reasons include wide variability in CTR among articles having the same set of feature values, dramatic changes in article CTR over time, and the fact that retrospective data collected from non-randomized serving schemes are confounded with factors that are hard to adjust for (see Section 5). Also, our initial studies revealed high variability in CTRs for articles sharing some common features (e.g., Sports articles, Entertainment articles).
We achieved much better performance by seeking quick convergence (using online models) to the best article for a given user (or user segment); a lost opportunity (failure to detect the best article quickly) can be costly, and the cost increases with the margin (the difference between the best and selected articles). We now discuss some of the challenges we had to address.

Non-stationary CTRs: The CTR of an article is strongly dependent on the serving scheme used (especially how much F1 exposure it receives) and may change dramatically over time. Hence, learning techniques that assume process stationarity are inapplicable. In order to ensure webpage stability, we consider serving schemes that don't alter the choice of what to show a user in a given slot until a better choice is identified. Figure 1 (a) shows the CTR curve of a typical article subject to such a serving scheme. The decay is due to users getting exposed to an article more than once. Exposure to an article happens in different ways and to different degrees. A user may get exposed to an article when he/she sees a descriptive link, or clicks on it and reads the article. A user may also click multiple "see also" links associated with each article, which is perhaps a stronger form of exposure. In our analysis, we consider such related clicks to be a single click event. View exposure is noisier since our module is only one of many content pieces shown on the portal. A user may be looking at the Weather module when visiting the portal, or he may have looked at the article title in the link and not liked it. Hence, explaining the decay precisely in terms of repeat exposure is difficult. For instance, not showing an article to a user after one page view containing the link may be suboptimal, since he may have overlooked the link and may click on it later. In fact, a large number of clicks on articles occur after the first page view and depend on how a user navigates the portal.
Instead of solving the problem by imposing serving constraints per user, we build a component in our dynamic model that tracks article CTR decay over time. We still impose reasonable serving constraints to provide a good user experience: we do not show the same article to a user x minutes (x = 60 worked well) after he/she first saw the article. In addition to decay, the CTR of an article also changes by time of day and day of week. Figure 1 (b) shows the CTR of a typical article when served using a randomized serving scheme (articles served in a round-robin fashion to a randomly chosen user population). The randomization removes any serving bias and provides an unbiased estimate of CTR seasonality. It is evident that CTRs of articles vary dramatically over time; this clearly shows the need to adjust for time effects (e.g., diurnal patterns, decay) to obtain an adjusted article score when ranking articles. In our current study, we fitted a global time-of-day curve at 5-minute resolution to data obtained from the randomized serving scheme through a periodic (weekly) adaptive regression spline. However, there are still interactions that occur at an article level which were difficult to estimate offline through article features. Per-article online models that put more weight on recent observations provide an effective self-adaptive mechanism to automatically account for deviations from the global trend when an article is pushed into the system.

Strong Serving Bias: A model built using data generated from a serving scheme is biased by that scheme. For example, if a serving scheme decides not to show article A to any user, any model built using this data would not learn the popularity of A from users' feedback. In general, a serving scheme may heavily exploit some regions in the feature space and generate many data points for those regions, but few data points for other regions. Models built using such data learn very little about the infrequently sampled regions.
Moreover, every non-randomized serving scheme introduces confounding factors in the data; adjusting for such factors to obtain unbiased article scores is often difficult. In fact, early experiments that updated models using data from the editorial bucket to serve in our experimental buckets performed poorly. This bias also affects empirical evaluations or comparisons of learning algorithms based on retrospective data, as we discuss later in Section 5.

Interaction with the editorial team: The project involved considerable interaction with human editors who have been manually and successfully placing articles on the portal for many years. Understanding how that experience can be leveraged in conjunction with automated serving schemes was a major challenge, both technically and culturally (in that editors had to learn what ML algorithms could do, and we had to learn all the subtle considerations in what to show). The result is our framework, wherein editors control the pool and set policies via constraints on what can be served, and the serving algorithm chooses what to show on a given user visit.

3 Experimental Setup and Framework

Experimental Setup: We created several mutually exclusive buckets of roughly equal sizes from a fraction of live traffic, and served traffic in each bucket using one of our candidate serving schemes. All usual precautions were taken in the bucket creation process to ensure statistical validity of results. We also created a control bucket that ran the editorial serving scheme. In addition, we created a separate bucket called the random bucket, which serves articles per visit in a round-robin fashion.

Framework: Our framework consists of several components, described below.
• Batch Learning: Due to the time lag between a view and subsequent clicks (approximately 2-10 minutes) and engineering constraints imposed by high data volumes, updates to our models occur every 5 minutes.
• Business Logic, Editorial overrides: Despite moving towards an algorithmic approach, there are instances where the editorial team has to override the recommendations produced by our machine learning algorithms. For instance, a breaking news story is shown immediately at the F1 position, a user visiting the portal after 60 minutes should not see the same article he saw during his earlier visits, etc. Such constraints can be added by editors, and the serving algorithm must satisfy them.
• Online models to track CTR: We build online models to track article CTR for various user segments separately in each bucket. Articles that are currently the best are shown in the serving bucket; others are explored in the random bucket until their scores are better than those of the current best articles, at which point they get promoted to the serving bucket. In our serving bucket, we serve the same article at the F1 position in a given 5-minute window (except for overrides by business rules). Separate online models are tracked for articles at the F1 position in each bucket. Articles promoted to the F1 position in the serving bucket are subsequently scored by their model in the serving bucket; articles not playing at the F1 position in the serving bucket are of course scored by their model in the random bucket.
• Explore/Exploit Strategy: The random bucket is used for two purposes. (a) It provides a simple explore-exploit strategy that does random exploration with a small probability P, and serves the best articles ranked by their estimated scores from online models with a large probability 1 − P. (b) In addition, it helps us estimate systematic effects (e.g., the diurnal CTR pattern) without the need to build elaborate statistical models that adjust for serving bias in other non-random buckets.
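The explore/exploit behavior described in the last bullet can be sketched as an ε-greedy policy. The function and variable names below are illustrative, not the production interface:

```python
import random

def serve_f1(scores, explore_prob=0.1, rng=None):
    """Pick the article for the F1 slot: with small probability P explore
    uniformly at random; otherwise exploit the current best online-model score."""
    rng = rng or random.Random()
    articles = list(scores)
    if rng.random() < explore_prob:
        return rng.choice(articles)       # exploration (random-bucket style)
    return max(articles, key=scores.get)  # exploitation

# Hypothetical online-model CTR estimates for three live articles.
scores = {'a1': 0.012, 'a2': 0.019, 'a3': 0.008}
best = serve_f1(scores, explore_prob=0.0)  # pure exploitation picks 'a2'
```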
Thus far, we have collected about 8 months of data from this continuously running random bucket; this has proved extremely useful in studying various offline models, running offline evaluations and conducting simulation studies. The randomization procedure adopted is simple but proved effective in our setting. Our setting is different from the ones studied in the classical explore-exploit literature; developing better strategies is a focus of our ongoing work. (See Section 6.)

4 Online Models

Tracking article CTR in an online fashion is a well-studied area in time series, with several methods available in the literature [3][7], but the application to content optimization has not been carefully studied. We provide a description of three dynamic models that are currently used in our system.

4.1 Estimated Most Popular: EMP

This model tracks the log-odds of CTR per article at the F1 position over time but does not use any user features. The subscript t in our notation refers to the t-th interval after the article is first displayed in the bucket. Let c_t and n_t denote the number of clicks and views at time t; we work with the empirical logistic transform defined as y_t = log(c_t + 0.5) − log(n_t − c_t + 0.5), approximately Gaussian for large n_t with variance w_t = (c_t + 0.5)^{-1} + (n_t − c_t + 0.5)^{-1} [6]. In our scenario, we get roughly 300–400 observations at the F1 position per article in a 5-minute interval in the random bucket, hence the above transformation is appropriate for EMP and for SS with a few tens of user segments. Given that there may be a decay pattern in the log-odds of CTR at the F1 position with increasing article lifetime, we fit a dynamic linear growth curve model given by

y_t = o_t + µ_t + ε_t,   ε_t ∼ N(0, V w_t)   (1)
µ_t = µ_{t−1} + β_t + δ_{µt},   δ_{µt} ∼ N(0, σ²_{µt})
β_t = β_{t−1} + δ_{βt},   δ_{βt} ∼ N(0, σ²_{βt})

In Equation 1, o_t is a constant offset term obtained from an offline model (e.g.
hour-of-day correction), µt is the mean of yt at time t, and βt has the interpretation of an incremental decay in the level of the series over the time interval from t − 1 to t, evolving during that interval through the addition of the stochastic element δβt. The evolution errors δµt and δβt are assumed to be uncorrelated. Model parameters are initialized by observing values at t = 1 for an article in the random bucket; actual tracking begins at t = 2. In general, the initialization takes place through a feature-based offline model built using retrospective data. To provide more intuition on how the state parameters θt = (µt, βt) evolve, assume the evolutions δµt and δβt are zero at each time point. Then µt = µ0 + tβ0, a linear trend fitted to the yt values through weighted least squares. The addition of non-zero evolution makes this straight line dynamic and helps in tracking decay over time. In fact, the values of the state evolution variance components σ²µt and σ²βt relative to the noise variance V wt determine the amount of temporal smoothing that occurs for the model; large relative values smooth more by using a larger history to predict the future. Model fitting is conducted through a Kalman filter based on a discounting concept, as explained in [7]. Details are discussed in [1]. 4.2 Saturated Segmented Model: SS This model generalizes EMP to incorporate user features. In particular, user covariates are used to create disjoint subsets (segments), and a local EMP model is built to track item performance in each user segment. For a small number of user segments, we fit a separate EMP model per user segment for a given item at the F1 position. As the number of user segments grows, data sparseness may lead to high-variance estimates in small segments, especially during the early part of an article's lifetime. To address this, we smooth article scores in segments at each time point through a Bayesian hierarchical model. 
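A minimal sketch of the EMP ingredients above: the empirical logistic transform and a simplified, level-only Kalman update (the full model also carries the growth term βt, the offset ot, and the discount-based filtering of [7]; the fixed `evolution_var` here is a hypothetical stand-in for σ²µt):

```python
import math

def logistic_transform(clicks, views):
    """Empirical logistic transform y_t and its approximate variance w_t."""
    y = math.log(clicks + 0.5) - math.log(views - clicks + 0.5)
    w = 1.0 / (clicks + 0.5) + 1.0 / (views - clicks + 0.5)
    return y, w

def track_level(observations, evolution_var=0.01):
    """Kalman filter for a local-level model mu_t = mu_{t-1} + noise,
    observed through y_t = mu_t + eps_t with eps_t ~ N(0, w_t)."""
    mu, var = None, None
    history = []
    for clicks, views in observations:
        y, w = logistic_transform(clicks, views)
        if mu is None:            # initialize from the first interval (t = 1)
            mu, var = y, w
        else:
            var += evolution_var  # predict: the level evolves as a random walk
            gain = var / (var + w)
            mu = mu + gain * (y - mu)   # update toward the new observation
            var = (1 - gain) * var
        history.append(mu)
    return history
```

Feeding the tracker a sequence of 5-minute intervals with decaying clicks yields a monotonically decreasing level estimate, mirroring the decay pattern the full model is designed to capture.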
In particular, if (ait, Qit), i = 1, ..., k, are the predicted means and variances of an item's score at F1 in the k user segments at time t, we derive a new score as follows:

ãit = (τ / (τ + Qit)) ait + (Qit / (τ + Qit)) āt    (2)

where āt is the EMP model score for the item. The constant τ, which controls the amount of “shrinkage” towards the most popular, is obtained by the DerSimonian and Laird estimator [10], widely used in meta-analysis. 4.3 Online Logistic Regression: OLR The SS model does not provide the flexibility to incorporate lower-order interactions when working with a large number of features. For instance, given age, gender, and geo-location for users, the SS model considers all possible combinations of the three variables. It is possible that an additive model based on two-factor interactions (age-gender, age-geo, and gender-geo) may provide better performance. We use an efficient online logistic regression approach to build such models. The OLR updates parameters for every labelled event, positive or negative. Instead of achieving additivity by empirically transforming the data as in EMP and SS, it posits a Bernoulli likelihood for each event and achieves linearity by parametrizing the log-odds as a linear function of features. However, this makes the problem non-linear; we perform a quadratic approximation through a Taylor series expansion to achieve linearity. Modeling and fitting details are discussed in [1]. 5 Experiments In this section, we present the results of experiments that compare serving schemes based on our three online models (EMP, SS, and OLR) with the current editorial programming approach (which we refer to as ED). We show that our online models significantly outperform ED, based on bucket testing the four alternatives concurrently on live traffic on the portal over a month. 
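The per-segment shrinkage of Equation 2 is straightforward to compute once τ is known; a sketch, with τ supplied directly rather than estimated via the DerSimonian–Laird procedure:

```python
def shrink_segment_scores(segment_scores, emp_score, tau):
    """Shrink per-segment (mean, variance) scores toward the EMP score, as in
    Equation 2: high-variance segments are pulled more strongly toward the
    global most-popular estimate."""
    shrunk = []
    for a, q in segment_scores:          # (a_it, Q_it) for each segment
        weight = tau / (tau + q)         # trust the segment more when Q_it is small
        shrunk.append(weight * a + (1 - weight) * emp_score)
    return shrunk
```

A zero-variance segment keeps its own score unchanged, while a very noisy segment is pulled almost entirely onto the EMP score.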
Then, by offline analysis, we identified the reason personalization (based on user features in OLR or segmentation in SS) did not provide improvement: it is mainly because we did not have sufficiently diverse articles to exploit, although SS and OLR are more predictive models than EMP. Finally, by extensive bucket tests on live traffic (an expensive and unusual opportunity for evaluating algorithms), we cast serious doubts on the usefulness of the common practice of comparing serving algorithms based on retrospective data (collected while using another serving scheme), and suggest that, without a random bucket or an effective correction procedure, it is essential to conduct tests on live traffic for statistical validity. Bucket Testing Methodology: After conducting extensive offline analysis and several small pilots with different variations of the models (including feature selection), we narrowed the candidates for live-traffic evaluation to the following: (1) EMP, (2) SS with Age×Gender segments, (3) OLR with feature terms Article + Position + Age×ContentCategory + Gender×ContentCategory (geo-location and user behavioral features were also bucket-tested in other periods and provided no statistically significant improvement), and (4) ED. We used each of these schemes to serve traffic in a bucket (a random set of users) for one month; the four buckets ran concurrently. We measure the performance of a scheme by its lift in CTR relative to the baseline ED scheme. We also obtained significant improvements relative to the round-robin serving scheme in the random bucket but do not report them, to avoid clutter. Online Model Comparison: Our online models significantly increased CTR over the original manual editorial scheme. Moreover, the increase in CTR was achieved mainly through increased reach, i.e., we induced more users to click on articles. 
This provides evidence in favor of a strategy where various constraints in content programming are incorporated by human editors, and algorithms are used to place articles intelligently to optimize an easily measurable metric like CTR. Figure 2 (a) shows the CTR lifts of the different algorithms during one month. All three online models (EMP, SS, and OLR) are significantly better than ED, with CTR lifts in the range of 30%–60%. This clearly demonstrates the ability of our online models to accurately track CTRs in real-time. Surprisingly, the models that are based on user features, SS and OLR, are not statistically different from EMP, indicating that personalization to our selected user segments does not provide additional lift relative to EMP, although both SS and OLR have better predictive likelihoods relative to EMP in retrospective data analysis.
Figure 2: Experimental results: (a) Daily CTR Lift and (b) Lift in Fraction of Clickers show bucket test results. (c) Polar vs. Non-polar: the x-axis is the CTR of a polar article in a segment; the y-axis is the CTR of the global best article (during the polar article's lifetime) in the same segment. Refer to the text for the definition of polar.
Figure 2 (b) shows the lift relative to ED in terms of the fraction of clicking users. It shows that the lift achieved by the online models is not confined to a small cohort of users, but reflects conversion of more users to clickers. Analysis of Personalization: We did not expect to find that personalization to user segments provided no additional CTR lift relative to EMP, despite the fact that user features were predictive of article CTR. 
A closer look at the data revealed the main cause to be the current editorial content generation process, which is geared to create candidate articles that are expected to be popular for all users (not for different user segments). In fact, there were articles that have more affinity to some user segments than others; we define these to be articles whose CTR in a given segment was at least twice the article's overall CTR, and refer to them as polar articles. However, whenever polar articles were in the pool of candidate articles, there was usually a non-polar one in the pool that was more popular than the polar ones across all segments. As a result, we chose the same non-polar article for all segments. Figure 2 (c) shows, for the gender segmentation, that polar articles almost always co-exist with an article whose overall CTR is greater than even the CTR in the segment of the polar article. For the Age×Gender segmentation, the global-best article was the same as the segment-best article in about 65% of the intervals; the maximum expected CTR lift over global ranking was only about 1%. We observe similar patterns for segmentations based on other user features. Retrospective Evaluation Metrics vs. Bucket Tests: It is common practice in the existing literature to evaluate a new serving algorithm using predictive metrics obtained by running the algorithm on retrospective data (collected while using another serving scheme). For instance, such an approach has been used extensively in studying ad matching problems [11]. In our setting, this is equivalent to comparing a new serving scheme (e.g., EMP, SS, or OLR) to ED by computing some predictive metric on retrospective data obtained from ED. We found that the performance differences obtained using retrospective data do not correlate well with those obtained by running on live traffic [1]. 
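The polar-article definition above reduces to a simple filter over per-segment CTRs; a sketch, with a hypothetical data layout (per-article segment CTR dictionaries):

```python
def find_polar_articles(ctr_by_segment, overall_ctr, ratio=2.0):
    """Return articles whose CTR in at least one user segment is at least
    `ratio` times the article's overall CTR (the 'polar' definition, with
    ratio=2.0 matching the text)."""
    polar = set()
    for article, seg_ctrs in ctr_by_segment.items():
        if any(c >= ratio * overall_ctr[article] for c in seg_ctrs.values()):
            polar.add(article)
    return polar
```

An article with segment CTRs of 0.4 and 0.1 against an overall CTR of 0.2 is polar; one with uniform segment CTRs equal to its overall CTR is not.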
This finding underscores the need for random bucket data, effective techniques to correct the bias, or a rapid bucket testing infrastructure to compare serving schemes. 6 Related Work Google News personalization [13], which uses collaborative filtering to provide near real-time recommendation of news articles to users, is the most closely related prior work. However, while they select from a vast pool of unedited content aggregated from news sites across the globe, we recommend a subset from a small list of items chosen by editors. On the one hand, this allows us to build per-article models in near real-time; on the other, the editorially controlled mix of items means all articles are of high quality (making it hard to achieve lift by simply eliminating bad articles). Recent work on matching ads to queries [11] and ads to webpages [2] is related. However, its primary emphasis is on constructing accurate feature-based offline models that are updated at longer time intervals (e.g., daily); such models provide good initialization to our online models but perform poorly on their own, for reasons discussed in Section 2.2. In [9], the authors consider an active exploration strategy to improve search rankings, which is similar in spirit to our randomization procedure. Our problem is also related to the rich literature on multi-armed bandit problems [5][8][14][12]. However, we note that many of the assumptions made in the classical multi-armed bandit and reinforcement learning literature are not satisfied in our setting (dynamic set of articles, short article lifetimes, batch learning, non-stationary CTR, lagged response). In fact, short article lifetimes, the dynamism of the content pool, and the importance of learning article behaviour very quickly are the major challenges in our scenario. Preliminary experiments with obvious and natural modifications to the widely used UCB1 scheme [8] performed poorly. In a recent study [4] of a content aggregation site, digg.com, Wu et al. 
built a model for story popularity. However, their analysis is based on biased retrospective data, whereas we deployed our models and present results from tests conducted on live traffic. 7 Discussion In this paper, we described content optimization, the problem of selecting articles to present to a user who is intent on browsing for information. There are many variants of the problem, depending on the setting. One variant is selecting from a very large and diverse pool of articles; examples include recommending RSS feeds or articles from one or more RSS feeds, as in Google's news aggregation, and here segmentation and personalization are likely to be effective. The variant that we addressed involves selecting from a small, homogeneous set of articles; segmentation may not be effective unless the pool of articles is chosen to be diverse, and there is a high premium on quickly estimating and tracking per-article popularity. Our work suggests that offline feature-based models are not good enough to rank articles in a highly dynamic content publishing system where article pools are small, dynamic, and of high quality; lifetimes are short; and the utility metric being measured (e.g., CTR) has a strong dynamic component. In fact, the biased nature of data obtained from a non-randomized serving scheme also underscores the need to obtain some percentage of data from a randomized experimental design. The key machine learning challenges in this setting lie in the delicate tradeoffs involved in maximizing utility (e.g., total number of clicks) by quickly converging to the best article for a given user (or user segment) through online models that are effectively initialized by offline feature-based models (after adjusting for confounding factors), while performing unbiased exploration through small randomized experiments. 
While we have addressed them sufficiently well to handle small content pools, dealing with larger pools will require significant advances, and is the focus of our current research. References [1] D. Agarwal, B.-C. Chen, P. Elango, et al. Online models for content optimization. Yahoo! Technical Report TR-2008-004, 2008. [2] D. Agarwal, A. Broder, D. Chakrabarti, D. Diklic, V. Josifovski, and M. Sayyadian. Estimating rates of rare events at multiple resolutions. In KDD, pages 16–25, New York, NY, USA, 2007. ACM. [3] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Dover, 1974. [4] F. Wu and B. A. Huberman. Novelty and collective attention. Proceedings of the National Academy of Sciences, 104:17599–17601, 2007. [5] J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B, 41:148–177, 1979. [6] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman & Hall/CRC, 1989. [7] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer-Verlag, 1997. [8] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002. [9] F. Radlinski and T. Joachims. Active exploration for learning rankings from clickthrough data. In KDD, 2007. [10] R. DerSimonian and N. M. Laird. Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 1986. [11] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: estimating the click-through rate for new ads. In WWW, pages 521–530, 2007. [12] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In Proc. of the SIAM International Conference on Data Mining, 2007. [13] A. Das, M. Datar, A. Garg, and S. Rajaram. Google News personalization: scalable online collaborative filtering. In WWW, Banff, Alberta, Canada, 2007. [14] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
Grouping Contours Via a Related Image Praveen Srinivasan GRASP Laboratory University of Pennsylvania Philadelphia, PA 19104 psrin@seas.upenn.edu Liming Wang Fudan University Shanghai, PRC 200433 wanglm@fudan.edu.cn Jianbo Shi GRASP Laboratory University of Pennsylvania Philadelphia, PA 19104 jshi@cis.upenn.edu Abstract Contours have been established in the biological and computer vision literature as a compact yet descriptive representation of object shape. While individual contours provide structure, they lack the large spatial support of region segments (which, in turn, lack internal structure). We present a method for further grouping of contours in an image using their relationship to the contours of a second, related image. Stereo, motion, and similarity all provide cues that can aid this task; contours that have similar transformations relating them to their matching contours in the second image likely belong to a single group. To find matches for contours, we rely only on shape, which applies directly to all three modalities without modification, in contrast to the specialized approaches developed for each independently. Visually salient contours are extracted in each image, along with a set of candidate transformations for aligning subsets of them. For each transformation, groups of contours with matching shape across the two images are identified to provide a context for evaluating matches of individual contour points across the images. The resulting contexts of contours are used to perform a final grouping on contours in the original image while simultaneously finding matches in the related image, again by shape matching. We demonstrate grouping results on image pairs consisting of stereo, motion, and similar images. Our method also produces qualitatively better results than a baseline method that does not use the inferred contexts. 
1 Introduction Researchers in biological vision have long hypothesized that image contours (ordered sets of edge pixels, or contour points) are a compact yet descriptive representation of object shape. In computer vision, there has been substantial interest in extracting contours from images, as well as in using contour-based object models for object recognition ([15, 5]) and 3D image interpretation [11]. We examine the problem of grouping contours in a single image aided by a related image, such as a stereo pair, a frame from the same motion sequence, or a similar image. The relative motion of contours in one image with respect to their matching contours in the other provides a cue for grouping. The contours themselves are detected bottom-up without a model, and are provided as input to our method. While contours already represent groupings of edges, they typically lack large spatial support. Region segments, on the other hand, have large spatial support, but lack the structure that contours provide. Therefore, additional grouping of contours can give us both qualities. This has important applications for object recognition and scene understanding, since these larger groups of contours are often large pieces of objects. Figure 1 shows a single image in the first column, with contours; in the other columns, top row, are different images related by stereo, motion, and similarity to the first, shown with their contours. Below each of these images are idealized groupings of contours in the original image. Note that internal contours on cars and buildings are grouped, providing rich, structured shape information over a larger image region.
Figure 1: Contours (white) in the image on the left can be further grouped using the contours of a second, related image (top row: stereo, motion, similarity). The bottom row shows idealized groupings in the original image according to the inter-image relationship. 
2 Related Work Stereo, motion, and similar-image matching have been studied largely in isolation, and often with different purposes in mind than perceptual grouping. Much of the stereo literature focuses on per-pixel depth recovery; however, as [7] noted, stereo can be used for perceptual grouping without requiring precise depth estimation. Motion is often used for estimating optical flow or for dense segmentation of images into groups of pixels undergoing similar motion [13]. These approaches to motion and stereo are largely region-based, and therefore do not provide the same internal structure that groups of contours provide. Similar-image matching has been used for object recognition [1], but is rarely applied to image segmentation. In work on contours, [12] matched contour points in the context of aerial imagery, but used constraints, such as the ordering of matches along scanlines, that are not appropriate for motion or similar images, and did not provide grouping information. [9] grouped image pixels into contours according to similar motion, using optical flow as a local cue. While that result addresses the long-standing aperture problem, it does not extend to large inter-image deformations or to matching similar images. [8] grouped and matched image regions across different images and unstable segmentations (as we do with contours), but the regions lack internal structure. [2, 6] used stereo pairs of images to detect depth discontinuities as potential object boundaries. However, these methods will not detect and group contours in the interior of fronto-parallel surfaces. 3 Grouping Criteria We present definitions and basic criteria for grouping contours. The inputs to our method are: 1. Images: I1, I2; for each image I_i, i ∈ {1, 2}, we also have: 2. A set of points (typically image edges) I^P_i; ∀p ∈ I^P_i, p ∈ R². We restrict the set of points to those that lie on image contours, defined next. 3. 
A set of contours I^C_i, where the j-th contour C^j_i ∈ I^C_i is an ordered subset of points in I^P_i: C^j_i = [p_k1, p_k2, ..., p_kn]. We would like to infer groups G_1, ..., G_nGroups, each with the following attributes: 1. A transformation T_i that aligns a subset of contours (e.g., corresponding to an object) in I_1 to I_2. T is the set of all T_i. 2. A subset of contours Con_i in each image, known as a context, such that the two subsets have similar overall shape. Con_i = {Con^1_i, Con^2_i}; Con^1_i ∈ {0, 1}^|I^C_1|, Con^2_i ∈ {0, 1}^|I^C_2|. Each Con^j_i is a vector that indicates which contours are in the context for image I_j. Con is the set of all Con_i. We further define the following variables on contours C^j_1 = [p_k1, ..., p_kn]: 1. A group label l_j; l_j = a implies that C^j_1 belongs to group G_a. L = {l_j} is the set of all labels. 2. Matches Match_j = [q_r1, ..., q_rn], q_ri ∈ I^P_2, such that p_ki matches q_ri. Match = {Match_j} is the set of all matches for each contour.
Figure 2: (left) Matching closest points cannot reject false positives; simply enlarging the feature size rejects true positives; increasing the feature size and selecting the correct context fixes the problem. Figure 3: (right) The realized shape context from choosing a subset of contours can be summarized as the multiplication of a shape context matrix and a binary indicator vector.
We would like the groups to possess the following criteria: 1. Good continuation: for contours that overlap significantly, we prefer that they are present in the same group, if they are grouped at all. 2. Common fate by shape: contours with similar transformations mapping them to their matches should be grouped. 
We also require that each contour point in a grouped contour and its matching point in the second image have similar local shape. Shape in each image is defined with respect to a subset of image contours known as a context; we will explain the importance of choosing the correct context for shape comparison, as first noted in [16]. 3. Maximality/simplicity: We would like to group as many of the contours as possible into as few groups as possible, while still maintaining the similarity of local shape described above. We encode these criteria in our cost function F(T, Con, L, Match), which we seek to minimize. Our cost function has the following properties, which we develop in the following sections: 1) For fixed contexts Con and transformations T, min_{Match,L} F(T, Con, L, Match) is a Markov random field (MRF) that can be minimized exactly via graph cuts ([3]). This corresponds to a standard computational formulation for graph matching (in this case, there is one graph in each image, over the contours). 2) For fixed matches Match, transformations T, and labels L, F decomposes as a sum over i ∈ {1, ..., nGroups}, and we can minimize each term independently: min_{Con_i} F_i(T_i, Con_i, {Match_j | l(j) = i}) as an integer linear program. This can be relaxed to a closely related linear program (LP), allowing for an efficient approximation. This combination of the standard MRF graph matching technique with an LP for inferring context for accurate matching by shape is our main contribution. The layout of our paper is as follows: we explain the problem and importance of selecting contours as context for accurate matching and grouping, outline our computational solution (an LP) for inferring Con given T and our technique for choosing T, and then describe finding L and matches Match based on the inferred contexts (via graph cuts). Results using our method follow, and we demonstrate improvement over a baseline that lacks the benefits of our context selection procedure. 
4 Matching and Context Selection We can evaluate the hypothesis of a particular contour point match by comparing the local shape around the point and around its match. Although local features such as curvature and simple proximity (in the case of roughly aligned contours) have been used for matching ([12]), inconsistencies in the input contours across two images make them prone to error. Local features exhibit completeness (a good score for correctly matching shapes), but not soundness (a bad score for non-matching shapes). Figure 2 illustrates this distinction. Two aligned sets of contours are shown in a), e). In a), the contours do not match, while in e), a “7” shape is common to both. In b) and f), matching of closest points between two roughly aligned shapes finds matches in both examples due to the small support of local features, even though there is no valid match in a). However, increasing the support of the feature does not solve the problem. As an example, we use the shape context, an image feature that has been widely used for shape matching ([1]).
Figure 4: The context selection process for roughly aligned sets of contours. SelectionCost(selc1, selc2) sums SCMatchCost over corresponding shape context pairs; SCMatchCost(s1, s2, λ) sums BinMatchCost over bins; BinMatchCost(a, b, λ) = λ|a − b| − min(a, b) trades off low mismatch against high intersection (lower is better). Relaxing the binary indicators c^j_i ∈ {0, 1} to [0, 1] yields a linear program. See text for full description.
Briefly, a shape context provides a log-polar spatial histogram that records the number of points falling into each bin. In Figure 2 c), g), shape contexts (darker bins indicate larger bin counts) with large spatial support, placed according to the rough alignment, exhibit high dissimilarity in both cases, failing to find a match in a). The large feature failed because contours in each image that had no match in the other image were used in computing the shape context. Inferring which contours to include and which to omit would give better features, as in Figure 2 d), h). This fixes the completeness problem while retaining soundness: no combination of contours in the two images in a) can produce matching shapes. Therefore, with rough alignment, we can cast the first step of shape matching as context selection: deciding which subset of contours, or context, to use for feature computation. Given the correct context, matching individual contour points is much easier. 4.1 Selection Problem We can neatly summarize the effect of a particular context selection on a shape context, as seen in Figure 3. a) shows two contours which we can select from. b) shows the shape contexts for each individual contour; the bin counts for a shape context can be encoded as a vector, shown alongside. In c), we put the vector forms of these shape contexts into a matrix SC, where each column corresponds to one contour. SC has dimensions nBins by nContours, where nBins is the number of bins in the shape context (and the length of the associated vector). The entry SC(i, j) is the bin count for bin i and contour j. For each contour C^j_i in an image I_i, we associate a selection indicator variable sel^j_i ∈ {0, 1}, which indicates whether or not the contour is selected; the vector of these indicator variables is sel_i. 
Then the shape context bin counts realized by a particular selection of contours are SC sel_i, simply the multiplication of the matrix SC and the vector sel_i. d) shows the effect of various context selections on the shape context histogram. 4.2 Shape context matching cost The effectiveness of selection depends significantly on the shape context matching cost. Traditional matching costs (Chi-square, L1, L2) only measure similarity, but then selecting no contours in either image gives a perfect matching cost, since in both shape contexts all bins will be 0. While similarity is important, so is including more contours rather than fewer (maximality). Our shape context matching cost, SCMatchCost(s1, s2, λ) in Figure 4, is a linear combination of the L1 distance between shape context vectors s1 and s2 (similarity) and the intersection distance (maximality, one of our original grouping criteria), the L1 norm of min(s1, s2), where min is element-wise. The intersection term encourages higher bin counts in each shape context and therefore the inclusion of more contours. The parameter λ trades off between similarity and maximality; typically λ ≪ 1. 4.3 Computational Solution Our formulation of the problem follows the construction first presented in [16], which studied the role of context selection for object recognition. Figure 4 shows the formulation of the overall selection cost, SelectionCost. This minimizes F_i(T_i, Con_i, {Match_j | l(j) = i}) over Con_i. We begin with two input images, where the contours are in rough alignment (by applying the known T_i to I^C_1).
Figure 5: For a pair of images, SIFT matches propose different transformations of the contours in image 1 to align with contours in image 2. The selection process is run for each transformation to infer a context suitable for evaluating contour point matches via shape.
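The realized-histogram computation (SC times a selection vector) and the λ-weighted L1-minus-intersection cost can be sketched directly; the function names are ours, but the quantities follow the text:

```python
import numpy as np

def realized_histogram(sc_matrix, selection):
    """Bin counts realized by a contour selection: SC (nBins x nContours)
    times the 0/1 indicator vector of selected contours."""
    return sc_matrix @ selection

def sc_match_cost(s1, s2, lam):
    """lambda * (L1 mismatch) minus the intersection term; lower is better.
    The subtracted intersection rewards including more contours (maximality)."""
    return lam * np.abs(s1 - s2).sum() - np.minimum(s1, s2).sum()
```

Selecting only the contour whose bins match the target histogram scores better than also selecting a contour with no counterpart, which is exactly the behavior the selection LP exploits.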
Multiple shape contexts are placed in each image on a uniform grid (an approximation of {Match_j | l(j) = i}, since we initially have no matches). Like-colored (in the figure) shape contexts are compared across images. Our goal is to select contours in each image so as to minimize the sum of SCMatchCost over all pairs of shape contexts. For each shape context j in each image i, we compute the corresponding shape context matrix SC^j_i. All the SC^j_i in a particular image I_i are stacked to form the matrix SC_i; SC_i for each image is color-coded to show the SC^j_i matrix corresponding to each shape context. We introduce the indicator vectors selc1 = [selc^1_1, ..., selc^m_1] and selc2 = [selc^1_2, ..., selc^n_2] for images I_1, I_2; selc^j_i = 1 implies that contour C^j_i is selected. SC_i selc_i is then the vector of realized bin counts for all the shape contexts in image I_i under selection selc_i. We seek to choose selc1 and selc2 such that SC_1 selc1 ≈ SC_2 selc2 in a shape sense; the entries of SC_1 selc1 and SC_2 selc2, i.e., the realized bin counts, are in correspondence, so we can score these pairs of bin counts using BinMatchCost. A compact summary of this cost function, SelectionCost, is shown in Figure 4, along with its decomposition as a sum of SCMatchCost terms, each of which is in turn a sum of BinMatchCost terms. The minimization of SelectionCost over selc1 and selc2 is in fact an integer linear program (the L1 distance and min are easily encoded with additional variables and linear constraints). By relaxing each selc^j_i ∈ {0, 1} to [0, 1], we obtain a linear program (LP) which can be solved efficiently using standard solvers (e.g., SDPT3). Although other methods exist for solving integer linear programs, such as branch-and-bound, we found that directly discretizing the selc^j_i with a fixed threshold worked well. Then Con_i = {selc1, selc2}. 
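Under the relaxation above, the selection problem becomes a plain LP. A minimal sketch with scipy (one shape-context pair; the auxiliary vectors d and m encode |·| and min with linear constraints, as described in the text; the toy matrices in the usage are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def select_contours(SC1, SC2, lam=0.5):
    """Relaxed context selection for a single shape-context pair.
    Minimizes lam*||SC1 x1 - SC2 x2||_1 - ||min(SC1 x1, SC2 x2)||_1 over
    x1, x2 in [0,1]^nContours, using auxiliary per-bin variables:
      d >= |u - v|   (mismatch),   m <= min(u, v)   (intersection),
    where u = SC1 x1 and v = SC2 x2."""
    B, m1 = SC1.shape
    _, m2 = SC2.shape
    n = m1 + m2 + 2 * B                     # variable order: [x1, x2, d, m]
    c = np.zeros(n)
    c[m1 + m2:m1 + m2 + B] = lam            # + lam * sum(d)
    c[m1 + m2 + B:] = -1.0                  # - sum(m)
    I = np.eye(B)
    Z = np.zeros((B, B))
    # u - v - d <= 0 and -u + v - d <= 0  enforce  d >= |u - v|
    A1 = np.hstack([SC1, -SC2, -I, Z])
    A2 = np.hstack([-SC1, SC2, -I, Z])
    # m - u <= 0 and m - v <= 0           enforce  m <= min(u, v)
    A3 = np.hstack([-SC1, np.zeros((B, m2)), Z, I])
    A4 = np.hstack([np.zeros((B, m1)), -SC2, Z, I])
    A_ub = np.vstack([A1, A2, A3, A4])
    b_ub = np.zeros(4 * B)
    bounds = [(0, 1)] * (m1 + m2) + [(0, None)] * (2 * B)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m1], res.x[m1:m1 + m2]
```

On a toy instance where image 1 has one contour matching image 2's single contour and one spurious contour, the LP selects the matching contour and drops the spurious one, mirroring the behavior in Figure 5.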
4.4 Multiple Context Selections for Image Matching

Now that we have established how to perform selection when we are given Ti, we apply it to images in which multiple objects may be related across the two images by different alignments. We first need to infer the set of candidate transformations T; for our purposes, we restrict them to similarity transforms, although non-linear or piecewise linear (e.g., articulation) transformations could certainly be used. A simple method for proposing transformations between the two images is via SIFT ([10]) feature matches. A SIFT match provides scale, orientation, and translation (a similarity transform). RANSAC with multiple matches can be used to estimate full homographies, similar to [14]. Figure 5 depicts an idealized selection process for two images (only the contours are shown). For each group of SIFT matches that describe similar transformations, a transformation Ti is extracted and warps the contours in image 1 to line up with those of image 2, in c). The selection problem is formulated separately for each set of aligned contours, d). The solution vectors of the SelectionCost LP for each Ti provide a context {Ĉon1_i, Ĉon2_i} ({selc1, selc2} previously) of matching contours, e). Two correct transforms align the car and the person, and the selection result includes the respective contours (rows 1, 2 of e); a third, wrong transform results in an empty selection (row 3 of e). We can view the context selection procedure for minimizing Fi as choosing the context of contours so as to best reduce the matching cost of the hypothesized inter-image matches for contours with label i, under the transformation Ti. In a sense, we are optimizing the local features via an LP, which traditional graph matching techniques do not do. The result of this optimization will appear in the unary term of the label/match MRF described next.
5 Graph Cuts for Group Assignment and Matching

We previously computed context selections (as solutions to the SelectionCost LP), which found groups of contours in each image that have similar shape, Ĉon = {Ĉon1, ..., ĈonnGroups}, under transformations T̂. Given these, we seek to compute L and Match. Some labels in 1, ..., nGroups may not be assigned to any contours, satisfying our simplicity criterion for grouping. Note that a contour Cj_1 need not be selected as context in a particular group a in order to have lj = a. Recall that with respect to the original cost function, we seek to optimize:

min_{Match, L} F(T̂, Ĉon, L, Match)

We phrase this label assignment problem as inference in a Markov network (MN). The MN encodes the joint distribution over the labels L as a product of potentials: P(L) = (1/Z) ∏j φ(lj) ∏j,k φ(lj, lk), where Z is a normalization constant. The binary potentials φ(lj, lk) encode the preference that overlapping contours Cj_1, Ck_1 have the same label:

φ(lj = a, lk = b) = 1 if a = b; 1 − τ if a ≠ b   (1)

where 0 ≤ τ ≤ 1 controls the penalty for having different labels. This is a simple smoothing potential to encourage continuity. Two contours overlap if they have at least one point in common. The unary potential φ(lj) encodes how well contour Cj_1 = [pk1, pk2, ..., pkn] can be matched in the second image with respect to the context {Ĉon1_a, Ĉon2_a}. The log-unary potential decomposes as the sum of matching costs of the individual points pki to their best match in image I2, with respect to the context {Ĉon1_a, Ĉon2_a}:

log φ(lj = a) ∝ −Σ_{i=1}^{n} [ min_{q ∈ IP_2} MatchCostInContext(pki, q, a) ]   (2)

where MatchCostInContext(p, q, a) = SCMatchCost(SC1^{Ta(p)}·Ĉon1_a, SC2^q·Ĉon2_a), and SC1^{Ta(p)} and SC2^q are, respectively, the shape context matrix computed for a shape context centered at Ta(p) using the contours in image 1 under transformation Ta, and the matrix for a shape context centered at q using the contours in image 2.
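The potentials above can be made concrete on a toy instance. This is an illustrative sketch, not the paper's implementation: the log-unary values below are invented numbers standing in for Eq. (2), and the exhaustive search replaces the graph-cut optimizer used in the paper.

```python
import numpy as np

tau = 0.3  # penalty 0 <= tau <= 1 for overlapping contours with different labels

def pairwise_potential(l_j, l_k, tau=tau):
    # phi(l_j, l_k) of Eq. (1): 1 if labels agree, 1 - tau otherwise.
    return 1.0 if l_j == l_k else 1.0 - tau

def log_joint(labels, log_unary, overlaps, tau=tau):
    # log P(L) up to -log Z: sum of log-unaries plus log pairwise terms
    # over overlapping contour pairs (j, k).
    lp = sum(log_unary[j][labels[j]] for j in range(len(labels)))
    lp += sum(np.log(pairwise_potential(labels[j], labels[k], tau))
              for j, k in overlaps)
    return lp

# Toy instance: 3 contours, 2 groups; contours 0 and 1 overlap.
log_unary = [{0: -1.0, 1: -2.0}, {0: -1.5, 1: -1.4}, {0: -3.0, 1: -0.5}]
overlaps = [(0, 1)]
best = max(
    ([a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)),
    key=lambda L: log_joint(L, log_unary, overlaps),
)
# The smoothing term flips contour 1 to agree with overlapping contour 0,
# even though its unary term slightly prefers label 1.
assert best == [0, 0, 1]
```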
We compute the exact MAP estimate in the MN using the α-β swap graph cut algorithm ([3]), which can maximize this type of energy. Instead of using all contours in image 1 as nodes in the MN, we only allow contours that were selected in at least one of the contexts Ĉon1_i; likewise, we only permit matches to points in image 2 that appear in a contour selected in at least one Ĉon2_j. This better allows us to deal with contours that appear in only one image and thus cannot be reliably grouped based on relative motion.

Figure 6: Baseline comparison (top) and additional results (bottom). Top: Columns 1, 2: original images with input contours, each colored. Columns 3, 4: grouping results for our method and the baseline; groups of contours share a single color. In stereo pairs, like colors indicate similar disparity. Bottom: Columns 1, 2: original images with input contours, each colored. Column 3: our grouping result. Columns 4, 5: matches across images indicated by like colors. Please view in color.

5.1 Baseline Comparison

As a baseline comparison, we attempted grouping using an MN that involved no selection information. The binary potential remained the same, while the unary potential φ(lj = a) was a function of the distance of each point of contour Cj_1 to its closest match in IP_2 under the transformation Ta:

log φ(lj = a) ∝ −Σ_{i=1}^{n} [ min_{q ∈ IP_2} min( ||Ta(pki) − q||²_{L2}, occlusionThresh² ) ]   (3)

The constant occlusionThresh serves as a threshold in case a contour point has no nearby match in IP_2 under the transformation Ta. Points with no match within occlusionThresh distance were marked as occluded for the hypothesis lj = a. If more than half the points in the final assignment l*j for a contour were occluded, we marked the entire contour as occluded, and it was not displayed.
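The baseline unary term of Eq. (3) can be sketched in a few lines of NumPy. This is a hedged illustration: the function name, the toy point sets, and the identity transform are all invented for the example; `T` is assumed to be a 3x3 similarity-transform matrix acting on 2-D points.

```python
import numpy as np

def baseline_log_unary(points1, points2, T, occlusionThresh):
    # log phi(l_j = a) for the baseline (Eq. 3): each transformed point of
    # contour C_j^1 is scored by its squared distance to the nearest point
    # in IP_2, clipped at occlusionThresh^2; far points are flagged occluded.
    warped = points1 @ T[:2, :2].T + T[:2, 2]
    d2 = ((warped[:, None, :] - points2[None, :, :]) ** 2).sum(-1).min(1)
    occluded = d2 > occlusionThresh ** 2
    return -np.minimum(d2, occlusionThresh ** 2).sum(), occluded

# Toy check: identity transform; the far-away point gets marked occluded.
T = np.eye(3)
p1 = np.array([[0.0, 0.0], [100.0, 100.0]])
p2 = np.array([[0.1, 0.0], [0.0, 0.1]])
cost, occ = baseline_log_unary(p1, p2, T, occlusionThresh=5.0)
assert occ.tolist() == [False, True]
```

The clipping is what makes the baseline forgiving in cluttered regions: once every point has some nearby edge, the unary term barely distinguishes group hypotheses, which is the failure mode discussed next.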
Since we omitted all selection information, all contours in the first image were included in the MN as nodes, and their contour points were allowed to match to any contour point in IP_2. We again optimized the MN energy with the α-β swap graph cut. Free parameters were tuned by hand to produce the best result possible.

6 Experiments

We tested our method and the baseline on stereo, motion and similar image pairs. Input contours in each image were extracted automatically using the method of [15]. SIFT matches were extracted from the images, keeping only confident matches as described in [10]; matches proposing similar transformations were pruned to a small set, typically 10-20. Because of the high quality of the inferred contexts, we could use large shape contexts (radius 90 pixels, in images of size 400 by 500), which made matching very robust. The shape contexts were augmented with edge orientation bins in addition to the standard radial and angular bins. Shape contexts were placed on a uniform grid atop the registered contours (via Ti), with a spacing of 50 pixels in the x and y dimensions. Image pairs were taken from the Caltech 101 dataset [4] and from a stereo rig with a 1m baseline mounted on a car from our lab (providing stereo and motion images). The running time of our unoptimized MATLAB implementation was several minutes per image pair. Figure 6, top block, shows the results of our method and the baseline method on stereo, motion and similar images. Our method provides superior groupings that better respect object boundaries. Groups for stereo image pairs are colored according to disparity. Due to the lack of large context, the baseline method is able to find a good match for a given contour point under almost any group hypothesis lj = a, since in cluttered regions there are always nearby matches.
However, by using a much larger, optimized context, our method exploits large-scale shape information and is better able to infer occlusion as well as layer assignment. We present additional results on different images in Figure 6, bottom block, where we also show the dense correspondences. Interesting groups found in our results include facades of buildings, people, and a car (top row).

7 Conclusion

We introduced the problem of grouping contours in an image using a related image (stereo, motion or similar) as an important step toward object recognition and scene understanding. Grouping depends on the ability to match contours across images to determine their relative motion. Selecting a good context for shape evaluation was key to robust simultaneous matching and grouping of contours across images. A baseline method similar to our proposed method, but without context, produced worse groupings on stereo, motion and similar images. Future work will include trying to learn 3D object models from stereo and motion images, and a probabilistic formulation of the matching framework. Introducing learning to improve the grouping result is also an area of significant interest; some shape configurations are more reliable for matching than others.

References

[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE PAMI, 2002.
[2] S. Birchfield and C. Tomasi. Depth discontinuities by pixel-to-pixel stereo. In ICCV, 1998.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[4] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 2006.
[5] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid. Groups of adjacent contour segments for object detection. PAMI, 2008.
[6] M. Gelautz and D. Markovic. Recognition of object contours from stereo images: an edge combination approach. 3DPVT, 2004.
[7] W.E.L. Grimson.
Why stereo vision is not always about 3d reconstruction. MIT AI Memo, Technical Report AIM-1435, 1993.
[8] V. Hedau, H. Arora, and N. Ahuja. Matching images under unstable segmentation. In CVPR, 2008.
[9] C. Liu, W. T. Freeman, and E. H. Adelson. Analysis of contour motions. In NIPS, 2006.
[10] D. Lowe. Distinctive image features from scale-invariant keypoints. In IJCV, 2003.
[11] D. G. Lowe and T. O. Binford. The recovery of three-dimensional structure from image curves. PAMI, 1985.
[12] D. Sherman and S. Peleg. Stereo by incremental matching of contours. PAMI, 1990.
[13] J. Y. A. Wang and E. H. Adelson. Layered representation for motion analysis. In CVPR, 1993.
[14] J. Wills, S. Agarwal, and S. Belongie. A feature-based approach for dense segmentation and estimation of large disparity motion. IJCV, 2006.
[15] Q. Zhu, G. Song, and J. Shi. Untangling cycles for contour grouping. In ICCV, 2007.
[16] Qihui Zhu, Liming Wang, Yang Wu, and Jianbo Shi. Contour context selection for object detection: A set-to-set contour matching approach. In ECCV, 2008.
Robust Regression and Lasso Huan Xu Department of Electrical and Computer Engineering McGill University Montreal, QC Canada xuhuan@cim.mcgill.ca Constantine Caramanis Department of Electrical and Computer Engineering The University of Texas at Austin Austin, Texas cmcaram@ece.utexas.edu Shie Mannor Department of Electrical and Computer Engineering McGill University Montreal, QC Canada shie.mannor@mcgill.ca Abstract We consider robust least-squares regression with feature-wise disturbance. We show that this formulation leads to tractable convex optimization problems, and we exhibit a particular uncertainty set for which the robust problem is equivalent to ℓ1 regularized regression (Lasso). This provides an interpretation of Lasso from a robust optimization perspective. We generalize this robust formulation to consider more general uncertainty sets, which all lead to tractable convex optimization problems. Therefore, we provide a new methodology for designing regression algorithms, which generalize known formulations. The advantage is that robustness to disturbance is a physical property that can be exploited: in addition to obtaining new formulations, we use it directly to show sparsity properties of Lasso, as well as to prove a general consistency result for robust regression problems, including Lasso, from a unified robustness perspective. 1 Introduction In this paper we consider linear regression problems with least-square error. The problem is to find a vector x so that the ℓ2 norm of the residual b −Ax is minimized, for a given matrix A ∈Rn×m and vector b ∈Rn. From a learning/regression perspective, each row of A can be regarded as a training sample, and the corresponding element of b as the target value of this observed sample. Each column of A corresponds to a feature, and the objective is to find a set of weights so that the weighted sum of the feature values approximates the target value. 
It is well known that minimizing the least squared error can lead to sensitive solutions [1, 2]. Many regularization methods have been proposed to decrease this sensitivity. Among them, Tikhonov regularization [3] and Lasso [4, 5] are two widely known and cited algorithms. These methods minimize a weighted sum of the residual norm and a certain regularization term, ∥x∥2 for Tikhonov regularization and ∥x∥1 for Lasso. In addition to providing regularity, Lasso is also known for the tendency to select sparse solutions. Recently this has attracted much attention for its ability to reconstruct sparse solutions when sampling occurs far below the Nyquist rate, and also for its ability to recover the sparsity pattern exactly with probability one, asymptotically as the number of observations increases (there is an extensive literature on this subject, and we refer the reader to [6, 7, 8, 9, 10] and references therein). In many of these approaches, the choice of regularization parameters often has no fundamental connection to an underlying noise model [2]. In [11], the authors propose an alternative approach to reducing sensitivity of linear regression, by considering a robust version of the regression problem: they minimize the worst-case residual for the observations under some unknown but bounded disturbances. They show that their robust least squares formulation is equivalent to ℓ2-regularized least squares, and they explore computational aspects of the problem. In that paper, and in most of the subsequent research in this area and in the more general area of Robust Optimization (see [12, 13] and references therein), the disturbance is taken to be either row-wise and uncorrelated [14], or given by bounding the Frobenius norm of the disturbance matrix [11].
In this paper we investigate the robust regression problem under more general uncertainty sets, focusing in particular on the case where the uncertainty set is defined by feature-wise constraints, and also the case where features are meaningfully correlated. This is of interest when values of features are obtained with some noisy pre-processing steps, and the magnitudes of such noises are known or bounded. We prove that all our formulations are computationally tractable. Unlike much of the previous literature, we provide a focus on structural properties of the robust solution. In addition to giving new formulations, and new properties of the solutions to these robust problems, we focus on the inherent importance of robustness, and its ability to prove from scratch important properties such as sparseness, and asymptotic consistency of Lasso in the statistical learning context. In particular, our main contributions in this paper are as follows.

• We formulate the robust regression problem with feature-wise independent disturbances, and show that this formulation is equivalent to a least-square problem with a weighted ℓ1 norm regularization term. Hence, we provide an interpretation for Lasso from a robustness perspective. This can be helpful in choosing the regularization parameter. We generalize the robust regression formulation to loss functions given by an arbitrary norm, and uncertainty sets that allow correlation between disturbances of different features.

• We investigate the sparsity properties of the robust regression problem with feature-wise independent disturbances, showing that such formulations encourage sparsity. We thus easily recover standard sparsity results for Lasso using a robustness argument. This also implies a fundamental connection between the feature-wise independence of the disturbance and sparsity.

• Next, we relate Lasso to kernel density estimation.
This allows us to re-prove consistency in a statistical learning setup, using the new robustness tools and formulation we introduce.

Notation. We use capital letters to represent matrices, and boldface letters to represent column vectors. For a vector z, we let zi denote the ith element. Throughout the paper, ai and rj⊤ denote the ith column and the jth row of the observation matrix A, respectively; aij is the ij element of A, hence it is the jth element of ri, and the ith element of aj. For a convex function f(·), ∂f(z) represents any of its sub-gradients evaluated at z.

2 Robust Regression with Feature-wise Disturbance

We show that our robust regression formulation recovers Lasso as a special case. The regression formulation we consider differs from the standard Lasso formulation, as we minimize the norm of the error, rather than the squared norm. It is known that these two coincide up to a change of the regularization coefficient. Yet our results amount to more than a representation or equivalence theorem. In addition to more flexible and potentially powerful robust formulations, we prove new results, and give new insight into known results. In Section 3, we show the robust formulation gives rise to new sparsity results. Some of our results there (e.g. Theorem 4) fundamentally depend on (and follow from) the robustness argument, which is not found elsewhere in the literature. Then in Section 4, we establish consistency of Lasso directly from the robustness properties of our formulation, thus explaining consistency from a more physically motivated and perhaps more general perspective.

2.1 Formulation

Robust linear regression considers the case where the observed matrix A is corrupted by some disturbance. We seek the optimal weight for the uncorrupted (yet unknown) sample matrix. We consider the following min-max formulation:

Robust Linear Regression:  min_{x∈Rm} { max_{∆A∈U} ∥b − (A + ∆A)x∥2 }.   (1)

Here, U is the set of admissible disturbances of the matrix A.
In this section, we consider the specific setup where the disturbance is feature-wise uncorrelated, and norm-bounded for each feature:

U ≜ { (δ1, · · · , δm) | ∥δi∥2 ≤ ci, i = 1, · · · , m },   (2)

for given ci ≥ 0. This formulation recovers the well-known Lasso:

Theorem 1. The robust regression problem (1) with the uncertainty set (2) is equivalent to the following ℓ1 regularized regression problem:

min_{x∈Rm} { ∥b − Ax∥2 + Σ_{i=1}^{m} ci|xi| }.   (3)

Proof. We defer the full details to [15], and give only an outline of the proof here. Showing that the robust regression is a lower bound for the regularized regression follows from the standard triangle inequality. Conversely, one can take the worst-case noise to be δ*i ≜ −ci sgn(x*i) u, where u ≜ (b − Ax*)/∥b − Ax*∥2 if Ax* ≠ b, and any vector with unit ℓ2 norm otherwise, from which the result follows after some algebra.

If we take ci = c and normalized ai for all i, Problem (3) is the well-known Lasso [4, 5].

2.2 Arbitrary norm and correlated disturbance

It is possible to generalize this result to the case where the ℓ2-norm is replaced by an arbitrary norm, and where the uncertainty is correlated from feature to feature. For space considerations, we refer to the full version ([15]), and simply state the main results here.

Theorem 2. Let ∥·∥a denote an arbitrary norm. Then the robust regression problem

min_{x∈Rm} { max_{∆A∈Ua} ∥b − (A + ∆A)x∥a };  Ua ≜ { (δ1, · · · , δm) | ∥δi∥a ≤ ci, i = 1, · · · , m },

is equivalent to the regularized regression problem min_{x∈Rm} { ∥b − Ax∥a + Σ_{i=1}^{m} ci|xi| }.

Using feature-wise uncorrelated disturbance may lead to overly conservative results. We relax this, allowing the disturbances of different features to be correlated. Consider the following uncertainty set:

U′ ≜ { (δ1, · · · , δm) | fj(∥δ1∥a, · · · , ∥δm∥a) ≤ 0; j = 1, · · · , k },

where the fj(·) are convex functions.
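The equivalence in Theorem 1 can be checked numerically: for any fixed x, plugging in the adversarial disturbance from the proof (δi = −ci sgn(xi) u, with u the normalized residual) makes the worst-case residual equal the regularized objective of Eq. (3). A small NumPy sketch, with randomly generated data standing in for (A, b):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 3
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)
x = rng.standard_normal(m)
c = np.array([0.5, 0.2, 0.8])   # per-feature disturbance budgets c_i

# Regularized objective of Eq. (3).
lasso_obj = np.linalg.norm(b - A @ x) + np.sum(c * np.abs(x))

# Worst-case disturbance from the proof of Theorem 1:
# delta_i = -c_i * sign(x_i) * u, with u the normalized residual.
r = b - A @ x
u = r / np.linalg.norm(r)
Delta = np.column_stack([-c[i] * np.sign(x[i]) * u for i in range(m)])
worst_case = np.linalg.norm(b - (A + Delta) @ x)

# b - (A + Delta)x = (1 + sum_i c_i|x_i| / ||r||) * r, so norms add exactly.
assert np.isclose(worst_case, lasso_obj)
```

Each column of `Delta` has ℓ2 norm exactly ci, so the disturbance is feasible for the uncertainty set (2); the triangle inequality shows no feasible disturbance can do better.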
Notice that both k and the fj can be arbitrary, hence this is a very general formulation that provides significant flexibility in designing uncertainty sets, and equivalently new regression algorithms. The following theorem converts this formulation to a convex and tractable optimization problem.

Theorem 3. Assume that the set Z ≜ {z ∈ Rm | fj(z) ≤ 0, j = 1, · · · , k; z ≥ 0} has non-empty relative interior. The robust regression problem

min_{x∈Rm} { max_{∆A∈U′} ∥b − (A + ∆A)x∥a }

is equivalent to the following regularized regression problem:

min_{λ∈Rk+, κ∈Rm+, x∈Rm} { ∥b − Ax∥a + v(λ, κ, x) };  where  v(λ, κ, x) ≜ max_{c∈Rm} [ (κ + |x|)⊤c − Σ_{j=1}^{k} λj fj(c) ].   (4)

Example 1. Suppose U′ = { (δ1, · · · , δm) | ∥ (∥δ1∥a, · · · , ∥δm∥a) ∥s ≤ l } for a symmetric norm ∥·∥s; then the resulting regularized regression problem is

min_{x∈Rm} { ∥b − Ax∥a + l∥x∥*s },

where ∥·∥*s is the dual norm of ∥·∥s.

The robust regression formulation (1) considers disturbances that are bounded in a set, while in practice the disturbance is often a random variable with unbounded support. In such cases, it is not possible to simply use an uncertainty set that includes all admissible disturbances, and we need to construct a meaningful U based on probabilistic information. In the full version [15] we consider computationally efficient ways to use chance constraints to construct uncertainty sets.

3 Sparsity

In this section, we investigate the sparsity properties of robust regression (1), and equivalently Lasso. Lasso’s ability to recover sparse solutions has been extensively discussed (cf [6, 7, 8, 9]), and takes one of two approaches. The first approach investigates the problem from a statistical perspective. That is, it assumes that the observations are generated by a (sparse) linear combination of the features, and investigates the asymptotic or probabilistic conditions required for Lasso to correctly recover the generative model.
The second approach treats the problem from an optimization perspective, and studies under what conditions a pair (A, b) defines a problem with sparse solutions (e.g., [16]). We follow the second approach and do not assume a generative model. Instead, we consider the conditions that lead to a feature receiving zero weight. In particular, we show that (i) as a direct result of the feature-wise independence of the uncertainty set, a slight change of a feature that was originally assigned zero weight still gets zero weight (Theorem 4); (ii) using Theorem 4, we show that “nearly” orthogonal features get zero weight (Corollary 1); and (iii) “nearly” linearly dependent features get zero weight (Theorem 5). Substantial research regarding sparsity properties of Lasso can be found in the literature (cf [6, 7, 8, 9, 17, 18, 19, 20] and many others). In particular, results similar to point (ii), relying on an incoherence property, have been established in, e.g., [16], and are used as standard tools in investigating sparsity of Lasso from a statistical perspective. However, a proof exploiting robustness and properties of the uncertainty is novel. Indeed, such a proof shows a fundamental connection between robustness and sparsity, and implies that robustifying w.r.t. a feature-wise independent uncertainty set might be a plausible way to achieve sparsity for other problems.

Theorem 4. Given (Ã, b), let x* be an optimal solution of the robust regression problem:

min_{x∈Rm} { max_{∆A∈U} ∥b − (Ã + ∆A)x∥2 }.

Let I ⊆ {1, · · · , m} be such that x*i = 0 for all i ∈ I, and let

Ũ ≜ { (δ1, · · · , δm) | ∥δj∥2 ≤ cj, j ∉ I; ∥δi∥2 ≤ ci + ℓi, i ∈ I }.

Then x* is an optimal solution of

min_{x∈Rm} { max_{∆A∈Ũ} ∥b − (A + ∆A)x∥2 }

for any A that satisfies ∥ai − ãi∥ ≤ ℓi for i ∈ I, and aj = ãj for j ∉ I.

Proof. Notice that for i ∈ I, x*i = 0, hence the ith column of both A and ∆A has no effect on the residual. We have

max_{∆A∈Ũ} ∥b − (A + ∆A)x*∥2 = max_{∆A∈U} ∥b − (A + ∆A)x*∥2 = max_{∆A∈U} ∥b − (Ã + ∆A)x*∥2.
For i ∈ I, ∥ai − ãi∥ ≤ ℓi, and aj = ãj for j ∉ I. Thus {Ã + ∆A | ∆A ∈ U} ⊆ {A + ∆A | ∆A ∈ Ũ}. Therefore, for any fixed x′, the following holds:

max_{∆A∈U} ∥b − (Ã + ∆A)x′∥2 ≤ max_{∆A∈Ũ} ∥b − (A + ∆A)x′∥2.

By the definition of x*,

max_{∆A∈U} ∥b − (Ã + ∆A)x*∥2 ≤ max_{∆A∈U} ∥b − (Ã + ∆A)x′∥2.

Therefore we have

max_{∆A∈Ũ} ∥b − (A + ∆A)x*∥2 ≤ max_{∆A∈Ũ} ∥b − (A + ∆A)x′∥2.

Since this holds for arbitrary x′, the theorem is established.

Theorem 4 is established using the robustness argument, and is a direct result of the feature-wise independence of the uncertainty set. It explains why Lasso tends to assign zero weight to non-relevant features. Consider a generative model¹ b = Σ_{i∈I} wi ai + ξ̃, where I ⊆ {1, · · · , m} and ξ̃ is a random variable, i.e., b is generated by features belonging to I. In this case, for a feature i′ ∉ I, Lasso would assign zero weight as long as there exists a perturbed value of this feature such that the optimal regression assigned it zero weight. This is also shown in the next corollary, in which we apply Theorem 4 to show that the problem has a sparse solution as long as an incoherence-type property is satisfied (this result is more in line with the traditional sparsity results).

Corollary 1. Suppose that ci = c for all i. If there exists I ⊂ {1, · · · , m} such that for all v ∈ span({ai, i ∈ I} ∪ {b}) with ∥v∥ = 1, we have v⊤aj ≤ c for all j ∉ I, then any optimal solution x* satisfies x*j = 0 for all j ∉ I.

Proof. For j ∉ I, let a=j denote the projection of aj onto the span of {ai, i ∈ I} ∪ {b}, and let a+j ≜ aj − a=j; thus ∥a=j∥ ≤ c. Let Â be such that âi = ai for i ∈ I, and âi = a+i for i ∉ I. Now let

Û ≜ { (δ1, · · · , δm) | ∥δi∥2 ≤ c, i ∈ I; ∥δj∥2 = 0, j ∉ I }.

Consider the robust regression problem min_x̂ { max_{∆A∈Û} ∥b − (Â + ∆A)x̂∥2 }, which is equivalent to min_x̂ { ∥b − Âx̂∥2 + Σ_{i∈I} c|x̂i| }. We now show that there exists an optimal solution x̂* such that x̂*j = 0 for all j ∉ I. This is because the âj are orthogonal to the span of {âi, i ∈ I} ∪ {b}.
Hence for any given x̂, changing x̂j to zero for all j ∉ I does not increase the minimized objective. Since ∥aj − âj∥ = ∥a=j∥ ≤ c for all j ∉ I (and recall that U = {(δ1, · · · , δm) | ∥δi∥2 ≤ c, ∀i}), applying Theorem 4 establishes the corollary.

The next corollary follows easily from Corollary 1.

Corollary 2. Suppose there exists I ⊆ {1, · · · , m} such that ∥ai∥ < ci for all i ∈ I. Then any optimal solution x* satisfies x*i = 0 for i ∈ I.

¹While we are not assuming generative models to establish the results, it is still interesting to see how these results can help in a generative-model setup.

The next theorem shows that sparsity is achieved when a set of features is “almost” linearly dependent. Again we refer to [15] for the proof.

Theorem 5. Given I ⊆ {1, · · · , m} such that there exists a non-zero vector (wi)_{i∈I} satisfying

∥ Σ_{i∈I} wi ai ∥2 ≤ min_{σi∈{−1,+1}} | Σ_{i∈I} σi ci wi |,

there exists an optimal solution x* such that x*i = 0 for some i ∈ I.

Notice that for linearly dependent features, there exists a non-zero (wi)_{i∈I} such that ∥Σ_{i∈I} wi ai∥2 = 0, which leads to the following corollary.

Corollary 3. Given I ⊆ {1, · · · , m}, let AI ≜ (ai)_{i∈I} and t ≜ rank(AI). There exists an optimal solution x* such that x*I ≜ (xi)⊤_{i∈I} has at most t non-zero coefficients.

Setting I = {1, · · · , m}, we immediately get the following corollary.

Corollary 4. If n < m, then there exists an optimal solution with no more than n non-zero coefficients.

4 Density Estimation and Consistency

In this section, we investigate the robust linear regression formulation from a statistical perspective and re-derive, using only robustness properties, that Lasso is asymptotically consistent. We note that our result applies to a considerably more general framework than Lasso.
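Before turning to consistency, the sparsity results above admit a quick numerical sanity check. The sketch below illustrates Corollary 2 on made-up data: when ∥ai∥ < ci, zeroing coordinate i can never increase the objective of Eq. (3) (triangle inequality), so some optimal solution has x*i = 0.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 8, 4
A = rng.standard_normal((n, m))
c = np.full(m, 0.3)                         # uniform budgets c_i = 0.3
A[:, 0] *= 0.1 / np.linalg.norm(A[:, 0])    # force ||a_0||_2 = 0.1 < c_0
b = rng.standard_normal(n)

def obj(x):
    # Objective of Eq. (3): ||b - Ax||_2 + sum_i c_i |x_i|.
    return np.linalg.norm(b - A @ x) + np.sum(c * np.abs(x))

# Zeroing a feature with ||a_i|| < c_i never increases the objective:
# the residual grows by at most ||a_0|| |x_0| while the penalty drops by c_0 |x_0|.
for _ in range(200):
    x = rng.standard_normal(m)
    x0 = x.copy()
    x0[0] = 0.0
    assert obj(x0) <= obj(x) + 1e-12
```

This is exactly the mechanism behind Corollary 2: the worst-case "value" a weak feature can contribute to the fit is dominated by the robustness penalty it incurs.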
In the full version ([15]) we use some intermediate results from the consistency proof to show that regularization can be identified with the so-called max-min expected utility (MMEU) framework, thus tying regularization to a fundamental tenet of decision theory. We restrict our discussion to the case where the magnitude of the allowable uncertainty for all features equals c (i.e., the standard Lasso), and establish the statistical consistency of Lasso from a distributional robustness argument. Generalization to the non-uniform case is straightforward. Throughout, we use cn to represent c when there are n samples (we take cn to zero). Recall the standard generative model in statistical learning: let P be a probability measure with bounded support that generates i.i.d. samples (bi, ri), and has a density f*(·). Denote the set of the first n samples by Sn. Define

x(cn, Sn) ≜ argmin_x { √( (1/n) Σ_{i=1}^{n} (bi − ri⊤x)² ) + cn∥x∥1 } = argmin_x (1/√n) { √( Σ_{i=1}^{n} (bi − ri⊤x)² ) + √n cn∥x∥1 };

x(P) ≜ argmin_x √( ∫_{b,r} (b − r⊤x)² dP(b, r) ).

In words, x(cn, Sn) is the solution to Lasso with the tradeoff parameter set to √n cn, and x(P) is the “true” optimal solution. We have the following consistency result. The theorem itself is a well-known result; however, the proof technique is novel. This technique is of interest because the standard techniques for establishing consistency in statistical learning, including VC dimension and algorithmic stability, often work only for a limited range of algorithms: e.g., SVMs are known to have infinite VC dimension, and we show in the full version ([15]) that Lasso is not stable. In contrast, a much wider range of algorithms have robustness interpretations, allowing a unified approach to prove their consistency.

Theorem 6. Let {cn} be such that cn ↓ 0 and lim_{n→∞} n(cn)^{m+1} = ∞. Suppose there exists a constant H such that ∥x(cn, Sn)∥2 ≤ H almost surely. Then, almost surely,

lim_{n→∞} √( ∫_{b,r} (b − r⊤x(cn, Sn))² dP(b, r) ) = √( ∫_{b,r} (b − r⊤x(P))² dP(b, r) ).
The full proof and the results we develop along the way are deferred to [15], but we provide the main ideas and an outline here. The key to the proof is establishing a connection between robustness and kernel density estimation.

Step 1: For a given x, we show that the robust regression loss over the training data equals the worst-case expected generalization error. To show this we establish a more general result:

Proposition 1. Given a function h : Rm+1 → R and Borel sets Z1, · · · , Zn ⊆ Rm+1, let

Pn ≜ { µ ∈ P | ∀S ⊆ {1, · · · , n} : µ( ∪_{i∈S} Zi ) ≥ |S|/n }.

The following holds:

(1/n) Σ_{i=1}^{n} sup_{(ri,bi)∈Zi} h(ri, bi) = sup_{µ∈Pn} ∫_{Rm+1} h(r, b) dµ(r, b).

Step 2: Next we show that robust regression has a form like that on the left-hand side above. Also, the set of distributions we supremize over on the right-hand side includes a kernel density estimator for the true (unknown) distribution. Indeed, consider the following kernel estimator: given samples (bi, ri), i = 1, · · · , n,

hn(b, r) ≜ (n c^{m+1})^{-1} Σ_{i=1}^{n} K( (b − bi)/c, (r − ri)/c ),  where  K(x) ≜ I_{[−1,+1]^{m+1}}(x) / 2^{m+1}.   (5)

Observe that the estimated distribution given by Equation (5) belongs to the set of distributions

Pn(A, ∆, b, c) ≜ { µ ∈ P | Zi = [bi − c, bi + c] × ∏_{j=1}^{m} [aij − δij, aij + δij]; ∀S ⊆ {1, · · · , n} : µ( ∪_{i∈S} Zi ) ≥ |S|/n },

and hence belongs to P̂(n) ≜ ∪_{∆ : ∀j, Σi δij² = n cj²} Pn(A, ∆, b, c), which is precisely the set of distributions used in the representation from Proposition 1.

Step 3: Combining the last two steps, and using the fact that ∫_{b,r} |hn(b, r) − f*(b, r)| d(b, r) goes to zero almost surely when cn ↓ 0 and n cn^{m+1} ↑ ∞, since hn(·) is a kernel density estimate of f*(·) (see e.g. Theorem 3.1 of [21]), we prove the consistency of robust regression.

We can remove the assumption that ∥x(cn, Sn)∥2 ≤ H; as in Theorem 6, the proof technique rather than the result itself is of interest. We postpone the proof to [15].

Theorem 7. Let {cn} converge to zero sufficiently slowly.
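The box-kernel estimator of Equation (5) is simple enough to implement directly. A minimal NumPy sketch (the function name and the 1-D sanity check are illustrative, not from the paper):

```python
import numpy as np

def box_kde(samples, query, c):
    # Kernel density estimate of Eq. (5): product box kernel
    # K(x) = 1[x in [-1,1]^d] / 2^d with bandwidth c, where d = m + 1
    # (the stacked (b, r) dimension). Density at `query` is
    # (# samples within the box of half-width c) / (n * (2c)^d).
    n, d = samples.shape
    inside = np.all(np.abs((query - samples) / c) <= 1.0, axis=1)
    return inside.sum() / (n * (2.0 * c) ** d)

# Sanity check in 1-D: a single sample spreads mass 1 uniformly over an
# interval of width 2c, so the density at the sample is 1/(2c) = 1 for c = 0.5.
samples = np.array([[0.0]])
assert np.isclose(box_kde(samples, np.array([0.0]), c=0.5), 1.0)
```

The connection exploited in Step 2 is that this estimator places exactly mass 1/n in a box around each sample, so it satisfies the set-mass constraints defining Pn(A, ∆, b, c).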
Then, almost surely,

lim_{n→∞} √( ∫_{b,r} (b − r⊤x(cn, Sn))² dP(b, r) ) = √( ∫_{b,r} (b − r⊤x(P))² dP(b, r) ).

5 Conclusion

In this paper, we consider robust regression with a least-square-error loss, and extend the results of [11] (i.e., that Tikhonov regularization is equivalent to a robust formulation for a Frobenius norm-bounded disturbance set) to a broader range of disturbance sets and hence regularization schemes. A special case of our formulation recovers the well-known Lasso algorithm, and we obtain an interpretation of Lasso from a robustness perspective. We consider more general robust regression formulations, allowing correlation between the feature-wise noise, and we show that this too leads to tractable convex optimization problems. We exploit the new robustness formulation to give direct proofs of sparseness and consistency for Lasso. As our results follow from robustness properties, this suggests that they may be far more general than Lasso, and in particular that consistency and sparseness may be properties one can obtain more generally from robustified algorithms.

References

[1] L. Elden. Perturbation theory for the least-square problem with linear equality constraints. BIT, 24:472–476, 1985.
[2] G. Golub and C. Van Loan. Matrix Computation. Johns Hopkins University Press, Baltimore, 1989.
[3] A. Tikhonov and V. Arsenin. Solution for Ill-Posed Problems. Wiley, New York, 1977.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[5] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[6] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[7] A. Feuer and A. Nemirovski. On sparse representation in pairs of bases. IEEE Transactions on Information Theory, 49(6):1579–1581, 2003.
[8] E. Candès, J. Romberg, and T. Tao.
Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006. [9] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004. [10] M. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming. Technical Report Available from: http://www.stat.berkeley.edu/tech-reports/709.pdf, Department of Statistics, UC Berkeley, 2006. [11] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM Journal on Matrix Analysis and Applications, 18:1035–1064, 1997. [12] A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1–13, August 1999. [13] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52(1):35–53, January 2004. [14] P. Shivaswamy, C. Bhattacharyya, and A. Smola. Second order cone programming approaches for handling missing and uncertain data. Journal of Machine Learning Research, 7:1283–1314, July 2006. [15] H. Xu, C. Caramanis, and S. Mannor. Robust regression and Lasso. Submitted, available from http://arxiv.org/abs/0811.1790v1, 2008. [16] J. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Transactions on Information Theory, 51(3):1030–1051, 2006. [17] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1445–1480, 1998. [18] R. R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best-basis selection. IEEE Transactions on Information Theory, 38(2):713–718, 1992. [19] S. Mallat and Z. Zhang. Matching Pursuits with time-frequence dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993. [20] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006. [21] L. 
Devroye and L. Gy¨orfi. Nonparametric Density Estimation: the l1 View. John Wiley & Sons, 1985. 8
Hierarchical Fisher Kernels for Longitudinal Data

Zhengdong Lu, Todd K. Leen
Dept. of Computer Science & Engineering, Oregon Health & Science University, Beaverton, OR 97006
luz@cs.utexas.edu, tleen@csee.ogi.edu

Jeffrey Kaye
Layton Aging & Alzheimer's Disease Center, Oregon Health & Science University, Portland, OR 97201
kaye@ohsu.edu

Abstract

We develop new techniques for time series classification based on hierarchical Bayesian generative models (called mixed-effect models) and the Fisher kernel derived from them. A key advantage of the new formulation is that one can compute the Fisher information matrix despite varying sequence lengths and varying sampling intervals. This avoids the commonly-used ad hoc replacement of the Fisher information matrix with the identity, which destroys the geometric invariance of the kernel. Our construction retains the geometric invariance, resulting in a kernel that is properly invariant under change of coordinates in the model parameter space. Experiments on detecting cognitive decline show that classifiers based on the proposed kernel out-perform those based on generative models and other feature extraction routines, and on Fisher kernels that use the identity in place of the Fisher information.

1 Introduction

Time series classification arises in diverse applications. This paper develops new techniques based on hierarchical Bayesian generative models and the Fisher kernel derived from them. A key advantage of the new formulation is that, despite varying sequence lengths and sampling times, one can compute the Fisher information matrix. This avoids its common ad hoc replacement with the identity matrix. The latter strategy, common in the biological sequence literature [4], destroys the geometrical invariance of the kernel. Our construction retains the proper geometric structure, resulting in a kernel that is properly invariant under change of coordinates in the model parameter space.
This work was motivated by the need to classify clinical longitudinal data on human motor and psychometric test performance. Clinical studies show that, at the population level, progressive slowing of walking and of the rate at which a subject can tap their fingers is predictive of cognitive decline years before its manifestation [1]. Similarly, performance on psychometric tests such as delayed recall of a story or word lists (tests not used in diagnosis) is predictive of cognitive decline [8]. An early predictor of cognitive decline for individual patients based on such longitudinal data would improve medical care and planning for assistance.

Our new Fisher kernels use mixed-effects models [6] as the generative process. These are hierarchical models that describe the population (consisting of many individuals) as a whole, and variations between individuals in the population. The population model parameters (called fixed effects), the covariance of the between-individual variability (the random effects), and the additive noise variance are fit by maximum likelihood. The overall population model together with the covariance of the random effects comprise a set of parameters for the prior on an individual subject model, so the fitting scheme is a hierarchical empirical Bayesian procedure.

Data Description. The data for this study was drawn from the Oregon Brain Aging Study (OBAS) [2], a longitudinal study spanning up to fifteen years with roughly yearly assessment of subjects. For our work, we grouped the subjects into two classes: those who remain cognitively healthy through the course of the study (denoted normal), and those who progress to mild cognitive impairment (MCI) or further to dementia (denoted impaired). Since we are interested in prediction, we retain only data taken prior to diagnosis of impairment. We use 97 subjects from the normal group and 46 from the group that becomes impaired.
Motor task data included the time (denoted seconds) and the number of steps (denoted steps) to walk 9 meters, and the number of times the subject can tap their forefinger, for both the dominant (tappingD) and nondominant hand (tappingN), in 10 seconds. Psychometric test data include delayed-recall, which measures the number of words from a list of 10 that the subject can recall one minute after hearing the list, and logical memory II, in which the subject is graded on recall of a story told 15-20 minutes earlier.

2 Mixed-effect Models

2.1 Mixed-effect Regression Models

In this paper, we confine attention to parametric regression. Suppose there are $k$ individuals (indexed by $i = 1, \dots, k$) contributing data to the sample, and we have observations $\{t^i_n, y^i_n\}$, $n = 1, \dots, N^i$, as a function of time $t$ for individual $i$. The data are modeled as $y^i_n = f(t^i_n; \gamma^i) + \epsilon^i_n$, where $\gamma^i$ are the regression parameters and $\epsilon^i_n$ is zero-mean white Gaussian noise with (unknown) variance $\sigma^2$. The superscript on the model parameters $\gamma^i$ indicates that the regression parameters are different for each individual contributing to the population. Since the model parameters vary between individuals, it is natural to consider them generated by the sum of a fixed and a random piece: $\gamma^i = \alpha + \beta^i$, where $\beta^i$ (called the random effect) is assumed distributed $N(0, D)$ with unknown covariance $D$. The expected parameter vector $\alpha$, called the fixed effect, determines the model for the population as a whole, and the random effect $\beta^i$ accounts for the differences between individuals. This intuition is most precise for the case in which the model is linear in the parameters,
$$f(t; \gamma) = \gamma^T \Phi(t) = \alpha^T \Phi(t) + \beta^T \Phi(t), \tag{1}$$
where $\Phi(t) = [\varphi_1(t), \varphi_2(t), \dots, \varphi_d(t)]^T$ denotes a vector of basis functions.¹ We use $M = \{\alpha, D, \sigma\}$ to denote the mixed-effect model parameters. The feature values, observation times, and observation noise are $y^i \equiv [y^i_1, \dots, y^i_{N^i}]^T$, $t^i \equiv [t^i_1, \dots, t^i_{N^i}]^T$, $\epsilon^i \equiv [\epsilon^i_1, \dots, \epsilon^i_{N^i}]^T$.
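The generative story above is easy to make concrete. The following sketch draws one individual's trajectory from the linear mixed-effect model with basis $\Phi(t) = [1, t]$; the parameter values are made up for illustration, not values fitted in the paper.

```python
import numpy as np

def sample_individual(times, alpha, D, sigma, rng):
    """Draw one individual's trajectory from the linear mixed-effect model:
    y_n = gamma^T Phi(t_n) + eps_n,  gamma = alpha + beta,  beta ~ N(0, D)."""
    Phi = np.column_stack([np.ones_like(times), times])      # basis [1, t]
    beta = rng.multivariate_normal(np.zeros(len(alpha)), D)  # random effect
    gamma = alpha + beta                                     # this individual's parameters
    return Phi @ gamma + rng.normal(0.0, sigma, size=len(times))

# Hypothetical population parameters (not the paper's fitted values).
rng = np.random.default_rng(0)
alpha = np.array([2.0, 0.05])          # fixed effect: intercept and slope
D = np.diag([0.1, 0.001])              # covariance of the random effect
times = np.array([70.0, 72.5, 75.0])   # irregular observation ages
y = sample_individual(times, alpha, D, 0.1, rng)
```

Averaging many such draws recovers the population model $\alpha^T \Phi(t)$, while the spread around it reflects the random-effect covariance $D$ plus observation noise.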
2.2 Maximum Likelihood Fitting

Model fitting uses the entire collection of data $\{t^i, y^i\}$, $i = 1, \dots, k$, to determine the parameters $M = \{\alpha, D, \sigma\}$ by maximum likelihood. The likelihood of the data $\{t^i, y^i\}$ given $M$ is
$$p(y^i; t^i, M) = \int p(y^i \mid \beta^i; t^i, \sigma)\, p(\beta^i \mid M)\, d\beta^i \tag{2}$$
$$= (2\pi)^{-N^i/2} |\Sigma^i|^{-1/2} \exp\Big(-\tfrac{1}{2}\big(y^i - \alpha^T \Phi(t^i)\big)^T (\Sigma^i)^{-1} \big(y^i - \alpha^T \Phi(t^i)\big)\Big), \tag{3}$$
where
$$\Sigma^i = \Phi(t^i)\, D\, \Phi(t^i)^T + \sigma^2 I, \quad \text{and} \quad \Phi(t^i) = [\Phi(t^i_1), \Phi(t^i_2), \dots, \Phi(t^i_{N^i})]^T.$$

¹ More generally, the fixed and random effects can be associated with different basis functions.

[Figure 1: The fit mixed-effect models for two tests; panels: Seconds (normal), Seconds (impaired), logical memory II (normal), logical memory II (impaired), each plotting log(seconds) or number of words against age. In each panel, the red line shows the fixed effect $\alpha^T \Phi(t)$; the two green lines show $\alpha^T \Phi(t) \pm \sqrt{\Phi^T(t) D \Phi(t)}$, i.e., the population model plus/minus the standard deviation of the deviation due to the uncertainty in $\beta$; the black dashed line is the standard deviation of the deviation when the observation noise is also included.]

The data likelihood for $Y = \{y^1, y^2, \dots, y^k\}$ with $T = \{t^1, t^2, \dots, t^k\}$ is then $p(Y; T, M) = \prod_{i=1}^{k} p(y^i \mid t^i; M)$. The maximum likelihood values of $\{\alpha, D, \sigma\}$ are found using the Expectation-Maximization algorithm [6] with $\{\beta^1, \beta^2, \dots, \beta^k\}$ treated as the latent variables:
$$\text{E-step: } Q(M, M_g) = E_{\{\beta^i\}}\big(\log p(Y, \{\beta^i\}; T, M) \mid Y; T, M_g\big), \tag{4}$$
$$\text{M-step: } M = \arg\max_M Q(M, M_g), \tag{5}$$
where $M_g$ stands for the model parameters estimated in the previous step, and the expectation in the E-step is with respect to the posterior distribution of $\{\beta^i\}$ when $Y$ is known and the model parameter is $M_g$. For the linear mixed-effect model in Equation (1), the M-step can be given in closed form. The details of the updating equations are given by Laird et al. [6].

[Figure 2: The graphical model of the mixture of mixed-effect models.]
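Integrating out the random effect leaves a Gaussian marginal with covariance $\Sigma^i = \Phi(t^i) D \Phi(t^i)^T + \sigma^2 I$, so the per-individual log likelihood needed inside the EM loop has a closed form. A minimal standalone sketch for the [1, t] basis (not the paper's EM implementation):

```python
import numpy as np

def log_marginal_likelihood(t, y, alpha, D, sigma):
    """log p(y; t, M) for a linear mixed-effect model with basis [1, t]:
    after integrating out the random effect, y ~ N(Phi(t) alpha, Sigma)
    with Sigma = Phi(t) D Phi(t)^T + sigma^2 I."""
    Phi = np.column_stack([np.ones_like(t), t])          # N_i x d design matrix
    Sigma = Phi @ D @ Phi.T + sigma**2 * np.eye(len(t))  # marginal covariance
    resid = y - Phi @ alpha                              # deviation from population model
    _, logdet = np.linalg.slogdet(Sigma)
    quad = resid @ np.linalg.solve(Sigma, resid)
    return -0.5 * (len(t) * np.log(2 * np.pi) + logdet + quad)
```

As a sanity check, setting $D = 0$ and $\sigma = 1$ reduces this to the log density of i.i.d. unit-variance Gaussian residuals.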
We use the linear mixed-effect model with polynomial basis functions $\Phi(t) = [1, t]^T$. We trained separate mixed-effect models for each of the six measurements. For the four motor behavior measurements, we use the logarithm of the data to reduce the skew of the residuals. Figure 1 shows the fit models for seconds and logical memory II, as representatives of the six measurements. The plots show the fixed effect regression $\alpha^T \Phi(t)$ (red curve) and the standard deviations arising from the random effects (green curves) and the measurement noise (dashed black curve; see caption). The data are the blue spaghetti plots. The plots confirm that subjects who become impaired deteriorate faster than those who remain healthy.

With multiple classes (or component subpopulations), it is natural to use a mixture of mixed-effect models. We have two components: one fit on the normal group (denoted $M_0$) and one fit on the impaired group (denoted $M_1$), with $M_m = \{\alpha_m, D_m, \sigma_m\}$, $m = 0, 1$. Here, we use $\tilde{M} = \{\pi_0, M_0, \pi_1, M_1\}$ to denote the parameters of this mixture, with $\pi_0$ and $\pi_1$ being the mixing proportions (prior) estimated from the training data. The overall generative process for any individual $(t^i, y^i)$ is summarized in Figure 2. Here $z^i \in \{0, 1\}$ is the latent variable indicating which model component is used to generate $y^i$.

3 Hierarchical Fisher Kernel

3.1 Fisher Kernel Background

The Fisher kernel [4] provides a way to extract discriminative features from a generative model. For any $\theta$-parameterized model $p(x; \theta)$, the Fisher kernel between $x^i$ and $x^j$ is defined as
$$K(x^i, x^j) = \big(\nabla_\theta \log p(x^i; \theta)\big)^T I^{-1} \nabla_\theta \log p(x^j; \theta), \tag{6}$$
where $I$ is the Fisher information matrix with $(n, m)$ entry
$$I_{n,m} = \int_x \frac{\partial \log p(x; \theta)}{\partial \theta_n} \frac{\partial \log p(x; \theta)}{\partial \theta_m}\, p(x; \theta)\, dx. \tag{7}$$
The kernel entry $K(x^i, x^j)$ can be viewed as the inner product of the natural gradients $I^{-1} \nabla_\theta \log p(x; \theta)$ at $x^i$ and $x^j$ with metric $I$, and is invariant to re-parametrization of $\theta$.
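For intuition about the role of the $I^{-1}$ factor in Equation (6), here is the Fisher kernel worked out for the simplest possible model, a Gaussian $N(\mu, \sigma^2)$ with $\theta = \mu$; this is an illustration only, not the paper's mixed-effect computation.

```python
import numpy as np

def fisher_kernel_gaussian_mean(x, mu, sigma):
    """Fisher kernel (Eq. 6) for N(mu, sigma^2) with theta = mu.
    Score: d/dmu log p(x) = (x - mu) / sigma^2;  Fisher info: I = 1 / sigma^2.
    The I^{-1} factor is what makes the kernel invariant to reparameterizing mu."""
    score = (np.asarray(x, dtype=float) - mu) / sigma**2  # one score per point
    I_inv = sigma**2                                      # inverse Fisher information
    return np.outer(score, score) * I_inv                 # K[i, j]

K = fisher_kernel_gaussian_mean(np.array([0.0, 1.0, 2.0]), mu=1.0, sigma=2.0)
```

Reparameterizing to $\theta' = 2\mu$ halves every score and quarters $I$, so $I^{-1}$ quadruples and the kernel values are unchanged; dropping the $I^{-1}$ factor (using the identity instead) breaks exactly this invariance.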
Jaakkola and Haussler [4] prove that a linear classifier based on the Fisher kernel performs at least as well as the generative model.

3.2 Retaining the Fisher Information Matrix

In the bioinformatics literature [3], and for longitudinal data such as ours, $p(x^i; \theta)$ is different for each individual owing to different sequence lengths and (for longitudinal data) different sampling times $t^i$. The integral in Equation (7) must therefore include the distribution of sequence lengths and observation times. Where only sequence lengths differ, an empirical average can be used. However, where observation times are non-uniform and vary considerably between individuals (as is the case here), there is insufficient data to form an estimate by empirical averaging. The usual response to the difficulty is to replace the Fisher information with the identity matrix [4]. This spoils the geometric structure, in particular the invariance of the kernel $K(x^i, x^j)$ under change of coordinates in the model parameter space (model re-parameterization). This is a significant flaw: the coordinate system used to describe the model is immaterial and should not influence the value of $K(x^i, x^j)$. For probabilistic kernel regression, the choice of metric is immaterial in the limit of large training sets [4]. However, for our application, which uses a support vector machine (SVM), we found the difference cannot be neglected. In our case, replacing the Fisher information matrix with the identity matrix is grossly unsuitable. For the mixed-effect model with polynomial basis functions, the Fisher score components associated with higher-order terms (such as slope and curvature) are far larger than the entries associated with lower-order terms (such as the intercept). Without the proper normalization provided by the Fisher information matrix, the kernel will be dominated by the higher-order entries.²
A principled extension of the Fisher kernel provided by our hierarchical model allows proper calculation of the Fisher information matrix.

3.3 Hierarchical Fisher Kernel

Our kernel design is based on the generative hierarchy of the mixture of mixed-effect models in Figure 2. We notice that the individual-specific information $t^i$ enters this generative process only at the last step, while the "latent" variables $\gamma^i$ and $z^i$ are drawn from the Gaussian mixture model (GMM) $\tilde{\Theta} = \{\pi_0, \alpha_0, D_0, \pi_1, \alpha_1, D_1\}$, with $p(z^i, \gamma^i; \tilde{\Theta}) = \pi_{z^i}\, p(\gamma^i; \alpha_{z^i}, D_{z^i})$. We can thus build a standard Fisher kernel for the latent variables and use it to induce a kernel on the observed data. Denoting the latent variables by $v^i$, the Fisher kernel between $v^i$ and $v^j$ is
$$K(v^i, v^j) = \big(\nabla_{\tilde{\Theta}} \log p(v^i; \tilde{\Theta})\big)^T (I^v)^{-1} \nabla_{\tilde{\Theta}} \log p(v^j; \tilde{\Theta}),$$
where the Fisher score $\nabla_{\tilde{\Theta}} \log p(v^i; \tilde{\Theta})$ is the column vector
$$\nabla_{\tilde{\Theta}} \log p(v^i; \tilde{\Theta}) = \Big[\frac{\partial \log p}{\partial \pi_0}; \frac{\partial \log p}{\partial \alpha_0}; \frac{\partial \log p}{\partial D_0}; \frac{\partial \log p}{\partial \pi_1}; \frac{\partial \log p}{\partial \alpha_1}; \frac{\partial \log p}{\partial D_1}\Big]^T,$$
and $I^v$ is the well-defined Fisher information matrix for $v$:
$$I^v_{n,m} = \int_v \frac{\partial \log p(v; \tilde{\Theta})}{\partial \tilde{\Theta}_n} \frac{\partial \log p(v; \tilde{\Theta})}{\partial \tilde{\Theta}_m}\, p(v \mid \tilde{\Theta})\, dv. \tag{8}$$
The kernel for $y^i$ and $y^j$ is the expectation of $K(v^i, v^j)$ given the observations $y^i$ and $y^j$:
$$K(y^i, y^j) = E_{v^i, v^j}\big[K(v^i, v^j) \mid y^i, y^j; t^i, t^j, \tilde{M}\big] = \iint K(v^i, v^j)\, p(v^i \mid y^i; t^i, \tilde{M})\, p(v^j \mid y^j; t^j, \tilde{M})\, dv^i\, dv^j.$$
With different choices of latent variable $v$, we have the three kernel design strategies described in the following subsections. This extension of the Fisher kernel, named the hierarchical Fisher kernel (HFK), enables us to deal with time series with irregular sampling and different sequence lengths. To our knowledge it has not been reported elsewhere in the literature.

² Our experiments on the OBAS data show that replacing the Fisher information with the identity compromises classifier performance.

Design A: $v^i = \gamma^i$. This kernel design marginalizes out the higher-level variable $\{z^i\}$ and constructs a Fisher kernel between the $\{\gamma^i\}$.
This generative process is illustrated in Figure 3 (left panel), which is the same graphical model as in Figure 2 with the latent variable $z^i$ marginalized out.³ The Fisher kernel for $\gamma$ is
$$K(\gamma^i, \gamma^j) = \big(\nabla_{\tilde{\Theta}} \log p(\gamma^i \mid \tilde{\Theta})\big)^T (I^\gamma)^{-1} \nabla_{\tilde{\Theta}} \log p(\gamma^j \mid \tilde{\Theta}). \tag{9}$$
The kernel between $y^i$ and $y^j$ is the expectation of $K(\gamma^i, \gamma^j)$:
$$K(y^i, y^j) = E_{\gamma^i, \gamma^j}\big(K(\gamma^i, \gamma^j) \mid y^i, y^j; t^i, t^j, \tilde{M}\big) \tag{10}$$
$$= \Big(\int \nabla_{\tilde{\Theta}} \log p(\gamma^i \mid \tilde{\Theta})\, p(\gamma^i \mid y^i; t^i, \tilde{M})\, d\gamma^i\Big)^T (I^\gamma)^{-1} \int \nabla_{\tilde{\Theta}} \log p(\gamma^j \mid \tilde{\Theta})\, p(\gamma^j \mid y^j; t^j, \tilde{M})\, d\gamma^j. \tag{11}$$
The computational drawback is that the integral $\int \nabla_{\tilde{\Theta}} \log p(\gamma^j \mid \tilde{\Theta})\, p(\gamma^j \mid y^j; t^j, \tilde{M})\, d\gamma^j$ and $I^\gamma$ do not have analytical solutions. In our experiments, we estimated them with Monte Carlo sampling.

Design B: $v^i = (z^i, \gamma^i)$. This design strategy takes both $\gamma^i$ and $z^i$ as the joint latent variable and builds a Fisher kernel for them. The generative process, summarized in Figure 3 (middle panel), gives the probability of the latent variables as $p(z^i, \gamma^i; \tilde{\Theta}) = \pi_{z^i}\, p(\gamma^i; \alpha_{z^i}, D_{z^i})$. The Fisher kernel for the joint variable $(z^i, \gamma^i)$ is
$$K\big((z^i, \gamma^i), (z^j, \gamma^j)\big) = \big(\nabla_{\tilde{\Theta}} \log p(z^i, \gamma^i; \tilde{\Theta})\big)^T (I^{z,\gamma})^{-1} \nabla_{\tilde{\Theta}} \log p(z^j, \gamma^j; \tilde{\Theta}), \tag{12}$$
where $I^{z,\gamma}$ is the Fisher information matrix associated with the distribution $p(z, \gamma; \tilde{\Theta})$. It can be shown that
$$K\big((z^i, \gamma^i), (z^j, \gamma^j)\big) = \frac{1}{\pi_{z^i}}\, \delta(z^i, z^j)\big(1 + K_{z^i}(\gamma^i, \gamma^j)\big),$$
where $K_m(\gamma^i, \gamma^j)$ is the Fisher kernel for $\gamma$ associated with component $m$ ($= 0, 1$):
$$K_m(\gamma^i, \gamma^j) = \big(\nabla_{\Theta_m} \log p(\gamma^i; \alpha_m, D_m)\big)^T I_m^{-1} \nabla_{\Theta_m} \log p(\gamma^j; \alpha_m, D_m). \tag{13}$$
The kernel for $y^i$ and $y^j$ is defined similarly to Design A:
$$K(y^i, y^j) = E_{z^i, \gamma^i, z^j, \gamma^j}\big(K((z^i, \gamma^i), (z^j, \gamma^j)) \mid y^i, y^j; t^i, t^j, \tilde{M}\big), \tag{14}$$
where the integral can be evaluated analytically.

³ Strictly speaking, we cannot sum out $z^i$ at this step since the group membership is used later in generating the observation noise. However, this is a reasonable approximation since the noise variances of $M_0$ and $M_1$ are similar.

Design C: $\tilde{M} = M_m$, $m = 0, 1$.

[Figure 3: The graphical models of the mixture of mixed-effect models for Design A, Design B, and Design C.]
This design uses one mixed-effect component instead of the mixture as the generative model, as illustrated in Figure 3 (right panel). Although a single $M_m$ is not a satisfying generative model for the whole population, the resulting kernel is still useful for classification, as follows. For either model, $m = 0, 1$, the Fisher score for the $i$-th individual, $\nabla_{\Theta_m} \log p(\gamma^i; \Theta_m)$, describes how the probability $p(\gamma^i; \Theta_m)$ responds to a change of the parameters $\Theta_m$. This is a discriminative feature vector, since the likelihoods of $\gamma^i$ for individuals from different groups are likely to respond differently to a change of the parameters $\Theta_m$. The kernel between $\gamma^i$ and $\gamma^j$ is $K_m(\gamma^i, \gamma^j)$ as defined in Equation (13), and the kernel for $y^i$ and $y^j$ is then
$$K(y^i, y^j) = E_{\gamma^i, \gamma^j}\big(K_m(\gamma^i, \gamma^j) \mid y^i, y^j; t^i, t^j, M_m\big). \tag{15}$$
Our experiments show that the kernel based on the impaired group is significantly better than the others; we therefore use this kernel as the representative of Design C. It is easy to see that this kernel is a special case of Design A or Design B when $\pi_0 = 1$ and $\pi_1 = 0$.

3.4 Related Models

Marginalized Kernel. Our HFK is related to the marginalized kernel (MK) proposed by Tsuda et al. [10]. MK uses a distribution with a discrete latent variable $h$ (indicating the generating component) and an observable $x$, which form a complete data pair $z = (h, x)$. The kernel for observables $x^i$ and $x^j$ is defined as
$$K(x^i, x^j) = \sum_{h^i} \sum_{h^j} P(h^i \mid x^i)\, P(h^j \mid x^j)\, \tilde{K}(z^i, z^j), \tag{16}$$
where $\tilde{K}(z^i, z^j)$ is the joint kernel for the complete data. Tsuda et al. [10] use the form
$$\tilde{K}(z^i, z^j) = \delta(h^i, h^j)\, K_{h^i}(x^i, x^j), \tag{17}$$
where $K_{h^i}(x^i, x^j)$ is a pre-defined kernel for observables associated with the $h^i$-th generative component. Equation (17) says that $\tilde{K}(z^i, z^j)$ takes the value of the kernel defined for the $m$-th component model if $x^i$ and $x^j$ are generated from the same component $h^i = h^j = m$; otherwise $\tilde{K}(z^i, z^j) = 0$. HFK can be viewed as a special case of a generalized marginalized kernel that allows continuous latent variables $h$.
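The marginalized kernel of Equations (16)-(17) is a small computation once the posteriors over components and the per-component kernel values are in hand. A toy sketch with hypothetical numbers (two components; the posteriors and component-kernel values are made up):

```python
import numpy as np

def marginalized_kernel(post_i, post_j, component_kernels):
    """Marginalized kernel with the joint kernel of Eq. (17):
    K(x_i, x_j) = sum_m P(h=m | x_i) P(h=m | x_j) K_m(x_i, x_j),
    where component_kernels[m] holds the pre-defined kernel value
    K_m(x_i, x_j) for the m-th generative component."""
    return float(np.sum(np.asarray(post_i) * np.asarray(post_j)
                        * np.asarray(component_kernels)))

# Hypothetical posteriors over two components and per-component kernel values:
K = marginalized_kernel([0.7, 0.3], [0.4, 0.6], [1.5, 2.0])
```

The delta function in Equation (17) appears here as the elementwise product: only matching components ($h^i = h^j = m$) contribute, each weighted by both posteriors.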
This is clear if we re-write Equation (16) as $K(x^i, x^j) = E_{h^i, h^j}\big(\tilde{K}(z^i, z^j) \mid x^i, x^j\big)$ and view $\tilde{K}$ as a generalization of a kernel between $h^i$ and $h^j$. Nevertheless, HFK differs from the original work in [10] in that MK requires existing kernels for the observables, such as $K_h(x^i, x^j)$ in Equation (17). In our problem setting, this kernel does not exist due to the different lengths of the time series.

Probability Product Kernel. We can get a family of kernels by employing various designs of $K(v^i, v^j)$. The simplest example is to let $K(v^i, v^j) = \delta(z^i, z^j)$, which immediately leads to
$$K(y^i, y^j) = E_{v^i, v^j}\big(K(v^i, v^j) \mid y^i, y^j; t^i, t^j, \tilde{M}\big) = \sum_m P(z^i = m \mid y^i; t^i, \tilde{M})\, P(z^j = m \mid y^j; t^j, \tilde{M}),$$
which is clearly related to the posterior probabilities of the samples, and is essentially a special case of the probability product kernels proposed by Jebara et al. [5].

4 Experiments

Performance Evaluation. We use the empirical ROC curve (detection rate vs. false alarm rate) to evaluate classifiers. We compare different classifiers using the area under the curve (AUC), and calculate statistical significance following the method given by Pepe [9]. We tested the classifiers on five features: steps, seconds, tappingD, tappingN, and logical memory II. The results for delayed-recall are omitted; they are very close to those for logical memory II. The mixed-effect models for each feature were trained separately with order-1 polynomials (linear) as the basis functions. For each feature, the kernels are used in support vector machines (SVMs) for classification, and the ROC is obtained by thresholding the classifier output with varying values. The classifiers are evaluated by leave-one-out cross-validation, the left-out sample consisting of an individual subject's complete time series (which is also held out of the fitting of the generative model).

Classifiers. For comparison, we also examined the following two classifiers.
First, we consider the likelihood ratio test based on the mixed-effect models $\{M_0, M_1\}$. For any given observation $(t, y)$, the likelihood that it is generated by mixed-effect model $M_m$ is $p(y; t, M_m)$, defined as in Equation (3). The classification decision for a likelihood ratio classifier is made by thresholding the ratio $p(y; t, M_0) / p(y; t, M_1)$. Second, we consider a feature extraction routine independent of any generative model. We summarize each individual $i$ with the least-squares fit coefficients of a degree-$d$ polynomial regression model, denoted $p^i$. To get a reliable fit we only consider the case $d = 1$, since many individuals have only four or five observations. We use the coefficients (each normalized by its standard deviation), denoted $\hat{p}^i$, as the feature vector, and build an RBF kernel $G_{ij} = \exp\big(-\|\hat{p}^i - \hat{p}^j\|_2^2 / 2s^2\big)$, where $s$ is the kernel width, estimated with leave-one-out cross-validation in our experiments. The resulting kernel matrix $G$ will be referred to as the LSQ kernel.

Results. We first compare the three HFK designs, using the ROC curves plotted in Figure 4 (upper row). On all four motor tests, Design A and Design B are very much comparable, except on tappingD, on which Design A is marginally better than Design B with $p = 0.136$. Also on the motor tests, Design C is slightly but consistently better than the other two designs. On logical memory II (story recall), the three designs have comparable performance. We therefore use Design C as the representative of HFK and compare it with the likelihood ratio classifier and an SVM based on the LSQ kernel, as shown in Figure 4 (lower row). On all four motor tests, the classifier based on HFK clearly out-performs the other two classifiers, and on logical memory II the three classifiers have very much comparable performance.
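The LSQ baseline can be sketched in a few lines. This uses numpy's polynomial least-squares fit and fixes the kernel width $s$ rather than cross-validating it as the paper does:

```python
import numpy as np

def lsq_kernel(series, degree=1, s=1.0):
    """LSQ kernel: fit a degree-d polynomial to each individual's (t, y)
    series by least squares, normalize each coefficient by its standard
    deviation across individuals, then take an RBF kernel
    G_ij = exp(-||p_i - p_j||^2 / (2 s^2)) between coefficient vectors."""
    coeffs = np.array([np.polyfit(t, y, degree) for t, y in series])
    coeffs = coeffs / coeffs.std(axis=0)  # normalize each coefficient column
    d2 = ((coeffs[:, None, :] - coeffs[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * s ** 2))
```

Because the summary is just the fitted coefficients, this baseline uses neither the population model nor the between-subject covariance, which is the contrast the results section draws against HFK.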
5 Discussion

Fisher kernels derived from mixed-effect generative models retain the Fisher information matrix, and hence the proper invariance of the kernel under change of coordinates in the model parameter space. In additional experiments, classifiers constructed with the proper kernel out-perform those constructed with the identity matrix in place of the Fisher information on our data. For example, on seconds, the HFK (Design C) achieves AUC = 0.7333, while the Fisher kernel computed with the identity matrix as the metric on $p(y^i; t^i, M)$ achieves AUC = 0.6873, with p-value (Z-test) 0.0435. Our classifiers built with Fisher kernels derived from mixed-effect models outperform those based solely on the generative model (using likelihood ratio tests) for the motor task data, and are comparable on the psychometric tests. The hierarchical kernels also produce better classifiers than a standard SVM using the coefficients of a least-squares fit to the individual's data. This shows that the generative model provides a real advantage for classification. The mixed-effect models capture both the population behavior (through $\alpha$) and the statistical variability of the individual subject models (through the covariance of $\beta$).
[Figure 4: Comparison of classifiers; ROC curves (detection rate vs. false alarm rate), one panel per test: steps, seconds, tappingD, tappingN, logical memory II. The numbers are p-values (Z-test) for the null hypothesis "the AUC of Classifier 1 is the same as the AUC of Classifier 2". Upper row, three HFK designs (p1: Design A vs. Design B; p2: Design C vs. Design A): steps p1=0.486, p2=0.326; seconds p1=0.387, p2=0.158; tappingD p1=0.136, p2=0.210; tappingN p1=0.482, p2=0.286; logical memory II p1=0.491, p2=0.452. Lower row, HFK vs. other classifiers (p1: Design C vs. likelihood ratio; p2: Design C vs. LSQ kernel): steps p1=0.041, p2=0.038; seconds p1=0.056, p2=0.083; tappingD p1=0.042, p2=0.085; tappingN p1=0.38, p2=0.049; logical memory II p1=0.485, p2=0.523.]

Knowledge of the statistics of the subject variability is extremely important for classification: although not discussed here, classifiers based only on the population model ($\alpha$) perform far worse than those presented here [7].
Acknowledgements

This work was supported by Intel Corp. under the OHSU BAIC award. Milar Moore and Robin Guariglia of the Layton Aging & Alzheimer's Disease Center gave invaluable help with data from the Oregon Brain Aging Study. We thank Misha Pavel, Tamara Hayes, and Nichole Carlson for helpful discussion.

References
[1] R. Camicioli, D. Howieson, B. Oken, G. Sexton, and J. Kaye. Motor slowing precedes cognitive impairment in the oldest old. Neurology, 50:1496–1498, 1998.
[2] M. Green, J. Kaye, and M. Ball. The Oregon brain aging study: Neuropathology accompanying healthy aging in the oldest old. Neurology, 54(1):105–113, 2000.
[3] T. Jaakkola, M. Diekhaus, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies. 7th Intell. Sys. Mol. Biol., pages 149–158, 1999.
[4] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. Technical report, Dept. of Computer Science, Univ. of California, 1998.
[5] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. Journal of Machine Learning Research, 5:819–844, 2004.
[6] N. Laird and J. Ware. Random-effects models for longitudinal data. Biometrics, 38(4):963–974, 1982.
[7] Z. Lu. Constrained Clustering and Cognitive Decline Detection. PhD thesis, OHSU, 2008.
[8] S. Marquis, M. Moore, D. Howieson, G. Sexton, H. Payami, J. Kaye, and R. Camicioli. Independent predictors of cognitive decline in healthy elderly persons. Arch. Neurol., 59:601–606, 2002.
[9] M. Pepe. The Statistical Evaluation of Medical Tests for Classification and Prediction. Oxford University Press, Oxford, 2003.
[10] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 1(1):1–8, 2002.
Correlated Bigram LSA for Unsupervised Language Model Adaptation

Yik-Cheung Tam*, Tanja Schultz
InterACT, Language Technologies Institute, Carnegie Mellon University, Pittsburgh, PA 15213
yct@cs.cmu.edu, tanja@cs.cmu.edu

Abstract

We present a correlated bigram LSA approach for unsupervised LM adaptation for automatic speech recognition. The model is trained using efficient variational EM and smoothed using the proposed fractional Kneser-Ney smoothing, which handles fractional counts. We address the issue of scalability to large training corpora via bootstrapping of bigram LSA from unigram LSA. For LM adaptation, unigram and bigram LSA are integrated into the background N-gram LM via marginal adaptation and linear interpolation, respectively. Experimental results on the Mandarin RT04 test set show that applying unigram and bigram LSA together yields 6%–8% relative perplexity reduction and 2.5% relative character error rate reduction, which is statistically significant compared to applying only unigram LSA. On the large-scale evaluation on Arabic, a 3% relative word error rate reduction is achieved, which is also statistically significant.

1 Introduction

Language model (LM) adaptation is crucial to automatic speech recognition (ASR) as it enables higher-level contextual information to be effectively incorporated into a background LM, improving recognition performance. Exploiting topical context for LM adaptation has been shown to be effective for ASR using latent semantic analysis (LSA), such as LSA using singular value decomposition [1], Latent Dirichlet Allocation (LDA) [2, 3, 4], and HMM-LDA [5, 6]. One issue in LSA is the bag-of-word assumption, which ignores word ordering. For document classification, word ordering may not be important. But from the LM perspective, word ordering is crucial, since a trigram LM normally performs significantly better than a unigram LM for word prediction.
In this paper, we investigate whether relaxing the bag-of-word assumption in LSA helps improve ASR performance via LM adaptation. We employ bigram LSA [7], a natural extension of LDA that relaxes the bag-of-word assumption by connecting the adjacent words in a document together to form a Markov chain. There are two main challenges in bigram LSA which are not addressed properly in [7], especially for large-scale application. Firstly, the model can be very sparse, since it covers topical bigrams in $O(V^2 \cdot K)$, where $V$ and $K$ denote the vocabulary size and the number of topics. Therefore, model smoothing becomes critical. Secondly, model initialization is important for EM training, especially for bigram LSA, due to the model sparsity. To tackle the first challenge, we represent bigram LSA as a set of $K$ topic-dependent backoff LMs. We propose fractional Kneser-Ney smoothing,¹ which supports fractional counts, to smooth each backoff LM. We show that our formulation recovers the original Kneser-Ney smoothing [9], which supports only integral counts. To address the second challenge, we propose a bootstrapping approach for bigram LSA training using a well-trained unigram LSA as an initial model.

* This work is partly supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-2-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
¹ This method was briefly mentioned in [8] without detail. To the best of our knowledge, our formulation in this paper is considered new to the research community.

[Figure 1: Graphical representation of bigram LSA. A prior distribution over the topic mixture weights θ generates a latent topic z_i for each observed word w_i, and adjacent words in a document are linked together to form a Markov chain from left to right.]
During unsupervised LM adaptation, word hypotheses from the first-pass decoding are used to estimate the topic mixture weight of each test audio to adapt both unigram and bigram LSA. The adapted unigram and bigram LSA are combined with the background LM in two stages. Firstly, marginal adaptation [10] is applied to integrate unigram LSA into the background LM. Then the intermediately adapted LM from the first stage is combined with bigram LSA via linear interpolation, with the interpolation weights estimated by minimizing the word perplexity on the word hypotheses. The final adapted LM is employed for re-decoding. Related work includes topic mixtures [11], which perform document clustering and train a trigram LM for each document cluster as an initial model; sentence-level topic mixtures are modeled so that the topic label is fixed within a sentence. The topical N-gram model [12] focuses on phrase discovery and information retrieval. We do not apply this model because the phrase-based LM does not seem to outperform the word-based LM.
The paper is organized as follows: In Section 2, we describe the bigram LSA training and the fractional Kneser-Ney smoothing algorithm. In Section 3, we present the LM adaptation approach based on marginal adaptation and linear interpolation. In Section 4, we report LM adaptation results on Mandarin and Arabic ASR, followed by conclusions and future work in Section 5.

2 Correlated bigram LSA
Latent semantic analysis such as LDA makes a bag-of-word assumption that each word in a document is generated irrespective of its position in the document. To relax this assumption, bigram LSA has been proposed [7] to modify the graphical structure of LDA by connecting adjacent words in a document together to form a Markov chain.
Figure 1 shows the graphical representation of bigram LSA, where the top node represents the prior distribution over the topic mixture weights and the middle layer represents the latent topic label associated with each observed word at the bottom layer. The document generation procedure of bigram LSA is similar to LDA except that the previous word is taken into consideration for generating the current word:
1. Sample θ from a prior distribution p(θ).
2. For each word w_i at the i-th position of a document:
(a) Sample the topic label: z_i ∼ Multinomial(θ).
(b) Sample w_i given the previous word w_{i−1} and the topic label z_i: w_i ∼ p(·|w_{i−1}, z_i).
Our incremental contributions for bigram LSA are threefold: Firstly, we present a technique for topic correlation modeling using a Dirichlet-Tree prior in Section 2.1. Secondly, we propose an efficient algorithm for bigram LSA training via a variational Bayes approach and model bootstrapping, which are scalable to large settings, in Section 2.2. Thirdly, we formulate fractional Kneser-Ney smoothing, which generalizes the original Kneser-Ney smoothing that supports only integral counts, in Section 2.3.
Figure 2: Left: Dirichlet-Tree prior of depth two. Right: Variational E-step as bottom-up propagation and summation of fractional topic counts.

2.1 Topic correlation
Modeling topic correlations is motivated by the observation that documents such as newspaper articles are usually organized into a main-topic and sub-topic hierarchy for document browsing. From this perspective, a Dirichlet prior is not appropriate since it assumes topic independence. A Dirichlet-Tree prior [13, 14] is employed to capture topic correlations. Figure 2 (Left) illustrates a depth-two Dirichlet-Tree. A depth-one Dirichlet-Tree is equivalent to a Dirichlet prior in LDA.
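As a quick illustration, the generative procedure above can be sampled directly. The sketch below is a minimal toy version: it substitutes a flat Dirichlet for the Dirichlet-Tree prior of Section 2.1, and the topic-dependent bigram tables are randomly initialized stand-ins rather than trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, N = 6, 3, 10   # toy vocabulary size, number of topics, document length

# Hypothetical parameters: a flat Dirichlet prior over theta, and one
# row-stochastic V x V table per topic, bigram[k, u, v] = p(v | u, k).
alpha = np.ones(K)
bigram = rng.dirichlet(np.ones(V), size=(K, V))

def generate_document(n_words):
    """Sample a document following the bigram LSA generative procedure."""
    theta = rng.dirichlet(alpha)               # 1. sample theta ~ p(theta)
    words, prev = [], 0                        # prev plays the role of <s>
    for _ in range(n_words):
        z = rng.choice(K, p=theta)             # 2(a) z_i ~ Multinomial(theta)
        w = rng.choice(V, p=bigram[z, prev])   # 2(b) w_i ~ p(. | w_{i-1}, z_i)
        words.append(w)
        prev = w
    return words

doc = generate_document(N)
```

Note how the only difference from LDA's generative story is that step 2(b) conditions on the previous word as well as the topic.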
The sampling procedure for the topic mixture weight θ ∼ p(θ) can be described as follows:
1. Sample a vector of branch probabilities b_j ∼ Dirichlet(·; {α_jc}) for each node j = 1...J, where {α_jc} denotes the parameters of the Dirichlet distribution at node j, i.e. the pseudo-counts of the outgoing branches c at node j.
2. Compute the topic mixture weight as
θ_k = ∏_{jc} b_jc^{δ_jc(k)},
where δ_jc(k) is an indicator function that equals unity when the c-th branch of the j-th node leads to the leaf node of topic k, and zero otherwise.
The k-th topic weight θ_k is computed as the product of sampled branch probabilities from the root node to the leaf node corresponding to topic k. The structure and the number of outgoing branches of each Dirichlet node can be arbitrary. In this paper, we employ a balanced binary Dirichlet-Tree.

2.2 Model training
Gibbs sampling was employed for bigram LSA training in [7]. Despite its simplicity, it can be slow and inefficient since it usually requires many sampling iterations for convergence. We present a variational Bayes approach for model training. The joint likelihood of a document w_1^N, the latent topic sequence z_1^N and θ under bigram LSA can be written as follows:
p(w_1^N, z_1^N, θ) = p(θ) · ∏_{i=1}^N p(z_i|θ) · p(w_i|w_{i−1}, z_i)   (1)
By introducing a factorizable variational posterior distribution q(z_1^N, θ; Γ) = q(θ) · ∏_{i=1}^N q(z_i) over the latent variables and applying Jensen's inequality, a lower bound on the marginalized document likelihood can be derived as follows:
log p(w_1^N; Λ, Γ) = log ∫_θ ∑_{z_1...z_N} q(z_1^N, θ; Γ) · p(w_1^N, z_1^N, θ; Λ) / q(z_1^N, θ; Γ)   (2)
≥ ∫_θ ∑_{z_1...z_N} q(z_1^N, θ; Γ) · log [ p(w_1^N, z_1^N, θ; Λ) / q(z_1^N, θ; Γ) ]   (by Jensen's inequality)   (3)
= E_q[log p(θ)/q(θ)] + ∑_{i=1}^N E_q[log p(z_i|θ)/q(z_i)] + ∑_{i=1}^N E_q[log p(w_i|w_{i−1}, z_i)]   (4)
= Q(w_1^N; Λ, Γ)   (5)
where the expectation is taken with respect to the variational posterior q(z_1^N, θ).
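The two-step Dirichlet-Tree sampling procedure above can be written out directly. The sketch below hardcodes a depth-two balanced binary tree over K = 4 topics as in Figure 2; the node layout and pseudo-count values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Depth-two balanced binary Dirichlet-Tree over K = 4 topics.
# Node 0 is the root; nodes 1 and 2 are its children; leaves are topics 0..3.
# alphas[j] holds the pseudo-counts of node j's two outgoing branches.
alphas = {0: [1.0, 1.0], 1: [1.0, 1.0], 2: [1.0, 1.0]}
# leaf_path[k] lists the (node, branch) pairs on the root-to-leaf path of
# topic k, i.e. exactly the branches with delta_{jc}(k) = 1.
leaf_path = {0: [(0, 0), (1, 0)], 1: [(0, 0), (1, 1)],
             2: [(0, 1), (2, 0)], 3: [(0, 1), (2, 1)]}

def sample_theta():
    # 1. sample branch probabilities b_j ~ Dirichlet({alpha_jc}) at each node
    b = {j: rng.dirichlet(a) for j, a in alphas.items()}
    # 2. theta_k = product of branch probabilities along the path to topic k
    return np.array([np.prod([b[j][c] for j, c in leaf_path[k]])
                     for k in range(4)])

theta = sample_theta()
```

Because the branch probabilities at every node sum to one, the resulting θ is automatically a valid topic distribution.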
For the E-step, we compute the partial derivatives of the auxiliary function Q(·) with respect to q(z_i) and the parameters γ_jc of the Dirichlet-Tree posterior q(θ). Setting the derivatives to zero yields:
E-step:
q(z_i = k) ∝ p(w_i|w_{i−1}, k) · e^{E_q[log θ_k; {γ_jc}]}   for k = 1..K   (6)
γ_jc = α_jc + ∑_{i=1}^N E_q[δ_jc(z_i)] = α_jc + ∑_{i=1}^N ∑_{k=1}^K q(z_i = k) · δ_jc(k)   (7)
where
E_q[log θ_k] = ∑_{jc} δ_jc(k) · E_q[log b_jc] = ∑_{jc} δ_jc(k) · ( Ψ(γ_jc) − Ψ(∑_{c'} γ_jc') )   (8)
Eqn 7 is motivated by the conjugate property that the Dirichlet-Tree posterior given the topic sequence z_1^N has the same form as the Dirichlet-Tree prior:
p(b_1^J | z_1^N) ∝ p(z_1^N | b_1^J) · p(b_1^J; {α_jc}) ∝ ( ∏_{i=1}^N ∏_{jc} b_jc^{δ_jc(z_i)} ) · ∏_{jc} b_jc^{α_jc − 1}   (9)
= ∏_{jc} b_jc^{(α_jc + ∑_{i=1}^N δ_jc(z_i)) − 1} = ∏_{jc} b_jc^{γ'_jc − 1} = ∏_{j=1}^J Dirichlet(b_j; {γ'_jc})   (10)
Figure 2 (Right) illustrates that Eqn 7 can be implemented as propagation of fractional topic counts in a bottom-up fashion, with each branch acting as an accumulator for γ_jc. Eqn 6 and Eqn 7 are applied iteratively until convergence is reached.
For the M-step, we compute the partial derivative of the auxiliary function Q(·) over all training documents d with respect to the topic bigram probability p(v|u, k) and set it to zero:
M-step (unsmoothed):
p(v|u, k) ∝ ∑_d ∑_{i=1}^{N_d} q(z_i = k|d) · δ(w_{i−1}, u) δ(w_i, v)   (11)
= ∑_d C_d(u, v|k) / ∑_d ∑_{v'=1}^V C_d(u, v'|k) = C(u, v|k) / ∑_{v'=1}^V C(u, v'|k)   (12)
where N_d denotes the number of words in document d and δ(w_i, v) is a 0-1 Kronecker delta function testing whether the i-th word in document d is the vocabulary item v. C_d(u, v|k) denotes the fractional count of the bigram (u, v) belonging to topic k in document d. Intuitively, Eqn 12 simply computes the relative frequency of the bigram (u, v). However, this solution is not practical since bigram LSA would assign zero probability to unseen bigrams. Therefore, bigram LSA must be smoothed properly. One simple approach is Laplace smoothing, which adds a small count δ to all bigrams.
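The E-step updates of Eqn 6 and Eqn 7 can be sketched for a single document under the simplifying assumption of a depth-one tree (a flat Dirichlet prior), so that E_q[log θ_k] = Ψ(γ_k) − Ψ(∑_k γ_k). The bigram table and the document below are hypothetical toy values, and the digamma routine is a small self-contained approximation.

```python
import numpy as np

def digamma(x):
    """Psi function via recurrence plus an asymptotic series (adequate here)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + np.log(x) - 0.5 / x - f * (1.0/12 - f * (1.0/120 - f/252))

rng = np.random.default_rng(2)
V, K = 5, 3
alpha = np.ones(K)                               # depth-one tree = flat Dirichlet
bigram = rng.dirichlet(np.ones(V), size=(K, V))  # p(v | u, k), toy values
doc = [0, 2, 1, 4, 3, 2]                         # word ids; position 0 follows <s>

gamma = alpha.copy()
for _ in range(50):                              # iterate Eqn 6 / Eqn 7
    elog = np.array([digamma(g) for g in gamma]) - digamma(gamma.sum())
    q, prev = [], 0                              # prev = <s> token id
    for w in doc:
        qi = bigram[:, prev, w] * np.exp(elog)   # Eqn 6 (unnormalized)
        q.append(qi / qi.sum())
        prev = w
    gamma = alpha + np.sum(q, axis=0)            # Eqn 7: fractional topic counts
```

With a real Dirichlet-Tree, the γ update becomes the bottom-up count propagation of Figure 2 (Right) instead of a single vector sum.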
However, this approach can lead to worse performance since it biases the bigram probability towards a uniform distribution when the vocabulary size V gets large. Our approach is to represent p(v|u, k) as a standard backoff LM smoothed by fractional Kneser-Ney smoothing, as described in Section 2.3.
Model initialization is crucial for variational EM training. We employ a bootstrapping approach that uses a well-trained unigram LSA as an initial model for bigram LSA, so that p(w_i|w_{i−1}, k) is approximated by p(w_i|k) in Eqn 6. This saves computation and avoids keeping the full initial bigram LSA in memory during EM training. To make the training procedure more practical, we apply bigram pruning during statistics accumulation in the M-step when the bigram count in a document is less than 0.1. This heuristic is reasonable since only a small number of topics are "active" for a bigram. With this sparsity, there is no need to store K copies of accumulators for each bigram, which reduces the memory requirement significantly. The pruned bigram counts are re-assigned to the most likely topic of the current document so that the counts are conserved. For a practical implementation, accumulators are saved to disk in batches for count merging. In the final step, each topic-dependent LM is smoothed individually using the merged count file.

2.3 Fractional Kneser-Ney smoothing
The standard backoff N-gram LM is widely used in the ASR community. The state-of-the-art smoothing for the backoff LM is Kneser-Ney smoothing [9], whose success is commonly attributed to the preservation of marginal distributions. However, the original formulation only works for integral counts, which makes it unsuitable for bigram LSA with fractional counts. Therefore, we propose fractional Kneser-Ney smoothing as a generalization of the original formulation.
The interpolated form using absolute discounting can be expressed as follows:
p_KN(v|u) = max{C(u, v) − D, 0} / C(u) + λ(u) · p_KN(v)   (13)
where D is a discounting factor. In the original formulation, D lies between 0 and 1, but in our formulation D can be any positive number. Intuitively, D controls the degree of smoothing: if D is set to zero, the model is unsmoothed; if D is too big, bigrams with counts smaller than D are pruned from the LM. λ(u) ensures that the bigram probability sums to unity. Summing over all possible v on both sides of Eqn 13 and re-arranging terms, λ(u) becomes:
1 = ∑_v max{C(u, v) − D, 0} / C(u) + λ(u)   (14)
⟹ λ(u) = 1 − ∑_v max{C(u, v) − D, 0} / C(u) = 1 − ∑_{v: C(u,v)>D} (C(u, v) − D) / C(u)   (15)
= [ C(u) − ∑_{v: C(u,v)>D} C(u, v) + D · ∑_{v: C(u,v)>D} 1 ] / C(u)   (16)
= [ ∑_{v: C(u,v)≤D} C(u, v) + D · ∑_{v: C(u,v)>D} 1 ] / C(u)   (17)
= [ C_{≤D}(u, ·) + D · N_{>D}(u, ·) ] / C(u)   (18)
where C_{≤D}(u, ·) denotes the sum of the bigram counts following u that are smaller than or equal to D, and N_{>D}(u, ·) denotes the number of word types following u with bigram counts bigger than D. In Kneser-Ney smoothing, the lower-order distribution p_KN(v) is treated as a set of unknown parameters, which can be estimated using the preservation of marginal distributions:
p̂(v) = ∑_u p_KN(v|u) · p̂(u)   (19)
where p̂(v) is the marginal distribution estimated from the background training data, so that p̂(v) = C(v) / ∑_{v'} C(v'). Substituting Eqn 13 into Eqn 19 gives:
C(v) = ∑_u ( max{C(u, v) − D, 0} / C(u) + λ(u) · p_KN(v) ) · C(u)   (20)
= ( ∑_u max{C(u, v) − D, 0} ) + p_KN(v) · ∑_u C(u) · λ(u)   (21)
⟹ p_KN(v) = [ C(v) − ∑_u max{C(u, v) − D, 0} ] / ∑_u C(u) · λ(u)   (22)
= [ C(v) − C_{>D}(·, v) + D · N_{>D}(·, v) ] / ∑_u C(u) · λ(u)   (23)
= [ C_{≤D}(·, v) + D · N_{>D}(·, v) ] / ∑_u [ C_{≤D}(u, ·) + D · N_{>D}(u, ·) ]   (using Eqn 18)   (24)
= [ C_{≤D}(·, v) + D · N_{>D}(·, v) ] / ∑_{v'} [ C_{≤D}(·, v') + D · N_{>D}(·, v') ]   (25)
Eqn 25 generalizes Kneser-Ney smoothing to both integral and fractional counts.
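As a sanity check on the derivation, Eqn 13, Eqn 18 and Eqn 25 can be implemented in a few lines over a toy table of fractional counts; the counts and the three-word vocabulary below are made up, standing in for the accumulated topic-dependent counts.

```python
from collections import defaultdict

# Hypothetical fractional bigram counts C(u, v).
C = {('a', 'b'): 1.7, ('a', 'c'): 0.3, ('b', 'a'): 0.9,
     ('b', 'c'): 2.1, ('c', 'a'): 0.5}
D = 0.4  # discount; any positive value is allowed in the fractional version

Cu = defaultdict(float)                 # C(u) = sum_v C(u, v)
for (u, v), c in C.items():
    Cu[u] += c

def lam(u):
    """Backoff mass lambda(u) from Eqn 18."""
    small = sum(c for (uu, _), c in C.items() if uu == u and c <= D)
    n_big = sum(1 for (uu, _), c in C.items() if uu == u and c > D)
    return (small + D * n_big) / Cu[u]

# Lower-order distribution p_KN(v) from Eqn 25: relative frequency of the
# discount mass C_{<=D}(., v) + D * N_{>D}(., v) ending at v.
num = defaultdict(float)
for (u, v), c in C.items():
    num[v] += c if c <= D else D
Z = sum(num.values())
p_low = {v: n / Z for v, n in num.items()}

def p_kn(v, u):
    """Interpolated fractional Kneser-Ney bigram probability (Eqn 13)."""
    return max(C.get((u, v), 0.0) - D, 0.0) / Cu[u] + lam(u) * p_low.get(v, 0.0)

vocab = ['a', 'b', 'c']
```

The construction guarantees both normalization per history and the marginal-preservation constraint of Eqn 19, which the tests below verify.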
In the original formulation, C_{≤D}(u, ·) equals zero, since each observed bigram count is at least one by definition and D is less than one. As a result, the D terms cancel out and Eqn 25 reduces to counting the number of distinct words preceding v, thus recovering the original formulation. Intuitively, the numerator in Eqn 25 measures the total discounts of observed bigrams ending at v. In other words, fractional Kneser-Ney smoothing estimates the lower-order probability distribution using the relative frequency over discounts instead of word counts. With this approach, each topic-dependent LM in bigram LSA can be smoothed using our formulation.

3 Unsupervised LM adaptation
Unsupervised LM adaptation is performed by first inferring the topic distribution of each test audio using the word hypotheses from the first-pass decoding via variational inference in Eqn 6–7. The relative frequency over the branch posterior counts γ_jc is computed at each Dirichlet node j. The MAP topic mixture weight θ̂ and the adapted unigram and bigram LSA are computed as follows:
θ̂_k ∝ ∏_{jc} ( γ_jc / ∑_{c'} γ_jc' )^{δ_jc(k)}   for k = 1...K   (26)
p_a(v) = ∑_{k=1}^K p(v|k) · θ̂_k   and   p_a(v|u) = ∑_{k=1}^K p(v|u, k) · θ̂_k   (27)
The unigram LSA marginals are integrated into the background N-gram LM p_bg(v|h) via marginal adaptation [10] as follows:
p_a^(1)(v|h) ∝ ( p_a(v) / p_bg(v) )^β · p_bg(v|h)   (28)
Marginal adaptation has a close connection to maximum entropy modeling since the marginal constraints can be encoded as unigram features. Intuitively, bigram LSA could be integrated in the same fashion by introducing bigram marginal constraints. However, we found that integrating bigram features via marginal adaptation did not offer further improvement compared to integrating only unigram features.
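A small sketch of Eqn 26–28 with hypothetical numbers shows how the LSA-adapted unigram marginals rescale the background LM. It assumes a depth-one tree (so the MAP topic weight is just the normalized γ counts), random toy LM tables, and a made-up β; the renormalization per history implements the proportionality in Eqn 28.

```python
import numpy as np

rng = np.random.default_rng(3)
V, K = 4, 3
# Hypothetical trained parameters and a toy background bigram LM.
p_v_given_k = rng.dirichlet(np.ones(V), size=K)  # unigram LSA p(v|k)
p_bg = rng.dirichlet(np.ones(V), size=V)         # background p(v|h), row per history h
p_bg_marg = rng.dirichlet(np.ones(V))            # background marginal p_bg(v)

# Eqn 26: MAP topic weight; with a depth-one tree this is the normalized
# posterior counts gamma from variational inference.
gamma = np.array([5.0, 1.0, 2.0])                # toy posterior counts
theta_hat = gamma / gamma.sum()

# Eqn 27: LSA-adapted unigram marginal p_a(v) = sum_k p(v|k) theta_k.
p_a = p_v_given_k.T @ theta_hat

# Eqn 28: marginal adaptation, renormalized over v for each history h.
beta = 0.5
scaled = (p_a / p_bg_marg) ** beta * p_bg        # unigram likelihood-ratio feature
p_adapted = scaled / scaled.sum(axis=1, keepdims=True)
```

The second-stage interpolation with the adapted bigram LSA p_a(v|u) (Eqn 29) would then mix p_adapted with a table built from the p(v|u, k) components in the same way.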
Since marginal adaptation integrates a unigram feature as a likelihood ratio between the adapted marginal p_a(v) and the background marginal p_bg(v) in Eqn 28, perhaps the unigram and bigram likelihood ratios are very similar, so that the latter does not give extra information. Another explanation is that marginal adaptation corresponds to only one iteration of generalized iterative scaling (GIS); due to the large number of bigram features, in the millions, one GIS iteration may not be sufficient for convergence. On the other hand, simple linear LM interpolation is found to be effective in our experiments. The final adapted LM is obtained using the results from Eqn 27 and Eqn 28 as a two-stage process:
p_a^(2)(v|h) = λ · p_a^(1)(v|h) + (1 − λ) · p_a(v|u)   (29)
where u denotes the most recent word in the history h, and λ is tuned to optimize the perplexity of the word hypotheses from the first-pass decoding on a per-audio basis.

4 Experimental setup
Our LM adaptation approach was evaluated using the RT04 Mandarin Broadcast News evaluation system. The system employed context-dependent Initial-Final acoustic models trained on 100 hours of broadcast news audio from the Mandarin HUB4 1997 training set and a subset of TDT4. 42-dimensional features were extracted after linear discriminant analysis projected from a window of MFCC and energy features. The system employed a two-pass decoding strategy using speaker-independent and speaker-adaptive acoustic models. For the second-pass decoding, we applied standard acoustic model adaptation such as vocal tract length normalization and maximum likelihood linear regression on the feature and model spaces. The training corpora include Xinhua News 2002 (January–September) containing 13M words and 64k documents. A background 4-gram LM was trained with modified Kneser-Ney smoothing using the SRILM toolkit [15]. The same training corpora were used for unigram and bigram LSA training with 200 topics. The vocabulary size is 108k words.
The discounting factor D for fractional Kneser-Ney smoothing was set to 0.4. First-pass decoding was performed to obtain an automatic transcript for each audio show. Then unsupervised LM adaptation was applied using the automatic transcript to obtain an adapted LM for second-pass decoding, following the approach described in Section 3. Word perplexity and character error rates (CER) were measured on the Mandarin RT04 test set. The matched-pairs sentence-segment word error test was performed for significance testing using the NIST scoring tool.

Table 1: Correlated bigram topics extracted from bigram LSA. (The Chinese bigram forms were corrupted in extraction; the English glosses are preserved.)
Topic index | Top bigrams sorted by p(u, v|k)
"topic-61" | ('s student), ('s education), (education 's), (school 's), (youth class), (quality of education)
"topic-62" | (expert cultivation), (university chancellor), (famous), (high-school), ('s student)
"topic-63" | (and social security), ('s employment), (unemployed officer), (employment position)
"topic-64" | ('s research), (expert people), (etc area), (biological technology), (research result)
"topic-65" | (Human DNA sequence), ('s DNA), (biological technology), (embryo stem cell)

Table 2: Character Error Rates (Word perplexity) on the RT04 test set. Bigram LSA was applied in addition to unigram LSA.
LM (13M)                            | CCTV        | NTDTV        | RFA          | OVERALL
background LM                       | 15.3% (748) | 21.8% (1718) | 39.5% (3655) | 24.9%
+unigram LSA                        | 14.4% (629) | 21.5% (1547) | 38.9% (3015) | 24.3%
+bigram LSA (Kneser-Ney, 30 topics) | 14.5% (604) | 20.7% (1502) | 39.0% (2736) | 24.1%
+bigram LSA (Witten-Bell)           | 14.1% (594) | 20.9% (1452) | 38.3% (2628) | 23.8%
+bigram LSA (Kneser-Ney)            | 14.0% (587) | 20.8% (1448) | 38.2% (2586) | 23.7%

4.1 LM adaptation results
Table 1 shows the correlated bigram topics sorted by the joint bigram probability p(v|u, k) · p(u|k). Most of the top bigrams appear either as phrases or as words attached to a stopword such as the Chinese possessive particle ('s in English). Table 2 shows the LM adaptation results in CER and perplexity.
Applying both unigram and bigram LSA yields consistent improvement over unigram LSA alone, in the range of 6.4%–8.5% relative reduction in perplexity and 2.5% relative reduction in the overall CER. The CER reduction is statistically significant at the 0.1% significance level. We compared our proposed fractional Kneser-Ney smoothing with Witten-Bell smoothing, which also supports fractional counts; the results show that Kneser-Ney smoothing performs slightly better than Witten-Bell smoothing. Increasing the number of topics in bigram LSA helps despite the model sparsity. We applied extra EM iterations on top of the bootstrapped bigram LSA, but no further performance improvement was observed.

4.2 Large-scale evaluation
We evaluated our approach using the CMU-InterACT vowelized Arabic transcription system, discriminatively trained on 1500 hours of transcribed audio using MMIE for the GALE Phase-3 evaluation. A large background 4-gram LM was trained on 962M-word text corpora with a 737k vocabulary. Unigram and bigram LSA were trained on the same corpora and applied to lattice rescoring on the Dev07 and unseen Dev08 test sets, which contain 2.6 hours and 3 hours of audio shows covering the broadcast news (BN) and broadcast conversation (BC) genres. Table 3 shows that bigram LSA rescoring reduces the overall word error rate by more than 3.0% relative compared to the unadapted baseline on both sets, which is statistically significant at the 0.1% significance level. However, degradation is observed using trigram LSA compared to bigram LSA, which may be due to data sparseness.
Table 3: Lattice rescoring results in word error rate on Dev07 (unseen Dev08) using the CMU-InterACT Arabic transcription system for the GALE Phase-3 evaluation.
GALE LM (962M)            | BN    | BC   | OVERALL
background LM             | 11.6% | 19.4 | 14.3 (16.4)
+unigram LSA              | 11.5  | 19.2 | 14.2 (16.3)
+bigram LSA (Witten-Bell) | 11.0  | 19.0 | 13.9 (15.9)
+bigram LSA (Kneser-Ney)  | 11.0  | 18.9 | 13.8 (15.9)
+trigram LSA (Kneser-Ney) | 11.3  | 18.8 | 14.0 (-)

5 Conclusion
We present a correlated bigram LSA approach for unsupervised LM adaptation for ASR. Our contributions include an efficient variational EM procedure for model training and a fractional Kneser-Ney approach for LM smoothing with fractional counts. Bigram LSA yields additional improvement in both perplexity and recognition performance on top of unigram LSA. Increasing the number of topics for bigram LSA helps despite the model sparsity. Bootstrapping bigram LSA from unigram LSA saves computation and memory during EM training. Our approach is scalable to large training corpora and works well on different languages. The improvement from bigram LSA is statistically significant compared to the unadapted baseline. Future work includes applying the proposed approach to statistical machine translation.

Acknowledgement
We would like to thank Mark Fuhs for help parallelizing the bigram LSA training via Condor.

References
[1] J. R. Bellegarda, “Large Vocabulary Speech Recognition with Multispan Statistical Language Models,” IEEE Transactions on Speech and Audio Processing, vol. 8, no. 1, pp. 76–84, Jan 2000.
[2] D. Blei, A. Ng, and M. Jordan, “Latent Dirichlet Allocation,” in Journal of Machine Learning Research, 2003, pp. 1107–1135.
[3] Y. C. Tam and T. Schultz, “Language model adaptation using variational Bayes inference,” in Proceedings of Interspeech, 2005.
[4] D. Mrva and P. C. Woodland, “Unsupervised language model adaptation for Mandarin broadcast conversation transcription,” in Proceedings of Interspeech, 2006.
[5] T. Griffiths, M. Steyvers, D. Blei, and J. Tenenbaum, “Integrating topics and syntax,” in Advances in Neural Information Processing Systems, 2004.
[6] B. J. Hsu and J.
Glass, “Style and topic language model adaptation using HMM-LDA,” in Proceedings of Empirical Methods on Natural Language Processing (EMNLP), 2006.
[7] Hanna M. Wallach, “Topic Modeling: Beyond Bag-of-Words,” in International Conference on Machine Learning, 2006.
[8] P. Xu, A. Emami, and F. Jelinek, “Training connectionist models for the structured language model,” in Proceedings of Empirical Methods on Natural Language Processing (EMNLP), 2003.
[9] R. Kneser and H. Ney, “Improved backing-off for M-gram language modeling,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1995, vol. 1, pp. 181–184.
[10] R. Kneser, J. Peters, and D. Klakow, “Language model adaptation using dynamic marginals,” in Proceedings of European Conference on Speech Communication and Technology (EUROSPEECH), 1997, pp. 1971–1974.
[11] R. Iyer and M. Ostendorf, “Modeling long distance dependence in language: Topic mixtures versus dynamic cache models,” IEEE Transactions on Speech and Audio Processing, vol. 7, no. 1, pp. 30–39, Jan 1999.
[12] X. Wang, A. McCallum, and X. Wei, “Topical N-grams: Phrase and topic discovery, with an application to information retrieval,” in IEEE International Conference on Data Mining, 2007.
[13] T. Minka, “The Dirichlet-tree distribution,” 1999.
[14] Y. C. Tam and T. Schultz, “Correlated latent semantic model for unsupervised language model adaptation,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2007.
[15] A. Stolcke, “SRILM - an extensible language modeling toolkit,” in Proceedings of International Conference on Spoken Language Processing (ICSLP), 2002.
The Infinite Factorial Hidden Markov Model
Jurgen Van Gael∗, Department of Engineering, University of Cambridge, UK, jv279@cam.ac.uk
Yee Whye Teh, Gatsby Unit, University College London, UK, ywteh@gatsby.ucl.ac.uk
Zoubin Ghahramani, Department of Engineering, University of Cambridge, UK, zoubin@eng.cam.ac.uk

Abstract
We introduce a new probability distribution over a potentially infinite number of binary Markov chains which we call the Markov Indian buffet process. This process extends the IBP to allow temporal dependencies in the hidden variables. We use this stochastic process to build a nonparametric extension of the factorial hidden Markov model. After constructing an inference scheme which combines slice sampling and dynamic programming we demonstrate how the infinite factorial hidden Markov model can be used for blind source separation.

1 Introduction
When modeling discrete time series data, the hidden Markov model [1] (HMM) is one of the most widely used and successful tools. The HMM defines a probability distribution over observations y_1, y_2, ..., y_T using the following generative model: it assumes there is a hidden Markov chain s_1, s_2, ..., s_T with s_t ∈ {1, ..., K} whose dynamics are governed by a K × K stochastic transition matrix π. At each timestep t, the Markov chain generates an output y_t using some likelihood model F parametrized by a state-dependent parameter θ_{s_t}. We can write the probability distribution induced by the HMM as follows¹:
p(y_{1:T}, s_{1:T}) = ∏_{t=1}^T p(s_t|s_{t−1}) p(y_t|s_t) = ∏_{t=1}^T π_{s_{t−1}, s_t} F(y_t; θ_{s_t}).   (1)
Figure 1 shows the graphical model for the HMM. One shortcoming of the hidden Markov model is the limited representational power of the latent variables. One way to look at the distribution defined by the HMM is to write down the marginal distribution of y_t given the previous latent state s_{t−1}:
p(y_t|s_{t−1}) = ∑_{s_t} p(s_t|s_{t−1}) p(y_t|s_t) = ∑_{s_t} π_{s_{t−1}, s_t} F(y_t; θ_{s_t}).
(2)
Equation (2) illustrates that the observations are generated from a dynamic mixture model. The factorial hidden Markov model (FHMM), developed in [2], addresses the limited representational power of the hidden Markov model. The FHMM extends the HMM by representing the hidden state in a factored form. This way, information from the past is propagated in a distributed manner through a set of parallel Markov chains. The parallel chains can be viewed as latent features which evolve over time according to Markov dynamics. Formally, the FHMM defines a probability distribution over observations y_1, y_2, ..., y_T as follows: M latent chains s^(1), s^(2), ..., s^(M) evolve according to Markov dynamics, and at each timestep t the Markov chains generate an output y_t using some likelihood model F parameterized by a joint state-dependent parameter θ_{s_t^(1:M)}. The graphical model in figure 2 shows how the FHMM is a special case of a dynamic Bayesian network. The FHMM has been successfully applied in vision [3], audio processing [4] and natural language processing [5]. Unfortunately, the dimensionality M of our factorial representation, or equivalently the number of parallel Markov chains, is a new free parameter of the FHMM which we would prefer to learn from data rather than specify beforehand. Recently, [6] introduced the basic building block for nonparametric Bayesian factor models, called the Indian Buffet Process (IBP). The IBP defines a distribution over infinite binary matrices Z, where element z_nk denotes whether datapoint n has feature k or not.
∗ http://mlg.eng.cam.ac.uk/jurgen
¹ To make the notation more convenient, we assume w.l.o.g. that for all our models, all latent chains start in a dummy state that is in the 0 state, e.g. for the HMM s_0 = 0, and for the FHMM s_0^(m) = 0 for all m.
Figure 1: The Hidden Markov Model. Figure 2: The Factorial Hidden Markov Model.
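The HMM factorization of equation (1) can be made concrete with a toy sketch; the transition matrix and the Gaussian emission means below are hypothetical values, chosen only to exercise the transition and emission terms.

```python
import numpy as np

rng = np.random.default_rng(4)
K, T = 3, 5
# Toy HMM: stochastic transition matrix pi and unit-variance Gaussian emissions.
pi = rng.dirichlet(np.ones(K), size=K)   # pi[i, j] = p(s_t = j | s_{t-1} = i)
mu = np.array([-2.0, 0.0, 2.0])          # state-dependent emission means theta_s

def sample(T):
    """Generate (s_{1:T}, y_{1:T}); the dummy start state is s_0 = 0."""
    s, y, prev = [], [], 0
    for _ in range(T):
        st = rng.choice(K, p=pi[prev])
        s.append(st)
        y.append(rng.normal(mu[st], 1.0))
        prev = st
    return s, y

def log_joint(s, y):
    """log p(y_{1:T}, s_{1:T}): sum of log pi_{s_{t-1},s_t} + log F(y_t; theta_{s_t})."""
    lp, prev = 0.0, 0
    for st, yt in zip(s, y):
        lp += np.log(pi[prev, st]) - 0.5 * ((yt - mu[st]) ** 2 + np.log(2 * np.pi))
        prev = st
    return lp

s, y = sample(T)
```

The FHMM then replaces the single chain s with M parallel binary chains whose joint state parameterizes the emission model.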
The IBP can be combined with distributions over real numbers or integers to make the features useful for practical problems. In this work, we derive the basic building block for nonparametric Bayesian factor models for time series, which we call the Markov Indian Buffet Process (mIBP). Using this distribution we build a nonparametric extension of the FHMM which we call the Infinite Factorial Hidden Markov Model (iFHMM). This construction allows us to learn a factorial representation for time series. In the next section, we develop the novel and generic nonparametric mIBP distribution. Section 3 describes how to use the mIBP to build the iFHMM, which in turn can be used to perform independent component analysis on time series data. Section 4 shows results of our application of the iFHMM to a blind source separation problem. Finally, we conclude with a discussion in section 5.

2 The Markov Indian Buffet Process
Similar to the IBP, we define a distribution over binary matrices to model whether a feature at time t is on or off. In this representation, rows correspond to timesteps and columns to features or Markov chains. We want the distribution over matrices to satisfy the following two properties: (1) the potential number of columns (representing latent features) should be able to be arbitrarily large; (2) the rows (representing timesteps) should evolve according to a Markov process. Below, we formally derive the mIBP distribution in two steps: first, we describe a distribution over binary matrices with a finite number of columns, choosing the hyperparameters carefully so that we can easily integrate out the parameters of the model. In a second phase, we take the limit as the number of features goes to infinity, in a manner analogous to [7]'s derivation of infinite mixtures.

2.1 A finite model
Let S represent a binary matrix with T rows (datapoints) and M columns (features). s_tm represents the hidden state at time t for Markov chain m.
Each Markov chain evolves according to the transition matrix
W^(m) = [ 1 − a_m   a_m
          1 − b_m   b_m ]   (3)
where W^(m)_ij = p(s_{t+1,m} = j | s_tm = i). We give the parameters of W^(m) the distributions a_m ∼ Beta(α/M, 1) and b_m ∼ Beta(γ, δ). Each chain starts with a dummy zero state s_0m = 0. The hidden state sequence for chain m is generated by sampling T steps from a Markov chain with transition matrix W^(m). Summarizing, the generative specification for this process is, for all m ∈ {1, 2, ..., M}:
a_m ∼ Beta(α/M, 1),   b_m ∼ Beta(γ, δ),   (4)
s_0m = 0,   s_tm ∼ Bernoulli(a_m^{1 − s_{t−1,m}} · b_m^{s_{t−1,m}}).
Next, we evaluate the probability of the state matrix S with the transition matrix parameters W^(m) marginalized out. We introduce the following notation: let c00_m, c01_m, c10_m, c11_m be the number of 0→0, 0→1, 1→0 and 1→1 transitions, respectively, in binary chain m (including the transition from the dummy state to the first state). We can then write
p(S|a, b) = ∏_{m=1}^M (1 − a_m)^{c00_m} a_m^{c01_m} (1 − b_m)^{c10_m} b_m^{c11_m}.   (5)
We integrate out a and b with respect to the conjugate priors defined in equation (4) and find
p(S|α, γ, δ) = ∏_{m=1}^M [ (α/M) · Γ(α/M + c01_m) Γ(c00_m + 1) Γ(γ + δ) Γ(δ + c10_m) Γ(γ + c11_m) ] / [ Γ(α/M + c00_m + c01_m + 1) Γ(γ) Γ(δ) Γ(γ + δ + c10_m + c11_m) ],   (6)
where Γ(x) is the Gamma function.

2.2 Taking the infinite limit
Analogous to the IBP, we compute the limit for M → ∞ of the finite model in equation (6). The probability of a single matrix in the limit as M → ∞ is zero. This is not a problem since we are only interested in the probability of a whole class of matrices, namely those matrices that can be transformed into each other through column permutations. In other words, our factorial model is exchangeable in the columns, as we don't care about the ordering of the features. Hence, we compute the infinite limit for left-ordered form (lof) equivalence classes [6].
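The finite model is small enough to transcribe directly: the sketch below samples S from the generative specification of Eqn 4 and evaluates the marginalized log-probability of Eqn 6, with toy hyperparameter values.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(5)
T, M = 8, 4
alpha, gamma_, delta = 2.0, 1.0, 1.0     # illustrative hyperparameters

# Generative specification (Eqn 4).
a = rng.beta(alpha / M, 1.0, size=M)     # 0 -> 1 probability per chain
b = rng.beta(gamma_, delta, size=M)      # 1 -> 1 probability per chain
S = np.zeros((T, M), dtype=int)
prev = np.zeros(M, dtype=int)            # dummy state s_0m = 0
for t in range(T):
    p_on = np.where(prev == 1, b, a)
    S[t] = (rng.random(M) < p_on).astype(int)
    prev = S[t]

def log_p_S(S):
    """log p(S | alpha, gamma, delta) with a, b marginalized out (Eqn 6)."""
    lp = 0.0
    for m in range(S.shape[1]):
        chain = np.concatenate([[0], S[:, m]])   # prepend the dummy state
        c = np.zeros((2, 2))                     # c[i, j] = number of i -> j transitions
        for i, j in zip(chain[:-1], chain[1:]):
            c[i, j] += 1
        am = alpha / M
        lp += (np.log(am) + lgamma(am + c[0, 1]) + lgamma(c[0, 0] + 1)
               - lgamma(am + c[0, 0] + c[0, 1] + 1)
               + lgamma(gamma_ + delta) + lgamma(delta + c[1, 0]) + lgamma(gamma_ + c[1, 1])
               - lgamma(gamma_) - lgamma(delta)
               - lgamma(gamma_ + delta + c[1, 0] + c[1, 1]))
    return lp
```

Working in log space with lgamma keeps the Gamma-function ratios of Eqn 6 numerically stable.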
The left-ordered form of a binary matrix S can be defined as follows: we interpret one column of length T as encoding a binary number: column m encodes the number 2^{T−1} s_1m + 2^{T−2} s_2m + ... + s_Tm. We call the number which a column encodes the history of the column. Then, we denote with M_h the number of columns in the matrix S that share the same history h. We say a matrix is a lof-matrix if its columns are sorted in decreasing history values. Let S be a lof-matrix; then we denote with [S] the set of all matrices that can be transformed into S using only column permutations, and we call [S] the lof-equivalence class. One can check that the number of elements in the lof-equivalence class of S is equal to M! / ∏_{h=0}^{2^T−1} M_h!. We thus find the probability of the equivalence class of S to be
p([S]) = ∑_{S∈[S]} p(S|α, γ, δ)   (7)
= [ M! / ∏_{h=0}^{2^T−1} M_h! ] · ∏_{m=1}^M [ (α/M) · Γ(α/M + c01_m) Γ(c00_m + 1) Γ(γ + δ) Γ(δ + c10_m) Γ(γ + c11_m) ] / [ Γ(α/M + c00_m + c01_m + 1) Γ(γ) Γ(δ) Γ(γ + δ + c10_m + c11_m) ].   (8)
This form allows us to compute a meaningful limit as M → ∞. A writeup of the technical details of this computation can be found on the author's website. The end result has the following form:
lim_{M→∞} p([S]) = [ α^{M_+} / ∏_{h=0}^{2^T−1} M_h! ] · exp{−α H_T} · ∏_{m=1}^{M_+} [ (c01_m − 1)! c00_m! Γ(γ + δ) Γ(δ + c10_m) Γ(γ + c11_m) ] / [ (c00_m + c01_m)! Γ(γ) Γ(δ) Γ(γ + δ + c10_m + c11_m) ],   (9)
where H_T denotes the T-th harmonic number and M_+ denotes the number of Markov chains that switch on at least once between 0 and T, i.e. M_+ is the effective dimension of our model.

2.3 Properties of the distribution
First of all, it is interesting to note from equation (9) that our model is exchangeable in the columns and Markov exchangeable² in the rows. Next, we derive the distribution in equation (9) through a stochastic process that is analogous to the Indian Buffet Process but slightly more complicated for the actors involved. In this stochastic process, T customers enter an Indian restaurant with an infinitely long buffet of dishes organized in a line.
The first customer enters the restaurant and takes a serving from each dish, starting at the left of the buffet and stopping after a Poisson(α) number of dishes as his plate becomes overburdened. A waiter stands near the buffet and takes notes as to how many people have eaten which dishes. The t'th customer enters the restaurant and starts at the left of the buffet. At dish m, he looks at the customer in front of him to see whether he has served himself that dish.

• If so, he asks the waiter how many people have previously served themselves dish m when the person in front of them did (the waiter replies with the number $c^{11}_m$) and how many people did not serve themselves dish m when the person in front of them did (the waiter replies with the number $c^{10}_m$). The customer then serves himself dish m with probability $(c^{11}_m + \gamma)/(\gamma + \delta + c^{10}_m + c^{11}_m)$.

• Otherwise, he asks the waiter how many people have previously served themselves dish m when the person in front of them did not (the waiter replies with the number $c^{01}_m$) and how many people did not serve themselves dish m when the person in front of them did not either (the waiter replies with the number $c^{00}_m$). The customer then serves himself dish m with probability $c^{01}_m/(c^{00}_m + c^{01}_m + 1)$.

The customer then moves on to the next dish and does exactly the same. After the customer has passed all dishes people have previously served themselves from, he tries Poisson(α/t) new dishes. If we denote by $M^{(t)}_1$ the number of new dishes tried by the t'th customer, the probability of any particular matrix being produced by this process is
$$p(S) = \frac{\alpha^{M_+}}{\prod_{t=1}^{T} M^{(t)}_1!}\,\exp\{-\alpha H_T\} \prod_{m=1}^{M_+} \frac{(c^{01}_m - 1)!\; c^{00}_m!\; \Gamma(\gamma + \delta)\,\Gamma(\delta + c^{10}_m)\,\Gamma(\gamma + c^{11}_m)}{(c^{00}_m + c^{01}_m)!\; \Gamma(\gamma)\,\Gamma(\delta)\,\Gamma(\gamma + \delta + c^{10}_m + c^{11}_m)}. \qquad (10)$$
We can recover equation (9) by summing over all possible matrices that can be generated using the Markov Indian Buffet Process that are in the same lof-equivalence class.
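The restaurant process above can be simulated directly. The sketch below is a toy implementation (seeded RNG; helper names are our own): the serving probabilities are the M → ∞ Beta posterior predictives implied by the Gamma-count ratios in equation (8), and each new dish tried by the t'th customer starts with one 0 → 1 switch and t−1 stay-at-0 transitions.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's inversion method, adequate for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_mibp(T, alpha, gamma, delta, seed=0):
    rng = random.Random(seed)
    dishes = []   # per-dish transition counts [c01, c00, c10, c11]
    rows = []     # binary matrix S, one row per customer (time step)
    prev = []     # previous customer's row
    for t in range(1, T + 1):
        row = []
        for m, c in enumerate(dishes):
            c01, c00, c10, c11 = c
            if prev[m] == 1:
                p = (c11 + gamma) / (gamma + delta + c10 + c11)
            else:
                p = c01 / (c00 + c01 + 1)
            s = 1 if rng.random() < p else 0
            # record the observed (previous state -> s) transition
            if prev[m] == 1:
                c[3 if s == 1 else 2] += 1
            else:
                c[0 if s == 1 else 1] += 1
            row.append(s)
        # Poisson(alpha / t) new dishes, each switched on at time t
        k = poisson(alpha / t, rng)
        for _ in range(k):
            dishes.append([1, t - 1, 0, 0])  # one 0->1 switch, t-1 stays at 0
            row.append(1)
        for r in rows:               # pad earlier customers with zeros
            r.extend([0] * k)
        rows.append(row)
        prev = row
    return rows
```

Running `sample_mibp(T, alpha, gamma, delta)` yields a T-row binary matrix whose number of columns (the effective dimension $M_+$) is random, in line with the Poisson($\alpha H_T$) result of section 2.3.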
It is straightforward to check that there are exactly $\prod_{t=1}^{T} M^{(t)}_1! \,/\, \prod_{h=0}^{2^T-1} M_h!$ of these. Multiplying this by equation (10) we recover equation (9). This construction shows that the effective dimension of the model ($M_+$) follows a Poisson($\alpha H_T$) distribution.

2.4 A stick breaking representation

Although the representation above is convenient for theoretical analysis, it is not very practical for inference. Interestingly, we can adapt the stick breaking construction for the IBP [8] to the mIBP. This will be very important for the iFHMM as it will allow us to use a combination of slice sampling and dynamic programming to do inference. The first step in the stick breaking construction is to find the distribution of $a_{(1)} > a_{(2)} > \cdots$, the order statistics of the parameters a. Since the distribution of the variables $a_m$ in our model is identical to the distribution of the feature parameters in the IBP model, we can use the result in [8] that these variables have the following distribution
$$a_{(1)} \sim \mathrm{Beta}(\alpha, 1), \qquad (11)$$
$$p(a_{(m)} \mid a_{(m-1)}) = \alpha\, a_{(m-1)}^{-\alpha}\, a_{(m)}^{\alpha - 1}\, \mathbb{I}(0 \le a_{(m)} \le a_{(m-1)}). \qquad (12)$$
The variables $b_m$ are all independent draws from a Beta(γ, δ) distribution which is independent of M. Hence if we denote by $b_{(m)}$ the b variable corresponding to the m'th largest a value (in other words: the b value corresponding to $a_{(m)}$), then it follows that $b_{(m)} \sim \mathrm{Beta}(\gamma, \delta)$.

²A sequence is Markov exchangeable if its distribution is invariant under permutations of the transitions.

Figure 3: The Infinite Factorial Hidden Markov Model

3 The Infinite Factorial Hidden Markov Model

In this section, we explain how to use the mIBP as a building block in a full blown probabilistic model. The mIBP provides us with a matrix S which we interpret as an arbitrarily large set of parallel Markov chains. First we augment our binary representation with a more expressive component which can describe feature-specific properties.
We do this by introducing a base distribution H from which we sample a parameter $\theta_m \sim H$ for each Markov chain. This is a rather flexible setup as the base distribution can introduce a parameter for every chain and every timestep, which we will illustrate in section 3.1. Now that we have a model with a more expressive latent structure, we want to add a likelihood model F which describes the distribution over the observations conditional on the latent structure. Formally, $F(y_t \mid \theta, s_{t\cdot})$ describes the probability of generating $y_t$ given the model parameters θ and the current latent feature state $s_{t\cdot}$. We note that there are two important conditions which the likelihood must satisfy in order for the limit $M \to \infty$ to be valid: (1) the likelihood must be invariant to permutations of the features; (2) the likelihood cannot depend on $\theta_m$ if $s_{tm} = 0$. Figure 3 shows the graphical model for our construction, which we call the Infinite Factorial Hidden Markov Model (iFHMM). In the following section, we describe one particular choice of base distribution and likelihood model which performs Independent Component Analysis on time series.

3.1 The Independent Component Analysis iFHMM

Independent Component Analysis [9] (ICA) means different things to different people. Originally invented as an algorithm to unmix a signal into a set of independent signals, it will be more insightful for our purpose to think of ICA in terms of the probabilistic model which we describe below. As we explain in detail in section 4, we are interested in ICA to solve the blind source separation problem. Assume that M signals are represented through the vectors $x_m$; grouping them, we can represent the signals using the matrix $X = [x_1 x_2 \cdots x_M]$. Next, we linearly combine the signals using a mixing matrix W to generate the observed signal $Y = XW$. Additionally, we assume added IID Normal(0, $\sigma_Y^2$) noise: $Y = XW + \epsilon$. A variety of fast algorithms exist which unmix the observations Y and recover the signal X.
However, crucial to these algorithms is that the number of signals is known in advance. [10] used the IBP to design the Infinite Independent Component Analysis (iICA) model, which learns an appropriate number of signals from exchangeable data. Our ICA iFHMM model extends the iICA to time series. The ICA iFHMM generative model can be described as follows: we sample $S \sim \mathrm{mIBP}$ and pointwise multiply it (denoted by ⊙) with a signal matrix X. Each entry in X is an IID sample from a Laplace(0, 1) distribution. One could choose many other distributions for X, but since in section 4 we will model speech data, which is known to be heavy tailed, the Laplace distribution is a convenient choice. Speakers will be speaking infrequently, so pointwise multiplying a heavy tailed distribution with a sparse binary matrix achieves our goal of producing a sparse heavy tailed distribution. Next, we introduce a mixing matrix W which has a row for each signal in $S \odot X$ and a column for each observed dimension in Y. The entries of W are sampled IID from a Normal(0, $\sigma_W^2$) distribution. Finally, we combine the signal and mixing matrices as in the finite case to form the observation matrix Y: $Y = (S \odot X)W + \epsilon$, where ϵ is Normal(0, $\sigma_Y^2$) IID noise for each element. In terms of the general iFHMM model defined in the previous section, the base distribution H is a joint distribution over columns of X and rows of W. The likelihood F performs the pointwise multiplication, mixes the signals and adds the noise. It can be checked that our likelihood satisfies the two technical conditions for proper iFHMM likelihoods described in section 3.

3.2 Inference

Inference for nonparametric models requires special treatment as the potentially unbounded dimensionality of the model makes it hard to use exact inference schemes. Traditionally, in nonparametric factor models inference is done using Gibbs sampling, sometimes augmented with Metropolis-Hastings steps to improve performance.
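As a concrete reference for the inference discussion that follows, the ICA iFHMM generative process of section 3.1 can be sketched as follows (a minimal sketch with a seeded RNG; the function name and the observed dimension `D` are our own illustration):

```python
import numpy as np

def ica_ifhmm_generate(S, sigma_W=1.0, sigma_Y=0.1, D=3, seed=0):
    # S: (T, M) binary on/off matrix, e.g. a draw from the mIBP.
    # X: signal values, IID Laplace(0, 1); W: mixing matrix, IID Normal(0, sigma_W^2).
    rng = np.random.default_rng(seed)
    T, M = S.shape
    X = rng.laplace(0.0, 1.0, size=(T, M))
    W = rng.normal(0.0, sigma_W, size=(M, D))
    # Y = (S . X) W + eps, with IID Normal(0, sigma_Y^2) observation noise
    Y = (S * X) @ W + rng.normal(0.0, sigma_Y, size=(T, D))
    return X, W, Y
```

Because S is sparse and X is heavy tailed, the latent signal $S \odot X$ is sparse and heavy tailed, as the text requires for speech.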
However, it is commonly known that naive Gibbs sampling in a time series model is notoriously slow due to potentially strong couplings between successive time steps [11]. In the context of the infinite hidden Markov model, a solution was recently proposed in [12], where a slice sampler adaptively truncates the infinite dimensional model, after which dynamic programming performs exact inference. Since a stick breaking construction for the iFHMM is readily available, we can use a very similar approach for the iFHMM. The central idea is the following: we introduce an auxiliary slice variable µ with the following distribution
$$\mu \sim \mathrm{Uniform}\left(0, \min_{m : \exists t,\, s_{tm}=1} a_m\right). \qquad (13)$$
It is not essential that we sample from the uniform distribution; in fact, for some of our experiments we use the more flexible Beta distribution. The resulting joint distribution is
$$p(\mu, a, b, S) = p(\mu \mid a, S)\, p(a, b, S). \qquad (14)$$
It is clear from the equation above that one recovers the original mIBP distribution when we integrate out µ. However, when we condition the joint distribution on µ we find
$$p(S \mid Y, \mu, a, b) \propto p(S \mid Y, a, b)\, \frac{\mathbb{I}(0 \le \mu \le \min_{m : \exists t,\, s_{tm}=1} a_m)}{\min_{m : \exists t,\, s_{tm}=1} a_m}, \qquad (15)$$
which forces all columns of S for which $a_m < \mu$ to be in the all-zero state. Since there can only be a finite number of $a_m > \mu$, this effectively implies that we need only resample a finite number of columns of S. We now describe our algorithm in the context of the ICA iFHMM: we start with an initial S matrix and sample a, b. Next, conditional on our initial S and the data Y, we sample the ICA parameters X and W. We then start an iterative sampling scheme which involves the following steps:

1. We sample the auxiliary slice variable µ. This might involve extending the representation of S, X and W.
2. For all the represented features, we sample S, X and W.
3. We resample the hyperparameters (σY, σW, α, γ, δ) of our model.
4. We compact our representation by removing all unused features.
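Step 1 above combines the stick breaking draws of section 2.4 with the slice-based truncation of equation (13). A minimal sketch (function names and the seeded RNG are our own illustration, not the paper's code):

```python
import random

def sample_sticks(n, alpha, rng):
    # a(1) ~ Beta(alpha, 1); a(m) = a(m-1) * (Beta(alpha, 1) draw), which has
    # density alpha * a(m-1)^(-alpha) * a(m)^(alpha-1) on [0, a(m-1)], eq. (12)
    a, prev = [], 1.0
    for _ in range(n):
        prev *= rng.random() ** (1.0 / alpha)  # Beta(alpha, 1) via inverse CDF
        a.append(prev)
    return a

def slice_truncate(a, active, rng):
    # mu ~ Uniform(0, min over active chains of a_m); chains with a_m < mu are
    # forced to the all-zero state, so only finitely many columns are resampled.
    mu = rng.uniform(0.0, min(a[m] for m in active))
    return mu, [m for m, am in enumerate(a) if am >= mu]

rng = random.Random(0)
a = sample_sticks(8, alpha=2.0, rng=rng)
mu, represented = slice_truncate(a, active=[0, 1], rng=rng)
```

Every chain that is currently switched on has $a_m \ge \mu$ by construction, so the truncation never discards an active feature.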
We experimented with 3 different algorithms for step 2. The first, a naive Gibbs sampler, did not perform well, as we expected. The second algorithm, which we used for our experiments, is a blocked Gibbs sampler which fixes all but one column of S and runs a forward-filtering backward-sampling sweep on the remaining column. This allows us to analytically integrate out one column of X in the dynamic program and resample it from the posterior afterwards. W can be sampled exactly conditional on X, S and Y. A third algorithm runs dynamic programming on multiple chains at once. We originally designed this algorithm as it has the potential to merge two features in one sweep. However, we found that because we cannot integrate out X and W in this setting, the inference was not faster than our second algorithm. Note that because the bulk of the computation is used for estimating X and W, the dynamic programming based algorithms are effectively as fast as the naive Gibbs sampler. A prototype implementation of the iFHMM sampler in Matlab or .NET can be obtained from the first author.

Figure 4: Blind speech separation experiment; figures represent which speaker is speaking at a certain point in time: columns are speakers, rows are white if the speaker is talking and black otherwise. Panels: (a) Ground Truth; (b) ICA iFHMM and (c) iICA for the 10 microphone experiment; (d) ICA iFHMM and (e) iICA for the 3 microphone experiment.

4 Experiments

To test our model and inference algorithms, we address a blind speech separation task, also known as the cocktail party problem. More specifically, we record multiple people who are simultaneously speaking, using a set of microphones. Given the mixed speech signals, the goal is to separate out the individual speech signals.
Key to our presentation is that we want to illustrate that using nonparametric methods, we can learn the number of speakers from a small amount of data. Our first experiment learns to recover the signals in a setting with more microphones than speakers; our second experiment uses fewer microphones than speakers. The experimental setup was the following: we downloaded data from 5 speakers from the Speech Separation Challenge website³. The data for each speaker consists of 4 sentences, which we appended with random pauses between each sentence. Figure 4(a) illustrates which person is talking at what point in time. Next, we artificially mix the data 10 times. Each mixture is a linear combination of each of the 5 speakers using Uniform(0, 1) mixing weights. We centered the data to have zero mean and unit variance and added IID Normal(0, $\sigma_Y^2$) noise with $\sigma_Y = 0.3$. In our first experiment we compared the ICA iFHMM with the iICA model using all 10 microphones. We subsample the data so we learn from 245 datapoints. We initialized the samplers for both models with an initial S matrix with 10 features, 5% random entries on. We use a Gamma(1.0, 4.0) prior on α. In both models, we use an InverseGamma(2.0, 1.0) prior for σY and σW. Finally, for the iFHMM, we chose a Gamma(10.0, 1.0) prior on γ and a Gamma(1.0, 1.0) prior on δ to encode our belief that people speak for larger stretches of time, say the time to pronounce a sentence. We ran the samplers for 5000 iterations and then gathered 20 samples, one every 20 iterations. For both the ICA iFHMM and iICA models, we average the 20 samples and rearrange the features to have maximal overlap with the ground truth features. Figure 4(b) shows that the ICA iFHMM model recognizes that the data was generated from 5 speakers. Visual inspection of the recovered S matrix also shows that the model discovers who is speaking at what time. Figure 4(c) illustrates the results of the iICA model on the same data.
Although the model discovers some structure in the data, it fails to find the right number of speakers (it finds 9) and does a poor job of discovering which speaker is active at which time. We computed the average mutual information between the 5 columns of the true S matrix and the first 5 columns of the recovered S matrices. We find that the iFHMM has an average mutual information of 0.296 compared to 0.068 for the iICA model. The difference between the two models is strictly limited to the difference between using the IBP versus the mIBP. We want to emphasize that although one could come up with ad hoc heuristics to smooth the iICA results, the ICA iFHMM is a principled probabilistic model that does a good job at comparable computational cost. In a second experiment, we chose to perform blind speech separation using only the first 3 microphones. We subsampled a noiseless version of the data to get 489 datapoints. We ran both the ICA iFHMM and iICA inference algorithms using exactly the same settings as in the previous experiment. Figures 4(d) and 4(e) show the average of 20 samples, rearranged to match the ground truth. In this setting both methods fail to identify the number of speakers, although the ICA iFHMM clearly performs better. The ICA iFHMM finds one signal too many: the spurious signal is very similar to the third signal, which suggests that the error is a problem of the inference algorithm and not so much of the model itself. The iICA on the other hand performs poorly: it is very hard to find any structure in the recovered Z matrix. We compared the mutual information as described above and find that the iFHMM has a mutual information of 0.091 compared to 0.028 for the iICA model.

³http://www.dcs.shef.ac.uk/~martin/SpeechSeparationChallenge.htm

5 Discussion

The success of the Hidden Markov Model set off a wealth of extensions to adapt it to particular situations.
[2] introduced a factorial hidden Markov model which explicitly models dynamic latent features, while in [13] a nonparametric version of the Hidden Markov Model was presented. In this paper we "complete the square" by presenting a nonparametric Factorial Hidden Markov Model. We introduced a new stochastic process for latent feature representation of time series called the Markov Indian Buffet Process. We showed how this stochastic process can be used to build a nonparametric extension of the FHMM which we call the iFHMM. Another issue which deserves further exploration is inference: in [2] it was found that a structured variational method provides a good balance between accuracy and computational effort. An interesting open problem is whether we can adapt the structured variational method to the iFHMM. Finally, analogous to the two-parameter IBP [14], we would like to add one more degree of flexibility to control the 0 → 1 transition probability more finely. Although the derivation of the mIBP with this extra parameter is straightforward, we as yet lack a stick breaking construction for this model, which is crucial for our inference scheme.

Acknowledgments

We kindly acknowledge David Knowles for discussing the generalized Amari error and A. Taylan Cemgil for his suggestions on blind source separation. Jurgen Van Gael is supported by a Microsoft Research PhD scholarship; Zoubin Ghahramani is also in the Machine Learning department, CMU.

References

[1] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, pp. 257–286, 1989.
[2] Z. Ghahramani and M. I. Jordan, "Factorial hidden Markov models," Machine Learning, vol. 29, pp. 245–273, 1997.
[3] P. Wang and Q. Ji, "Multi-view face tracking with factorial and switching HMM," in Proceedings of the Seventh IEEE Workshops on Application of Computer Vision, pp. 401–406, IEEE Computer Society, 2005.
[4] B. Logan and P.
Moreno, "Factorial HMMs for acoustic modeling," 1998.
[5] K. Duh, "Joint labeling of multiple sequences: A factorial HMM approach," in 43rd Annual Meeting of the Association for Computational Linguistics (ACL) – Student Research Workshop, 2005.
[6] T. L. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," Advances in Neural Information Processing Systems, vol. 18, pp. 475–482, 2006.
[7] R. M. Neal, "Bayesian mixture modeling," Maximum Entropy and Bayesian Methods, 1992.
[8] Y. W. Teh, D. Görür, and Z. Ghahramani, "Stick-breaking construction for the Indian buffet process," Proceedings of the International Conference on Artificial Intelligence and Statistics, vol. 11, 2007.
[9] A. Hyvärinen and E. Oja, "Independent component analysis: Algorithms and applications," Neural Networks, vol. 13, pp. 411–430, 2000.
[10] D. Knowles and Z. Ghahramani, "Infinite sparse factor analysis and infinite independent components analysis," Lecture Notes in Computer Science, vol. 4666, p. 381, 2007.
[11] S. L. Scott, "Bayesian methods for hidden Markov models: Recursive computing in the 21st century," Journal of the American Statistical Association, vol. 97, pp. 337–351, Mar. 2002.
[12] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani, "Beam sampling for the infinite hidden Markov model," in The 25th International Conference on Machine Learning, vol. 25, (Helsinki), 2008.
[13] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen, "The infinite hidden Markov model," Advances in Neural Information Processing Systems, vol. 14, pp. 577–584, 2002.
[14] Z. Ghahramani, T. L. Griffiths, and P. Sollich, "Bayesian nonparametric latent feature models," Bayesian Statistics, vol. 8, 2007.
2008
Sparse Signal Recovery Using Markov Random Fields

Volkan Cevher, Rice University, volkan@rice.edu
Marco F. Duarte, Rice University, duarte@rice.edu
Chinmay Hegde, Rice University, chinmay@rice.edu
Richard G. Baraniuk, Rice University, richb@rice.edu

Abstract

Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.

1 Introduction

The Shannon/Nyquist sampling theorem tells us that in order to preserve information when uniformly sampling a signal we must sample at least two times faster than its bandwidth. In many important and emerging applications, the resulting Nyquist rate can be so high that we end up with too many samples and must compress in order to store or transmit them. In other applications, including imaging systems and high-speed analog-to-digital converters, increasing the sampling rate or density beyond the current state-of-the-art is very expensive. A transform compression system reduces the effective dimensionality of an N-dimensional signal by re-representing it in terms of a sparse expansion in some basis (for example, the discrete cosine transform for JPEG). By sparse we mean that only K ≪ N of the basis coefficients are nonzero. The new theory of compressive sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse signals [1, 2].
In CS, we measure not periodic signal samples but rather inner products with M < N known measurement vectors; random measurement vectors play a starring role. We then recover the signal by searching for the sparsest signal that agrees with the measurements. Research in CS to date has focused on reducing both the number of measurements M (as a function of N and K) and the computational complexity of the recovery algorithm. Today's state-of-the-art CS systems can recover K-sparse and more general compressible signals using M = O(K log(N/K)) measurements using polynomial-time linear programming or greedy algorithms. While such sub-Nyquist measurement rates are impressive, our contention in this paper is that for CS to truly live up to its name it must more fully leverage concepts from state-of-the-art compression algorithms. In virtually all such algorithms, the key ingredient is a signal model that goes beyond simple sparsity by providing a model for the basis coefficient structure. For instance, JPEG does not only use the fact that most of the DCT coefficients of a natural image are small. Rather, it also exploits the fact that the values and locations of the large coefficients have a particular structure that is characteristic of natural images. Coding this structure using an appropriate model enables JPEG and other similar algorithms to compress images close to the maximum amount possible, and significantly better than a naive coder that just assigns bits to each large coefficient independently. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model [3]. We use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients also cluster together. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), performs rapid and numerically stable recovery of MRF-modeled signals using far fewer measurements than standard algorithms.
The organization of the paper is as follows. In Sections 2 and 3, we briefly review the CS and MRF theories. We develop LaMP in Section 4 and present experimental results in Section 5 using both simulated and real world data. We conclude by offering our perspective on the future directions of model-based CS research in Section 6.

2 Compressive sensing: From sparsity to structured sparsity

Sparse signal recovery. Any signal $x \in \mathbb{R}^N$ can be represented in terms of N coefficients $\{\theta_i\}$ in a basis $\{\psi_i\}_{i=1}^N$; stacking the $\psi_i$ as columns into the $N \times N$ matrix Ψ, we can write succinctly that $x = \Psi\theta$. We say that x has a sparse representation if only $K \ll N$ entries of θ are nonzero, and we denote by $\Omega_K$ the set of $\binom{N}{K}$ possible supports for such K-sparse signals. We say that x is compressible if the sorted magnitudes of the entries of θ decay rapidly enough that it can be well approximated as K-sparse. In Compressive Sensing (CS), the signal is not acquired by measuring x or θ directly. Rather, we measure the M < N linear projections $y = \Phi x = \Phi\Psi\theta$ using the $M \times N$ matrix Φ. In the sequel, without loss of generality, we focus on two-dimensional image data and assume that $\Psi = I$ (the $N \times N$ identity matrix) so that $x = \theta$. The most commonly used criterion for evaluating the quality of a CS measurement matrix is the restricted isometry property (RIP). A matrix Φ satisfies the K-RIP if there exists a constant $\delta_K > 0$ such that for all K-sparse vectors x,
$$(1 - \delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_K)\|x\|_2^2. \qquad (1)$$
The recovery of the set of significant coefficients $\theta_i$ is achieved using optimization: we search for the sparsest θ that agrees with the measurements y. While in principle recovery is possible using a matrix that has the 2K-RIP with $\delta_{2K} < 1$, such an optimization is combinatorially complex (NP-complete) and numerically unstable.
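The measurement setup above (with Ψ = I, so x = θ) is easy to illustrate numerically: a normalized random Gaussian Φ approximately preserves the squared norm of a K-sparse x, as the RIP requires. The constants, names, and seed below are illustrative, not from the paper:

```python
import numpy as np

def measure_sparse(N=256, K=8, seed=0):
    rng = np.random.default_rng(seed)
    # pick M on the order of K log(N/K), the standard CS measurement count
    M = int(4 * K * np.log(N / K))
    # build a K-sparse signal with a random support
    x = np.zeros(N)
    support = rng.choice(N, size=K, replace=False)
    x[support] = rng.normal(size=K)
    # normalized i.i.d. Gaussian measurement matrix
    Phi = rng.normal(size=(M, N)) / np.sqrt(M)
    return x, Phi, Phi @ x

x, Phi, y = measure_sparse()
ratio = np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2
# for a matrix satisfying the K-RIP, this ratio lies in [1 - delta_K, 1 + delta_K]
```

With M ≈ 110 rows for N = 256 and K = 8, the ratio concentrates near 1, which is exactly the behavior equation (1) formalizes.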
If we instead use a matrix that has the 3K-RIP with δ3K < 1/2, then numerically stable recovery is possible in polynomial time using either a linear program [1,2] or a greedy algorithm [4]. Intriguingly, a random Gaussian or Bernoulli matrix works with high probability, leading to a randomized acquisition protocol instead of uniform sampling. Structured sparsity. While many natural and manmade signals and images can be described to the first-order as sparse or compressible, their sparse supports (set of nonzero coefficients) often have an underlying order. This order plays a central role in the transform compression literature, but it has barely been explored in the CS context [5,6]. The theme of this paper is that by exploiting a priori information on coefficient structure in addition to signal sparsity, we can make CS better, stronger, and faster. Figure 1 illustrates a real-world example of structured sparse support in a computer vision application. Figure 1(b) is a background subtracted image computed from a video sequence of a parking lot with two moving people (one image frame is shown in Figure 1(a)). The moving people form the foreground (white in (b)), while the rest of the scene forms the background (black in (b)). The background subtraction was computed from CS measurements of the video sequence. Background subtracted images play a fundamental role in making inferences about objects and activities in a scene and, by nature, they have structured spatial sparsity corresponding to the foreground innovations. In other words, compared to the scale of the scene, the foreground innovations are usually not only sparse but also clustered in a distinct way, e.g., corresponding to the silhouettes of humans and vehicles. Nevertheless, this clustering property is not exploited by current CS recovery algorithms. Probabilistic RIP. The RIP treats all possible K-sparse supports equally. 
However, if we incorporate a probabilistic model on our signal supports and consider only the signal supports with the highest likelihoods, then we can potentially do much better in terms of the number of measurements required for stable recovery. We say that Φ satisfies the (K, ϵ)-probabilistic RIP (PRIP) if there exists a constant $\delta_K > 0$ such that for a K-sparse signal x generated by a specified probabilistic signal model, (1) holds with probability at least $1 - \epsilon$ over the signal probability space. We propose a preliminary result on the number of random measurements needed under this new criterion; this is a direct consequence of Theorem 5.2 of [8]. (See also [9] for related results.)

Figure 1: A camera surveillance image (a) with the background subtracted image (b) recovered using compressive measurements of the scene. The background subtracted image has resolution N = 240 × 320 and sparsity K = 390. (c) A random K = 390 sparse image in N = 240 × 320 dimensions. The probability of image (b) under the Ising model is approximately $10^{856}$ times greater than the probability of image (c).

Lemma 1. Suppose that M, N, and $\delta \in [0, 1]$ are given and that the signal x is generated by a known probabilistic model. Let $\Omega_{K,\epsilon} \subseteq \Omega_K$ denote the smallest set of supports for which the probability that a K-sparse signal x has $\mathrm{supp}(x) \notin \Omega_{K,\epsilon}$ is less than ϵ, and denote $D = |\Omega_{K,\epsilon}|$. If Φ is a matrix with normalized i.i.d. Gaussian or Bernoulli/Rademacher (±1) random entries, then Φ has the (K, ϵ)-PRIP with probability at least $1 - e^{-c_2 M}$ if $M \ge c_1(K + \log D)$, where $c_1, c_2 > 0$ depend only on the PRIP constant $\delta_K$.

To illustrate the significance of the above lemma, consider the following probabilistic model for an N-dimensional, K-sparse signal. We assume that the locations of the non-zeros follow a homogeneous Poisson process with rate $\lambda = -\log(\epsilon/K)\, N^{-\alpha}$, where $\alpha \ll 1$.
Thus, a particular non-zero coefficient occurs within a distance of $N^\alpha$ of its predecessor with probability $1 - \epsilon/K$. We determine the size of the likely K-sparse support set $\Omega_{K,\epsilon}$ under this particular signal model using a simple counting argument. The location of the first non-zero coefficient is among the first $N^\alpha$ indices with probability $1 - \epsilon/K$. After fixing the location of the first coefficient, the location of the second coefficient is among the $N^\alpha$ indices immediately following the first location with probability $1 - \epsilon/K$. Proceeding this way, after the locations of the first $j - 1$ coefficients have been fixed, the j'th non-zero coefficient is among $N^\alpha$ candidate locations with probability $1 - \epsilon/K$. In this way, we obtain a set of supports $\Omega_{K,\epsilon}$ of size $N^{\alpha K}$ that will occur with probability $(1 - \epsilon/K)^K > 1 - \epsilon$. Thus, for the (K, ϵ)-PRIP to hold for a random matrix, the matrix must have $M = cK(1 + \alpha \log N)$ rows, as compared to the $cK \log(N/K)$ rows required for the standard K-RIP to hold. When α is on the order of $(\log N)^{-1}$, the number of measurements required and the complexity of the solution method grow essentially linearly in K, which is a considerable improvement over the best possible $M = O(K \log(N/K))$ measurements required without such a priori information.

3 Graphical models for compressive sensing

Clustering of the nonzero coefficients in a sparse signal representation can be realistically captured by a probabilistic graphical model such as a Markov random field (MRF); in this paper we will focus for concreteness on the classical Ising model [10].

Support model. We begin with an Ising model for the signal support. Suppose we have a K-sparse signal $x \in \mathbb{R}^N$ whose support is represented by $s \in \{-1, 1\}^N$ such that $s_i = -1$ when $x_i = 0$ and $s_i = 1$ when $x_i \ne 0$. The probability density function (PDF) of the signal support can be modeled using a graph $G_s = (V_s, E_s)$, where $V_s = \{1, \ldots$
, N\}$ denotes a set of N vertices – one for each of the support indices – and $E_s$ denotes the set of edges connecting support indices that are spatial neighbors (see Figure 2(a)). The contribution of the interaction between two elements $\{s_i, s_j\}$ in the support of x is controlled by the coefficient $\lambda_{ij} > 0$. The contribution of each element $s_i$ is controlled by a coefficient $\lambda_i$, resulting in the following PDF for the sparse support s:
$$p(s; \lambda) = \exp\left\{ \sum_{(i,j) \in E_s} \lambda_{ij} s_i s_j + \sum_{i \in V_s} \lambda_i s_i - Z_s(\lambda) \right\}, \qquad (2)$$
where $Z_s(\lambda)$ is a strictly convex partition function with respect to λ that normalizes the distribution so that it integrates to one. The parameter vector λ quantifies our prior knowledge regarding the signal support s and consists of the edge interaction parameters $\lambda_{ij}$ and the vertex bias parameters $\lambda_i$. These parameters can be learned from data using ℓ₁-minimization techniques [11].

Figure 2: Example graphical models: (a) Ising model for the support, (b) Markov random field model for the resulting coefficients, (c) Markov random field with CS measurements.

The Ising model enforces coefficient clustering. For example, compare the clustered sparsity of the real background subtracted image in Figure 1(b) with the dispersed "independent" sparsity of the random image in Figure 1(c). While both images (b) and (c) are equally sparse, under a trained Ising model ($\lambda_{ij} = 0.45$ and $\lambda_i = 0$), the image (b) is approximately $10^{856}$ times more likely than the image (c).

Signal model. Without loss of generality, we focus on 2D images that are sparse in the space domain, as in Figure 1(b). Leveraging the Ising support model from above, we apply the MRF graphical model in Figure 2(b) for the pixel coefficient values. Under this model, the support is controlled by an Ising model, and the signal values are independent given the support. We now develop a joint PDF for the image pixel values x, the support labels s, and the CS measurements y.
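Before moving to the joint PDF, the clustering preference expressed by equation (2) is easy to check numerically: under a 4-connected Ising model with λij = 0.45 and λi = 0, a blocked support has a much higher unnormalized log-probability than an equally sparse dispersed one. The helper name, grid size, and seed below are our own illustration:

```python
import numpy as np

def ising_logit(s, lam_edge=0.45, lam_vertex=0.0):
    # unnormalized log-probability of a support s in {-1, +1}^(H x W) under a
    # 4-connected Ising model: sum_{(i,j) in E} lam_edge s_i s_j + sum_i lam_vertex s_i
    horiz = np.sum(s[:, :-1] * s[:, 1:])
    vert = np.sum(s[:-1, :] * s[1:, :])
    return lam_edge * (horiz + vert) + lam_vertex * np.sum(s)

rng = np.random.default_rng(0)
H = W = 32
K = 64
# clustered support: a single 8x8 block switched on
clustered = -np.ones((H, W))
clustered[4:12, 4:12] = 1
# dispersed support: the same K nonzeros at random locations
dispersed = -np.ones((H, W))
dispersed.flat[rng.choice(H * W, size=K, replace=False)] = 1
```

Here `ising_logit(clustered)` exceeds `ising_logit(dispersed)` because the block support only pays the disagreement penalty along its perimeter, mirroring the $10^{856}$ gap the text reports for the full-size images.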
We begin with the support PDF p(s) from (2) and assume that we are equipped with a sparsity-promoting PDF p(x|s) for x given s. The most commonly used PDF is the Laplacian density (which is related to the ℓ1-norm of x); however, other reference priors, such as generalized Gaussians that are related to the ℓp-norm of x, p < 1, can be more effective [12]. We assume that the measurements y are corrupted by i.i.d. Gaussian noise, i.e., p(y|x) = N(y | Φx, σ²I), where σ² is the unknown noise variance. From Figure 2(c), it is easy to show that, given the signal x, the signal support s and the compressive measurements y are independent, using the D-separation property of graphs [13]. Hence, the joint distribution of the vertices in the graph in Figure 2(b) can be written as

$$p(z) = p(s, x, y) = p(s, x)\,p(y|s, x) = p(s)\,p(x|s)\,p(y|x), \quad (3)$$

where z = [s^T, x^T, y^T]^T. Then, (3) can be explicitly written as

$$p(z) \propto \exp\Big( \sum_{(i,j)\in E_s} \lambda_{ij} s_i s_j + \sum_{i\in V_s} \big[\lambda_i s_i + \log p(x_i|s_i)\big] - \frac{1}{2\sigma^2}\,\|y - \Phi x\|_2^2 \Big). \quad (4)$$

4 Lattice matching pursuit

Using the coefficient graphical model from Section 3, we are now equipped to develop a new model-based CS signal recovery algorithm. Lattice Matching Pursuit (LaMP) is a greedy algorithm for signals on 2D lattices (images) in which the likelihood of the signal support is iteratively evaluated and optimized under an Ising model. By enforcing a graphical model, (i) partial knowledge of the sparse signal support greatly decreases the ambiguity, and thus the size, of the search space for the remaining unknown part, accelerating the speed of the algorithm; and (ii) signal supports of the same size but different structures result in different likelihoods (recall Figure 1(b) and (c)), decreasing the required number of CS measurements and increasing the numerical stability.

Algorithm. The LaMP pseudocode is given in Algorithm 1.
Similar to other greedy recovery algorithms such as matching pursuit and CoSaMP [4], each iteration of LaMP starts by estimating a data residual r^{k} given the current estimate of the signal x^{k−1} (Step 1). After calculating the residual, LaMP calculates a temporary signal estimate (Step 2), denoted by x_t^{k}. This signal estimate is the sum of the previous estimate x^{k−1} and Φ′r^{k}, accounting for the current residual. Using this temporary signal estimate as a starting point, LaMP then maximizes the likelihood (4) over the support via optimization (Step 3). This can be efficiently solved using graph cuts with O(N) complexity [14]. In particular, for planar Ising models, the global minimum of the problem can be obtained.

Algorithm 1: LaMP – Lattice Matching Pursuit
Input: y, Φ, x^{0} = 0, s^{0} = −1, and K̃ (desired sparsity).
Output: A K̃-sparse approximation x of the acquired signal.
repeat {Matching Pursuit Iterations}
  Step 1. Calculate the data residual: r^{k} = y − Φ x^{k−1};
  Step 2. Propose a temporary target signal estimate: x_t^{k} = Φ′ r^{k} + x^{k−1};
  Step 3. Determine the MAP estimate of the support using graph cuts:
    s^{k} = arg max_{s ∈ {−1,+1}^N} Σ_{(i,j)∈E_s} λ_ij s_i s_j + Σ_{i∈V_s} [λ_i s_i + log p([x_t^{k}]_i | s_i)];
  Step 4. Estimate the target signal: t = 0; t[s^{k} = 1] = Φ[:, s^{k} = 1]† y; x^{k} = Prune{t; K̃};
  Step 5. Iterate: k = k + 1;
until maximum iterations or ‖r^{k}‖ < threshold;
Return x = x^{k}.

Figure 3: Geometrical approximations of log p(x_i | s_i = −1) and log p(x_i | s_i = +1) via the utility functions U_{−1}(x_i; τ) and U_{+1}(x_i; τ).

Once a likely signal support s^{k} is obtained in Step 3, LaMP obtains an updated signal estimate x^{k} using least squares with the selected columns of the measurement matrix, Φ[:, s^{k} = 1], and pruning back to the largest K̃ signal coefficients (Step 4).
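The residual/estimate/support/prune loop above can be sketched in a few lines. The version below is a deliberately simplified stand-in, not the paper's implementation: Step 3's graph-cut MAP support estimate is replaced by plain hard thresholding (so no lattice prior is enforced), and, to make the demo deterministic, Φ is taken orthogonal (M = N) rather than compressive; all names and parameters are illustrative.

```python
import numpy as np

def lamp_like_sketch(y, Phi, K, iters=5):
    """Simplified LaMP-style loop: residual, proxy, support estimate, least squares.
    Step 3 is replaced by top-K thresholding instead of a graph-cut MAP estimate."""
    N = Phi.shape[1]
    x = np.zeros(N)
    for _ in range(iters):
        r = y - Phi @ x                          # Step 1: data residual
        x_t = x + Phi.T @ r                      # Step 2: temporary estimate
        support = np.argsort(np.abs(x_t))[-K:]   # Step 3 stand-in: top-K support
        t = np.zeros(N)                          # Step 4: least squares on support
        t[support], *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = t                                    # t is already K-sparse (Prune is a no-op)
    return x

rng = np.random.default_rng(0)
N, K = 50, 4
Phi = np.linalg.qr(rng.standard_normal((N, N)))[0]  # orthogonal for the demo
x_true = np.zeros(N)
x_true[[3, 17, 18, 40]] = [1.0, -2.0, 1.5, 0.5]     # a partly clustered support
x_hat = lamp_like_sketch(Phi @ x_true, Phi, K)
assert np.linalg.norm(x_hat - x_true) < 1e-8        # exact recovery (noiseless)
```

With a genuinely compressive random Gaussian Φ (M < N), the same loop typically still recovers clustered sparse signals, which is the regime where LaMP's lattice prior pays off.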
Hence, the parameter K̃ controls the sparsity of the approximation. In Step 4, a conjugate gradient method is used to efficiently perform the product with the pseudoinverse. If the graphical model includes dependencies between the signal values x_i, we then replace the pseudoinverse product by a belief propagation algorithm to efficiently solve for the signal values x^{k} within Step 4.

Signal log-likelihood log p(x|s). The correct signal PDF to use given the support, p(x|s), is problem-dependent. Here, we provide one approximation that mimics ℓ0 minimization for CS recovery under the signal graphical model in Figure 2(c); we also use this in our experiments in Section 5. The state s_i = 1 represents a nonzero coefficient; thus, all nonzero values of x_i should have equal probability, and the value x_i = 0 should have zero probability. Similarly, the state s_i = −1 represents a zero-valued coefficient; thus, the mass of its probability function is concentrated at zero. Hence, we use the following approximations for x_i ∈ [−L, L], a restricted dynamic range: p(x_i | s_i = −1) = δ(x_i) and p(x_i | s_i = 1) = (1 − δ(x_i))/2L. However, the optimization over the joint PDF in (4) requires a “smoothing” of these PDFs for two reasons: (i) to obtain robustness against noise and numerical issues; and (ii) to extend the usage of the algorithm from sparse to compressible signals. We approximate log p(x_i | s_i = ±1) using the parametric form illustrated in Figure 3. Here, the constant τ is a slack parameter that separates large and small signal coefficients, and ǫ1, ǫ2, and ǫ3 are chosen according to τ and L to normalize each PDF. We also denote a = ǫ3 L, with a ≈ 1. Using the normalization constraints, it is possible to show that, as the dynamic range increases,

$$\lim_{L\to\infty} \frac{-\log \epsilon_2}{\log \epsilon_1} = \frac{1}{\tau a} \quad\text{and}\quad \lim_{L\to\infty} \frac{-\log \epsilon_3}{\log \epsilon_1} = 0.$$

Hence, we approximate the likelihoods using utility functions U_{s_i}(x; τ) that follow this form.
The optimization problem used by Step 3 of LaMP to determine the support is then approximately equivalent to the following problem:

$$s^{\{k+1\}} = \arg\max_{s\in\{-1,+1\}^N} \sum_{(i,j)\in E_s} \tilde{\lambda}_{ij} s_i s_j + \sum_{i\in V_s} \big[\tilde{\lambda}_i s_i + U_{s_i}([x_t^{\{k+1\}}]_i; \tau)\big], \quad (5)$$

where λ̃ = λ/log ǫ1. If the signal values are known to be positive, then the definitions of U_{s_i} can be changed to enforce positivity during estimation.

The choice of λ̃_ij is related to the desired sparseness on the lattice structure. To enforce a desired sparsity K̃ on the lattice structure, we apply statistical mechanics results on the 2D Ising model and choose λ̃_ij = 0.5 arcsinh((1 − m^8)^{−1/4}), where m is called the average magnetization. In our recovery problem, the average magnetization and the desired signal sparsity have a simple relationship: m = [(+1) × K̃ + (−1) × (N − K̃)]/N. We set λ̃_i = 0 unless there is prior information on the signal support. The threshold τ is chosen adaptively at each iteration by sorting the magnitudes of the temporary target signal estimate coefficients and thresholding at the 5K̃-th largest; this gives preference to the largest 5K̃ coefficients when assigning states s_i = 1, unless the cost incurred by enforcing the lattice structure is too large. The pruning operation in Step 4 of LaMP then enforces the desired sparsity K̃.

5 Experiments

We now use several numerical simulations to demonstrate that, for spatially clustered sparse signals, which have high likelihood under our MRF model, LaMP requires far fewer measurements and fewer computations for robust signal recovery than state-of-the-art greedy and optimization techniques.¹

Experiment 1: Shepp-Logan phantom. Figure 4 (top left) shows the classical N = 100 × 100 Shepp-Logan phantom image. Its sparsity in the space domain is K = 1740. We obtained compressive measurements of this image, which were then immersed in additive white Gaussian noise to an SNR of 10 dB.
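Plugging the Shepp-Logan numbers just stated into the Section 4 edge-weight rule is a one-liner. The sketch below assumes the reconstruction of that rule given above, λ̃_ij = 0.5 arcsinh((1 − m^8)^{−1/4}), and is an illustrative calculation rather than the authors' code.

```python
import math

def ising_edge_weight(K_sparsity, N):
    """Edge parameter enforcing a desired sparsity on an N-site lattice,
    computed from the average magnetization m of the +/-1 support labels."""
    m = ((+1) * K_sparsity + (-1) * (N - K_sparsity)) / N
    return 0.5 * math.asinh((1 - m ** 8) ** (-0.25))

# Experiment 1 numbers: N = 100*100 pixels, K = 1740
lam = ising_edge_weight(1740, 100 * 100)
assert 0.4 < lam < 0.5   # lands close to the trained lambda_ij = 0.45 from Section 3
```

It is a pleasant consistency check that the sparsity-driven edge weight for this image comes out near the value learned from data in Section 3.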
The top row of Figure 4 illustrates the iterative image estimates obtained using LaMP from just M = 2K = 3480 random Gaussian measurements of the noisy target. Within 3 iterations, the support of the image is accurately determined; convergence occurs at the 5th iteration. Figure 4 (bottom) compares LaMP to CoSaMP [4], a state-of-the-art greedy recovery algorithm, and fixed-point continuation (FPC) [17], a state-of-the-art ℓ1-norm minimization recovery algorithm using the same set of measurements. Despite the presence of high noise (10dB SNR), LaMP perfectly recovers the signal support from only a small number of measurements. It also outperforms both CoSaMP and FPC in terms of speed. Experiment 2: Numerical stability. We demonstrate LaMP’s stability in the face of substantial measurement noise. We tested both LaMP and FPC with a number of measurements that gave close to perfect recovery of the Shepp-Logan phantom in the presence of a small amount of noise; for LaMP, setting M = 1.7K suffices, while FPC requires M = 4K. We then studied the degradation of the recovery quality as a function of the noise level for both algorithms. For reference, a value of σ = 20 corresponds to a measurement-to-noise ratio of just 6dB. The results in Figure 5(a) demonstrate that LaMP is stable for a wide range of measurement noise levels. Indeed, the rate of increase of the LaMP recovery error as a function of the noise variance σ (a measure of the stability to noise) is comparable to that of FPC, while using far fewer measurements. Experiment 3: Performance on real background subtracted images. We test the recovery algorithms over a set of background subtraction images. The images were obtained from a test video sequence, one image frame of which is shown in Figure 1, by choosing at random two frames from the video and subtracting them in a pixel-wise fashion. The large-valued pixels in the resulting images are spatially clustered and thus are well-modeled by the MRF enforced by LaMP. 
We created 100 different test images; for each image, we define the sparsity K as the number of coefficients that contain 97% of the image energy. We then performed recovery of the image using the LaMP, CoSaMP, and FPC algorithms under a varying number of measurements M, from 0.5K to 5K. An example recovery is shown in Figure 6. For each test and algorithm, we measured the magnitude of the estimation error normalized by the magnitude of the original image. Figure 5(b) shows the means and standard deviations of the normalized error magnitudes for the three algorithms. LaMP’s graphical model reduces the number of measurements necessary for acceptable recovery quality to M ≈ 2.5K, while the standard algorithms require M ≥ 5K measurements to achieve the same quality.

¹ We use the GCOptimization package [14–16] to solve the support recovery problem in Step 3 of Algorithm 1 in our implementation of LaMP.

Figure 4: Top: LaMP recovery of the Shepp-Logan phantom (N = 100 × 100, K = 1740, SNR = 10 dB) from M = 2K = 3480 noisy measurements; panels show the noise-free target and LaMP iterations #1–#5 (0.9 s). Bottom: Recoveries from LaMP, CoSaMP (6.2 s), and FPC (6.5 s), with running times on the same computer.

Figure 5: Performance of LaMP. (a) Maximum recovery error over 1000 noise iterations as a function of the input noise variance σ, for LaMP (M = 1.7K) and FPC (M = 4K and 5K); LaMP has the same robustness to noise as the FPC algorithm. (b) Average normalized error magnitude vs. M/K over the background subtraction dataset of 100 images; LaMP achieves the best performance at M ≈ 2.5K, while both FPC and CoSaMP require M > 5K to achieve the same performance.

6 Conclusions

We have presented an initial study of model-based CS signal recovery using an MRF model to capture the structure of the signal’s sparse coefficients.
As demonstrated in our numerical simulations, for signals conforming to our model, the resulting LaMP algorithm requires significantly fewer CS measurements, has lower computational complexity, and has numerical stability equivalent to the current state-of-the-art algorithms. We view this as an initial step toward harnessing the power of modern compression and data modeling methods for CS reconstruction. Much work remains to be done, however. We are working to precisely quantify the reduction in the required number of measurements (our numerical experiments suggest that M = O(K) is sufficient for stable recovery) and computations. We also assert that probabilistic signal models hold the key to formulating inference problems in the compressive measurement domain, since in many signal processing applications, signals are acquired merely for the purpose of making an inference such as a detection or classification decision.

Figure 6: Example recoveries for background subtraction images (target vs. LaMP, CoSaMP, and FPC), using M = 3K for each image.

Acknowledgements. We thank Wotao Yin for helpful discussions, and Aswin Sankaranarayanan for the data used in Experiment 3. This work was supported by grants NSF CCF-0431150 and CCF-0728867, DARPA/ONR N66001-08-1-2065, ONR N00014-07-1-0936 and N00014-08-1-1112, AFOSR FA9550-07-1-0301, ARO MURI W911NF-07-1-0185, and the TI Leadership Program.

References
[1] D. L. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, Sept. 2006.
[2] E. J. Candès. Compressive sampling. In Proc. International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[3] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[4] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, June 2008. To appear.
[5] C. La and M. N. Do. Tree-based orthogonal matching pursuit algorithm for signal reconstruction. In IEEE Int. Conf.
Image Processing (ICIP), pages 1277–1280, Atlanta, GA, Oct. 2006.
[6] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk. Wavelet-domain compressive signal reconstruction using a hidden Markov tree model. In ICASSP, pages 5137–5140, Las Vegas, NV, April 2008.
[7] V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In ECCV, Marseille, France, Oct. 2008.
[8] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. B. Wakin. A simple proof of the restricted isometry property for random matrices. 2006. To appear in Const. Approx.
[9] T. Blumensath and M. E. Davies. Sampling theorems for signals from the union of linear subspaces. 2007. Preprint.
[10] B. M. McCoy and T. T. Wu. The Two-Dimensional Ising Model. Harvard Univ. Press, 1973.
[11] M. J. Wainwright, P. Ravikumar, and J. D. Lafferty. High-dimensional graphical model selection using ℓ1-regularized logistic regression. In Proc. of Advances in NIPS, 2006.
[12] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Trans. Sig. Proc., 52(8):2153–2164, August 2004.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 1988.
[14] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. on Pattern Anal. and Mach. Int., 26(2):147–159, 2004.
[15] Y. Boykov, O. Veksler, and R. Zabih. Efficient approximate energy minimization via graph cuts. IEEE Trans. on Pattern Anal. and Mach. Int., 20(12):1222–1239, Nov. 2001.
[16] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. on Pattern Anal. and Mach. Int., 26(9):1124–1137, Sept. 2004.
[17] E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report TR07-07, Rice University, CAAM Dept., 2007.
Clustering via LP-based Stabilities

Nikos Komodakis, University of Crete, komod@csd.uoc.gr
Nikos Paragios, Ecole Centrale de Paris / INRIA Saclay Ile-de-France, nikos.paragios@ecp.fr
Georgios Tziritas, University of Crete, tziritas@csd.uoc.gr

Abstract

A novel center-based clustering algorithm is proposed in this paper. We first formulate clustering as an NP-hard linear integer program, and we then use linear programming and duality theory to derive the solution of this optimization problem. This leads to an efficient and very general algorithm, which works in the dual domain and can cluster data based on an arbitrary set of distances. Despite its generality, it is independent of initialization (unlike EM-like methods such as K-means), has guaranteed convergence, can automatically determine the number of clusters, and can also provide online optimality bounds on the quality of the estimated clustering solutions. To deal with the most critical issue in a center-based clustering algorithm (the selection of cluster centers), we also introduce the notion of the stability of a cluster center, a well-defined LP-based quantity that plays a key role in our algorithm’s success. Furthermore, we also introduce, what we call, the margins (another key ingredient in our algorithm), which can be roughly thought of as dual counterparts to stabilities and allow us to obtain computationally efficient approximations to the latter. Promising experimental results demonstrate the potential of our method.

1 Introduction

Clustering is considered one of the most fundamental unsupervised learning problems. It lies at the heart of many important tasks in machine learning, pattern recognition, computer vision, data mining, biology, and marketing, to mention just a few of its application areas. Most clustering methods are center-based, thus trying to extract a set of cluster centers that best ‘describe’ the input data.
Typically, this translates into an optimization problem where one seeks to assign each input data point to a unique cluster center such that the total sum of the corresponding distances is minimized. These techniques are extremely popular, and they are thus essential even to other types of clustering algorithms such as Spectral Clustering methods [1],[2]. Currently, most center-based clustering methods rely on EM-like schemes for optimizing their clustering objective function [3]. K-means is the most characteristic (and perhaps the most widely used) technique from this class. It keeps greedily refining a current set of cluster centers based on a simple gradient descent scheme. As a result, it can very easily get trapped in bad local minima and is extremely sensitive to initialization. It is thus likely to fail in problems with, e.g., a large number of clusters. A second very important drawback of many center-based clustering methods, which severely limits their applicability, is that they either require the input data to be of vectorial form and/or impose strong restrictions on the type of distance functions they can handle. Ideally, one would like to be able to cluster data based on arbitrary distances. This is an important point because, by an appropriate choice of these distances, clustering results with completely different characteristics can be achieved [4]. In addition, one would prefer that the number of clusters is automatically estimated by the algorithm (e.g., as a byproduct of the optimization process) and not given as input. In contrast, however, many algorithms assume that this number is known a priori. To circumvent all the issues mentioned above, a novel center-based clustering algorithm is proposed in this paper. Similarly to other methods, it reduces clustering to a well-defined (but NP-hard) minimization problem, where, of course, the challenge now is how to obtain solutions of minimum objective value.
To this end, we rely on the fact that the above problem admits a linear integer programming formulation. By making heavy use of a dual LP relaxation to that program, we then manage to derive a dual-based algorithm for clustering. As in all center-based clustering techniques, the most critical component in the resulting algorithm is deciding which cluster centers to choose. To this end, we introduce, what we call, the stability of a data point as a cluster center (an LP-based quantity), which we consider another contribution of this work. Intuitively, the stability of a data point as a cluster center tries to measure how much we need to penalize that point (by appropriately modifying the objective function) such that it can no longer be chosen as a center in an optimal solution of the modified problem. Obviously, one would like to choose as centers those points having high stability. For applying this idea in practice, however, a crucial issue that one needs to deal with is how to efficiently approximate these stability measures. To this end, we introduce, what we call, the margins, another very important concept in our algorithm and a key contribution of our work. As we prove in this paper, margins can be considered as dual to stabilities. Furthermore, they allow us to approximate the latter on the fly, i.e., as our algorithm runs. The outcome is an efficient and very easily implementable optimization algorithm, which works in the dual domain by iteratively updating a dual solution via two very simple operations: DISTRIBUTE and PROJECT. It can cluster data based on an arbitrary set of distances, which is the only input required by the algorithm (as a result, it can find use in a wide variety of applications, even in cases where non-vectorial data need to be used). Furthermore, an important point is that, despite its generality, it does not get trapped in bad local minima.
It is thus insensitive to initialization and can always compute clusterings of very low cost. Similarly to [5], the number of clusters does not need to be predefined, but is decided on the fly during the optimization process. However, unlike [5], convergence of the proposed method is always guaranteed, and no parameter adjustment needs to take place for this. Finally, an additional advantage of our method is that it can provide online optimality guarantees, which can be used for assessing the quality of the generated clusterings. These guarantees come in the form of lower bounds on the cost of the optimal clustering and are computed (for free) by simply using the cost of the dual solutions generated during the course of the algorithm.

2 Clustering via stabilities based on Linear Programming

Given a set of objects V with distances d = {d_pq}, clustering amounts to choosing a set of cluster centers from V (say {q_i}_{i=1}^k) such that the sum of distances between each object and its closest center is minimized. To this end, we are going to use the following objective function E(·) (which will be referred to as the primal cost hereafter):

$$\min_{k,\{q_i\}_{i=1}^k} E(\{q_i\}_{i=1}^k) = \sum_{p\in V} \min_i d_{pq_i} + \sum_i d_{q_iq_i} \quad (1)$$

Note that, in this case, we require that each cluster center is chosen from the set V. Also note that, besides {q_i}, here we optimize over the number of cluster centers k as well. Of course, to avoid the trivial solution of choosing all objects as centers, we regularize the problem by assigning a penalty d_qq to each chosen center q. Problem (1) has an equivalent formulation as a 0–1 linear integer program [6], whose relaxation leads to the following LP (denoted by PRIMAL hereafter):

$$\text{PRIMAL} \equiv \min \sum_{p,q\in V} d_{pq}\, x_{pq} \quad (2)$$
$$\text{s.t.} \quad \sum_{q\in V} x_{pq} = 1, \ \forall p \in V \quad (3)$$
$$x_{pq} \le x_{qq} \quad (4)$$
$$x_{pq} \ge 0 \quad (5)$$

To get an equivalent problem to (1), we simply have to replace x_pq ≥ 0 with x_pq ∈ {0, 1}.
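Objective (1) is cheap to evaluate for any candidate center set. The sketch below (plain Python, toy one-dimensional data, illustrative names) computes the primal cost, following the LP semantics in which each chosen center serves itself at its penalty cost d_qq:

```python
def primal_cost(d, centers):
    """Primal cost E of objective (1): each non-center object pays the distance
    to its nearest chosen center; each center q pays its penalty d[q][q]."""
    assignment = sum(min(d[p][q] for q in centers)
                     for p in range(len(d)) if p not in centers)
    penalties = sum(d[q][q] for q in centers)
    return assignment + penalties

# Toy instance: two well-separated groups on a line, uniform center penalty 2.0
pts = [0.0, 1.0, 2.0, 10.0, 11.0, 12.0]
d = [[2.0 if i == j else abs(a - b) for j, b in enumerate(pts)]
     for i, a in enumerate(pts)]

assert primal_cost(d, [1, 4]) == 8.0                  # one center per group
assert primal_cost(d, [1, 4]) < primal_cost(d, [1])   # beats a single center
```

The penalty term is what keeps the trivial "every object is a center" solution from winning: with six centers the cost would be the six penalties alone, 12.0, already worse than 8.0.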
In this case, each binary variable x_pq with p ≠ q indicates whether object p has been assigned to cluster center q or not, while the binary variable x_qq indicates whether object q has been chosen as a cluster center or not. Constraints (3) simply express the fact that each object must be assigned to exactly one center, while constraints (4) require that if p has been assigned to q, then object q must obviously be chosen as a center.

Obviously, at the core of any clustering problem of this type lies the issue of deciding which objects will be chosen as centers. To deal with that, a key idea of our approach is to rely on, what we call, the stability of an object. This will be a well-defined measure which, intuitively, tries to quantitatively answer the following question: “How much do we need to penalize an object in order to ensure that it is never selected as an optimal cluster center?” For formalizing this concept, we will make use of the LP relaxation PRIMAL. We will thus define the stability S(q) of an object q as follows:

S(q) = inf{ perturbation s that has to be applied to the penalty d_qq (i.e., d_qq ← d_qq + s)
            such that PRIMAL has no optimal solution x with x_qq > 0 }   (6)

An object q can be stable or unstable depending on whether S(q) ≥ 0 or S(q) < 0 holds. To select a set of centers Q, we will then rely on the following observation: a stable object with high stability is also expected to be, with high probability, an optimal center in (1). The reason is that the assumption of a high S(q) ≥ 0 is essentially a very strong requirement (much stronger than simply requiring q to be active in the relaxed problem PRIMAL): it further requires that q will be active for all problems PRIMAL(d_qq + s)¹ as well (where s ≤ S(q)).
Hence, our strategy for generating Q will be to sequentially select a set of stable objects, trying, at each step, to select an object of approximately maximum stability (as already explained, there is a high chance that this object will be an optimal center in (1)). Furthermore, each time we insert a stable object q to Q, we re-estimate the stabilities of the remaining objects in order to take this fact into account (e.g., an object may become unstable if we know that x_qq = 1 holds for another object q). To achieve that, we will need to impose extra constraints on PRIMAL (as we shall see, this will help us to obtain an accurate estimation of the stabilities of the remaining objects, given that the objects in Q are already chosen as centers). Of course, this process repeats until no more stable objects can be found.

2.1 Margins and dual-based clustering

For having a practical algorithm, the most critical issue is how to obtain a rough approximation to the stability of an object q in a computationally efficient manner. As we shall see, to achieve this we will need to move to the dual domain and introduce a novel concept that lies at the core of our approach: the margin of dual solutions. But, first, we need to introduce the dual to problem PRIMAL, which is the linear program called DUAL in (7)²:

$$\text{DUAL} \equiv \max D(h) = \sum_{p\in V} h_p \quad (7)$$
$$\text{s.t.} \quad h_p = \min_{q\in V} h_{pq}, \ \forall p \in V \quad (8)$$
$$\sum_{p\in V} h_{pq} = \sum_{p\in V} d_{pq}, \ \forall q \in V \quad (9)$$
$$h_{pq} \ge d_{pq}, \ \forall p \ne q \quad (10)$$

Dual variables h_pq can be thought of as representing pseudo-distances between objects, while each variable h_p represents the minimum pseudo-distance from p (which is, in fact, ‘thought’ by the dual as an estimate of the actual distance between p and its closest active center). Given a feasible dual solution h, we can now define its margin ∆_q(h) (with respect to object q) as follows:

$$\Delta_q(h) = \sum_{p:\, h_{pq}=h_p} (\hat{h}_p - h_p) \;-\; \sum_{p\ne q} \big(h_{pq} - \max(h_p, d_{pq})\big) \;-\; \big(h_{qq} - h_q\big), \quad (11)$$

where (for any h) ĥ_p hereafter denotes the next-to-minimum pseudo-distance from p.
There is a very tight connection between margins of dual solutions and stabilities of objects. The following lemma provides a first indication of this fact and shows that we can actually use margins to decide whether an object is stable or not, and also to lower-bound or upper-bound its stability accordingly (see [7] for proofs):

Lemma 1 ([7]). Let h be an optimal dual solution to DUAL.
1. If ∆_q(h) > 0, then S(q) ≥ ∆_q(h).
2. If ∆_q(h) < 0, then S(q) ≤ ∆_q(h).

¹ PRIMAL(z) denotes a modified problem PRIMAL where the penalty for q has been set equal to z.
² Problem DUAL results from the standard dual to PRIMAL after applying a transformation to the dual variables.

In fact, the following fundamental theorem goes even further by proving that stabilities can be fully characterized solely in terms of margins. Hence, margins and stabilities are two concepts that can roughly be considered as dual to each other:

Theorem 2 ([7]). The following equalities hold true:
$$S(q) \ge 0 \Rightarrow S(q) = \sup\{\Delta_q(h) \mid h \text{ optimal solution to DUAL}\}, \quad (12)$$
$$S(q) \le 0 \Rightarrow S(q) = \inf\{\Delta_q(h) \mid h \text{ optimal solution to DUAL}\}. \quad (13)$$
Furthermore, it can be shown that:
$$S(q) = \mathrm{sign}(S(q)) \cdot \sup\{|\Delta_q(h)| \mid h \text{ optimal solution to DUAL}\}. \quad (14)$$

What the above theorem essentially tells us is that one can compute S(q) exactly, simply by considering the margins of optimal dual solutions. Based on this fact, it is therefore safe to assume that solutions h with a high (but not necessarily maximum) dual objective D(h) will have margins that are good approximations to S(q), i.e., it holds that

$$S(q) \approx \Delta_q(h). \quad (15)$$

This is exactly the idea that our clustering algorithm relies on in order to efficiently discover objects that are stable. It thus maintains a dual solution h and a set Q containing all stable objects chosen as centers up to the current point (Q is empty initially). At each iteration, it increases the dual objective D(h) by updating solution h via an operation called DISTRIBUTE.
This operation is repeatedly applied until a high enough objective value D(h) is obtained such that at least one stable object is revealed based on the estimated margins of h. At that point, the set Q is expanded and h is updated (via an operation called PROJECT) to take account of this fact. The process is then repeated until no more stable objects can be found. A remarkable thing to note in this process is that, as we shall see, determining how to update h during the DISTRIBUTE operation (i.e., for increasing the dual objective) also relies critically on the use of margins. Another technical point that we need to address comes from the fact that Q gets populated with objects as the algorithm proceeds, which is something that we certainly need to take into account when estimating object stabilities. Fortunately, there is a very elegant solution to this problem: since all objects in Q are assumed to be cluster centers (i.e., x_qq = 1, ∀q ∈ Q), instead of working with problems PRIMAL and DUAL, it suffices to work with the following primal-dual pair of LPs, called PRIMAL_Q and DUAL_Q³:

PRIMAL_Q ≡ min PRIMAL   s.t. x_qq = 1, ∀q ∈ Q
DUAL_Q ≡ max DUAL   s.t. h_pq = d_pq, ∀{p, q} ∩ Q ≠ ∅

This means, e.g., that the stability S(q) is now defined by using PRIMAL_Q (instead of PRIMAL) in (6). Likewise, Lemma 1 and Theorem 2 continue to hold true provided that DUAL is replaced with DUAL_Q in their statements. In addition, the definition of the margin ∆_q(h) needs to be modified as follows:

$$\Delta_q(h) = \sum_{p\notin Q:\, h_{pq}=h_p} (\hat{h}_p - h_p) \;-\; \sum_{p\notin Q\cup\{q\}} \big(h_{pq} - \max(h_p, d_{pq})\big) \;-\; \big(h_{qq} - h_q\big). \quad (16)$$

The PROJECT operation: Given this modified definition of margins, we can now update Q at any iteration in the following manner:

EXPAND: Compute q̄ = arg max_{q∉Q} ∆_q(h), and if ∆_{q̄}(h) ≥ 0, set Q = Q ∪ {q̄}.
(17)

Based on the fact that margins are used as approximations to the stabilities of objects, the above update simply says that the object q̄ with maximum stability should be chosen as the new center at the current iteration, provided of course that this object q̄ is stable.

³ Actually, to represent the dual of PRIMAL_Q exactly, we need to add a constant to the objective function of DUAL_Q. Since, however, this constant does not affect the maximization, it is omitted for clarity.

1: h ← d;
2: while max_{q∉Q} ∆_q(h) < 0 do
3:   D_prev ← D(h); h ← DISTRIBUTE(h);
4:   if D_prev = D(h) then exit;
5: end
6: q̄ ← arg max_{q∉Q} ∆_q(h); Q ← Q ∪ {q̄}; h ← PROJECT(h);
7: goto 2;
Fig. 1: Pseudocode of our clustering algorithm.

Furthermore, in this case, we also need to update the current dual solution h in order to take account of the fact that extra constraints have been added to DUAL_Q (these result from the extra constraint x_{q̄q̄} = 1 that has been added to PRIMAL_Q). By the definition of DUAL_Q, the new constraints are h_{q̄p} = d_{q̄p}, h_{pq̄} = d_{pq̄} for all p ∉ Q, and so one has to apply the following operation, which simply projects the current dual solution onto the feasible set of the updated linear program DUAL_Q:

PROJECT: h_pp += h_{q̄p} − d_{q̄p}; h_{q̄p} = d_{q̄p}; h_{pq̄} = d_{pq̄}, ∀p ∉ Q.   (18)

Note that the update h_pp += h_{q̄p} − d_{q̄p} is needed for maintaining the dual feasibility constraint (9). Essentially, PROJECT is a warm-start operation that allows us to reuse existing information for computing a solution h that has a high dual objective value D(h) and is also feasible for the updated DUAL_Q.

The DISTRIBUTE operation: In case ∆_q(h) < 0 holds for all q ∉ Q, this means that we are unable to find an object with good stability at the current iteration. To counter that, we will thus need to update solution h in order to increase its dual objective value (recall that, by Lemma 1, stable objects will necessarily be revealed at an optimal dual solution, i.e., at a dual solution of maximum objective).
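As a quick aside before unpacking DISTRIBUTE: the PROJECT update (18) is just a handful of array writes, and its defining property, preservation of the column-sum feasibility constraint (9), can be checked directly. The sketch below (toy 3-object data, illustrative names) applies (18) for a newly selected center q̄; the diagonal repair h[p][p] += h[q̄][p] − d[q̄][p] is exactly what keeps every column sum of h equal to the corresponding column sum of d.

```python
def project(h, d, q_bar, Q):
    """PROJECT update (18): clamp row/column q_bar of h to the distances d,
    repairing each diagonal entry h[p][p] so column sums (constraint (9)) are kept."""
    for p in range(len(h)):
        if p in Q or p == q_bar:
            continue
        h[p][p] += h[q_bar][p] - d[q_bar][p]   # keep the sum of column p fixed
        h[q_bar][p] = d[q_bar][p]
        h[p][q_bar] = d[p][q_bar]

# Toy pseudo-distances: h is dual-feasible (column sums of h equal those of d)
d = [[1.0, 2.0, 3.0], [2.0, 1.0, 4.0], [3.0, 4.0, 1.0]]
h = [[0.5, 2.0, 3.5], [2.5, 1.0, 4.0], [3.0, 4.0, 0.5]]

col_sum = lambda m, j: sum(row[j] for row in m)
before = [col_sum(h, j) for j in range(3)]
project(h, d, q_bar=1, Q=set())
assert [col_sum(h, j) for j in range(3)] == before   # constraint (9) preserved
assert h[1][0] == d[1][0] and h[0][1] == d[0][1]     # row/column of q_bar clamped
```

This is only a feasibility check, not the full algorithm; the surrounding machinery (margins, EXPAND, DISTRIBUTE) decides when PROJECT is invoked.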
Intuitively, what happens is that, as we increase the dual objective D(h), objects not in Q actually try to compete with each other for achieving a large margin. Interestingly enough, in order to increase D(h), we will again have to rely on the margins of the current dual solution. In particular, it turns out that, if $\Delta_q(h) < 0$ holds true for all $q \notin Q$, then the following very simple update of h is guaranteed to increase the dual objective:

DISTRIBUTE: $\forall p, q \notin Q$,

$$h_{pq} = \begin{cases} \max(h_p, d_{pq}), & \text{if } p \neq q \text{ and } \big(p \in L_Q \text{ or } h_p < d_{pq}\big) \\ h_p - \Delta_q(h)/|V_q|, & \text{else if } h_{pq} > h_p \\ \hat{h}_p - \Delta_q(h)/|V_q|, & \text{else if } h_{pq} = h_p \end{cases}$$

In the above update, we denote by $L_Q$ the set of objects whose minimum pseudo-distance $h_p$ is attained at an object from Q, i.e., $L_Q = \{p \notin Q \mid h_p = \min_{q \in Q} h_{pq}\}$, while $|V_q|$ denotes the cardinality of the set $V_q = \{p \notin Q \cup L_Q \mid h_p \geq d_{pq}\} \cup \{q\}$. The following theorem then holds true:

Theorem 3. If $\max_{q \notin Q} \Delta_q(h) < 0$, then the DISTRIBUTE operation maintains feasibility and, unless $V = Q \cup L_Q$, it also strictly increases the dual objective.

The pseudocode of the resulting algorithm is shown in Fig. 1. As already explained, it is an iterative algorithm, which keeps updating a dual solution h by using the DISTRIBUTE and PROJECT operations (the latter applied only when needed) until the dual objective can no longer increase. Note also that, besides maintaining a dual solution h, the algorithm also maintains Q, which provides a current clustering and also has a primal cost E(Q). With respect to this cost, the following theorem can be shown to hold true:

Theorem 4. If $\max_{q \notin Q} \Delta_q(h) > 0$, then the EXPAND operation strictly decreases the primal cost E(Q).

This implies that the sequence of primal costs E(Q) generated by the algorithm is decreasing (recall that we actually want to minimize E(·)). It is worth noting at this point that nowhere have we tried to enforce this property by explicitly considering the primal cost when updating Q.
This is achieved simply thanks to the requirement of always selecting objects with high stability, which shows how powerful this requirement actually is. We also note that the algorithm's convergence is always guaranteed: the algorithm terminates when neither the primal cost E(Q) decreases nor the dual objective D(h) increases during the current iteration. Finally, we note that exactly the same algorithm applies in the general case where the objects in V form a graph with edges E (the distance $d_{pq}$ is then defined only for $pq \in E$). In this case, it is easy to verify that the cost of each iteration will be O(|E|). Furthermore, the algorithm converges extremely fast in practice (i.e., in very few iterations).

3 Related work

Before proceeding, let us briefly mention how our method relates to some state-of-the-art exemplar-based clustering techniques. Affinity propagation [5] is a recently proposed clustering method which relies on minimizing exactly the same objective function (1). It is an iterative algorithm that repeatedly updates (through messages) the so-called responsibilities and availabilities. These can be considered counterparts to our pseudo-distances $h_{pq}$. Affinity propagation also estimates so-called self-availabilities for measuring the likelihood of an object being a cluster center; we instead use the margins, which approximate the stability of an object, for the same purpose. Furthermore, compared to affinity propagation, our method offers the following significant advantages: its convergence is always guaranteed; it is parameter-free (there is no need to adjust parameters such as damping factors in order to ensure convergence); it is a descent method (objective function (1) always decreases); and it can use the computed dual solutions to derive online optimality bounds for free (these can be used to assess that the derived solutions are almost optimal). At the same time, our method performs equally well or better in practice.
Very recently, another exemplar-based algorithm has been proposed which relies on solving a convex formulation of clustering [8]. We note, however, that this method is used for solving a different and much easier problem, that of soft clustering. Furthermore, it relies on a convex relaxation which is known to be much less tight than the LP relaxation PRIMAL we use here (essentially, [8] replaces all constraints $x_{pq} \leq x_{qq}, \forall p \in V$ with the much looser constraint $\sum_p x_{pq} \leq |V| \cdot x_{qq}$). As a result, the generated solutions are expected to be of much lower quality. We also note that, unlike EM-like clustering algorithms such as K-means, our method is totally insensitive to initialization conditions and does not get stuck at bad local minima (thus yielding solutions of much better quality). It is also much more efficient than methods like [6], which require solving very large linear programs.

4 Experimental results

To illustrate the robustness of our algorithm to noise and its insensitivity to initialization, we start by showing clustering results on synthetic data. The synthetic datasets were generated using the following procedure: 2D points were sampled from a mixture of Gaussian distributions, where the centers of the Gaussians were arranged in an approximately grid-like fashion over the plane. In addition, random outliers were generated uniformly all over the grid, with their number equal to half the number of points drawn from the Gaussian distributions. One such dataset (consisting of 24 Gaussians) is displayed in Fig. 2, where colored crosses correspond to samples from the Gaussians, while the black dots correspond to outliers. The clustering result produced by our algorithm is shown in Fig. 2(a). As can be seen from that figure, despite the heavy percentage of noise, our method has been able to accurately detect all Gaussian centers and successfully cluster this 2D dataset.
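A minimal sketch of this data-generation procedure follows; the grid size (chosen here as 4 × 6 to match the 24-Gaussian example), cluster width, and per-cluster sample count are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def make_dataset(grid=(4, 6), per_cluster=50, sigma=0.04, seed=0):
    """Synthetic 2D data as described above: a grid-like mixture of
    Gaussians plus uniform outliers (half as many as the Gaussian samples).
    All parameter values are illustrative guesses."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # Gaussian centers arranged approximately on a grid over the plane
    centers = np.array([(i / rows, j / cols)
                        for i in range(rows) for j in range(cols)])
    samples = np.concatenate([c + sigma * rng.standard_normal((per_cluster, 2))
                              for c in centers])
    # uniform outliers, half the number of Gaussian samples
    outliers = rng.uniform(samples.min(0), samples.max(0),
                           (len(samples) // 2, 2))
    return samples, outliers

samples, outliers = make_dataset()
```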
Note that the number of Gaussians was not given as input to our algorithm. Instead, it was inferred based on a common penalty term $d_{qq}$ for all objects q, which was set roughly equal to the median distance between points. On the contrary, K-means was unable to produce a good result for this dataset despite the fact that it was restarted multiple times (100 runs were used in this case). This is, of course, due to its well-known sensitivity to initialization conditions. We repeated multiple experiments varying the number of Gaussians; contrary to our algorithm, the behavior of K-means gets even worse as this number increases. We have also plotted in Fig. 2(c) the primal and dual costs that were generated by our algorithm when it was applied to the example of Fig. 2(a). These correspond to the solid red and dashed blue curves, respectively. Note that the dual costs represent lower bounds on the optimum value of the objective function E(·), while the primal costs obviously represent upper bounds. This fact allows us to obtain online optimality bounds on how far our current primal solution Q is from the unknown optimum of E(·). These bounds are, of course, refined continuously as the algorithm proceeds and can be useful for assessing its performance. For instance, in this particular example, we can be sure that the primal cost of our final solution is within 1% of the unknown optimum of function E(·), i.e., an approximately optimal solution has been obtained. Next, we show some results from applying our algorithm to the challenging problem of multibody 3D segmentation, which has several applications in computer vision. As we shall see, a non-Euclidean distance for clustering has to be used in this case. According to the 3D segmentation problem, we are given a set of N pixel correspondences between two images.
[Fig. 2: Clustering results for synthetic data: (a) our algorithm, (b) K-means, (c) primal and dual costs. The centers of the big circles represent the points chosen as cluster centers by the two algorithms. The primal and dual costs in (c) verify that the cost of our algorithm's solution is within 1% of the optimum cost.]

These correspondences result from K objects undergoing K 3D rigid-body motions relative to a moving camera. The 3D-motion segmentation problem is the task of clustering these N pixel pairs according to the K moving objects. We consider the more general and difficult scenario of a fully projective camera model. In this case, each pixel pair, say, $p_i = (y_i, z_i)$, that belongs to a moving object k should satisfy an epipolar constraint:

$$y_i^T F_k z_i = 0, \quad (19)$$

where $F_k$ represents the fundamental matrix associated with the k-th 3D motion. Of course, the matrices $F_k$ corresponding to different motions are unknown to us. Hence, to solve the 3D segmentation problem, we need to estimate both the matrices $F_k$ as well as the association of each pixel pair $p_i = (y_i, z_i)$ to the correct fundamental matrix $F_k$. To this end, we sample a large set of fundamental matrices using a RANSAC-based scheme (we recall that a random set of, e.g., 8 pixel pairs $p_i$ is enough for generating a new fundamental matrix). The resulting matrices, say, $\{F_k\}$, will then correspond to cluster centers, whereas all the input pixel pairs $\{p_i\}$ will correspond to objects that need to be assigned to an active cluster center. A clustering objective function of the form (1) thus results, and by minimizing it we can also obtain a solution to the 3D segmentation problem. Of course, in this case, the distance function $d(p_i, F_k)$ between an object $p_i = (y_i, z_i)$ and a cluster center will not be Euclidean.
Instead, based on (19), we can use a distance of the following form:

$$d(p_i, F_k) = |y_i^T F_k z_i|. \quad (20)$$

Since it is more robust, a normalized version of the above distance is usually preferred in practice. Figure 3 displays 3D motion segmentation results that were obtained by applying our algorithm to two image pairs (points with different colors correspond to different motions). These examples were downloaded from a publicly available motion segmentation database [9] with ground truth. The ground-truth motion segmentation is also shown for each example and, as can be seen, it is almost identical to the segmentation estimated by our algorithm.

[Fig. 3: Two 3D motion segmentation results. For each one we show (left) the ground-truth segmentation of feature points and (right) the estimated segmentation along with the input optical flow vectors.]

We next compare our method to Affinity Propagation (AP). Some really impressive results on 4 very challenging datasets have been reported for that algorithm in [5], indicating that it outperforms any other center-based clustering method. In particular, AP has been used for: clustering images of faces (using the squared error distance), detecting genes in microarray data (using a distance based on exon transcription levels), identifying representative sentences in manuscripts (using the relative entropy as distance), and identifying cities that can be easily accessed by airline travel. In Fig. 4(a), we compare our method to AP on these publicly available problems. Since both methods rely on optimizing the same objective function, we list the values obtained by the two methods for the corresponding problems. Exactly the same settings have been used for both algorithms, with AP using the parameters proposed in [5]. Note that in all cases our algorithm manages to obtain a solution of equal or lower value than AP. This is true even, e.g., in the Genes dataset, where a higher number of clusters is selected by our algorithm (and thus a higher penalty for activating them is paid). Furthermore, an additional advantage of our algorithm is that, unlike AP, it is always guaranteed to converge (e.g., see Figs. 4(b), 4(c)). We note that, due to lack of space, a running-time comparison with AP, as well as a comparison of our algorithm to the method in [10], are included in [7].

[Fig. 4: (a) Comparison of our algorithm with affinity propagation [5] on the 4 very challenging datasets 'Faces', 'Genes', 'Cities' and 'Sentences' from [5]. Since the goal of both algorithms is to minimize the objective function E(Q), for each dataset we report the final value of this function and the number of estimated clusters. We have used exactly the same settings for both methods. (b) Our algorithm's clustering when applied to the 'fourclouds' dataset from [1]. The primal costs generated by AP for this dataset (shown in (c)) demonstrate that AP fails to converge in this case (to prevent that, a properly chosen damping factor has to be used).]

Table of Fig. 4(a):
Dataset   | #clusters (Ours / AP) | Primal cost E(Q) (Ours / AP)
Faces     | 60 / 62               | 13430 / 13454
Genes     | 1301 / 1290           | -210595 / -210539
Cities    | 7 / 7                 | 92154 / 92154
Sentences | 4 / 4                 | 10234 / 10241

5 Conclusions

In this paper we have introduced a very powerful and efficient center-based clustering algorithm derived from LP duality theory. The resulting algorithm has guaranteed convergence and can handle data sets with arbitrary distance functions. Furthermore, despite its extreme generality, the proposed method is insensitive to initialization and computes clusterings of very low cost.
As such, and considering the key role that clustering plays in many problems, we believe that our method can find use in a wide variety of tasks. Another very important contribution of this work, both practical and theoretical, is the introduction of the notions of LP-based stabilities and margins, two quantities that, as we have proved, are dual to each other and can be used for deciding which objects should be chosen as cluster centers. We strongly believe that these ideas can be of both practical and theoretical interest, not just for designing center-based clustering algorithms, but in many other contexts as well.

References
[1] A. Ng, M. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in NIPS, 2001.
[2] D. Verma and M. Meila, "A comparison of spectral clustering algorithms," Tech. Rep., 2001.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, "Clustering with Bregman divergences," J. Mach. Learn. Res., vol. 6, pp. 1705–1749, 2005.
[4] B. Fischer, V. Roth, and J. Buhmann, "Clustering with the connectivity kernel," in NIPS, 2004.
[5] B. J. Frey and D. Dueck, "Clustering by passing messages between data points," Science, vol. 315, 2007.
[6] M. Charikar, S. Guha, É. Tardos, and D. B. Shmoys, "A constant-factor approximation algorithm for the k-median problem," J. Comput. Syst. Sci., vol. 65, no. 1, pp. 129–149, 2002.
[7] N. Komodakis, N. Paragios, and G. Tziritas, "Clustering via LP-based Stabilities," Tech. Report, 2009.
[8] D. Lashkari and P. Golland, "Convex clustering with exemplar-based models," in NIPS, 2008.
[9] R. Tron and R. Vidal, "A benchmark for the comparison of 3-d motion segmentation algorithms," in CVPR, 2007.
[10] M. Leone, Sumedha, and M. Weigt, "Clustering by soft-constraint affinity propagation: applications to gene-expression data," Bioinformatics, vol. 23, no. 20, pp. 2708–2715, 2007.
2008
Spike Feature Extraction Using Informative Samples

Zhi Yang, Qi Zhao and Wentai Liu
School of Engineering, University of California at Santa Cruz
1156 High Street, Santa Cruz, CA 95064
{yangzhi, zhaoqi, wentai}@soe.ucsc.edu

Abstract

This paper presents a spike feature extraction algorithm that targets real-time spike sorting and facilitates miniaturized microchip implementation. The proposed algorithm has been evaluated on synthesized waveforms and experimentally recorded sequences. Compared with many spike sorting approaches, our algorithm demonstrates improved speed and accuracy, and allows unsupervised execution. A preliminary hardware implementation has been realized using an integrated microchip interfaced with a personal computer.

1 Introduction

Real-time extraction of information from composite neural recordings is a significant challenge in neural interfacing. Developing integrated circuits (ICs) that enable portable and implantable systems is important for studying complex behavior in neuroscience experiments, closed-loop deep brain stimulation, and cortically controlled neuromuscular prostheses. In order for a spike feature extraction algorithm to be functional as a small device with real-time, low-latency processing and low-power operation, it must be efficient in both computation and IC implementation. Implementing spike sorting before data telemetry offers many significant advantages. Spike feature extraction provides the necessary information required to sort spikes from raw sampled data. With this information, each spike event can be represented by its unique features and firing time, resulting in significant data compression. A data transceiver designed with current semiconductor technology can then simultaneously support a large number of recording channels for a microchip implementation that extracts the spike features.
System integration using wireless power telemetry or a rechargeable battery, as well as wireless data telemetry, removes the need for tethering wires. As a result, a fully wireless operation would relieve the subjects' overall stress and allow them to move freely in their natural environment. Frequently used spike feature extraction algorithms include principal component analysis (PCA) [1], Bayesian algorithms [2], template matching [3], wavelets [4] and independent component analysis (ICA) [5], all of which demand significant computation. Efforts to improve the efficiency of these algorithms have been reported; however, these efforts relied on either oversimplified functionality or bulky hardware systems that consume excessive power. In part, complex algorithmic procedures are applied to mitigate the effects of noise and distortion in the recording process. The associated noise includes ion channel noise, activities from distant neurons, field potentials, thermal noise and circuit noise. Significant sampling distortion is also present, since it is unrealistic to synchronize the sampling clock with individual recorded spikes. This paper reports a new spike feature extraction algorithm which is suitable for real-time spike sorting and enables integrated microchip implementation.

2 Related Work

2.1 PCA Based Spike Feature Extraction

PCA is a feature extraction algorithm widely employed for spike sorting. It uses the correlation between samples and computes the vectors capturing the maximal variance. The PCA algorithm performs well given a strong correlation between samples, reporting relevant features. However, recorded spikes are usually corrupted by large low-frequency noise and distortion, which blur sample correlation and compromise the quality of the estimated covariance matrix and its eigenvectors. As a result, PCA may fail to resolve spike clusters in noisy recordings.
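As context for the PCA baseline just described, a textbook sketch (not the paper's implementation) of projecting peak-aligned spike waveforms onto their leading principal components might look like:

```python
import numpy as np

def pca_features(spikes, k=2):
    """Project peak-aligned spikes onto their top-k principal components.

    spikes: (num_spikes, num_samples) array; returns (num_spikes, k)."""
    centered = spikes - spikes.mean(axis=0)
    cov = np.cov(centered, rowvar=False)        # sample covariance matrix
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    top = evecs[:, np.argsort(evals)[::-1][:k]] # top-k eigenvectors
    return centered @ top
```

With clean, well-separated spike templates this recovers the cluster structure; the point made above is that heavy low-frequency noise degrades the covariance estimate and hence these projections.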
2.2 Variable Selection Techniques

As a complementary approach to dimensionality reduction algorithms, Jolliffe discussed a general feature extraction algorithm based on a subset of samples in the classic work [6]. This concept requires only a subset of samples, those containing the necessary information to cluster the data, as opposed to all of the samples. These informative samples are especially useful in the presence of a single prominent sample set. There are two challenges facing a sample selection algorithm. The first challenge is the computational burden of selecting informative samples. If the training procedure is as complicated as suggested in [6], it would prohibit microchip implementation for implant purposes. Power and area are the primary problems with the microchip implementation of other spike feature extraction algorithms. The second challenge is the availability of localized features. Improved performance compared to PCA is unlikely if localized features are not prominent.

2.3 Our Approach

We have developed a spike feature extraction algorithm based on informative samples. The theoretical framework includes neuronal geometry signatures, noise shaping, and informative sample selection. By evaluating neuronal geometry signatures with the compartment model, we find that the high-frequency signal spectrum may contain useful information to differentiate neurons. Studying the noise properties has revealed that a frequency shaping filter can be used to boost the SNR. The sample selection technique, using estimated entropy, identifies informative samples for sorting spikes. In addition, a preliminary IC implementation of the algorithm has been reported [7, 8] and further integrated onto a multi-channel neural recording IC with wireless telemetry [9].

3 Geometry Signatures, Noise and Sampling Distortion

3.1 Neuronal Geometry Signature

This section describes how neuronal geometry signatures contribute to the difference among similar waveforms.
Assume that both the intra- and extracellular fluids are neutral; the induced voltage waveform is then

$$V(\vec{r}_0) = \int \frac{j_m(\vec{r}, t)\, dr}{4\pi \sigma_e |\vec{r} - \vec{r}_0|}, \quad (1)$$

where $j_m$ is the transmembrane current and $\sigma_e$ is the conductivity of the tissue environment; $\vec{r}_0$ and $\vec{r}$ represent the locations of the point electrode and the active membrane segments, respectively. Since action potentials propagate slowly along the axonal branches of cortical neurons (on average 0.5 m/sec – 2 m/sec [10]), active membranes do not fire simultaneously. As a result, the detailed geometry of the underlying neuron influences the shape of spikes. Assuming that ionic channels are uniformly dotted on the active membranes within the recording radius of the electrode, the spike waveform is modeled as the convolution of the transmembrane current profile and an implicit geometry kernel function:

$$V(t) = \int j_m(\tau) W(t - \tau)\, d\tau, \quad (2)$$

where $W(t)$ is the geometry kernel function. The recorded waveforms from neurons with similar ion channel populations can be very similar. A general spike sorting algorithm frequently fails to resolve such ambiguity and may report a single, large spike cluster. The approach of differentiating the associated kernel functions can be used to sort the similar spikes. Taking $W_1(t)$ and $W_2(t)$ as the geometry kernel functions of two neurons with the same ion channel population, the difference between the two spikes is

$$\triangle V(t) = \int j_m(\tau)[W_1(t - \tau) - W_2(t - \tau)]\, d\tau. \quad (3)$$

Small waveform differences appear if $\int (W_1(t) - W_2(t))\, dt \approx 0$. Intuitively, this condition means the waveforms are identical, ignoring the skew of the activation of membranes. To differentiate the waveforms, we rewrite Eq. 3 in the frequency domain as

$$F(\triangle V) = F(j_m) F(W_1 - W_2), \quad (4)$$

where $F(\cdot)$ denotes the Fourier transform. The condition $\int [W_1(t) - W_2(t)]\, dt \approx 0$ is equivalent to $F(W_1 - W_2)|_{f=0\,\mathrm{Hz}} \approx 0$, which implies that the waveform difference caused by the geometry kernel functions has a small contribution in the lower frequency spectrum.
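A small numerical illustration of this argument, with purely illustrative Gaussian stand-ins for $j_m$ and the two kernels: when $W_1$ and $W_2$ have (nearly) equal area, the spectrum of $\triangle V = j_m * (W_1 - W_2)$ essentially vanishes at f = 0 while retaining energy at higher frequencies.

```python
import numpy as np

# Two hypothetical geometry kernels with equal area (same shape, shifted),
# so F(W1 - W2) should vanish at f = 0 per the argument above.
t = np.arange(64)
W1 = np.exp(-0.5 * ((t - 28) / 4.0) ** 2)
W2 = np.exp(-0.5 * ((t - 36) / 4.0) ** 2)   # equal area, time-shifted
jm = np.exp(-0.5 * ((t - 20) / 3.0) ** 2)   # stand-in transmembrane current

dV = np.convolve(jm, W1 - W2)               # waveform difference, Eq. (3)
spec = np.abs(np.fft.rfft(dV))              # |F(dV)|, Eq. (4)
```

The DC bin of `spec` is (numerically) zero, while nearby nonzero-frequency bins are not: the discriminative content of the geometry kernels lives away from DC.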
A more quantitative explanation can be given by studying the derivative of $F(\triangle V)$ with respect to frequency using Eq. 4:

$$\frac{\partial F(\triangle V)}{\partial f} = \frac{\partial F(j_m)}{\partial f} F(W_1 - W_2) + F(j_m) \frac{\partial F(W_1 - W_2)}{\partial f}, \quad (5)$$

where $f$ is frequency. Note that $F(j_m)$ is a narrowly band-limited signal and $F(W_1 - W_2)$ serves as a notch frequency mask with a relatively wider spectrum. The first term in Eq. 5 is attenuated by $F(W_1 - W_2)$ within the dominant spectrum of $F(j_m)$; otherwise, an appreciable waveform difference is expected according to Eq. 4. The second term in Eq. 5, on the other hand, exhibits a strong frequency dependency within the dominant spectrum of $F(j_m)$. It can be expanded as

$$F(j_m) \frac{\partial F(W_1 - W_2)}{\partial f} \approx 2\pi F(j_m) \int (W_1(t) - W_2(t))\, t \sin(2\pi f t)\, dt, \quad (6)$$

when the kernel functions $W_i$ are symmetrical. In summary, the waveform difference between similar neurons caused by geometry functions satisfies the following conditions:

$$F(\triangle V)|_{f=0\,\mathrm{Hz}} \approx 0, \qquad \frac{\partial F(\triangle V)}{\partial f} \approx 4\pi^2 f\, F(j_m) \int (W_1(t) - W_2(t))\, t^2\, \frac{\sin(2\pi f t)}{2\pi f t}\, dt \;\propto\; f. \quad (7)$$

In Eq. 7, $\partial F(\triangle V)/\partial f$ is linear in frequency $f$ in the low-frequency region, since $\frac{\sin(2\pi f t)}{2\pi f t} \approx 1$. This strong emphasis on frequency shows that $F(\triangle V)$ exhibits a higher frequency spectrum. As a result, a frequency-shaping filter that emphasizes the high-frequency spectrum may help to differentiate kernel functions.

3.2 Noise and Sampling Distortion

An estimated power spectrum of the noise associated with a recorded neural signal, where the dominance of low-frequency noise is clear, is plotted in Figure 1. The noise profile is approximately fitted as

$$N(f) = N_{neu} + N_{e.e} + N_{1/f} + N_{therm} \approx N_{f_{c1}} \left(\frac{f_{c1}}{f}\right)^{\alpha} + N_{therm}, \quad (8)$$

where $N_{neu}$ is the neuronal noise, $N_{e.e}$ is the electrode-electrolyte interface noise, $N_{1/f}$ is the flicker noise and $N_{therm}$ is the thermal noise. The low-frequency noise is assumed to have a profile following $f^{-\alpha}$. Sampling distortion is unavoidable, since the neuron's firing is random and not synchronized with the sampling clock of the analog-to-digital converter (ADC).
It can be reduced either by increasing the sampling frequency of the ADC or by performing interpolation and alignment in the digital domain. Both approaches require additional power, computation and storage space, which are not favorable for microchip implementation. The sampling distortion is related to the slope of the spikes. In case a fast transition edge is sampled 4 times, the sampling distortion can be more than 10% of the spike peak-to-peak magnitude. Considerable distortion is expected since "neural spikes" are, by definition, fast-changing waveforms.

[Figure 1: Noise properties of recordings from a cat cerebral cortex (500 Hz to 15 kHz); (a) noise power spectrum of the raw data; (b) noise power spectrum of the derivative.]

4 Sample Information

In order to use informative samples to sort spikes, it is necessary to quantify the information carried by individual spike samples. Intuitively, a sample is considered to be informative if the superimposed spikes can be classified into multiple clusters by evaluating that sample alone. The method used to quantify the sample information is outlined below.

Sample Information Estimation
Input: M peak-aligned spike segments {v_i, i = 1, ..., M} with N samples per segment
Output: information info_j carried by spike samples {v_i(j), i = 1, ..., M}
• Set j = 1 and construct the one-dimensional data set X = {v_i(j), i = 1, ..., M}
• Obtain a nested cluster configuration based on X
• Estimate the probability p_q that a spike is partitioned into the q-th cluster. Use the entropy to estimate the information info_j = −Σ_{p_q > p_0} p_q ln(p_q), where p_0 is a threshold on the cluster size
• Repeat the procedure for the next sample, i.e., j = j + 1
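The boxed procedure can be sketched for a single sample index as follows. The paper obtains the cluster configuration with a mean shift classifier; here a much cruder gap-based 1-D grouping stands in for it, and the `gap` and `p0` values are illustrative assumptions.

```python
import numpy as np

def sample_info(values, gap=0.5, p0=0.05):
    """Entropy-based information score for one spike sample (a sketch).

    The paper clusters the 1-D values with mean shift; this sketch
    uses a simple sort-and-split-on-gaps stand-in instead."""
    v = np.sort(np.asarray(values, dtype=float))
    # split into clusters wherever consecutive sorted values jump by > gap
    splits = np.nonzero(np.diff(v) > gap)[0] + 1
    clusters = np.split(v, splits)
    probs = np.array([len(c) / len(v) for c in clusters])
    probs = probs[probs > p0]                  # ignore clusters below threshold
    return float(-(probs * np.log(probs)).sum())
```

With three equally sized clusters this yields ln(3), with a 1/3 vs 2/3 split it yields (1/3)ln(3) + (2/3)ln(1.5), and with a single cluster it yields 0, matching the scores expected for three sources with similar firing rates.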
The computation required to accurately quantify the entropy of an underlying data set is typically high. However, only a rough estimate is required to select informative samples. Therefore, the number of spikes used to compute the information can be reduced to a relatively small number, which should allow hardware implementation in terms of storage space and computational complexity. In the synthesized spike data we used, each sequence contains 3 neuronal sources with similar firing rates. As a result, the possible information scores should be 0, (1/3)ln(3) + (2/3)ln(1.5), or ln(3). When we increase the amount of training events to M = 300, the information scores approximately settle to the expected values, as shown in Figure 2. Quantitative comparisons to investigate the existence of informative samples in noisy spikes have been performed. Results using synthesized spikes with recordings from the neocortex and basal ganglia [4] are shown in Figure 2. There are two clear observations. First, the amount of information carried by each sample varies, indicating a non-uniform signal-to-noise-plus-distortion ratio. Second, it is necessary to create informative samples if, due to severe noise, distortion and similarity of spike clusters, few of the samples are informative. As a constraint on creating informative samples, the computation and storage space have to be feasible for microchip implementation.

5 Create Informative Samples Using a Frequency Shaping Filter

As analyzed in Section 3, a frequency shaping filter can be used to manifest different geometry kernel functions, reduce noise and redistribute distortion among spike samples.
[Figure 2: (a)–(h) Information carried by samples from spikes and their derivatives. The horizontal axis is the sample number and the vertical axis is the estimated entropy. The black solid line and the red dotted line represent the sample information from spikes and their derivatives, respectively.]

Such a filter is designed to boost high-frequency spike features, which should be localized and less correlated when examined in the time domain. In this section, we use the derivative operation as an example to illustrate the usefulness of the frequency shaping filter, and further demonstrate that the filter creates additional informative samples. For a discrete-time spike sequence, the frequency response of taking the derivative is

$$H(f) = 2 e^{j\pi f/2} \sin(\pi f / f_s), \quad (9)$$

where $f_s$ is the sampling frequency of the ADC. As shown in Section 3.1, the difference between the neuron geometry kernel functions $W(t)$ of similar spikes is contained in the higher frequency components, which should be emphasized by the derivative operation. The noise power spectrum is also modified by taking the derivative.
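A quick numerical check of the magnitude response implied by Eq. (9), $|H(f)| = 2|\sin(\pi f/f_s)|$, using a first-difference kernel as the discrete derivative (the sampling rate below is an arbitrary choice):

```python
import numpy as np

# Magnitude response of the first-difference (derivative) filter.
# |H(f)| = 2|sin(pi f / fs)|: small at low frequencies, largest near
# Nyquist, so low-frequency noise is attenuated and high-frequency
# content is emphasized.
fs, n = 24000.0, 64                              # fs is illustrative
mag = np.abs(np.fft.rfft([1.0, -1.0], n=n))      # first-difference kernel
f = np.fft.rfftfreq(n, d=1.0 / fs)
expected = 2.0 * np.abs(np.sin(np.pi * f / fs))  # Eq. (9) magnitude
```

The FFT of the `[1, -1]` kernel matches the closed-form magnitude exactly, and the response increases monotonically from DC up to the Nyquist frequency.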
Intuitively, low-frequency noise is reduced and the high-frequency thermal noise is amplified, as shown in Figure 1(b). The quantitative impact of the frequency shaping filter on noise is affected by the recording system and the biological environment; the typical values of $\alpha$ we observe vary around 2 within the signal band, as shown in Figure 1. Using $\alpha = 2$ for illustration, the filter's influence on noise can be quantified through Eq. 9 as

$$\lambda \approx \frac{f_{c1} f_{c2}}{2 f_{spike}^2} \leq \frac{1}{2}, \quad (10)$$

where $f_{c1}$ and $f_{c2}$ are the lower and higher corner frequencies of the digital filter, respectively. In case $\lambda$ is less than 1, the SNR further increases, which favors spike sorting from the noise perspective. The distribution of sampling distortion among samples is also altered after taking the derivative. In the original waveforms, samples close to peaks suffer less distortion compared with those in transition. After taking the derivative, samples initially suffering from large distortion become less distorted, because $V''(t)$ in Eq. 2 has at least one zero-crossing point during the transition. Quantitative experiments to demonstrate the creation of informative samples have been performed. A subset of the results is shown in Figure 2(a)–(h). In these data, the black solid lines represent the information carried by the samples from spikes and the dotted red lines represent that from their derivatives. The spike data are 8 challenging sequences from [4]. They are compiled from recordings in the neocortex and basal ganglia with superimposed noise. All 8 sequences contain 3 neuronal sources. During estimation of the sample entropy, a mean shift classifier with a hierarchical merging procedure is used to quantify the partition. Small clusters containing fewer than 5% of the events are ignored. The corresponding feature extraction results, using the most informative samples from the spikes as well as their derivatives, are shown in Figure 3(a)–(h), which clearly present a 3-cluster configuration.
Figure 3: Feature extraction results using the proposed algorithm and competing algorithms. (a) - (h) display the extracted features using the most informative samples of spikes and their derivatives (proposed). (i) - (p) display the extracted features using a subset of samples that includes the peaks of the spike derivative and the spike height (implemented on chip, proposed). (q) - (x) display the PCA based feature extraction. (y) - (af) display the wavelets based feature extraction. (ag) - (an) display spike peaks based feature extraction. (All the algorithms are tested without performing interpolation. The nonlinear energy operator (NEO) [11] is used as the spike detection algorithm. Overlapping spikes within 600 µs are ignored. The Haar wavelet is used for the wavelets based feature extraction, and features are obtained from the variance peaks after the wavelet transform. Two dimensional features are projected from a higher dimensional space.)
Table 1: Accuracy comparison of different spike feature extraction algorithms

Sequence Number       1      2      3      4      5      6      7      8
Informative Samples   97.8%  97.8%  97.8%  97.0%  98.0%  99.2%  96.6%  92.0%
Hardware              97.6%  97.6%  97.4%  95.4%  98.2%  98.4%  93.2%  91.0%
PCA                   97.8%  89.0%  60.4%  55.2%  97.6%  77.8%  80.2%  68.8%
Wavelets              92.4%  91.0%  81.8%  57.4%  97.4%  68.2%  51.0%  49.4%
Spike Peaks           34.2%  33.8%  35.4%  34.0%  36.2%  37.8%  35.6%  36.0%

Note: Informative samples are harvested from both spikes and their derivatives. Hardware uses the peaks of spikes and their derivatives. Each sequence contains 3000 spikes from [4].

6 Experiments

The synthesized spike sequences used in Figure 2 are applied to compare the sorting accuracies of the different approaches. Feature extraction using the pre-specified subset, consisting of the peaks of the spike derivative as well as the height of the original spike, is shown in Figure 3 (i) - (p). Comparative feature extraction results using competing algorithms, e.g., PCA, wavelets, and spike peaks and width, are also shown in Figure 3. The extracted spike features are clustered on a PC [12]. About 5% overlapping spikes are ignored to clearly quantify the performance of the different spike feature extraction algorithms. The proposed feature extraction algorithm using the most informative samples (corresponding to Figure 3 (a) - (h)) achieves the highest accuracy (97.0%).

Figure 4: (a) Recorded spikes from cat cerebral cortex are superimposed. (b) The extracted spike features using a subset of samples are plotted and grouped with a clustering algorithm implemented on a PC. (c) The classified spike clusters are superimposed. (d) - (k) The individual spike clusters superimposed in (c) are displayed. Spike clusters in (d) - (g) are plotted on a smaller vertical scale (-0.3, 0.15) compared with (h) - (j) in (-0.5, 0.3) and (k) in (-0.5, 0.5).

The hardware [9, 8]
using the pre-specified subset gives similar accuracy (96.1%). The competing algorithms, PCA, wavelets, and spike peaks and width, give 78.4%, 73.6% and 35.4%, respectively. The sorting accuracy comparisons are listed in Table 1.

Animal sequences were collected to test the performance of the proposed algorithm. An example with overlapped spike clusters is selected for demonstration. The sequence was recorded from the cat cerebral cortex, and the sorting results are displayed in Figure 4. In Figure 4 (a), the 1210 detected spikes are superimposed. Extracted spike features using the pre-specified subset of samples implemented on chip are shown in Figure 4 (b). The discrete points in feature space are grouped into 8 clusters, shown in different colors, using off-line clustering. Less than 10% of the spikes (noisy and overlapping events) are discarded; the rest are classified and plotted in Figure 4 (c). To further verify the validity of the classified spike clusters, the superimposed clusters in Figure 4 (c) are individually plotted in Figure 4 (d) - (k).

The second example, containing more than 4000 spikes recorded from a monkey, is shown in Figure 5. In Figure 5 (a), the detected spikes are superimposed. Extracted features using the pre-specified subset of informative samples are shown in Figure 5 (b). A zoomed-in view of Figure 5 (b) is plotted in Figure 5 (c) to display the isolation quality of the clusters in feature space. The corresponding PCA based feature extraction is shown in Figure 5 (d) as a comparison. The classified spike clusters using the pre-specified subset of informative samples are plotted in Figure 6 (a) - (e). The spike clusters plotted in Figure 6 (b), (c) and (d) resemble each other in shape and magnitude. To demonstrate that the informative samples based sorting does not over-partition the data set, the derivatives of the spike clusters plotted in Figure 6 (a) - (e) are also plotted in Figure 6 (f) - (j) with the same color indication.
Clearly, Figure 6 (g), (h) and (i) present three well-differentiated waveform patterns in either peak-to-peak magnitude or shape.

7 Conclusion

A sample selection based spike feature extraction algorithm is reported in this paper. The theoretical framework includes neuronal geometry signatures, a frequency shaping filter, and informative sample selection. Unlike PCA, which uses correlated features, the sample selection algorithm focuses on localized and uncorrelated features, which are strengthened by the frequency shaping filter. On simulated spike waveforms from a public database, the algorithm demonstrates improved sorting accuracy compared with many competing algorithms. The algorithm is designed for integrated microchip implementation and real-time spike sorting. A preliminary hardware implementation has been realized using an integrated circuit chip interfaced with a personal computer.

Figure 5: (a) Detected spikes from a monkey; (b) extracted spike features using a subset of samples; (c) zoom in of (b) for better visualization; (d) extracted features using PCA.

Figure 6: (a) - (e) The 5 classified clusters of the monkey sequence shown in Figure 5; (f) - (j) the derivatives of the 5 classified clusters. Cluster identity is indicated by color.

References

[1] Zumsteg ZS, Kemere C, O'Driscoll S, Santhanam G, Ahmed RE, Shenoy KV, et al. Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems. IEEE Trans Neural Syst Rehabil Eng. 2005 Sep;13(3):272–279.

[2] Lewicki MS. Bayesian modeling and classification of neural signals. Advances in NIPS. 1994;p. 590–597.
[3] Vargas-Irwin C, Donoghue JP. Automated spike sorting using density grid contour clustering and subtractive waveform decomposition. J Neurosci Methods. 2007;164(1).

[4] Quian Quiroga R, Nadasdy Z, Ben-Shaul Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Comput. 2004 Aug;16(8):1661–1687.

[5] Takahashi S, Sakurai Y. Coding of spatial information by soma and dendrite of pyramidal cells in the hippocampal CA1 of behaving rats. Eur J Neurosci. 2007 Oct;26(7):2033–2045.

[6] Jolliffe IT. Principal Component Analysis. New York: Springer-Verlag; 2002.

[7] Yang Z, Chen T, Liu W. A neuron signature based spike feature extraction algorithm for on-chip implementation. To appear in Proc 30th Ann Int Conf IEEE EMBS. 2008 Aug;p. 4237–4240.

[8] Chen T, Yang Z, Liu W, Chen L. NEUSORT2.0: a multiple-channel neural signal processor with systolic array buffer and channel-interleaving processing schedule. To appear in Proc 30th Ann Int Conf IEEE EMBS. 2008 Aug;p. 6652–6656.

[9] Chae M, Liu W, Yang Z, Chen T, Kim J, Sivaprakasam M, et al. A 128 channel 6mW wireless neural recording IC with on-the-fly spike sorting and UWB transmitter. IEEE ISSCC Dig Tech Papers. 2008 Feb;7(6):241–261.

[10] Buzsaki G, Penttonen M, Nadasdy Z, Bragin A. Pattern and inhibition-dependent invasion of pyramidal cell dendrites by fast spikes in the hippocampus in vivo. Proc Natl Acad Sci USA. 1996 Sep;93(18):9921–9925.

[11] Kaiser JF. On a simple algorithm to calculate the energy of a signal. In: Proc IEEE Int Conf Acoustics, Speech and Signal Processing. 1990;p. 381–384.

[12] Yang Z, Zhao Q, Liu W. Neural signal classification using a simplified feature set with nonparametric clustering. To appear in Neurocomputing.
Fast Computation of Posterior Mode in Multi-Level Hierarchical Models Liang Zhang Department of Statistical Science Duke University Durham, NC 27708 lz9@stat.duke.edu Deepak Agarwal Yahoo! Research 2821 Mission College Blvd. Santa Clara, CA 95054 dagarwal@yahoo-inc.com

Abstract

Multi-level hierarchical models provide an attractive framework for incorporating correlations induced in a response variable that is organized hierarchically. Model fitting is challenging, especially for a hierarchy with a large number of nodes. We provide a novel algorithm based on a multi-scale Kalman filter that is both scalable and easy to implement. For Gaussian response, we show our method provides the maximum a-posteriori (MAP) parameter estimates; for non-Gaussian response, parameter estimation is performed through a Laplace approximation. However, the Laplace approximation provides biased parameter estimates, which are corrected through a parametric bootstrap procedure. We illustrate our method through simulation studies and analyses of real-world data sets in health care and online advertising.

1 Introduction

In many real-world prediction problems, the response variable of interest is clustered hierarchically. For instance, in studying the immunization status of a set of children in a particular geographic location, the children are naturally clustered by families, which in turn are clustered into communities. The clustering often induces correlations in the response variable; models that exploit this provide significant improvement in predictive performance. Multi-level hierarchical models provide an attractive framework for modeling such correlations. Although routinely applied to moderate sized data (a few thousand nodes) in several fields like epidemiology, social sciences, and biology [3], model fitting is computationally expensive and is usually performed through a Cholesky decomposition of a q (number of nodes in the hierarchy) dimensional matrix.
Recently, such models have shown promise in a novel application of internet advertising [1], where the goal is to select the top-k advertisements to be shown on a webpage to maximize the click-through rate. To capture the semantic meaning of content in a parsimonious way, it is commonplace to classify webpages and ads into large pre-defined hierarchies. The hierarchies in such applications consist of several levels, and the total number of nodes may run into millions. Moreover, the main goal is to exploit the hierarchy for obtaining better predictions; computing the full posterior predictive distribution is of secondary importance. Existing fitting algorithms are difficult to implement and do not scale well for such problems. In this paper, we provide a novel, fast and easy to implement algorithm to compute the posterior mode of the parameters of such models on datasets organized hierarchically into millions of nodes with several levels. The key component of our algorithm is a multi-scale Kalman filter that expedites the computation of an expensive-to-compute conditional posterior.

The central idea in multi-level hierarchical (MLH hereafter) models is "shrinkage" across the nodes in the hierarchy. More specifically, these models assume a multi-level prior wherein the parameters of children nodes are drawn from a distribution centered around the parameter of the parent. This bottom-up, recursive assumption provides a posterior whose estimates at the finest resolution are smoothed using data on the lineage path of the node in the hierarchy.
The fundamental assumption is that the hierarchy, determined from domain knowledge, provides a natural clustering to account for latent processes generating the data which, when incorporated into the model, improve predictions.

Table 1: A list of the key notations.

T_j         Level j of the hierarchy T
m_j         The number of nodes at level j in T
q           The total number of nodes in T
pa(r)       The parent node of node r in T
c_i(r)      The ith child node of node r in T
n_r         The number of observations at leaf node r
y_ir        The ith observation (response) at leaf node r
Y           {y_ir, i = 1, ..., n_r, r ∈ T}
x_ir        The ith observation (p-dimensional covariates) at leaf node r
X           {x_ir, i = 1, ..., n_r, r ∈ T}
β           The regression parameter vector associated with X
φ^j_r       The random effect parameter at node r at level j
φ           {φ^j_r, r ∈ T, j = 1, ..., L}
V           The residual variance of y_ir, if y_ir has a Gaussian model
γ_j         The variance of φ^j_r for all the nodes at level j
γ           {γ_1, ..., γ_L}
φ^j_{r|r}   The mean of φ^j_r | {y_ir', i = 1, ..., n_r', ∀r' ≺ r}
σ^j_{r|r}   The variance of φ^j_r | {y_ir', i = 1, ..., n_r', ∀r' ≺ r}
φ̂^j_r       The mean of φ^j_r | {y_ir', i = 1, ..., n_r', ∀r' ∈ T_L}
σ^j_r       The variance of φ^j_r | {y_ir', i = 1, ..., n_r', ∀r' ∈ T_L}

Although MLH models are intuitive, parameter estimation presents a formidable challenge, especially for large hierarchies. For Gaussian response, the main computational bottleneck is the Cholesky factorization of a dense covariance matrix whose order depends on the number of nodes; this is expensive for large problems. For non-Gaussian response (e.g., binary data), the non-quadratic nature of the log-likelihood adds the further challenge of approximating an integral whose dimension depends on the number of nodes in the hierarchy. This is an active area of research in statistics, with several solutions being proposed, such as [5] (see references therein as well).
For Cholesky factorization, techniques based on sparse factorization of the covariance matrix have recently been proposed in [5]. For non-Gaussian models, solutions require marginalization over a high dimensional integral and are often accomplished through higher order Taylor series approximations [6]. However, these techniques involve linear algebra that is often non-intuitive and difficult to implement. A more natural computational scheme that exploits the structure of the model is based on Gibbs sampling; however, it is not scalable due to slow convergence.

Our contributions are as follows: We provide a novel fitting procedure based on a multi-scale Kalman filter algorithm that directly exploits the hierarchical structure of the problem and computes the posterior mode of the MLH parameters. The complexity of our method is almost linear in the number of nodes in the hierarchy. Beyond scalability, our fitting procedure is intuitive and easy to implement. We note that although multi-scale Kalman filters have been studied in the electrical engineering literature [2] and in spatial statistics, their application to fitting MLH models is novel. Moreover, fitting such models to non-Gaussian data presents formidable challenges, as we illustrate in the paper. We provide strategies to overcome these through a bootstrap correction and compare with the commonly used cross-validation approach. Our methods are illustrated on simulated data, benchmark data, and data obtained from an internet advertising application.

2 MLH for Gaussian Responses

Assume we have a hierarchy T consisting of L levels (the root is level 0), for which m_j, j = 0, ..., L, denotes the number of nodes at level j. Denote the set of nodes at level j in the hierarchy T as T_j. For a node r in T, denote the parent of r as pa(r), and the ith child of node r as c_i(r). If a node r' is a descendant of r, we say r' ≺ r. Since the hierarchy has L levels, T_L denotes the set of leaf nodes in the hierarchy.
Let y_ir, i = 1, ..., n_r, denote the ith observation at leaf node r, and x_ir the p-dimensional covariate vector associated with y_ir. For simplicity, we assume all observations are available at leaf nodes (a more general case, where each node in the hierarchy can have observations, is easily obtained from our algorithm). Consider the Gaussian MLH defined by

y_ir | φ^L_r ~ N(x'_ir β + φ^L_r, V),   (1)

where β is a fixed effect parameter vector and φ^j_r is a random effect associated with node r at level j, with joint distribution defined through a set of hierarchical conditional distributions p(φ^j_r | φ^{j-1}_{pa(r)}), j = 0, ..., L, where φ^0_0 = 0. The form of p(φ^j_r | φ^{j-1}_{pa(r)}), j = 1, ..., L, is assumed to be

φ^j_r | φ^{j-1}_{pa(r)} ~ N(φ^{j-1}_{pa(r)}, γ_j),   j = 1, ..., L,   (2)

where γ = (γ_1, ..., γ_L) is a vector of level-specific variance components that control the amount of smoothing. To complete the model specification in a Bayesian framework, we put a vague prior on V (π(V) ∝ 1/V) and a mild quadratic prior on γ_i (π(γ_i | V) ∝ V/(V + γ_i)^2). For β, we assume a non-informative prior, i.e., π(β) ∝ 1.

The specification of MLH given by Equation 2 is referred to as the centered parametrization and was shown to provide good performance in a fully Bayesian framework by [9]. An equivalent way of specifying MLH is obtained by associating independent random variables b^j_r ~ N(0, γ_j) with the nodes and replacing φ^L_r in (1) by the sum of the b^j_r parameters along the lineage path from the root to the leaf node in the hierarchy. We denote this compactly as z'_r b, where b is the vector of b^j_r for all the nodes in the hierarchy, and z_r is a vector of 0/1's turned on for the nodes on the path of node r. More compactly, let y = {y_ir, i = 1, ..., n_r, r ∈ T}, and let X and Z be the corresponding matrices of vectors x_ir and z_r for i = 1, ..., n_r and r ∈ T; then y ~ N(Xβ + Zb, V I) with b ~ N(0, Ω(γ)). The problem is to compute the posterior mode of (β_{p×1}, b_{q×1}, γ_{L×1}, V), where q = Σ_{j=1}^L m_j.
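The equivalent z'_r b parametrization can be made concrete with a small simulation. The sketch below builds the 0/1 lineage matrix Z for a tiny two-level hierarchy and draws data from y ~ N(Xβ + Zb, V I); all sizes and variances are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-level hierarchy: 2 level-1 nodes, each with 2 leaf children (q = 6 nodes).
# parents[k] gives the level-1 parent index of leaf k.
parents = [0, 0, 1, 1]
gamma = [4.0, 1.0]            # level-specific variances (assumed values)
V, p, n_per_leaf = 10.0, 3, 5

# Independent node effects b^j_r ~ N(0, gamma_j)
b_lvl1 = rng.normal(0, np.sqrt(gamma[0]), 2)
b_lvl2 = rng.normal(0, np.sqrt(gamma[1]), 4)
b = np.concatenate([b_lvl1, b_lvl2])

# Each observation's z_r has a 1 for every node on its leaf's lineage path.
rows = []
for leaf in range(4):
    z = np.zeros(6)
    z[parents[leaf]] = 1          # level-1 ancestor
    z[2 + leaf] = 1               # the leaf itself
    rows.extend([z] * n_per_leaf)
Z = np.array(rows)

beta = np.ones(p)
X = rng.normal(size=(Z.shape[0], p))
y = X @ beta + Z @ b + rng.normal(0, np.sqrt(V), Z.shape[0])
```

Note that every row of Z has exactly L = 2 nonzero entries, one per level on the lineage path, which is what makes Z'Z + Ω^{-1} sparse but still expensive to factor for large q.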
The main computational bottleneck is computing the Cholesky factor of the q×q matrix (Z'Z + Ω^{-1}); this is expensive for large values of q. Existing state-of-the-art methods are based on sparse Cholesky factorization; we provide a more direct approach. In fact, our method provides the MAP estimate of the parameters in the Gaussian case. For the non-Gaussian case, we provide an approximation to the MAP through the Laplace method coupled with a bootstrap correction. We also note that our method applies if the random effects are vectors and enter equation (2) as a linear combination with some covariate vector; in this paper, we illustrate with a scalar.

2.1 Model Fitting

Throughout, we work with the parametrization specified by φ. The main component of our fitting algorithm is computing the conditional posterior distribution of φ = {φ^j_r, r ∈ T, j = 1, ..., L} given (β, V, γ). Since the parameters V and γ are unknown, we estimate them through an EM algorithm. The multi-scale Kalman filter (described next) computes the conditional posterior of φ mentioned above and is used in the inner loop of the EM. As in temporal state space models, the Kalman filter consists of two steps: a) filtering, where one propagates information from the leaves to the root, and b) smoothing, where information is propagated from the root all the way down to the leaves.

Filtering: Denote the current estimates of β, γ and V as β̂, γ̂, and V̂ respectively. Then e_ir = y_ir − x'_ir β̂ are the residuals, and Var(φ^j_r) = Σ_j = Σ_{i=1}^j γ̂_i, r ∈ T_j, are the marginal variances of the random effects. Writing the conditional posterior distribution as φ^L_r | {y_ir, i = 1, ..., n_r} ~ N(φ^L_{r|r}, σ^L_{r|r}), the first step is to update φ^L_{r|r} and σ^L_{r|r} for all leaf random effects φ^L_r using the standard Bayesian update formulas for Gaussian models:

φ^L_{r|r} = Σ_L Σ_{i=1}^{n_r} e_ir / (V̂ + n_r Σ_L),   (3)

σ^L_{r|r} = Σ_L V̂ / (V̂ + n_r Σ_L).
(4)

Next, the posteriors φ^j_r | {y_ir', i = 1, ..., n_r', ∀r' ≺ r} ~ N(φ^j_{r|r}, σ^j_{r|r}) are recursively updated from j = L−1 down to j = 1, by regressing the parent node effect towards each child and combining the information from all the children. To provide intuition about the regression step, it is useful to invert the state equation (2) and express the distribution of φ^{j-1}_{pa(r)} conditional on φ^j_r. Note that

φ^{j-1}_{pa(r)} = E(φ^{j-1}_{pa(r)} | φ^j_r) + (φ^{j-1}_{pa(r)} − E(φ^{j-1}_{pa(r)} | φ^j_r)).   (5)

Simple algebra provides the conditional expectation and variance of φ^{j-1}_{pa(r)} | φ^j_r as

φ^{j-1}_{pa(r)} = B_j φ^j_r + ψ^j_r,   (6)

where B_j = Σ_{i=1}^{j-1} γ̂_i / Σ_{i=1}^{j} γ̂_i is the correlation between any two siblings at level j, and ψ^j_r ~ N(0, B_j γ̂_j). First, a new prior is obtained for the parent node based on the current estimate of each child, by plugging the current estimate of the child into equation (6). For the ith child of node r (here we assume that r is at level j−1, and c_i(r) is at level j),

φ^{j-1}_{r|c_i(r)} = B_j φ^j_{c_i(r)|c_i(r)},   (7)

σ^{j-1}_{r|c_i(r)} = B_j^2 σ^j_{c_i(r)|c_i(r)} + B_j γ̂_j.   (8)

Next, we combine the information obtained by the parent from all its children:

φ^{j-1}_{r|r} = σ^{j-1}_{r|r} Σ_{i=1}^{k_r} (φ^{j-1}_{r|c_i(r)} / σ^{j-1}_{r|c_i(r)}),   (9)

1/σ^{j-1}_{r|r} = Σ^{-1}_{j-1} + Σ_{i=1}^{k_r} ((1/σ^{j-1}_{r|c_i(r)}) − Σ^{-1}_{j-1}),   (10)

where k_r is the number of children of node r at level j−1.

Smoothing: In the smoothing step, parents propagate information recursively from the root to the leaves, providing the posterior of each φ^j_r based on the entire data. Denoting the posterior mean and variance of φ^j_r given all the observations by φ̂^j_r and σ^j_r respectively, the update equations are given below. For level 1 nodes, set φ̂^1_r = φ^1_{r|r} and σ^1_r = σ^1_{r|r}. For a node r at other levels,

φ̂^j_r = φ^j_{r|r} + σ^j_{r|r} B_j (φ̂^{j-1}_{pa(r)} − φ^{j-1}_{pa(r)|r}) / σ^{j-1}_{pa(r)|r},   (11)

σ^j_r = σ^j_{r|r} + (σ^j_{r|r})^2 B_j^2 (σ^{j-1}_{pa(r)} − σ^{j-1}_{pa(r)|r}) / (σ^{j-1}_{pa(r)|r})^2,   (12)

and let

σ^{j,j-1}_{r,pa(r)} = σ^j_{r|r} B_j σ^{j-1}_{pa(r)} / σ^{j-1}_{pa(r)|r}.
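Under the Gaussian assumptions, the upward pass is exact, which can be checked numerically on the smallest nontrivial tree. The sketch below implements the leaf update, Eqs. (3)-(4), and the parent prediction and fusion, Eqs. (7)-(10), for one level-1 node with two leaf children (all sizes and variances are made up), and compares the result with the posterior computed directly from the joint covariance of the observations.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma1, gamma2, V = 4.0, 1.0, 10.0   # level variances and residual variance (assumed)
n, n_children = 5, 2                  # observations per leaf, leaves per parent

# Simulate: phi1 ~ N(0, gamma1); each leaf phi2_c = phi1 + N(0, gamma2); y ~ N(phi2_c, V)
phi1 = rng.normal(0, np.sqrt(gamma1))
y = np.array([rng.normal(phi1 + rng.normal(0, np.sqrt(gamma2)), np.sqrt(V), n)
              for _ in range(n_children)])

Sigma1, Sigma2 = gamma1, gamma1 + gamma2   # marginal variances at levels 1 and 2
B2 = Sigma1 / Sigma2                        # backward regression coefficient, Eq. 6

# Leaf filtering (Eqs. 3-4), then predict the parent from each child (Eqs. 7-8)
mu_pred, var_pred = [], []
for c in range(n_children):
    m_leaf = Sigma2 * y[c].sum() / (V + n * Sigma2)
    v_leaf = Sigma2 * V / (V + n * Sigma2)
    mu_pred.append(B2 * m_leaf)
    var_pred.append(B2**2 * v_leaf + B2 * gamma2)

# Fuse the children's predictions with the parent's prior (Eqs. 9-10)
prec = 1.0 / Sigma1 + sum(1.0 / v - 1.0 / Sigma1 for v in var_pred)
var_kf = 1.0 / prec
mean_kf = var_kf * sum(m / v for m, v in zip(mu_pred, var_pred))

# Brute-force check: posterior of phi1 from the joint Gaussian of all observations
N = n_children * n
yy = y.ravel()
C = gamma1 * np.ones((N, N)) + V * np.eye(N)
for c in range(n_children):
    C[c*n:(c+1)*n, c*n:(c+1)*n] += gamma2   # extra within-leaf covariance from phi2
mean_direct = gamma1 * np.ones(N) @ np.linalg.solve(C, yy)
var_direct = gamma1 - gamma1**2 * np.ones(N) @ np.linalg.solve(C, np.ones(N))
```

Both routes give the same parent posterior, but the recursive pass touches each node once instead of inverting an N×N covariance matrix.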
(13)

The computational complexity of the algorithm is linear in the number of nodes in the hierarchy, and for each parent node we perform an operation that is cubic in the number of children. Hence, for most hierarchies that arise in practical applications, the complexity is "essentially" linear in the number of nodes.

Expectation Maximization: To estimate all parameters simultaneously, we use an EM algorithm that treats the φ parameters as the missing latent variables. The expectation step computes the expected value of the complete log-posterior with respect to the conditional distribution of the missing data φ, obtained using the multi-scale Kalman filter algorithm. The maximization step obtains revised estimates of the other parameters by maximizing the expected complete log-posterior:

V̂ = Σ_{r∈T_L} [ Σ_{i=1}^{n_r} (e_ir − φ̂^L_r)^2 + n_r σ^L_r ] / Σ_{r∈T_L} n_r,   (14)

and, for j = 1, ..., L,

γ̂_j = Σ_{r∈T_j} ( σ^j_r + σ^{j-1}_{pa(r)} − 2σ^{j,j-1}_{r,pa(r)} + (φ̂^j_r − φ̂^{j-1}_{pa(r)})^2 ) / m_j.   (15)

Updating β̂: We use the posterior mean of φ obtained from the Kalman filtering step to compute the posterior mean of β, as given in equation (16):

β̂ = (X'X)^{-1} X'(Y − φ̂^L),   (16)

where φ̂^L is the vector of φ̂^L_r corresponding to each observation y_ir at its leaf node r.

2.2 Simulation Performance

We first perform a simulation study with the hierarchy described in [7, 8]. The data concern 2449 Guatemalan children who belong to 1558 families, which in turn live in 161 communities. The response variable of interest is binary, with a positive label assigned to a child if he/she received a full set of immunizations. The actual data contain 15 covariates capturing individual, family and community level characteristics, as shown in Table 2. For our simulation study, we consider only three covariates, with all entries of the coefficient vector β set to 1. We simulated a Gaussian response as follows:

y_ir | b ~ N(x'_ir β + b^1_r + b^2_r, 10),   where b^1_r ~ N(0, 4) and b^2_r ~ N(0, 1).
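The simulation setup above can be sketched directly as nested draws over communities and families; the group sizes below are made up for illustration and much smaller than in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

n_comm, fam_per_comm, child_per_fam, p = 8, 5, 3, 3
beta = np.ones(p)

# Level-1 (community) and level-2 (family) effects, matching b1 ~ N(0,4), b2 ~ N(0,1)
b1 = rng.normal(0, 2.0, n_comm)
b2 = rng.normal(0, 1.0, (n_comm, fam_per_comm))

rows = []
for c in range(n_comm):
    for f in range(fam_per_comm):
        x = rng.normal(size=(child_per_fam, p))
        # y_ir | b ~ N(x' beta + b1 + b2, 10)
        y = x @ beta + b1[c] + b2[c, f] + rng.normal(0, np.sqrt(10), child_per_fam)
        rows.append((x, y))

X = np.vstack([x for x, _ in rows])
Y = np.concatenate([y for _, y in rows])
```

Each observation's mean picks up exactly one community effect and one family effect, which is the lineage-path structure the Kalman filter exploits.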
We simulated 100 data sets and compared the estimates from the Kalman filter to those obtained from the standard routine lme4 in the statistical software R. The results from our procedure agreed almost exactly with those from lme4, and our computation was many times faster. The EM method converged rapidly and required at most 30 iterations.

3 MLH for Non-Gaussian Responses

We discuss model fitting for a Bernoulli response, but note that other distributions in the generalized linear model family can easily be fitted using the same procedure. Let y_ir ~ Bernoulli(p_ir), i.e., P(y_ir) = p_ir^{y_ir} (1 − p_ir)^{1−y_ir}, and let θ_ir = log(p_ir / (1 − p_ir)) be the log-odds. The MLH logistic regression is defined as

θ_ir = x'_ir β + φ^L_r,   (17)

with the same multi-level prior as described in equation (2). The non-conjugacy of the normal multi-level prior makes the computation more difficult. We take recourse to a Taylor series approximation coupled with the Kalman filter algorithm. The estimates obtained are biased; we recommend cross-validation and a parametric bootstrap (adapted from [4]) to correct for the bias. The bootstrap procedure, though expensive, is easily parallelizable and accurate.

Algorithm 1: The bootstrap procedure
  Let θ = (β, γ). Obtain θ̃ as an initial estimate of θ. Set the bias b^(0) = 0.
  for i = 1 to N do
    θ̂ = θ̃ − b^(i−1)
    for j = 1 to M do
      Use θ̂ to simulate new data set j, by simulating φ and the corresponding Y.
      From data set j, obtain a new estimate of θ as θ̃^(j).
    end for
    b^(i) = (1/M) Σ_{j=1}^{M} θ̃^(j) − θ̂
  end for

3.1 Approximation Methods

Let η_ir = x'_ir β̂ + φ̂^L_r, where β̂ and φ̂^L_r are the current estimates of the parameters in our algorithm. We make a quadratic approximation to the log-likelihood through a second order Taylor expansion (Laplace approximation) around η_ir. This enables us to do the calculations as in the Gaussian case, with the response y_ir replaced by Z_ir, where

Z_ir = η_ir + (2y_ir − 1) / g((2y_ir − 1) η_ir),   (18)

and g(x) = 1/(1 + exp(−x)). Approximately,

Z_ir ~ N(x'_ir β + φ^L_r, 1/(g(η_ir) g(−η_ir))).
(19)

Now denote e_ir = Z_ir − x'_ir β̂, and write the approximated variance of Z_ir as V_ir. Analogous to equations (3) and (4), the filtering step for the leaf nodes becomes

φ^L_{r|r} = σ^L_{r|r} Σ_{i=1}^{n_r} e_ir / V_ir,   (20)

σ^L_{r|r} = (1/Σ_L + Σ_{i=1}^{n_r} 1/V_ir)^{−1}.   (21)

The step for estimating β becomes

β̂ = (X'WX)^{−1} X'W(Z − φ̂^L),   (22)

where W = diag(1/V_ir). All the other computational steps remain the same as in the Gaussian case.

3.2 Bias correction

Table 2 shows the estimates of the parameters obtained from our approximation method in the column titled KF. Compared to the unbiased estimates obtained from the slow Gibbs sampler, it is clear that our estimates are biased. Our bias correction procedure is described in Algorithm 1. In general, a value of M = 50 with about 100−200 iterations worked well for us. The bias corrected estimates are reported under KF-B in Table 2; after the bootstrap correction, the estimates are closer to those obtained from Gibbs sampling.

It is also customary to estimate hyperparameters like γ using a tuning dataset. To test the performance of such a strategy, we created a two-dimensional grid for (√γ_1, √γ_2) on the epidemiological Guatemalan data set, ranging over [.1, 3] × [.1, 3], and computed the log-likelihood on a 10% randomly sampled hold-out set. For each point on the two-dimensional grid, we estimated the other parameters φ and β using our EM algorithm with γ held fixed. The estimates at the optimal value of γ are shown in Table 2 under KF-C; they are better than KF but worse than KF-B. Based on our findings, we recommend KF-B when computing resources are available (especially multiple processors) and running time is not a big constraint; if runtime is an issue, we recommend a grid search using a small number of points around the initial estimate.

4 Content Match Data Analysis

We analyze data from an internet advertising application in which every showing of an ad on a web page (called an impression) constitutes an event.
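Before turning to the data: the mechanics of the iterated correction in Algorithm 1 can be seen in isolation on a deliberately biased toy estimator, the MLE of a normal variance, whose expectation is θ(n−1)/n. This is not the MLH model itself; the estimator and every constant below are made up, and the fixed point approximately recovers the familiar n/(n−1) correction.

```python
import numpy as np

rng = np.random.default_rng(2)

def biased_var(x):
    # MLE of the variance; E[biased_var] = theta*(n-1)/n, i.e. biased downward
    return np.mean((x - x.mean()) ** 2)

def simulate(theta, n):
    return rng.normal(0.0, np.sqrt(theta), n)

n, M, n_iter = 20, 1000, 10
x = simulate(4.0, n)              # one observed data set from a "true" theta = 4
theta_tilde = biased_var(x)       # initial (biased) estimate, Algorithm 1's theta-tilde

bias = 0.0
for _ in range(n_iter):
    theta_hat = theta_tilde - bias                                # current corrected estimate
    replicates = [biased_var(simulate(theta_hat, n)) for _ in range(M)]
    bias = np.mean(replicates) - theta_hat                        # Monte Carlo bias at theta_hat

theta_corrected = theta_tilde - bias
```

The outer loop matters: estimating the bias at the corrected value rather than at the initial estimate is what lets the iteration converge to an (almost) unbiased fixed point, as in [4].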
The goal is to rank ads on a given page based on click-through rates. Building a predictive model for click-rates via features derived from pages and ads is an attractive approach. In our case, semantic features are obtained by classifying pages and ads into a large seven-level content hierarchy that is manually constructed by humans. We form a new hierarchy (a pyramid) by taking the cross product of the two hierarchies. This is used to estimate smooth click-rates of (page, ad) pairs.

Table 2: Estimates for the binary MLH model of complete immunization (Kalman filtering results).

Effects                                    KF     KF-B   KF-C   Gibbs
Fixed effects, individual:
  Child age ≥2 years                       0.99   1.77   1.18   1.84
  Mother age ≥25 years                    -0.09  -0.16  -0.10  -0.26
  Birth order 2-3                         -0.10  -0.18  -0.25  -0.29
  Birth order 4-6                          0.13   0.25   0.10   0.21
  Birth order ≥7                           0.20   0.36   0.21   0.50
Fixed effects, family:
  Indigenous, no Spanish                  -0.05  -0.11   0.02  -0.22
  Indigenous Spanish                       0.00   0.01   0.02  -0.11
  Mother's education primary               0.22   0.44   0.32   0.48
  Mother's education secondary or better   0.23   0.44   0.27   0.46
  Husband's education primary              0.30   0.53   0.39   0.59
  Husband's education secondary or better  0.27   0.48   0.35   0.55
  Husband's education missing              0.02   0.04  -0.08   0.00
  Mother ever worked                       0.21   0.35   0.24   0.42
Fixed effects, community:
  Rural                                   -0.50  -0.91  -0.62  -0.96
  Proportion indigenous, 1981             -0.67  -1.23  -0.89  -1.22
Random effects standard deviations (γ):
  Family                                   0.74   2.40   1.92   2.60
  Community                                0.56   1.05   0.81   1.13

4.1 Training and Test Data

Although the page and ad hierarchies consist of 7 levels, classification is often done at coarser levels by the classifier; in fact, the average level at which classification took place is 3.8. To train our model, we only consider the top 3 levels of the original hierarchy. Pages and ads that are classified at coarser levels are randomly assigned to children nodes. Overall, the pyramid has 441, 25751 and 241292 nodes at the top 3 levels.
The training data were collected by confining to a specific subset of data, which is sufficient to illustrate our methodology but in no way representative of the actual publisher traffic received by the ad-network under consideration. The training data we collected span 23 days and consist of approximately 11M binary observations with approximately 1.9M clicks. The test set consisted of 1 day's worth of data with approximately .5M observations. We randomly split the test data into 20 equal-sized partitions to report our results. The covariates include the position at which an ad is shown; ranking ads on pages after adjusting for positional effects is important since the positional effects introduce strong bias in the estimates. In the training data a large fraction of leaf nodes in the pyramid (approximately 95%) have zero clicks; this provides a good motivation to fit the binary MLH on this data to get smoother estimates at leaf nodes by using information at coarser resolutions.

4.2 Results

We compare the following models using log-likelihood on the test data: a) the model which predicts a constant probability for all examples, b) 3-level MLH without positional effects, c) top 2-level MLH to illustrate the gains of using information at a finer resolution, and d) 3-level MLH with positional effects to illustrate the generality of the approach; one can incorporate both additional features and the hierarchy into a single model. Figure 1 shows the distribution of average test likelihood on the partitions. As expected, all variations of MLH are better than the constant model. The MLH model which uses only 2 levels is inferior to the 3-level MLH, while the general model that uses both covariates and hierarchy is the best.

Figure 1: Distribution of test log-likelihood on 20 equal-sized splits of test data.
5 Discussion

In applications where data is aggregated at multiple resolutions with sparsity at finer resolutions, multi-level hierarchical models provide an attractive class to reduce variance by smoothing estimates at finer resolutions using data at coarser resolutions. However, the smoothing provides a better bias-variance tradeoff only when the hierarchy provides a natural clustering for the response variable and captures some latent characteristics of the process; this is often true in practice. We proposed a fast novel algorithm to fit these models based on a multi-scale Kalman filter that is both scalable and easy to implement. For the non-Gaussian case, the estimates are biased but performance can be improved by using a bootstrap correction or estimation through a tuning set. In future work, we will report on models that generalize our approach to an arbitrary number of hierarchies that may all have different structure. This is a challenging problem since in general the cross product of trees is not a hierarchy but a graph.
An improved estimator of Variance Explained in the presence of noise

Ralf M. Haefner∗ Laboratory for Sensorimotor Research, National Eye Institute, NIH, Bethesda, MD 20892. ralf.haefner@gmail.com

Bruce G. Cumming Laboratory for Sensorimotor Research, National Eye Institute, NIH, Bethesda, MD 20892. bgc@lsr.nei.nih.gov

∗Corresponding author (ralf.haefner@gmail.com)

Abstract

A crucial part of developing mathematical models of information processing in the brain is the quantification of their success. One of the most widely-used metrics yields the percentage of the variance in the data that is explained by the model. Unfortunately, this metric is biased due to the intrinsic variability in the data. We derive a simple analytical modification of the traditional formula that significantly improves its accuracy (as measured by bias) with similar or better precision (as measured by mean-square error) in estimating the true underlying Variance Explained by the model class. Our estimator advances on previous work by a) accounting for overfitting due to free model parameters, mitigating the need for a separate validation data set, b) adjusting for the uncertainty in the noise estimate and c) adding a conditioning term. We apply our new estimator to binocular disparity tuning curves of a set of macaque V1 neurons and find that on a population level almost all of the variance unexplained by Gabor functions is attributable to noise.

1 Introduction

Constructing models of biological systems, e.g. in systems neuroscience, mostly aims at providing functional descriptions, not fundamental physical laws. It seems likely that any parametric model of signal processing in single neurons can be ruled out given a sufficient amount of data. Rather than only testing the statistical validity of a particular mathematical formulation against data, e.g. by using a χ2-test, it is equally important to know how much of the signal, or variance, in the data is explained by the model.
This is commonly measured by Variance Explained (VE), the coefficient of determination or r2 statistic. A fundamental problem of the traditional estimator for VE is its bias in the presence of noise in the data. This noise may be due to measurement error or sampling noise owing to the high intrinsic variability in the underlying data. This is especially important when trying to model cortical neurons where variability is ubiquitous. Either kind of noise is in principle unexplainable by the model and hence needs to be accounted for when evaluating the quality of the model. Since the total variance in the data consists of the true underlying variance plus that due to noise, the traditional estimator yields a systematic underestimation of the true VE of the model in the absence of noise [1][2][3]. This has been noted by several authors before us; David & Gallant compute the traditional measure at several noise levels and extrapolate it to the noise-free condition [1]. This method relies on many repeats of the same stimulus and is therefore often impractical. Sahani & Linden add an analytical correction to the traditional formula in order to reduce its bias [2]. A number of subsequent studies have used their corrections to evaluate their models (e.g. [4][5][6]). We further improve on Sahani & Linden's formula in three ways: 1) most importantly, by accounting for the number of parameters in the model, 2) by adding a correction term for the uncertainty in the noise estimation, and 3) by including a conditioning term to improve the performance in the presence of excessive noise. We propose a principled method to choose the conditioning term in order to selectively minimize either the bias or the mean-square-error (MSE) of the estimator.
In numerical simulations we find that the analytical correction alone is capable of drastically reducing the bias at moderate and high noise levels while maintaining a mean-square-error about as good as the traditional formula. Only for very high levels of noise is it advantageous to make use of the conditioning term. We test the effect of our improved formula on a data set of disparity-selective macaque V1 neurons and find that for many cells noise accounts for most of the unexplained variance. On a population level we find that after adjusting for the noise, Gabor functions can explain about 98% of the underlying response variance.

2 Derivation of an improved estimator

2.1 Traditional Variance Explained

Given a set of N measurements di of process D and given the model predictions mi, the traditional Variance Explained ν is computed as the difference of the total variance var(di) and the variance of the residuals of the model, var(di − mi). It is usually reported as a fraction of total variance:

$$\nu = \frac{\mathrm{var}(d_i) - \mathrm{var}(d_i - m_i)}{\mathrm{var}(d_i)} = 1 - \frac{\mathrm{var}(d_i - m_i)}{\mathrm{var}(d_i)} = 1 - \frac{\sum_{i=1}^{N}(d_i - m_i)^2}{\sum_{i=1}^{N}(d_i - \bar{d})^2}. \quad (1)$$

In most cases, the di themselves are averages of individual measurements and subject to a sampling error. Since the variances of independent random variables add, this measurement noise leads to additive noise terms in both numerator and denominator of equation (1). Below we show that as the noise level increases, ν → (n − 1)/(N − 1), with n being the number of model parameters (see equation 8). The consequence is a systematic misestimation of the true Variance Explained (typically underestimation, since (n − 1)/(N − 1) is usually smaller than the true VE). The effect of this can be seen in Figure 1 for two example simulations. In each simulation we fit a model to simulated noisy data sampled from a different but known underlying function. This allows us to compare the estimated VE to the true one, in the absence of noise.
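The traditional estimator in equation (1) is straightforward to compute. A minimal sketch (function name ours):

```python
import numpy as np

def traditional_ve(d, m):
    """Traditional Variance Explained (eq. 1): the fraction of the
    variance in the data d captured by the model predictions m."""
    d = np.asarray(d, dtype=float)
    m = np.asarray(m, dtype=float)
    resid = np.sum((d - m) ** 2)          # residual sum of squares
    total = np.sum((d - d.mean()) ** 2)   # total sum of squares around the mean
    return 1.0 - resid / total
```

A perfect fit yields ν = 1, and a model that only predicts the mean of the data yields ν = 0, matching the usual r2 interpretation.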
The average bias (estimated VE minus true VE) of the traditional variance explained is shown for 2000 instantiations of each simulation (shown as triangles). As we simulate an increase in sampling noise, the variance explained decreases significantly, underestimating the true VE by up to 30% in our examples.

2.2 Noise bias

Let $\bar{d}_i = \frac{1}{R_i}\sum_{j=1}^{R_i} d_{ij}$, where the Ri are the number of observations for each variable i. We further assume that the measured dij are drawn from a Gaussian distribution around the true means Di with a variance of $R\Sigma_i^2$. Then the $\bar{d}_i$ are drawn from $N[D_i; \Sigma_i^2]$. To simplify the presentation we assume that the variables have been transformed to equalize all Σ ≡ Σi and that R ≡ Ri. It follows that

$$\sigma^2 = \frac{1}{RN(R-1)} \sum_{i=1}^{N}\sum_{j=1}^{R} (d_{ij} - \bar{d}_i)^2$$

is an estimate of Σ2 based on measurements with Nσ = N(R − 1) degrees of freedom. In the terms of Sahani & Linden [2], σ2 is the noise power. Our estimator, however, is more direct and accurate – especially for small N and R. Let Mi be the best-fitting model to Di of a given model class with n parameters. Then the variance explained in the absence of noise becomes:

$$\nu_0 = 1 - \frac{\mathrm{var}(M_i - D_i)}{\mathrm{var}(D_i)} = 1 - \frac{\sum_{i=1}^{N}(D_i - M_i)^2}{\sum_{i=1}^{N}(D_i - \bar{D})^2}, \quad (2)$$

where $\bar{D} = \frac{1}{N}\sum_{i=1}^{N} D_i$. Then ν0 is the true value for the Variance Explained that one would like to know: based on the best fit of the model class to the underlying data in the absence of any measurement or sampling noise. ν0 is of course unknown, and the values obtained by (1) are drawn from a probability distribution around the true Variance Explained. Normalizing both denominator and numerator of formula (1) by σ2 leaves ν unchanged.
However, it becomes clear that the resulting denominator is drawn from a noncentral F-distribution:

$$\frac{1}{N-1}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{\bar{d}})^2}{\sigma^2} = \frac{\frac{1}{N-1}\sum_{i=1}^{N}(\bar{d}_i - \bar{\bar{d}})^2/\Sigma^2}{\frac{1}{N_\sigma}\sum_{i=1}^{N}\sum_{j=1}^{R}(d_{ij} - \bar{d}_i)^2/(R\Sigma^2)} \sim \frac{\chi^2_{N-1}(\lambda_{DD})/(N-1)}{\chi^2_{N_\sigma}/N_\sigma} \quad (3)$$

with N − 1 and Nσ = N(R − 1) degrees of freedom, the noncentrality parameter $\lambda_{DD} = \sum_{i=1}^{N}(D_i - \bar{D})^2/\Sigma^2$ and $\bar{\bar{d}} = \frac{1}{N}\sum_{i=1}^{N}\bar{d}_i$. For Nσ > 2 the mean of this distribution is given by

$$E\left[\frac{1}{N-1}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{\bar{d}})^2}{\sigma^2}\right] = \frac{N_\sigma(N-1+\lambda_{DD})}{(N-1)(N_\sigma-2)}. \quad (4)$$

Hence, an unbiased estimator of $\sum_{i=1}^{N}(D_i - \bar{D})^2/\Sigma^2 = \lambda_{DD}$ is given by

$$\hat{\lambda}_{DD} = \frac{N_\sigma-2}{N_\sigma}\sum_{i=1}^{N}\frac{(\bar{d}_i - \bar{\bar{d}})^2}{\sigma^2} - (N-1). \quad (5)$$

With the same reasoning we find that the numerator of equation (1),

$$\frac{1}{N-n}\sum_{i=1}^{N}\frac{(\bar{d}_i - m_i)^2}{\sigma^2} \sim \frac{\chi^2_{N-n}(\lambda_{DM})/(N-n)}{\chi^2_{N_\sigma}/N_\sigma}, \quad (6)$$

follows a noncentral F-distribution with N − n and Nσ degrees of freedom and the noncentrality parameter $\lambda_{DM} = \sum_{i=1}^{N}(D_i - M_i)^2/\Sigma^2$. Hence, an unbiased estimator of $\sum_{i=1}^{N}(D_i - M_i)^2/\Sigma^2 = \lambda_{DM}$ is given by

$$\hat{\lambda}_{DM} = \frac{N_\sigma-2}{N_\sigma}\sum_{i=1}^{N}\frac{(\bar{d}_i - m_i)^2}{\sigma^2} - (N-n). \quad (7)$$

Combining (5) and (7) yields an estimator for ν0 whose numerator and denominator are individually unbiased:

$$\Upsilon[\nu_0] = 1 - \frac{\sum_{i=1}^{N}\left(\frac{\bar{d}_i - m_i}{\sigma}\right)^2 - \frac{N_\sigma(N-n)}{N_\sigma-2}}{\sum_{i=1}^{N}\left(\frac{\bar{d}_i - \bar{\bar{d}}}{\sigma}\right)^2 - \frac{N_\sigma(N-1)}{N_\sigma-2}}. \quad (8)$$

Note that apart from the difference in noise estimation, the estimator proposed by Sahani & Linden is contained in ours as a special case, becoming identical when there is no uncertainty in the noise estimate (Nσ → ∞) and when testing a model with no free parameters (n = 0). Nσ → ∞ is an excellent approximation in their case of fitting receptive fields to long series of data, but less so in the case of fitting tuning curves with a limited number of data points. However, the fact that their noise term does not account for overfitting due to free parameters in the model means that their formula overestimates the true Variance Explained. Hence, it requires a separate validation data set, which might be costly to obtain.
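The bias-corrected estimator of equation (8) can be sketched directly; the function name and argument conventions are ours, with sigma2 the noise-power estimate σ2 defined above and N_sigma = N(R − 1):

```python
import numpy as np

def unbiased_ve(d, m, sigma2, N_sigma, n):
    """Bias-corrected Variance Explained (sketch of eq. 8).

    d       : measured means d_i (length N)
    m       : fitted model values m_i
    sigma2  : noise-power estimate sigma^2 with N_sigma = N(R-1) d.o.f.
    n       : number of free model parameters
    """
    d = np.asarray(d, dtype=float)
    m = np.asarray(m, dtype=float)
    N = d.size
    # numerator and denominator are each unbiased after subtracting the
    # expected chi^2 contribution of the noise (eqs. 5 and 7)
    num = np.sum((d - m) ** 2) / sigma2 - N_sigma * (N - n) / (N_sigma - 2)
    den = np.sum((d - d.mean()) ** 2) / sigma2 - N_sigma * (N - 1) / (N_sigma - 2)
    return 1.0 - num / den
```

For a model with n = N parameters that fits the means exactly, the corrected numerator vanishes and the estimator returns 1, as it should.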
At this point we wish to note that (5), (7) and (8) readily generalize to cases where the noise level Σi and the number of observations Ri on which the means $\bar{d}_i$ are based (and therefore Nσi) differ between data points.

2.3 Conditioning term

First it is important to note that while both numerator and denominator in formula (8) are now unbiased, the ratio is generally not. In fact, the ratio is not even well-defined for arbitrary measurements since the denominator can become zero or negative. In practice this is avoided by implicit or explicit selection criteria imposed by the experimenter requiring a minimum SNR in the data before further analysis. An example would be a criterion based on the significance level pANOVA of the modulation in the data as assessed by a 1-way ANOVA test. (Any criterion can be used in the context of the framework described here, as long as it is used consistently.) The effect of such a criterion is to cut off the lower tail of the distribution from which the denominator is drawn, to exclude zero. This introduces a bias to the denominator, the size of which depends on the amount of noise and the strictness of the criterion used. We recognize that both biases are strongest when the data is such that the ratio is close to singular and therefore propose an additive conditioning term C in the denominator of (8):

$$\Upsilon(C) = 1 - \left[\sum_{i=1}^{N}\left(\frac{\bar{d}_i - m_i}{\sigma}\right)^2 - \frac{N_\sigma(N-n)}{N_\sigma-2}\right] \Big/ \left[\sum_{i=1}^{N}\left(\frac{\bar{d}_i - \bar{\bar{d}}}{\sigma}\right)^2 - \frac{N_\sigma(N-1)}{N_\sigma-2} + C\right]. \quad (9)$$

Depending on the application, the optimal C can be chosen to either minimize the mean-square-error (MSE) $E[(\Upsilon(C) - \nu_0)^2]$ or the bias $|E[\Upsilon(C)] - \nu_0|$ of the estimator. Generally, the optimal levels of conditioning for the two scenarios are different, i.e. unbiasedness comes at the expense of an increased MSE and vice versa. For individual estimates a small bias can be acceptable in order to improve accuracy (and hence minimize MSE). When averaging over a large number of estimates, e.g.
from a population of neurons, it becomes important that the estimator is unbiased. C = C(N, n, Nσ, λDM, λDD; pANOVA) is itself a function of a number of variables, only two of which, λDM and λDD, are unknown a priori. We approximate them by our estimates from equations (5) and (7). The optimal C can then be determined in each case by a simple minimization across a large number of random samples drawn from the appropriate distributions (compare equations (3) and (6)):

$$C_{\mathrm{bias}}: \min_C \left| E\left[\Upsilon(C)\right] - (1 - \lambda_{DM}/\lambda_{DD}) \right| \quad \text{and therefore:} \quad (10)$$

$$C_{\mathrm{bias}}: \min_C \left| E\left[\frac{\chi^2_{N-n}(\lambda_{DM})/\chi^2_{N_\sigma} - (N-n)/(N_\sigma-2)}{\chi^2_{N-1}(\lambda_{DD})/\chi^2_{N_\sigma} - (N-1)/(N_\sigma-2) + C/N_\sigma}\right] - \frac{\lambda_{DM}}{\lambda_{DD}} \right| \quad (11)$$

$$C_{\mathrm{MSE}}: \min_C E\left[\left(\frac{\chi^2_{N-n}(\lambda_{DM})/\chi^2_{N_\sigma} - (N-n)/(N_\sigma-2)}{\chi^2_{N-1}(\lambda_{DD})/\chi^2_{N_\sigma} - (N-1)/(N_\sigma-2) + C/N_\sigma} - \frac{\lambda_{DM}}{\lambda_{DD}}\right)^2\right] \quad (12)$$

Note that the $\chi^2_{N_\sigma}$ distributions in numerator and denominator, sampling over varying estimates of the underlying noise σ2, are shared in both formulas since the σ2 is shared. Those two minimization problems can easily be solved by Monte-Carlo sampling of the probability distributions and subsequently finding the minimum of MSE or bias, respectively, across all samples.

2.4 Application to simulated data

Figure 1 demonstrates the performance of various estimators of VE for three synthetic examples. In the left column we show the results when testing a model that consists of a 3rd-degree polynomial that has been fit to noisy data sampled from a Gaussian distribution around an underlying sine function. Over the domain studied here, the true VE of the model as fit to the data in the noiseless condition would be 77%. The center & right columns show the case of a Gabor function that is fit to noisy data sampled from a difference-of-Gaussians "reality". Here the true VE is 90%. The center column simulates Gaussian and the right column Gamma noise (Fano factor of 2). We confirm that the traditional VE measure (triangles) has an increasingly negative bias with increasing noise level σ.
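The Monte-Carlo minimization in equation (12) can be sketched as below: draw the shared χ2 with Nσ degrees of freedom once, form the numerator and denominator ratios from noncentral χ2 samples, and scan a grid of C values. The function name, the grid-based search, and the sample count are our own choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_c_mse(N, n, N_sigma, lam_dm, lam_dd, C_grid, S=20000):
    """Pick the conditioning term C minimizing Monte-Carlo MSE (sketch of eq. 12)."""
    target = lam_dm / lam_dd
    # shared chi^2_{N_sigma} draws: numerator and denominator use the same sigma^2
    chi_s = rng.chisquare(N_sigma, S)
    num = rng.noncentral_chisquare(N - n, lam_dm, S) / chi_s \
        - (N - n) / (N_sigma - 2)
    den = rng.noncentral_chisquare(N - 1, lam_dd, S) / chi_s \
        - (N - 1) / (N_sigma - 2)
    # evaluate the MSE of the conditioned ratio on every candidate C
    mse = [np.mean((num / (den + C / N_sigma) - target) ** 2) for C in C_grid]
    return C_grid[int(np.argmin(mse))]
```

The bias criterion of equation (11) is the same computation with the squared error replaced by the absolute value of the mean error across samples.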
Applying the Sahani-Linden correction (squares), this negative bias is turned into a positive one, since the overfitting of noise due to the free parameters in the model is not taken into consideration. This leads to an overestimation of the true VE when applied to the fitting data instead of a separate set of validation data. Accounting for the number of parameters greatly reduces the bias to close to zero across a large range of noise levels (dots). The bias becomes notable only at the highest noise levels (at which a large number of data samples does not pass the ANOVA test for significant modulation), while still remaining smaller than that of the traditional estimator. The reason for the decreasing bias of the Sahani-Linden estimator at very high noise levels is the coincidental cancellation of two bias terms: the negative bias at high noise levels also seen in our estimator for Gabor fits to differences of Gaussians, and their general positive bias due to not taking the over-fitting of parameters into account. Comparing the MSE (shown as root-mean-square-error, RMSE) of the different estimators shows that they are similar in the case of fitting a polynomial (left column) and significantly improved in the case of fitting a Gabor function (center & right columns – note the different y-axis scales among the columns).¹

The bottom two rows simulate the situation where our prior knowledge that 0 ≤ VE ≤ 1 is explicitly enforced. Since the numerator in our unbiased estimator (eq. 8) yields values around its noiseless value that can be positive and negative, the estimator can be negative or greater than one. Restricting our estimator to [0..1] interferes with its unbiasedness. We test whether a conditioning term can improve the performance of our estimator and find that this is the case for the Gabor fit, but not the polynomial fit. In the case of the Gabor fit, the improvement due to the conditioning term is greatest at the highest noise levels, as expected.

Figure 1: Simulation results. Left column: a 3rd-degree polynomial is fit to noisy data drawn from an underlying sine function. Center & right columns: a Gabor function is fit to noisy data around a linear combination of three Gaussians – two 'excitatory' and one 'inhibitory'. Left & center: Gaussian noise; right: Gamma-distributed noise (Fano factor of 2). First row: data (stars) and model (lines) are shown in the noise-free condition. Their true VE is 77% and 90%, respectively. Rows 2-5: bias (defined as estimated minus true VE) and RMSE are shown as a function of noise σ. The traditional estimator is shown by triangles, the Sahani-Linden correction by squares, our estimator from eq. (8) by dots. Rows 4 & 5: we enforce our prior knowledge that 0 ≤ ν ≤ 1. Estimators with the conditioning term C (eq. 9) optimized for bias (+) and MSE (x), both dashed, are shown. Restricting VE to 0 ≤ ν ≤ 1 is the reason for the plateau in the bias of the Sahani-Linden estimator (right column, fourth from the top). In all panels, data samples with insignificant variation in the data (pANOVA > 0.05) were excluded from the analysis. Note the different scales in each panel.

Figure 2: Tradeoff between number of conditions N and number of repetitions R at each condition. Traditional measure: triangles; unbiased estimate: dots. The total number of measurements was fixed at N · R = 120, while the number of different conditions N is varied along the abscissa.
The bias is decreased at the highest three noise levels tested and the MSE is slightly decreased (at the highest noise level) or the same as without conditioning. Where the purely analytical formula outperforms the one with conditioning, that is because the approximations we have to make in determining the optimal C are greater than the inaccuracy of the analytical formula at those noise levels. This is especially true in the 3rd column, where the strongly non-Gaussian noise is incompatible with the Gaussian assumption in our computation of C. We conclude that unless one has to estimate VE in the presence of extremely high noise, and has confirmed that conditioning provides an improvement for the particular situation under consideration, our analytical estimator is preferable. (Note the different y-axis scales across the 2nd and 4th rows.)

Using an estimator that accounts for the amount of noise has another major benefit. Because the total number of measurements N · R one can make is usually limited, there is a tradeoff between the number of conditions N and the number of repeats R. Everything else being equal, the result from the traditional estimator for VE will depend strongly on that choice: the more conditions and the fewer repeats, the higher the standard error of the means σ (noise) and hence the lower the estimated VE will be – regardless of the model. Figure 2 demonstrates this behavior in the case of fitting a Gabor to a difference-of-Gaussians exactly as in Figure 1. Keeping the total number of measurements constant, the traditional VE (triangles) decreases drastically as the number of conditions N is increased. The new unbiased estimator (dots) in comparison has a much reduced bias and depends only weakly on R. This means that relatively few repeats (but at least 2) are necessary, allowing many more conditions to be tested than previously, hence increasing resolution.
¹It is not surprising that the precise behavior of the respective estimators varies between examples. Two approximations were made in the analytical derivation: (1) the model is approximately linear in its parameters and (2) unbiasing the denominator is not the same as unbiasing the ratio. Both approximations are accurate in the small-noise regime. However, as noise levels increase they introduce biases that interact depending on the situation.

Figure 3: Disparity tuning curves of V1 neurons fit with a Gabor function. A: Data from an example neuron shown with standard-error-of-the-mean (SEM) errorbars. The estimate of VE by the Gabor fit (solid line) changes from 85% to 93% when noise is adjusted for. B: Data from a second example neuron. The VE of the Gabor fit changes from 94% to 95%. χ2-test on compatibility of data with model: pχ2 = 4·10−4. C: Unbiased VE as a function of signal-to-noise power. One outlier at (0.93; 4.0) is not shown. D: Traditional VE estimate vs. unbiased VE with conditioning to minimize MSE. VE values are limited to the 0..1 range. C & D: Filled symbols denote cells whose responses are incompatible with the Gabor model, as evaluated by a χ2-test (pχ2 < 0.05).

3 Application to experimental data

3.1 Methods

The data are recorded extracellularly from isolated V1 neurons in two awake, fixating rhesus macaque monkeys and have been previously published in [7]. The stimulus consisted of dynamic random dots (RDS) with a binocular disparity applied perpendicular to the preferred orientation of the cell. We only included neurons in the analysis which were significantly modulated by binocular disparity as evaluated by a one-way ANOVA test. 109 neurons passed the test with pANOVA < 0.05.
Since neuronal spike counts are approximately Poisson distributed, we perform all subsequent analysis using the square root of the spike rates to approximately equalize variances. We fit a Gabor function with six parameters to the spike rates of each cell and perform a χ2-test on the residuals. The minimum number of different conditions was Nmin = 13 and the median number of repeats was median(R) = 15.

3.2 Results

Most disparity tuning curves in V1 are reasonably well-described by Gabor functions, which explain more than 90% of the variance in two thirds of the neurons [8]. Whether the remaining third reflects a failure of the model or is merely a consequence of noise in the data has been an open question. Panels A & B in Figure 3 show the responses of two example cells together with their best-fitting Gabor functions. The traditional VE in panel A is only 82% even though the data is not significantly different from the model (pχ2 = 0.64). After adjusting for noise, the unbiased VE becomes 92%, i.e. more than half of the unexplained variance can be attributed to the response variability of each measurement. Panel B shows the opposite situation: 94% of the variance is explained according to the traditional measure and only an additional 1% can be attributed to noise. However, despite this high VE, since the measurement error is relatively small, the model is rejected with high significance (pχ2 = 4 · 10−4). Panel C shows the unbiased estimate of the VE for the entire population of neurons depending on their noise power relative to signal power. At high relative noise levels there is a wide spread of values, and for decreasing noise the VE values asymptote near 1. In fact, the overall population mean for the unbiased VE is 98%, compared with the traditional estimate of 82%. This means that for the entire population, most of the variance previously deemed unexplained by the model can in fact be accounted for by our uncertainty about the data.
22 out of 109 cells, or 20%, rejected the model (pχ2 < 0.05) and are denoted by filled circles. Panel D demonstrates the effect of the new measure on each individual cell. For the estimation of the true VE for each neuron individually, we incorporate our knowledge about the bounds 0 ≤ ν0 ≤ 1 and optimize the conditioning term for minimum MSE. With the exception of two neurons, the new estimate of the true VE is greater than the traditional one. On average, 40% of the unexplained variance in each individual neuron can be accounted for by noise.

4 Conclusions

We have derived a new estimator of the variance explained by models describing noisy data. This estimator improves on previous work in three ways: 1) by accounting for overfitting due to free model parameters, 2) by adjusting for the uncertainty in our estimate of the noise and 3) by describing a way to add an appropriate level of conditioning in cases of very low signal-to-noise in the data or other imposed constraints. Furthermore, our estimator does not rely on a large number of repetitions of the same stimulus in order to perform an extrapolation to zero noise. In numerical simulations with Gaussian and strongly skewed noise we have confirmed that our correction is capable of accounting for most noise levels and provides an estimate with greatly improved bias compared to previous estimators. We note that where the results from the two simulations differ, it is the more realistic simulation where the new estimator performs best. Another important benefit of our new estimator is that it addresses the classical experimenter's dilemma of a tradeoff between the number of conditions N and the number of repeats R at each condition. While the results from the traditional estimator quickly deteriorate with increasing N and decreasing R, the new estimator is much closer to invariant with respect to both – allowing the experimenter to choose a greater N for higher resolution.
When applying the new VE estimator to a data set of macaque V1 disparity tuning curves we find that almost all of the variance previously unaccounted for by Gabor fits can be attributed to sampling noise. For our population of 109 neurons we find that 98% of the variance can be explained by a Gabor model. This is much higher than previous estimates precisely because they did not account for the variability in their data, illustrating the importance of this correction especially in cases where the model is good. The improvement we present is not limited to neuronal tuning curves but will be valuable to any model testing where noise is an important factor.

Acknowledgments

We thank Christian Quaia and Stephen David for helpful discussions.

References

[1] S.V. David and J.L. Gallant, Network 16, 239 (2005).
[2] M. Sahani and J.F. Linden, Advances in Neural Information Processing Systems 15, 109 (2003).
[3] A. Hsu, A. Borst, and F.E. Theunissen, Network 15, 91 (2004).
[4] C.K. Machens, M.S. Wehr, and A.M. Zador, J Neurosci 24, 1089 (2004).
[5] I. Nauhaus, A. Benucci, M. Carandini, and D.L. Ringach, Neuron 57, 673 (2008).
[6] V. Mante, V. Bonin, and M. Carandini, Neuron 58, 625 (2008).
[7] R.M. Haefner and B.G. Cumming, Neuron 57, 147 (2008).
[8] S.J. Prince, A.D. Pointon, B.G. Cumming, and A.J. Parker, J Neurophysiol 87, 191 (2002).
The Infinite Hierarchical Factor Regression Model Piyush Rai and Hal Daumé III School of Computing, University of Utah {piyush,hal}@cs.utah.edu Abstract We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis. 1 Introduction Factor analysis is the task of explaining data by means of a set of latent factors. Factor regression couples this analysis with a prediction task, where the predictions are made solely on the basis of the factor representation. The latent factor representation achieves two-fold benefits: (1) discovering the latent process underlying the data; (2) simpler predictive modeling through a compact data representation. In particular, (2) is motivated by the problem of prediction in the "large P small N" paradigm [1], where the number of features P greatly exceeds the number of examples N, potentially resulting in overfitting. We address three fundamental shortcomings of standard factor analysis approaches [2, 3, 4, 1]: (1) we do not assume a known number of factors; (2) we do not assume factors are independent; (3) we do not assume all features are relevant to the factor analysis. Our motivation for this work stems from the task of reconstructing regulatory structure from gene-expression data. In this context, factors correspond to regulatory pathways. Our contributions thus parallel the needs of gene pathway modeling. In addition, we couple predictive modeling (for factor regression) within the factor analysis framework itself, instead of having to model it separately. Our factor regression model is fundamentally nonparametric.
In particular, we treat the gene-to-factor relationship nonparametrically by proposing a sparse variant of the Indian Buffet Process (IBP) [5], designed to account for the sparsity of relevant genes (features). We couple this IBP with a hierarchical prior over the factors. This prior explains the fact that pathways are fundamentally related: some are involved in transcription, some in signaling, some in synthesis. The nonparametric nature of our sparse IBP requires that the hierarchical prior also be nonparametric. A natural choice is Kingman's coalescent [6], a popular distribution over infinite binary trees. Since our motivation is an application in bioinformatics, our notation and terminology will be drawn from that area. In particular, genes are features, samples are examples, and pathways are factors. However, our model is more general. An alternative application might be to a collaborative filtering problem, in which case our genes might correspond to movies, our samples might correspond to users and our pathways might correspond to genres. In this context, all three contributions of our model still make sense: we do not know how many movie genres there are; some genres are closely related (romance to comedy versus to action); many movies may be spurious.

2 Background

Our model uses a variant of the Indian Buffet Process to model the feature-factor (i.e., gene-pathway) relationships. We further use Kingman's coalescent to model latent pathway hierarchies.

2.1 Indian Buffet Process

The Indian Buffet Process [7] defines a distribution over infinite binary matrices, originally motivated by the need to model the latent factor structure of a given set of observations. In the standard form it is parameterized by a scale value, α. The distribution can be explained by means of a simple culinary analogy. Customers (in our context, genes) enter an Indian restaurant and select dishes (in our context, pathways) from an infinite array of dishes.
The first customer selects Poisson(α) dishes. Thereafter, each incoming customer i selects a previously-selected dish k with probability m_k/i, where m_k is the number of previous customers who have selected dish k. Customer i then selects an additional Poisson(α/i) new dishes. We can easily define a binary matrix Z with value Z_{ik} = 1 precisely when customer i selects dish k. This stochastic process thus defines a distribution over infinite binary matrices. It turns out [7] that the stochastic process defined above corresponds to an infinite limit of an exchangeable process over finite matrices with K columns. This distribution takes the form

$$p(Z \mid \alpha) = \prod_{k=1}^{K} \frac{\frac{\alpha}{K}\,\Gamma\!\left(m_k + \frac{\alpha}{K}\right)\Gamma(P - m_k + 1)}{\Gamma\!\left(P + 1 + \frac{\alpha}{K}\right)},$$

where $m_k = \sum_i Z_{ik}$ and P is the total number of customers. Taking K → ∞ yields the IBP. The IBP has several nice properties, the most important of which is exchangeability. It is the exchangeability (over samples) that makes efficient sampling algorithms possible. There also exists a two-parameter generalization of the IBP in which the second parameter β controls the sharability of dishes.

2.2 Kingman's Coalescent

Our model makes use of a latent hierarchical structure over factors; we use Kingman's coalescent [6] as a convenient prior distribution over hierarchies. Kingman's coalescent originated in the study of population genetics for a set of single-parent organisms. The coalescent is a nonparametric model over a countable set of organisms. It is most easily understood in terms of its finite-dimensional marginal distributions over n individuals, in which case it is called an n-coalescent. We then take the limit n → ∞. In our case, the individuals are factors. The n-coalescent considers a population of n organisms at time t = 0. We follow the ancestry of these individuals backward in time, where each organism has exactly one parent at time t < 0.
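The culinary construction above is easy to simulate directly. The sketch below (the helper name `sample_ibp` is ours, not from the paper) draws a finite realization of the one-parameter process, producing a binary customer-by-dish matrix Z:

```python
import numpy as np

def sample_ibp(P, alpha, rng):
    """Simulate the IBP culinary process for P customers (genes).

    Customer i (1-based) takes existing dish k with probability m_k / i,
    then orders Poisson(alpha / i) brand-new dishes.
    """
    dish_counts = []            # m_k for each dish created so far
    chosen_per_customer = []
    for i in range(1, P + 1):
        chosen = [k for k, m_k in enumerate(dish_counts)
                  if rng.random() < m_k / i]
        for k in chosen:
            dish_counts[k] += 1
        for _ in range(rng.poisson(alpha / i)):   # new dishes for customer i
            dish_counts.append(1)
            chosen.append(len(dish_counts) - 1)
        chosen_per_customer.append(chosen)
    Z = np.zeros((P, len(dish_counts)), dtype=int)
    for i, chosen in enumerate(chosen_per_customer):
        Z[i, chosen] = 1
    return Z

rng = np.random.default_rng(0)
Z = sample_ibp(P=50, alpha=3.0, rng=rng)
# The expected number of dishes is alpha * H_P (the P-th harmonic number).
```

Every column of the resulting Z has at least one nonzero entry by construction, since a dish exists only once some customer has ordered it.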
The n-coalescent is a continuous-time, partition-valued Markov process which starts with n singleton clusters at time t = 0 and evolves backward, coalescing lineages until there is only one left. We denote by t_i the time at which the ith coalescent event occurs (note t_i ≤ 0), and by δ_i = t_{i−1} − t_i the time between events (note δ_i > 0). Under the n-coalescent, each pair of lineages merges independently with exponential rate 1; so $\delta_i \sim \mathrm{Exp}\!\left(\binom{n-i+1}{2}\right)$. With probability one, a random draw from the n-coalescent is a binary tree with a single root at t = −∞ and n individuals at time t = 0. We denote the tree structure by π. The marginal distribution over tree topologies is uniform and independent of coalescent times, and the model is infinitely exchangeable. We therefore consider the limit as n → ∞, called the coalescent. Once the tree structure is obtained, one can define an additional Markov process to evolve over the tree. One common choice is a Brownian diffusion process. In Brownian diffusion in D dimensions, we assume an underlying diffusion covariance Λ ∈ R^{D×D}, p.s.d. The root is a D-dimensional vector z. Each non-root node in the tree is drawn Gaussian with mean equal to the value of the parent, and variance δ_iΛ, where δ_i is the time that has passed. Recently, Teh et al. [8] proposed efficient bottom-up agglomerative inference algorithms for the coalescent. These (approximately) maximize the probability of π and the δ's, marginalizing out internal nodes by belief propagation. If we associate with each node in the tree a mean y and variance v message, we update messages as in Eq. (1), where i is the current node and l_i and r_i are its children:

$$v_i = \left[\left(v_{l_i} + (t_{l_i} - t_i)\Lambda\right)^{-1} + \left(v_{r_i} + (t_{r_i} - t_i)\Lambda\right)^{-1}\right]^{-1} \qquad (1)$$
$$y_i = \left[y_{l_i}\left(v_{l_i} + (t_{l_i} - t_i)\Lambda\right)^{-1} + y_{r_i}\left(v_{r_i} + (t_{r_i} - t_i)\Lambda\right)^{-1}\right] v_i$$

3 Nonparametric Bayesian Factor Regression

Recall the standard factor analysis problem: X = AF + E, for standardized data X. X is a P × N matrix consisting of N samples [x_1, ..., x_N] of P features each.
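The n-coalescent's generative description translates into a short simulator: draw each inter-event waiting time from its exponential distribution and merge a uniformly chosen pair of lineages. This is a sketch under the assumptions stated in the text (the function name and event encoding are ours):

```python
import numpy as np
from math import comb

def sample_coalescent(n, rng):
    """Sample an n-coalescent backward from t = 0.

    Before the i-th merge, k = n - i + 1 lineages remain and each of the
    C(k, 2) pairs coalesces at rate 1, so the waiting time is
    delta_i ~ Exp(C(k, 2)); the merging pair is chosen uniformly.
    Returns a list of (time, left_child, right_child, new_node) events.
    """
    lineages = list(range(n))
    t, next_node, events = 0.0, n, []
    for i in range(1, n):
        k = n - i + 1
        t -= rng.exponential(1.0 / comb(k, 2))    # numpy's scale = 1 / rate
        ia, ib = sorted(int(j) for j in rng.choice(k, size=2, replace=False))
        left, right = lineages[ia], lineages[ib]
        lineages[ib] = lineages[-1]               # swap-remove index ib
        lineages.pop()
        lineages[ia] = next_node                  # merged lineage replaces ia
        events.append((t, left, right, next_node))
        next_node += 1
    return events

rng = np.random.default_rng(1)
events = sample_coalescent(5, rng)
# n - 1 = 4 merge events; the last one creates the root at the most
# negative time, matching "a single root at t = -infinity" in the limit.
```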
A is the factor loading matrix of size P × K and F = [f_1, ..., f_N] is the factor matrix of size K × N. E = [e_1, ..., e_N] is the matrix of idiosyncratic variations. K, the number of factors, is known. Recall that our goal is to treat the factor analysis problem nonparametrically, to model feature relevance, and to model hierarchical factors. For expository purposes, it is simplest to deal with each of these issues in turn. In our context, we begin by modeling the gene-factor relationship nonparametrically (using the IBP). Next, we propose a variant of the IBP to model gene relevance. We then present the hierarchical model for inferring factor hierarchies. We conclude with a presentation of the full model and our mechanism for modifying the factor analysis problem to factor regression.

3.1 Nonparametric Gene-Factor Model

We begin by directly using the IBP to infer the number of factors. Although the IBP has been applied to nonparametric factor analysis in the past [5], the standard IBP formulation places the IBP prior on the factor matrix (F) associating samples (i.e., a set of features) with factors. Such a model assumes that the sample-factor relationship is sparse. However, this assumption is inappropriate in the gene-expression context, where it is not the factors themselves but the associations among genes and factors (i.e., the factor loading matrix A) that are sparse. In such a context, each sample depends on all the factors but each gene within a sample usually depends only on a small number of factors. Thus, it is more appropriate to model the factor loading matrix (A) with the IBP prior. Note that since A and F are related to each other via the number of factors K, modeling A nonparametrically allows our model to also have an unbounded number of factors. For most gene-expression problems [1], a binary factor loading matrix (A) is inappropriate. Therefore, we instead use the Hadamard (element-wise) product of a binary matrix Z and a matrix V of reals.
Z and V are of the same size as A. The factor analysis model, for each sample i, thus becomes: x_i = (Z ⊙ V)f_i + e_i. We have Z ∼ IBP(α, β); α and β are IBP hyperparameters with vague gamma priors. Our initial model assumes no factor hierarchies, and hence the prior over V is simply a Gaussian: V ∼ Nor(0, σ_v² I), with an inverse-gamma prior on σ_v. F has a zero-mean, unit-variance Gaussian prior, as used in standard factor analysis. Finally, e_i ∼ Nor(0, Ψ) models the idiosyncratic variations of genes, where Ψ is a P × P diagonal matrix diag(Ψ_1, ..., Ψ_P). Each entry Ψ_p has an inverse-gamma prior on it.

3.2 Feature Selection Prior

Typical gene-expression datasets are of the order of several thousands of genes, most of which are not associated with any pathway (factor). In the above, these are accounted for only by the idiosyncratic noise term. A more realistic model is that certain genes simply do not participate in the factor analysis: for a culinary analogy, the genes enter the restaurant and leave before selecting any dishes. Those genes that "leave", we term "spurious." We add an additional prior term to account for such spurious genes, effectively leading to a sparse solution (over the rows of the IBP matrix). It is important to note that this notion of sparsity is fundamentally different from the conventional notion of sparsity in the IBP. The sparsity in the IBP is over columns, not rows. To see the difference, recall that the IBP contains a "rich get richer" phenomenon: frequently selected factors are more likely to get reselected. Consider a truly spurious gene and ask whether it is likely to select any factors. If some factor k is already frequently used, then a priori this gene is more likely to select it. The only downside to selecting it is the data likelihood. By setting the corresponding value in V to zero, there is no penalty. Our sparse-IBP prior is identical to the standard IBP prior with one exception.
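To make the generative model x_i = (Z ⊙ V)f_i + e_i concrete, here is a small forward simulation. For illustration only, Z is drawn from a fixed-size Bernoulli stand-in rather than the actual IBP, and the dimensions follow the synthetic setup of Section 6:

```python
import numpy as np

rng = np.random.default_rng(0)
P, K, N = 50, 8, 100                           # genes, factors, samples

Z = (rng.random((P, K)) < 0.2).astype(float)   # binary gene-factor mask
V = rng.normal(0.0, 1.0, size=(P, K))          # real-valued loadings
A = Z * V                                      # Hadamard product Z (*) V
F = rng.normal(0.0, 1.0, size=(K, N))          # factors f_1, ..., f_N
psi = rng.uniform(0.05, 0.2, size=P)           # diagonal entries of Psi
E = rng.normal(size=(P, N)) * np.sqrt(psi)[:, None]   # idiosyncratic noise
X = A @ F + E                                  # observed P x N data matrix
```

Note how the Hadamard product enforces structured sparsity: wherever Z_{pk} = 0, gene p is exactly decoupled from factor k regardless of the value of V_{pk}.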
Each customer (gene) p is associated with a Bernoulli random variable T_p that indicates whether it samples any dishes. The T vector is given a parameter ρ, which, in turn, is given a Beta prior with parameters a, b.

3.3 Hierarchical Factor Model

In our basic model, each column of the matrix Z (and the corresponding column in V) is associated with a factor. These factors are considered unrelated. To model the fact that factors are, in fact, related, we introduce a factor hierarchy. Kingman's coalescent [6] is an attractive prior for integration with the IBP for several reasons. It is nonparametric and describes exchangeable distributions. This means that it can model a varying number of factors. Moreover, efficient inference algorithms exist [8].

Figure 1: The graphical model for nonparametric Bayesian Factor Regression. X consists of response variables as well.
Figure 2: Training and test data are combined together and test responses are treated as missing values to be imputed.

3.4 Full Model and Extension to Factor Regression

Our proposed graphical model is depicted in Figure 1. The key aspects of this model are: the IBP prior over Z, the sparse binary vector T, and the coalescent prior over V. In standard Bayesian factor regression [1], factor analysis is followed by the regression task. The regression is performed only on the basis of F, rather than the full data X. For example, a simple linear regression problem would involve estimating a K-dimensional parameter vector θ with regression value θ⊤F. Our model, on the other hand, integrates the factor regression component into the nonparametric factor analysis framework itself. We do so by prepending the responses y_i to the expression vector x_i and joining the training and test data (see Figure 2). The unknown responses in the test data are treated as missing variables to be iteratively imputed in our MCMC inference procedure.
It is straightforward to see that this is equivalent to fitting another sparse model relating factors to responses. Our model thus allows the factor analysis to take the regression task into account as well. In the case of binary responses, we add an extra probit regression step to predict binary outcomes from real-valued responses.

4 Inference

We use Gibbs sampling with a few M-H steps. The Gibbs distributions are summarized here.

Sampling the IBP matrix Z: Sampling Z consists of sampling existing dishes, proposing new dishes, and accepting or rejecting them based on the acceptance ratio in the associated M-H step. For sampling existing dishes, an entry in Z is set to 1 according to

$$p(Z_{ik}=1 \mid X, Z_{-ik}, V, F, \Psi) \propto \frac{m_{-i,k}}{P+\beta-1}\, p(X \mid Z, V, F, \Psi),$$

whereas it is set to 0 according to

$$p(Z_{ik}=0 \mid X, Z_{-ik}, V, F, \Psi) \propto \frac{P+\beta-1-m_{-i,k}}{P+\beta-1}\, p(X \mid Z, V, F, \Psi),$$

where $m_{-i,k} = \sum_{j \neq i} Z_{jk}$ is the number of other customers who chose dish k. For sampling new dishes, we use an M-H step in which we simultaneously propose η = (K^new, V^new, F^new) with K^new ∼ Poisson(αβ/(β + P − 1)). We accept the proposal with an acceptance probability (following [9]) given by a = min{1, p(rest|η*)/p(rest|η)}. Here, p(rest|η) is the likelihood of the data given parameters η. We propose V^new from its prior (either Gaussian or coalescent) but, for faster mixing, we propose F^new from its posterior. Sampling V^new from the coalescent is slightly involved. As shown pictorially in Figure 3, proposing a new column of V corresponds to adding a new leaf node to the existing coalescent tree. In particular, we need to find a sibling (s) to the new node y′ and an insertion point on the branch joining the sibling s to its parent p (the grandparent of y′). Since the marginal distribution over trees under the coalescent is uniform, the sibling s is chosen uniformly over nodes in the tree.
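The conditional for an existing entry Z_ik can be sketched in a few lines. This is a simplified illustration, not the paper's implementation: it assumes the Gaussian likelihood with diagonal Ψ from Section 3.1, and the function name is ours:

```python
import numpy as np

def gibbs_prob_zik(X, Z, V, F, psi, i, k, beta):
    """Posterior probability that Z[i, k] = 1 for an existing dish k.

    Combines the prior odds m_{-i,k} : (P + beta - 1 - m_{-i,k}) with the
    Gaussian data likelihood of row i under each setting of Z[i, k].
    """
    P = Z.shape[0]
    m = int(Z[:, k].sum() - Z[i, k])          # m_{-i,k}
    if m == 0:
        return 0.0                             # unused dish: M-H step territory
    z_old = Z[i, k]
    logp = np.empty(2)
    for z in (0, 1):
        Z[i, k] = z
        resid = X[i] - (Z[i] * V[i]) @ F       # row i of X - (Z (*) V) F
        loglik = -0.5 * np.dot(resid, resid) / psi[i]
        prior = m if z == 1 else P + beta - 1 - m
        logp[z] = np.log(prior) + loglik
    Z[i, k] = z_old                            # restore the input matrix
    logp -= logp.max()                         # stabilize before exponentiating
    p = np.exp(logp)
    return p[1] / p.sum()
```

When the data strongly support gene i loading on factor k (zero residual with Z_ik = 1, large residual without), the returned probability is close to one, as the test below checks on a tiny noiseless example.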
We then use importance sampling to select an insertion time for the new node y′ between t_s and t_p, according to the exponential distribution given by the coalescent prior (our proposal distribution is uniform). This gives an insertion point in the tree, which corresponds to the new parent of y′. We denote this new parent by p′ and the time of insertion by t. The predictive density of the newly inserted node y′ can be obtained by marginalizing the parent p′. This yields Nor(y_0, v_0), given by:

$$v_0 = \left[(v_s + (t_s - t)\Lambda)^{-1} + (v_p + (t - t_p)\Lambda)^{-1}\right]^{-1}$$
$$y_0 = \left[y_s (v_s + (t_s - t)\Lambda)^{-1} + y_p (v_p + (t - t_p)\Lambda)^{-1}\right] v_0$$

Here, y_s and v_s are the messages passed up through the tree, while y_p and v_p are the messages passed down through the tree (compare to Eq. (1)).

Figure 3: Adding a new node to the tree

Sampling the sparse IBP vector T: In the sparse IBP prior, recall that we have an additional P-many variables T_p, indicating whether gene p "eats" any dishes. T_p is drawn from Bernoulli with parameter ρ, which, in turn, is given a Bet(a, b) prior. For inference, we collapse ρ and Ψ and get a Gibbs posterior over T_p of the form

$$p(T_p = 1 \mid \cdot) \propto \Big(a + \sum_{q \neq p} T_q\Big)\, \mathrm{Stu}\big(x_p \mid (Z_p \odot V_p)F,\ g/h,\ g\big)$$
$$p(T_p = 0 \mid \cdot) \propto \Big(b + P - \sum_{q \neq p} T_q\Big)\, \mathrm{Stu}\big(x_p \mid 0,\ g/h,\ g\big),$$

where Stu is the non-standard Student's t-distribution and g, h are hyperparameters of the inverse-gamma prior on the entries of Ψ.

Sampling the real-valued matrix V: For the case when V has a Gaussian prior on it, we sample V from its posterior p(V_{g,j} | X, Z, F, Ψ) ∝ Nor(V_{g,j} | µ_{g,j}, Σ_{g,j}), where

$$\Sigma_{g,j} = \Big(\sum_{i=1}^{N} \frac{F_{j,i}^2}{\Psi_g} + \frac{1}{\sigma_v^2}\Big)^{-1} \quad \text{and} \quad \mu_{g,j} = \Sigma_{g,j} \Big(\sum_{i=1}^{N} F_{j,i} X^{*}_{g,i}\Big) \Psi_g^{-1}.$$

We define $X^{*}_{g,i} = X_{g,i} - \sum_{l=1,\, l \neq j}^{K} A_{g,l} F_{l,i}$, with A = Z ⊙ V. The hyperparameter σ_v on V has an inverse-gamma prior, and the posterior also has the same form.
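The predictive density of the inserted leaf is just a precision-weighted combination of the upward message from the sibling and the downward message from the grandparent. A scalar (D = 1) sketch of the y_0, v_0 equations above, with a hypothetical function name:

```python
def leaf_predictive(y_s, v_s, t_s, y_p, v_p, t_p, t, Lam):
    """Gaussian predictive (y0, v0) for a leaf inserted at time t between
    sibling s (time t_s, upward message y_s, v_s) and grandparent p
    (time t_p, downward message y_p, v_p), under Brownian diffusion Lam.

    Times are negative with t_p < t < t_s <= 0, so both branch-length
    terms (t_s - t) and (t - t_p) are positive.
    """
    prec_s = 1.0 / (v_s + (t_s - t) * Lam)   # precision of the sibling side
    prec_p = 1.0 / (v_p + (t - t_p) * Lam)   # precision of the parent side
    v0 = 1.0 / (prec_s + prec_p)
    y0 = (y_s * prec_s + y_p * prec_p) * v0
    return y0, v0

# Symmetric example: both sides contribute total variance 1.5, so the
# predictive mean lands halfway between y_s = 1 and y_p = 0.
y0, v0 = leaf_predictive(y_s=1.0, v_s=0.5, t_s=0.0,
                         y_p=0.0, v_p=0.5, t_p=-2.0, t=-1.0, Lam=1.0)
# -> y0 = 0.5, v0 = 0.75
```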
For the case with a coalescent prior on V, we have

$$\Sigma_{g,j} = \Big(\sum_{i=1}^{N} \frac{F_{j,i}^2}{\Psi_g} + \frac{1}{v_{0_j}}\Big)^{-1} \quad \text{and} \quad \mu_{g,j} = \Sigma_{g,j} \Big(\sum_{i=1}^{N} \frac{F_{j,i} X^{*}_{g,i}}{\Psi_g} + \frac{y_{0_{g,j}}}{v_{0_j}}\Big),$$

where y_0 and v_0 are the Gaussian posteriors of the leaf node added to the coalescent tree (see Eq. (1)), which corresponds to the column of V being sampled.

Sampling the factor matrix F: We sample F from its posterior p(F | X, Z, V, Ψ) ∝ Nor(F | µ, Σ), where µ = A⊤(AA⊤ + Ψ)⁻¹X and Σ = I − A⊤(AA⊤ + Ψ)⁻¹A, with A = Z ⊙ V.

Sampling the idiosyncratic noise term: We place an inverse-gamma prior on the diagonal entries of Ψ, and the posterior too is inverse-gamma: $p(\Psi_p \mid \cdot) \propto \mathrm{IG}\Big(g + \frac{N}{2},\ \frac{h}{1 + \frac{h}{2}\,\mathrm{tr}(E^{\top}E)}\Big)$, where E = X − (Z ⊙ V)F.

Sampling the IBP parameters: We sample the IBP parameter α from its posterior $p(\alpha \mid \cdot) \sim \mathrm{Gam}\Big(K_{+} + a,\ \frac{b}{1 + b H_P(\beta)}\Big)$, where K_+ is the number of active features at any moment and $H_P(\beta) = \sum_{i=1}^{P} \frac{1}{\beta + i - 1}$. β is sampled from a prior proposal using an M-H step.

Sampling the factor tree: We use the Greedy-Rate1 algorithm [8].

5 Related Work

A number of probabilistic approaches have been proposed in the past for the problem of gene-regulatory network reconstruction [2, 3, 4, 1]. Some take into account information on the prior network topology [2], which is not always available. Most assume the number of factors is known. To get around this, one can perform model selection via reversible jump MCMC [10] or evolutionary stochastic model search [11]. Unfortunately, these methods are often difficult to design and may take quite long to converge. Moreover, they are difficult to integrate with other forms of prior knowledge (e.g., factor hierarchies). A somewhat similar approach to ours is the infinite independent component analysis (iICA) model of [12], which treats factor analysis as a special case of ICA. However, their model is limited to factor analysis and does not take into account feature selection, factor hierarchy and factor regression.
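The F-update is the standard conditionally Gaussian step of linear-Gaussian factor models and is easy to implement verbatim. A minimal sketch (the helper name is ours; `np.linalg.solve` is used instead of an explicit inverse for numerical stability):

```python
import numpy as np

def factor_posterior(X, Z, V, psi_diag):
    """Posterior mean and covariance of F given X, A = Z (*) V and noise Psi:
       mu    = A^T (A A^T + Psi)^{-1} X
       Sigma = I - A^T (A A^T + Psi)^{-1} A   (shared across the N samples)
    """
    A = Z * V
    P, K = A.shape
    M = A @ A.T + np.diag(psi_diag)           # P x P
    Minv_A = np.linalg.solve(M, A)            # (A A^T + Psi)^{-1} A
    mu = Minv_A.T @ X                         # K x N posterior means
    Sigma = np.eye(K) - A.T @ Minv_A          # K x K posterior covariance
    return mu, Sigma

rng = np.random.default_rng(2)
P, K, N = 20, 4, 30
Z = (rng.random((P, K)) < 0.3).astype(float)
V = rng.normal(size=(P, K))
F_true = rng.normal(size=(K, N))
X = (Z * V) @ F_true + 0.01 * rng.normal(size=(P, N))
mu, Sigma = factor_posterior(X, Z, V, 1e-4 * np.ones(P))
```

Since the prior on each column of F is Nor(0, I), the posterior covariance satisfies 0 ⪯ Σ ⪯ I: observing data can only shrink the prior uncertainty.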
As a generalization of the standard ICA model, [13] proposed a model in which the components can be related via a tree-structured graphical model. It, however, assumes a fixed number of components. Structurally, our model with Gaussian V (i.e., no hierarchy over factors) is most similar to the Bayesian Factor Regression Model (BFRM) of [1]. BFRM assumes a sparsity-inducing mixture prior on the factor loading matrix A. Specifically, $A_{pk} \sim (1 - \pi_{pk})\,\delta_0(A_{pk}) + \pi_{pk}\,\mathrm{Nor}(A_{pk} \mid 0, \tau_k)$, where δ_0(·) is a point mass centered at zero. To complete the model specification, they define $\pi_{pk} \sim (1 - \rho_k)\,\delta_0(\pi_{pk}) + \rho_k\,\mathrm{Bet}(\pi_{pk} \mid sr, s(1-r))$ and $\rho_k \sim \mathrm{Bet}(\rho_k \mid av, a(1-v))$. Now, integrating out π_{pk} gives: $A_{pk} \sim (1 - v\rho_k)\,\delta_0(A_{pk}) + v\rho_k\,\mathrm{Nor}(A_{pk} \mid 0, \tau_k)$. It is interesting to note that the nonparametric prior of our model (factor loading matrix defined as A = Z ⊙ V) is actually equivalent to the (parametric) sparse mixture prior of the BFRM as K → ∞. To see this, note that our prior on the factor loading matrix A (composed of Z having an IBP prior, and V having a Gaussian prior) can be written as $A_{pk} \sim (1 - \rho_k)\,\delta_0(A_{pk}) + \rho_k\,\mathrm{Nor}(A_{pk} \mid 0, \sigma_v^2)$, if we define $\rho_k \sim \mathrm{Bet}(1, \alpha\beta/K)$. It is easy to see that, for BFRM where $\rho_k \sim \mathrm{Bet}(av, a(1-v))$, setting a = 1 + αβ/K and v = 1 − αβ/(aK) recovers our model in the limiting case K → ∞.

6 Experiments

In this section, we report our results on synthetic and real datasets. We compare our nonparametric approach with the evolutionary-search-based approach proposed in [11], which is the nonparametric extension of BFRM. We used the gene-factor connectivity matrix of the E. coli network (described in [14]) to generate a synthetic dataset having 100 samples of 50 genes and 8 underlying factors. Since we knew the ground truth for the factor loadings in this case, this dataset was ideal for testing how well the factor loadings (binding sites and number of factors) are recovered.
We also experimented with real gene-expression data: a breast cancer dataset having 251 samples of 226 genes and 5 prominent underlying factors (we know this from domain knowledge).

6.1 Nonparametric Gene-Factor Modeling and Variable Selection

For the synthetic dataset generated by the E. coli network, the results are shown in Figure 4, comparing the actual network used to generate the data and the inferred factor loading matrix. As shown in Figure 4, we recovered exactly the same number (8) of factors, and almost exactly the same factor loadings (binding sites and number of factors) as the ground truth. In comparison, the evolutionary-search-based approach overestimated the number of factors, and the inferred loadings clearly seem to be off from the actual loadings (even modulo column permutations).

Figure 4: (Left and middle) True and inferred factor loadings (with our approach) for the synthetic data with P=50, K=8 generated using the connectivity matrix of the E. coli data. (Right) Inferred factor loadings with the evolutionary-search-based approach. White rectangles represent active sites. The data also has added noise with a signal-to-noise ratio of 10.

Our results on real data are shown in Figure 5. To see the effect of variable selection for this data, we also introduced spurious genes by adding 50 random features to each sample. We observe the following: (1) Without variable selection, spurious genes result in an overestimated number of factors and falsely discovered factor loadings for spurious genes (see Figure 5(a)). (2) Variable selection, when on, effectively filters out spurious genes, without overestimating the number of factors (see Figure 5(b)).
We also investigated the effect of noise on the evolutionary-search-based approach; it resulted in an overestimated number of factors, plus falsely discovered factor loadings for spurious genes (see Figure 5(c)). To conserve space, we do not show the case where there are no spurious genes in the data, but it turns out that variable selection does not filter out any of the 226 relevant genes in that case.

Figure 5: Effect of spurious genes (heat-plots of the factor loading matrix shown): (a) Standard IBP. (b) Our model with variable selection. (c) The evolutionary-search-based approach.

6.2 Hierarchical Factor Modeling

Our results with hierarchical factor modeling are shown in Figure 6 for synthetic and real data. As shown, the model correctly infers the gene-factor associations, the number of factors, and the factor hierarchy. There are several ways to interpret the hierarchy. From the factor hierarchy for the E. coli data (Figure 6), we see that column-2 (corresponding to factor-2) of the V matrix is the most prominent one (it regulates the highest number of genes) and is closest to the tree root, followed by the column that looks most similar to it. Columns corresponding to less prominent factors are located further down in the hierarchy (with appropriate relatedness). Figure 6(d) can be interpreted in a similar manner for the breast-cancer data. The hierarchy can be used to find factors in order of their prominence. The higher up we chop the tree along the hierarchy, the more prominent the factors we discover are. For instance, if we are only interested in the top 2 factors in the E. coli data, we can chop the tree above the sixth coalescent point. This is akin to agglomerative clustering, which is usually done post hoc.
In contrast, our model discovers the factor hierarchies as part of the inference procedure itself. At the same time, there is no degradation of data reconstruction (in the mean-squared-error sense) or of the log-likelihood, when compared to the case with a Gaussian prior on V (see Figure 7; they actually improve). We also show in Section 6.3 that hierarchical modeling results in better predictive performance on the factor regression task. Empirical evidence also suggests that the factor hierarchy leads to faster convergence, since most of the unlikely configurations will never be visited, as they are constrained by the hierarchy.

Figure 6: Hierarchical factor modeling results. (a) Factor loadings for the E. coli data. (b) Inferred hierarchy for the E. coli data. (c) Factor loadings for the breast-cancer data. (d) Inferred hierarchy for the breast-cancer data.

6.3 Factor Regression

We report factor regression results for binary and real-valued responses and compare both variants of our model (Gaussian V and coalescent V) against 3 different approaches: logistic regression, BFRM, and fitting a separate predictive model on the discovered factors (see Figure 7(c)). The breast-cancer dataset had two binary response variables (phenotypes) associated with each sample. For this binary prediction task, we split the data into a training set of 151 samples and a test set of 100 samples. This is essentially a transduction setting, as described in Section 3.4 and shown in Figure 2. For the real-valued prediction task, we treated a 30×20 block of the data matrix as held-out data and predicted it based on the rest of the entries in the matrix. This method of evaluation is akin to the task of image reconstruction [15].
The results are averaged over 20 random initializations, and the low error variances suggest that our method is fairly robust w.r.t. initialization.

Figure 7: (a) MSE on the breast-cancer data for BFRM (horizontal line), our model with Gaussian (top red curve) and coalescent (bottom blue curve) priors. This MSE is the reconstruction error for the data, distinct from the MSE for the held-out real-valued responses (Figure 7(c)). (b) Log-likelihoods for our model with Gaussian (bottom red curve) and coalescent (top blue curve) priors. (c) Factor regression results:

  Model      Binary (%error, std dev)   Real (MSE)
  LogReg     17.5 (1.6)                 -
  BFRM       19.8 (1.4)                 0.48
  Nor-V      15.8 (0.56)                0.45
  Coal-V     14.6 (0.48)                0.43
  PredModel  18.1 (2.1)                 -

7 Conclusions and Discussion

We have presented a fully nonparametric Bayesian approach to sparse factor regression, modeling the gene-factor relationship using a sparse variant of the IBP. However, the true power of nonparametric priors is evidenced by the ease of integration of task-specific models into the framework. Both gene selection and hierarchical factor modeling are straightforward extensions in our model that do not significantly complicate the inference procedure, but lead to improved model performance and more understandable outputs. We applied Kingman's coalescent as a hierarchical model on V, the matrix modulating the expression levels of genes in factors. An interesting open question is whether the IBP can, itself, be modeled hierarchically.

References

[1] M. West. Bayesian Factor Regression Models in the "Large p, Small n" Paradigm. In Bayesian Statistics 7, 2003.
[2] C. Sabatti and G. James.
Bayesian Sparse Hidden Components Analysis for Transcription Regulation Networks. Bioinformatics, 22, 2005.
[3] G. Sanguinetti, N. D. Lawrence, and M. Rattray. Probabilistic Inference of Transcription Factor Concentrations and Gene-specific Regulatory Activities. Bioinformatics, 22(22), 2006.
[4] M. J. Beal, F. Falciani, Z. Ghahramani, C. Rangel, and D. L. Wild. A Bayesian Approach to Reconstructing Genetic Regulatory Networks with Hidden Factors. Bioinformatics, 21(3), 2005.
[5] Z. Ghahramani, T. L. Griffiths, and P. Sollich. Bayesian Nonparametric Latent Feature Models. In Bayesian Statistics 8. Oxford University Press, 2007.
[6] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 1982.
[7] T. Griffiths and Z. Ghahramani. Infinite Latent Feature Models and the Indian Buffet Process. In Advances in Neural Information Processing Systems 18, 2006.
[8] Y. W. Teh, H. Daumé III, and D. M. Roy. Bayesian Agglomerative Clustering with Coalescents. In Advances in Neural Information Processing Systems 20, 2008.
[9] E. Meeds, Z. Ghahramani, R. M. Neal, and S. T. Roweis. Modeling Dyadic Data with Binary Latent Factors. In Advances in Neural Information Processing Systems 19, 2007.
[10] P. Green. Reversible Jump Markov Chain Monte Carlo Computation and Bayesian Model Determination. Biometrika, 82, 1995.
[11] C. Carvalho, J. Lucas, Q. Wang, J. Chang, J. Nevins, and M. West. High-Dimensional Sparse Factor Modelling: Applications in Gene Expression Genomics. In JASA, 2008.
[12] D. Knowles and Z. Ghahramani. Infinite Sparse Factor Analysis and Infinite Independent Components Analysis. In ICA 2007, 2007.
[13] F. R. Bach and M. I. Jordan. Beyond Independent Components: Trees and Clusters. Journal of Machine Learning Research, pages 1205–1233, 2003.
[14] I. Pournara and L. Wernisch. Factor Analysis for Gene Regulatory Networks and Transcription Factor Activity Profiles. BMC Bioinformatics, 2007.
[15] J. J. Verbeek, S. T. Roweis, and N.
Vlassis. Non-linear CCA and PCA by Alignment of Local Models. In Advances in Neural Information Processing Systems 16, 2004.
Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds

Jeremy Lewi∗ (School of Bioengineering, Georgia Institute of Technology, jeremy@lewi.us)
Robert Butera (School of Electrical and Computer Engineering, Georgia Institute of Technology, rbutera@ece.gatech.edu)
David M. Schneider (Departments of Neurobiology and Psychology, Columbia University, dms2159@columbia.edu)
Sarah M. N. Woolley (Department of Psychology, Columbia University, sw2277@columbia.edu)
Liam Paninski† (Department of Statistics and Center for Theoretical Neuroscience, Columbia University, liam@stat.columbia.edu)

Abstract

Sequential optimal design methods hold great promise for improving the efficiency of neurophysiology experiments. However, previous methods for optimal experimental design have incorporated only weak prior information about the underlying neural system (e.g., the sparseness or smoothness of the receptive field). Here we describe how to use stronger prior information, in the form of parametric models of the receptive field, in order to construct optimal stimuli and further improve the efficiency of our experiments. For example, if we believe that the receptive field is well-approximated by a Gabor function, then our method constructs stimuli that optimally constrain the Gabor parameters (orientation, spatial frequency, etc.) using as few experimental trials as possible. More generally, we may believe a priori that the receptive field lies near a known sub-manifold of the full parameter space; in this case, our method chooses stimuli in order to reduce the uncertainty along the tangent space of this sub-manifold as rapidly as possible. Applications to simulated and real data indicate that these methods may in many cases improve the experimental efficiency.

1 Introduction

A long-standing problem in neuroscience has been collecting enough data to robustly estimate the response function of a neuron.
One approach to this problem is to sequentially optimize a series of experiments as data is collected [1, 2, 3, 4, 5, 6]. To make optimizing the design tractable, we typically need to assume our knowledge has some nice mathematical representation. This restriction often makes it difficult to include the types of prior beliefs held by neurophysiologists; for example, that the receptive field has some parametric form such as a Gabor function [7]. Here we consider the problem of incorporating this strong prior knowledge into an existing algorithm for optimizing neurophysiology experiments [8]. We start by assuming that a neuron can be modeled as a generalized linear model (GLM).

∗http://www.lewilab.org
†http://www.stat.columbia.edu/∼liam/

Figure 1: A schematic illustrating how we use the manifold to improve stimulus design. Our method begins with a Gaussian approximation of the posterior on the full model space after t trials, p(⃗θ|⃗µt, Ct). The left panel shows an example of this Gaussian distribution when dim(⃗θ) = 2. The next step involves constructing the tangent space approximation of the manifold M on which ⃗θ is believed to lie, as illustrated in the middle plot; M is indicated in blue. The MAP estimate (blue dot) is projected onto the manifold to obtain ⃗µM,t (green dot). We then compute the tangent space (dashed red line) by taking the derivative of the manifold at ⃗µM,t. The tangent space is the space spanned by vectors in the direction parallel to M at ⃗µM,t. By definition, in the neighborhood of ⃗µM,t, moving along the manifold is roughly equivalent to moving along the tangent space. Thus, the tangent space provides a good local approximation of M. In the right panel we compute p(⃗θ|⃗µb,t, Cb,t) by evaluating p(⃗θ|⃗µt, Ct) on the tangent space. The resulting distribution concentrates its mass on models which are probable under p(⃗θ|⃗µt, Ct) and close to the manifold.
Our prior knowledge defines a subset of all GLMs in which we expect to find the best model of the neuron. We represent this class as a sub-manifold in the parameter space of the GLM. We use the manifold to design an experiment which will provide the largest reduction in our uncertainty about the unknown parameters. To make the computations tractable we approximate the manifold using the tangent space evaluated at the maximum a posteriori (MAP) estimate of the parameters projected onto the manifold. Despite this rather crude approximation of the geometry of the manifold, our simulations show that this method can significantly improve the informativeness of our experiments. Furthermore, these methods work robustly even if the best model does not happen to lie directly on the manifold.

2 Methods

We begin by summarizing the three key elements of an existing algorithm for optimizing neurophysiology experiments. A more thorough discussion is available in [8]. We model the neuron's response function as a mapping between the neuron's input at time t, ⃗st, and its response, rt. We define the input rather generally as a vector which may consist of terms corresponding to a stimulus, e.g. an image or a sound, or the past activity of the neuron itself, {rt−1, rt−2, . . .}. The response, rt, is typically a non-negative integer corresponding to the number of spikes observed in a small time window. Since neural responses are typically noisy, we represent the response function as a conditional distribution, p(rt|⃗st, ⃗θ). In this context, optimizing the experimental design means picking the input for which observing the response will provide the most information about the parameters ⃗θ defining the conditional response function. The first important component of this algorithm is the assumption that p(rt|⃗st, ⃗θ) can be adequately approximated by a generalized linear model [9, 10].
The likelihood of the response depends on the firing rate, λt, which is a function of the input,

λt = E(rt) = f(⃗θ⊤⃗st), (1)

where f(·) is some nonlinear function which is assumed known¹. To identify the response function, we need to estimate the coefficients of the linear projection, ⃗θ. One important property of the GLM is that we can easily derive sufficient conditions to ensure the log-likelihood is concave [11]. The second key component of the algorithm is that we may reasonably approximate the posterior on ⃗θ as Gaussian. This approximation is justified by the log-concavity of the likelihood function and asymptotic normality of the posterior distribution given sufficient data [12]. As a result, we can recursively compute a Gaussian approximation of the full posterior, p(⃗θ|r1:t, s1:t) ≈ p(⃗θ|⃗µt, Ct) [8]. Here (⃗µt, Ct) denote the mean and covariance matrix of our Gaussian approximation: ⃗µt is set to the MAP estimate of ⃗θ, and Ct to the inverse Hessian of the log-posterior at ⃗µt. The final component is an efficient method for picking the optimal input on the next trial, ⃗st+1. Since the purpose of an experiment is to identify the best model, we optimize the design by maximizing the conditional mutual information between rt+1 and ⃗θ given ⃗st+1, I(⃗θ; rt+1|⃗st+1). The mutual information measures how much we expect observing the response to ⃗st+1 will reduce our uncertainty about ⃗θ. We pick the optimal input by maximizing the mutual information with respect to ⃗st+1; as discussed in [8], this step, along with the updating of the posterior mean and covariance (⃗µt, Ct), may be computed efficiently enough for real-time implementation in many cases.

2.1 Optimizing experiments to reduce uncertainty along parameter sub-manifolds

For the computation of the mutual information to be tractable, the space of candidate models, Θ, must have some convenient form so that we can derive a suitable expression for the mutual information.
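The GLM of Eqn. 1 and the recursive Gaussian (Laplace-style) posterior update described above can be sketched numerically. The exponential choice of f, the inner Newton loop, and all dimensions below are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def glm_rate(theta, s):
    # lambda_t = f(theta^T s_t); we assume f = exp for concreteness
    return np.exp(theta @ s)

def posterior_update(mu, C, s, r):
    """One Laplace-style update of the Gaussian posterior N(mu, C)
    after observing spike count r to stimulus s, for a Poisson GLM
    with exponential nonlinearity (log-posterior is concave)."""
    Cinv = np.linalg.inv(C)
    theta = mu.copy()
    for _ in range(20):            # Newton steps to the MAP estimate
        lam = glm_rate(theta, s)
        grad = (r - lam) * s - Cinv @ (theta - mu)
        hess = -lam * np.outer(s, s) - Cinv
        theta = theta - np.linalg.solve(hess, grad)
    lam = glm_rate(theta, s)
    # new covariance = inverse Hessian of the log-posterior at the MAP
    C_new = np.linalg.inv(Cinv + lam * np.outer(s, s))
    return theta, C_new

rng = np.random.default_rng(0)
mu, C = np.zeros(3), np.eye(3)
s = rng.normal(size=3)
mu1, C1 = posterior_update(mu, C, s, r=2)
```

Because the observation adds a rank-one term to the precision, the posterior variance along the tested stimulus direction shrinks after every trial.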
Intuitively, to select the optimal design, we need to consider how much information an experiment provides about each possible model. Evaluating the mutual information entails an integral over model space, Θ. The problem with incorporating prior knowledge is that if we restrict the model to some complicated subset of model space we will no longer be able to efficiently integrate over the set of candidate models. We address this problem by showing how local geometric approximations to the parameter sub-manifold can be used to guide optimal sampling while still maintaining a flexible, tractable representation of the posterior distribution on the full model space. In many experiments, neurophysiologists expect a priori that the receptive field of a neuron will have some low-dimensional parametric structure; e.g., the receptive field might be well-approximated by a Gabor function [13], or by a difference of Gaussians [14], or by a low-rank spatiotemporal matrix [15, 13]. We can think of this structure as defining a sub-manifold, M, of the full model space, Θ:

M = {⃗θ : ⃗θ = Ψ(⃗φ), ∀⃗φ}. (2)

The vector ⃗φ essentially enumerates the points on the manifold and Ψ(·) is a function which maps these points into Θ space. A natural example is the case where we wish to enforce the constraint that ⃗θ has some parametric form, e.g., a Gabor function. The basic idea is that we want to run experiments which can identify exactly where on the manifold the optimal model lies. Since M can have some arbitrary nonlinear shape, computing the informativeness of a stimulus using just the models on the manifold is not easy. Furthermore, if we completely restrict our attention to models in M then we ignore the possibility that our prior knowledge is incorrect. Hence, we do not force the posterior distribution of ⃗θ to only have support on the manifold. Rather, we maintain a Gaussian approximation of the posterior on the full space, Θ.
However, when optimizing our stimuli we combine our posterior with our knowledge of M in order to do a better job of maximizing the informativeness of each experiment.
¹It is worth noting that this simple GLM can be generalized in a number of directions; we may include spike-history effects, nonlinear input terms, and so on [10].

Computing the mutual information I(rt+1; ⃗θ|⃗st+1, s1:t, r1:t) entails an integral over model space weighted by the posterior probability on each model. We integrate over model space because the informativeness of an experiment clearly depends on what we already know (i.e. the likelihood we assign to each model given the data and our prior knowledge). Furthermore, the informativeness of an experiment will depend on the outcome. Hence, we use what we know about the neuron to make predictions about the experimental outcome. Unfortunately, since the manifold in general has some arbitrary nonlinear shape, we cannot easily compute integrals over the manifold. Furthermore, we do not want to continue to restrict ourselves to models on the manifold if the data indicates our prior knowledge is wrong. We can solve both problems by making use of the tangent space of the manifold, as illustrated in Figure 1 [16]. The tangent space is a linear space which provides a local approximation of the manifold. Since the tangent space is a linear subspace of Θ, integrating over ⃗θ in the tangent space is much easier than integrating over all ⃗θ on the manifold; in fact, the methods introduced in [8] may be applied directly to this case. The tangent space is a local linear approximation evaluated at a particular point, ⃗µM,t, on the manifold. For ⃗µM,t we use the projection of ⃗µt onto the manifold (i.e., ⃗µM,t is the closest point in M to ⃗µt). Depending on the manifold, computing ⃗µM,t can be nontrivial; the examples considered in this paper, however, all have tractable numerical solutions to this problem.
The challenge is representing the set of models close to ⃗µM,t in a way that makes integrating over the models tractable. To find models on the manifold close to ⃗µM,t we want to perturb the parameters ⃗φ about the values corresponding to ⃗µM,t. Since Ψ is in general nonlinear, there is no simple expression for the combination of all such perturbations. However, we can easily approximate the set of ⃗θ resulting from these perturbations by taking linear combinations of the partial derivatives of Ψ with respect to ⃗φ. The partial derivative is the direction in Θ in which ⃗θ moves if we perturb one of the manifold's parameters. Thus, the subspace formed by linear combinations of the partial derivatives approximates the set of models on the manifold close to ⃗µM,t. This subspace is the tangent space,

T⃗µM,t M = {⃗θ : ⃗θ = ⃗µM,t + B⃗b, ∀⃗b ∈ R^dim(M)},  B = orth([∂Ψ/∂φ1, . . . , ∂Ψ/∂φd]), (3)

where orth(·) returns an orthonormal basis for the column space of its argument. Here TxM denotes the tangent space at the point x. The columns of B denote the direction in which ⃗θ changes if we perturb one of the manifold's parameters. (In general, the directions corresponding to changes in different parameters are not independent; to avoid this redundancy we compute a set of basis vectors for the space spanned by the partial derivatives.) We now use our Gaussian posterior on the full parameter space to compute the posterior likelihood of the models in the tangent space. Since the tangent space is a subspace of Θ, restricting our Gaussian approximation, p(⃗θ|⃗µt, Ct), to the tangent space means we are taking a slice through our Gaussian approximation of the posterior. Mathematically, we are conditioning on ⃗θ ∈ T⃗µM,t M.
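Eqn. 3 can be sketched numerically. The toy Gaussian-bump "receptive field" Ψ, the finite-difference Jacobian, and the SVD realization of orth(·) below are our own illustrative choices, not the authors' code:

```python
import numpy as np

def tangent_basis(Psi, phi, eps=1e-6):
    """Orthonormal basis B for the tangent space of the manifold
    {Psi(phi)} at the point phi (Eqn. 3), using a finite-difference
    Jacobian and an SVD to realize orth()."""
    theta0 = Psi(phi)
    # columns: partial derivatives of Psi w.r.t. each manifold parameter
    J = np.stack([(Psi(phi + eps * np.eye(len(phi))[k]) - theta0) / eps
                  for k in range(len(phi))], axis=1)
    # orthonormal basis for the column space of the Jacobian
    Uj, sj, _ = np.linalg.svd(J, full_matrices=False)
    return Uj[:, sj > 1e-8 * sj.max()]

# toy 1-d "receptive field": a Gaussian bump with unknown center and width
grid = np.linspace(-1, 1, 50)
def Psi(phi):
    c, w = phi
    return np.exp(-(grid - c) ** 2 / (2 * w ** 2))

B = tangent_basis(Psi, np.array([0.1, 0.3]))
```

Here the two basis vectors span the directions in which the discretized bump moves when its center or width is perturbed.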
The result is a Gaussian distribution on the tangent space whose parameters may be obtained using the standard Gaussian conditioning formula:

ptan(⃗θ|⃗µb,t, Cb,t) = N(⃗b; ⃗µb,t, Cb,t) if ∃⃗b s.t. ⃗θ = ⃗µM,t + B⃗b, and 0 if ⃗θ ∉ T⃗µM,t M, (4)

⃗µb,t = −Cb,t B⊤Ct⁻¹(⃗µM,t − ⃗µt),  Cb,t = (B⊤Ct⁻¹B)⁻¹, (5)

where N denotes a normal distribution with the specified parameters. Now, rather than optimizing the stimulus by trying to squeeze the uncertainty p(⃗θ|r1:t, s1:t, M) on the nonlinear manifold M down as much as possible (a very difficult task in general), we pick the stimulus which best reduces the uncertainty ptan(⃗θ|⃗µb,t, Cb,t) on the vector space T⃗µM,t M. We can solve this latter problem directly using the methods presented in [8]. Finally, to handle the possibility that ⃗θ ∉ M, every so often we optimize the stimulus using the full posterior p(⃗θ|⃗µt, Ct). This simple modification ensures that asymptotically we do not ignore directions orthogonal to the manifold; i.e., that we do not

Figure 2: MAP estimates of a STRF obtained using three designs: the new info. max. tangent space design described in the text; an i.i.d. design; and an info. max. design which did not use the assumption that ⃗θ corresponds to a low-rank STRF. In each case, stimuli were chosen under the spherical power constraint ∥⃗st∥₂ = c. The true STRF (fit to real zebra finch auditory responses and then used to simulate the observed data) is shown in the last column. (For convenience we rescaled the coefficients to be between −4 and 4.) We see that using the tangent space to optimize the design leads to much faster convergence to the true parameters; in addition, either infomax design significantly outperforms the i.i.d. design here.
In this case the true STRF did not in fact lie on the manifold M (chosen to be the set of rank-2 matrices here); thus, these results also show that our knowledge of M does not need to be exact in order to improve the experimental design.

get stuck obsessively sampling along the incorrect manifold. As a result, ⃗µt will always converge asymptotically to the true parameters, even when ⃗θ ∉ M. To summarize, our method proceeds as follows:

0. Initial conditions: start with a log-concave (approximately Gaussian) posterior given t previous trials, summarized by the posterior mean ⃗µt and covariance Ct.
1. Compute ⃗µM,t, the projection of ⃗µt on the manifold. (The procedure for computing ⃗µM,t depends on the manifold.)
2. Compute the tangent space of M at ⃗µM,t using Eqn. 3.
3. Compute the posterior restricted to the tangent space, ptan(⃗θ|⃗µb,t, Cb,t), using the standard Gaussian conditioning formula (Eqn. 5).
4. Apply the methods in [8] to find the optimal t+1 stimulus, and observe the response rt+1.
5. Update the posterior by recursively updating the posterior mean and covariance: ⃗µt → ⃗µt+1 and Ct → Ct+1 (again, as in [8]), and return to step 1.

3 Results

3.1 Low rank models

To test our methods in a realistic, high-dimensional setting, we simulated a typical auditory neurophysiology experiment [17, 15, 18]. Here, the objective is to identify the spectro-temporal receptive field (STRF) of the neuron. The input and receptive field of the neuron are usually represented in the frequency domain because the cochlea is known to perform a frequency decomposition of sound. The STRF, θ(τ, ω), is a 2-d filter which relates the firing rate at time t to the amount of energy at frequency ω and time t − τ in the stimulus. To incorporate this spectrotemporal model in the standard GLM setting, we simply vectorize the matrix θ(τ, ω). Estimating the STRF can be quite difficult due to its high dimensionality.
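The summarized loop can be caricatured for a toy linear manifold through the origin (so the projection and tangent basis are trivial and constant). The pool-based variance criterion below is a crude stand-in for the mutual-information maximization of [8], and the linear-Gaussian response is a simplification of the GLM; everything here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma2 = 8, 0.25
G = rng.normal(size=(d, 2))                 # toy linear manifold: theta = G phi
B = np.linalg.qr(G)[0]                      # its (constant) tangent basis
theta_true = G @ np.array([1.0, -0.5])      # true model, on the manifold
Lam, b = np.eye(d), np.zeros(d)             # Gaussian posterior, natural form

for t in range(60):
    # steps 1-3: posterior restricted to the tangent space (cf. Eqn. 5)
    Cb = np.linalg.inv(B.T @ Lam @ B)
    # step 4 (stand-in for the infomax step of [8]): from a random pool of
    # unit-norm stimuli, pick the one with the largest posterior variance
    # along the tangent space
    pool = rng.normal(size=(50, d))
    pool /= np.linalg.norm(pool, axis=1, keepdims=True)
    proj = pool @ B
    s = pool[np.argmax(np.einsum('ij,jk,ik->i', proj, Cb, proj))]
    # observe a response (linear-Gaussian here, for simplicity)
    r = theta_true @ s + rng.normal(scale=np.sqrt(sigma2))
    # step 5: conjugate Gaussian posterior update
    Lam += np.outer(s, s) / sigma2
    b += r * s / sigma2

mu = np.linalg.solve(Lam, b)                # final MAP estimate
```

With stimuli steered toward the tangent directions, the error of the estimate along the manifold shrinks much faster than in purely random sampling.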
Several researchers, however, have shown that low-rank assumptions can be used to produce accurate approximations of the receptive field while significantly reducing the number of unknown parameters [19, 13, 15, 20]. A low-rank assumption is a more general version of the space-time separable assumption that is often used when studying visual receptive fields [21]. Mathematically, a low-rank assumption means that the matrix corresponding to the STRF can be written as a sum of rank-one matrices,

Θ = Mat(⃗θ) = UV⊤, (6)

where Mat(·) indicates the matrix formed by reshaping the vector ⃗θ to form the STRF. U and V are low-rank matrices with orthonormal columns. The columns of U and V are the principal components of the column and row spaces of Θ, and encode the spectral and temporal properties of the STRF, respectively. We simulated an auditory experiment using an STRF fitted to the actual response of a neuron in the Mesencephalicus lateralis pars dorsalis (MLd) of an adult male zebra finch [18]. To reduce the dimensionality we sub-sampled the STRF in the frequency domain and shortened it in the time domain to yield a 20 × 21 STRF. We generated synthetic data by sampling a Poisson process whose instantaneous firing rate was set to the output of a GLM with exponential nonlinearity and ⃗θ proportional to the true measured zebra finch STRF. For the manifold we used the set of ⃗θ corresponding to rank-2 matrices. For the STRF we used, the rank-2 assumption turns out to be rather accurate. We also considered manifolds of rank-1 and rank-5 matrices (data not shown), but rank-2 did slightly better. The manifold of rank-r matrices is convenient because we can easily project any ⃗θ onto M by reshaping ⃗θ as a matrix and then computing its singular value decomposition (SVD): ⃗µM,t is the matrix formed by the first r singular vectors of ⃗µt. To compute the tangent space, Eqn. 3, we compute the derivative of ⃗θ with respect to each component of the matrices U and V.
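Projection onto the manifold of rank-r matrices via truncated SVD, as described above, might look like this (sizes match the 20 × 21 STRF; the function name is our own):

```python
import numpy as np

def project_rank_r(theta, shape, r):
    """Project a vectorized STRF onto the manifold of rank-r matrices
    by truncating the SVD of its matrix form (cf. Eqn. 6)."""
    Theta = theta.reshape(shape)                  # Mat(theta)
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    Theta_r = (U[:, :r] * s[:r]) @ Vt[:r]         # keep top-r singular triples
    return Theta_r.ravel()

rng = np.random.default_rng(2)
theta = rng.normal(size=20 * 21)
theta_r = project_rank_r(theta, (20, 21), r=2)
```

By the Eckart-Young theorem this truncation is the closest rank-2 matrix in Frobenius norm, which is exactly the projection ⃗µM,t used in the text.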
Using these derivatives we can linearly approximate the effect on Θ of perturbing the parameters of its principal components. In Figure 2 we compare the effectiveness of different experimental designs by plotting the MAP estimate ⃗µt on several trials. The results clearly show that using the tangent space to design the experiments leads to much faster convergence to the true parameters. Furthermore, using the assumption that the STRF is rank-2 is beneficial even though the true STRF here is not in fact rank-2.

3.2 Real birdsong data

We also tested our method by using it to reshuffle the data collected during an actual experiment to find an ordering which provided a faster decrease in the error of the fitted model. During the experiments, we recorded the responses of MLd neurons when the songs of other birds and ripple noise were presented to the bird (again, as previously described in [18]). We compared a design which randomly shuffled the trials to a design which used our info. max. algorithm to select the order in which the trials are processed. We then evaluated the fitted model by computing the expected log-likelihood of the spike trains, Σ_τ E_{⃗θ|⃗µt,Ct} log p(rτ|⃗sτ, ⃗θ), where τ ranges over all the observations made when inputs in a test set are played to the bird. To constrain the models we assume the STRF is low-rank and that its principal components are smooth. The smoothing prior means that if we take the Fourier transform of the principal components, the Fourier coefficients of high frequencies should be zero with high probability. In other words, each principal component (the columns of U and V) should be a linear combination of sinusoidal functions with low frequencies. In this case we can write the STRF as

Θ = F ν ω η⊤ T⊤. (7)

Each column of F and T is a sine or cosine function representing one of the basis functions of the principal spectral (columns of F) or temporal (columns of T) components of the STRF.
Each column of ν and η determines how we form one of the principal components by combining sine and cosine functions. ω is a diagonal matrix which specifies the projection of Θ onto each principal component.

Figure 3: Plots comparing the performance of an info. max. design, an info. max. design which uses the tangent space (rank-2 manifold), and a shuffled design; the horizontal axis is the trial number (10³-10⁴) and the vertical axis the expected log-likelihood E_θ log p(r|st, θt). The manifold was the set of rank-2 matrices. The plot shows the expected log-likelihood (prediction accuracy) of the spike trains in response to a birdsong in the test set. Using a rank-2 manifold to constrain the model produces slightly better fits of the data.

The unknown parameters in this case are the matrices ν, η, and ω. The sinusoidal functions corresponding to the columns of F and T should have frequencies {0, fo,f, . . . , mf fo,f} and {0, fo,t, . . . , mt fo,t}, respectively. fo,f and fo,t are the fundamental frequencies and are set so that one period corresponds to the dimensions of the STRF. mf and mt are the largest integers such that mf fo,f and mt fo,t are less than the Nyquist frequency. Now to enforce a smoothing prior we can simply restrict the columns of F and T to sinusoids with low frequencies. To project Θ onto the manifold we simply need to compute ν, ω and η by evaluating the SVD of F⊤ΘT. The results, Figure 3, show that both info. max. designs significantly outperform the randomly shuffled design. Furthermore, incorporating the low-rank assumption using the tangent space improves the info. max. design, albeit only slightly; the estimated STRFs are shown in Figure 4. It is worth noting that in an actual online experiment, we would expect a larger improvement with the info. max. design, since during the experiment we would be free to pick any input.
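A sketch of the smoothed low-rank projection of Eqn. 7: build low-frequency sine/cosine bases F and T, take the SVD of F⊤ΘT, and reconstruct. The orthonormalization step, the cutoff frequencies, and all names are our own illustrative assumptions:

```python
import numpy as np

def fourier_basis(n, n_freq):
    """Columns: constant, then cosine/sine pairs at the fundamental
    frequency and its multiples (one period spans the axis)."""
    x = np.arange(n) / n
    cols = [np.ones(n)]
    for k in range(1, n_freq + 1):
        cols += [np.cos(2 * np.pi * k * x), np.sin(2 * np.pi * k * x)]
    # orthonormalize the columns so F^T F = I
    return np.linalg.qr(np.stack(cols, axis=1))[0]

def project_smooth_lowrank(Theta, F, T, r):
    """nu, omega, eta from the SVD of F^T Theta T (cf. Eqn. 7);
    returns the smoothed rank-r STRF F nu omega eta^T T^T."""
    nu, w, etaT = np.linalg.svd(F.T @ Theta @ T, full_matrices=False)
    return F @ (nu[:, :r] * w[:r]) @ etaT[:r] @ T.T

rng = np.random.default_rng(3)
Theta = rng.normal(size=(20, 21))
F = fourier_basis(20, 3)   # spectral basis, frequencies up to 3 * f_{o,f}
T = fourier_basis(21, 2)   # temporal basis, frequencies up to 2 * f_{o,t}
Theta_hat = project_smooth_lowrank(Theta, F, T, r=2)
```

The result is simultaneously rank-2 and band-limited along both axes, and the projection is idempotent because F and T have orthonormal columns.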
Thus, the different designs could choose radically different stimulus sets; in contrast, when re-analyzing the data offline, all we can do is reshuffle the trials, but the stimulus sets remain the same in the info. max. and i.i.d. settings here.

4 Conclusion

We have provided a method for incorporating detailed prior information in existing algorithms for the information-theoretic optimal design of neurophysiology experiments. These methods use realistic assumptions about the neuron's response function and choose significantly more informative stimuli, leading to faster convergence to the true response function using fewer experimental trials. We expect that the inclusion of this strong prior information will help experimentalists contend with the high dimensionality of neural response functions.

5 Acknowledgments

We thank Vincent Vu and Bin Yu for helpful conversations. JL is supported by the Computational Science Graduate Fellowship Program administered by the DOE under contract DE-FG02-97ER25308 and by the NSF IGERT Program in Hybrid Neural Microsystems at Georgia Tech via grant number DGE-0333411. LP is supported by an NSF CAREER award and a Gatsby Initiative in Brain Circuitry Pilot Grant.

Figure 4: The STRFs estimated using the bird song data (shuffled, info. max. full, and info. max. tangent rank-2 designs; panels at trials 1000 through 50k). We plot ⃗µt for trials in the interval over which the expected log-likelihood of the different designs differed the most in Fig. 3. The info. max. designs converge slightly faster than the shuffled design. In these results, we smoothed the STRF by only using frequencies less than or equal to 10fo,f and 2fo,t.

References
[1] P. Foldiak, Neurocomputing 38-40, 1217 (2001).
[2] R. C. deCharms, et al., Science 280, 1439 (1998).
[3] T. Gollisch, et al., Journal of Neuroscience 22, 10434 (2002).
[4] F. Edin, et al., Journal of Computational Neuroscience 17, 47 (2004).
[5] C. Machens, et al., Neuron 47, 447 (2005).
[6] K. N. O'Connor, et al., Journal of Neurophysiology 94, 4051 (2005).
[7] D. L. Ringach, J Neurophysiol 88, 455 (2002).
[8] J. Lewi, et al., Neural Computation 21 (2009).
[9] E. Simoncelli, et al., The Cognitive Neurosciences, M. Gazzaniga, ed. (MIT Press, 2004).
[10] L. Paninski, et al., Computational Neuroscience: Theoretical Insights into Brain Function (Elsevier, 2007), chap. Statistical models for neural encoding, decoding, and optimal stimulus design.
[11] L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
[12] L. Paninski, Neural Computation 17, 1480 (2005).
[13] A. Qiu, et al., J Neurophysiol 90, 456 (2003).
[14] C. Enroth-Cugell, et al., Journal of Physiology 187, 517 (1966).
[15] J. F. Linden, et al., Journal of Neurophysiology 90, 2660 (2003).
[16] J. M. Lee, Introduction to Smooth Manifolds (Springer, 2000).
[17] F. E. Theunissen, et al., Journal of Neuroscience 20, 2315 (2000).
[18] S. M. Woolley, et al., The Journal of Neuroscience 26, 2499 (2006).
[19] D. A. Depireux, et al., Journal of Neurophysiology 85, 1220 (2001).
[20] M. B. Ahrens, et al., Network 19, 35 (2008).
[21] G. C. DeAngelis, et al., J Neurophysiol 69, 1091 (1993).
2008
6
3,548
Deep Learning with Kernel Regularization for Visual Recognition Kai Yu Wei Xu Yihong Gong NEC Laboratories America, Cupertino, CA 95014, USA {kyu, wx, ygong}@sv.nec-labs.com Abstract In this paper we aim to train deep neural networks for rapid visual recognition. The task is highly challenging, largely due to the lack of a meaningful regularizer on the functions realized by the networks. We propose a novel regularization method that takes advantage of kernel methods, where an oracle kernel function represents prior knowledge about the recognition task of interest. We derive an efficient algorithm using stochastic gradient descent, and demonstrate encouraging results on a wide range of recognition tasks, in terms of both accuracy and speed. 1 Introduction Visual recognition remains a challenging task for machines. This difficulty stems from the large pattern variations under which a recognition system must operate. The task is extremely easy for a human, largely due to the expressive deep architecture employed by human visual cortex systems. Deep neural networks (DNNs) are argued to have a greater capacity to recognize a larger variety of visual patterns than shallow models, because they are considered biologically plausible. However, training deep architectures is difficult because the large number of parameters to be tuned necessitates an enormous amount of labeled training data that is often unavailable. Several authors have recently proposed training methods using unlabeled data. These methods perform a greedy layer-wise pre-training using unlabeled data, followed by supervised fine-tuning [9, 4, 15]. Even though the strategy notably improves the performance, to date, the best reported recognition accuracy on popular benchmarks such as Caltech101 by deep models is still largely behind the results of shallow models. Besides using unlabeled data, in this paper we tackle the problem by leveraging additional prior knowledge.
In the last few decades, researchers have developed successful kernel-based systems for a wide range of visual recognition tasks. Those sensibly-designed kernel functions provide an extremely valuable source of prior knowledge, which we believe should be exploited in deep learning. In this paper, we propose an informative kernel-based regularizer, which makes it possible to train DNNs with prior knowledge about the recognition task. Computationally, we propose to solve the learning problem using stochastic gradient descent (SGD), as it is the de facto method for neural network training. To this end we transform the kernel regularizer into a loss function represented as a sum of costs by individual examples. This results in a simple multi-task architecture where a number of extra nodes at the output layer are added to fit a set of auxiliary functions automatically constructed from the kernel function. We apply the described method to train convolutional neural networks (CNNs) for a wide range of visual recognition tasks, including handwritten digit recognition, gender classification, ethnic origin recognition, and object recognition. Overall our approach exhibits excellent accuracy and speed on all of these tasks. Our results show that incorporation of prior knowledge can boost the performance of CNNs by a large margin when the training set is small or the learning problem is difficult.

2 DNNs with Kernel Regularization

In our setting, the learning model, a deep neural network (DNN), aims to learn a predictive function f : X → R that can achieve a low expected discrepancy E[ℓ(y, f(x))] over the distribution p(x, y). In the simplest case Y = {−1, 1} and ℓ(·, ·) is a differentiable hinge loss.
Based on a set of labeled examples [(xi, yi)], i = 1, . . . , n, the learning is by minimizing a regularized loss

L(β, θ) = Σ_{i=1}^n ℓ(yi, β1⊤φi + β0) + λ∥β1∥², (1)

where φi = φ(xi; θ) maps xi to q-dimensional hidden units via a nonlinear deep architecture with parameters θ, including the connection weights and biases of all the intermediate layers; β = {β1, β0}, where β1 includes all the parameters of the transformation from the last hidden layer to the output layer and β0 is a bias term; λ > 0; and ∥a∥² = tr(a⊤a) is the usual weight-decay regularization. Applying the well-known representer theorem, we derive the equivalence to a kernel system¹

L(α, β0, θ) = Σ_{i=1}^n ℓ(yi, Σ_{j=1}^n αj Ki,j + β0) + λ Σ_{i,j=1}^n αi αj Ki,j, (2)

where the kernel is computed by Ki,j = ⟨φ(xi; θ), φ(xj; θ)⟩ = φi⊤φj. We assume the network is provided with some prior knowledge, in the form of an m × m kernel matrix Σ, computed on the n labeled training data plus possibly m − n additional unlabeled data if m > n. We exploit this prior knowledge by imposing a kernel regularization on K(θ) = [Ki,j], i, j = 1, . . . , m, such that the learning problem seeks

Problem 2.1.
min_{β,θ} L(β, θ) + γΩ(θ), (3)

where γ > 0 and Ω(θ) is defined by

Ω(θ) = tr(K(θ)⁻¹Σ) + log det[K(θ)]. (4)

This is a case of semi-supervised learning if m > n. Though Ω is non-convex w.r.t. K, it has a unique minimum at K = Σ if Σ ≻ 0, suggesting that minimizing Ω(θ) encourages K to approach Σ. The regularization can be explained from an information-theoretic perspective. Let p(f|K) and p(f|Σ) be two Gaussian distributions N(0, K) and N(0, Σ).² Ω(θ) is related to the KL-divergence DKL[p(f|Σ)∥p(f|K)]. Therefore, minimizing Ω(θ) forces the two distributions to be close. We note that the regularization does not require Σ to be positive definite; it can be positive semidefinite.

3 Kernel Regularization via Stochastic Gradient Descent

The learning problem in Eq. (3) can be solved by using gradient-based methods.
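The regularizer Ω(θ) of Eqn. 4 can be computed directly, with a numerical check that K = Σ attains its minimum; the synthetic SPD matrices below are our own stand-ins for real kernels:

```python
import numpy as np

def kernel_reg(K, Sigma):
    """Omega = tr(K^{-1} Sigma) + log det(K), Eqn. (4)."""
    return np.trace(np.linalg.solve(K, Sigma)) + np.linalg.slogdet(K)[1]

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5 * np.eye(5)     # "oracle" kernel (symmetric PD)
B = rng.normal(size=(5, 5))
K = B @ B.T + np.eye(5)             # some other kernel matrix
gap = kernel_reg(K, Sigma) - kernel_reg(Sigma, Sigma)
```

The gap is positive for any K ≠ Σ, since Ω(K) − Ω(Σ) = tr(K⁻¹Σ) − log det(K⁻¹Σ) − q ≥ 0 with equality only at K = Σ, which is exactly the property that makes Ω a sensible regularizer.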
In this paper we emphasize large-scale optimizations using stochastic gradient descent (SGD), because the method is fast when the size m of the total data is large, and backpropagation, a typical SGD method, has been the de facto method to train neural networks for large-scale learning tasks. SGD considers the problem where the optimization cost is the sum of the local cost of each individual training example. A standard batch gradient descent updates the model parameters by using the true gradient summed over the whole training set, while SGD approximates the true gradient by the gradient caused by a single random training example. Therefore, the parameters of the model are updated after each training example. For large data sets, SGD is often much faster than batch gradient descent. However, because the regularization term defined by Eq. (4) does not consist of a cost function that can be expressed as a sum (or an average) over data examples, SGD is not directly applicable. Our idea is to transform the problem into an equivalent formulation that can be optimized stochastically.
¹In this paper we slightly abuse the notation, i.e., we use L to denote different loss functions. However, their meanings should be uniquely identified by checking the input parameters. ²From a Gaussian process point of view, a kernel function defines the prior distribution of a function f, such that the marginal distribution of the function values f on any finite set of inputs is a multivariate Gaussian.

3.1 Shrinkage on the Kernel Matrix

We consider a large-scale problem where the data size m may grow over time, while the size q of the last hidden layer of the DNN is fixed. Therefore the computed kernel K can be rank deficient. In order to ensure that the trace term in Ω(θ) is well-defined, and that the log-determinant term is bounded from below, we instead use K + δI to replace K in Ω(θ), where δ > 0 is a small shrinkage parameter and I is an identity matrix.
Thus the log-determinant acts on a much smaller q × q matrix³:

log det(K + δI) = log det(Φ⊤Φ + δI) + const, where Φ = [φ1, . . . , φm]⊤ and const = (m − q) · log δ.

Omitting all the irrelevant constants, we then turn the kernel regularization into

Ω(θ) = tr((ΦΦ⊤ + δI)⁻¹Σ) + log det(Φ⊤Φ + δI). (5)

The kernel shrinkage not only remedies the ill-posedness, but also yields other conveniences in our later development.

3.2 Transformation of the Log-determinant Term

By noticing that Φ⊤Φ = Σ_{i=1}^m φi φi⊤ is a sum of quantities over data examples, we move it outside of the log determinant for the convenience of SGD.

Theorem 3.1. Consider min_θ {L(θ) = h(θ) + g(a)}, where g(·) is concave and a ≡ a(θ) is a function of θ. If its local minimum w.r.t. θ exists, then the problem is equivalent to

min_{θ,ψ} {L(θ, ψ) = h(θ) + a(θ)⊤ψ − g•(ψ)}, (6)

where g•(ψ) is the conjugate function of g(a), i.e., g•(ψ) = min_a {ψ⊤a − g(a)}.⁴

Proof. For a concave function g(a), the conjugate function of its conjugate function is itself, i.e., g(a) = min_ψ {a⊤ψ − g•(ψ)}. Since g•(ψ) is concave, a⊤ψ − g•(ψ) is convex w.r.t. ψ and has the unique minimum g(a). Therefore minimizing L(θ, ψ) w.r.t. θ and ψ is equivalent to minimizing L(θ) w.r.t. θ.

Since the log-determinant is concave over q × q positive definite matrices A, the conjugate function of log det(A) is log det(Ψ) + q. We can use the above theorem to transform any loss function containing log det(A) into another loss, which is an upper bound and involves A in a linear term. Therefore the log-determinant in Eq. (5) is turned into a variational representation

log det(Φ⊤Φ + δI) = min_{Ψ∈S+q} [ Σ_{i=1}^m φi⊤Ψφi + δ · tr(Ψ) − log det(Ψ) + const ],

where Ψ ∈ S+q is a q × q positive definite matrix, and const = −q. As we can see, the upper bound is a convex function of the auxiliary variables Ψ and, more importantly, it amounts to a sum of local quantities caused by each of the m data examples.
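The conjugate (variational) representation of the log-determinant can be verified numerically: the bound tr(AΨ) − log det(Ψ) − q dominates log det(A) for every SPD Ψ and is tight at Ψ = A⁻¹. The particular Φ, δ, and helper name are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
q, delta = 4, 0.1
Phi = rng.normal(size=(30, q))
A = Phi.T @ Phi + delta * np.eye(q)

def logdet_bound(Psi):
    # tr(A Psi) - log det(Psi) - q  >=  log det(A), tight at Psi = A^{-1};
    # note tr(A Psi) = sum_i phi_i^T Psi phi_i + delta * tr(Psi)
    return np.trace(A @ Psi) - np.linalg.slogdet(Psi)[1] - q

logdetA = np.linalg.slogdet(A)[1]
C = rng.normal(size=(q, q))
Psi_rand = C @ C.T + 0.5 * np.eye(q)   # an arbitrary SPD Psi
```

Minimizing the bound over Ψ therefore recovers the log-determinant exactly, which is what licenses replacing it inside the loss.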
³Hereafter in this paper, with a slight abuse of notation, we use "const" in equations to summarize the terms irrelevant to the variables of interest. ⁴If g(a) is convex, its conjugate function is g◦(ψ) = max_a {ψ⊤a − g(a)}.

3.3 Transformation of the Trace Term

We assume that the kernel matrix Σ is presented in a decomposed form Σ = UU⊤, with U = [u1, . . . , um]⊤, ui ∈ Rp, and p ≤ m. We have found that the trace term can be cast as a variational problem by introducing a q × p auxiliary variable matrix η.

Proposition 3.1. The trace term in Eq. (5) is equivalent to a convex variational representation

tr((ΦΦ⊤ + δI)⁻¹Σ) = min_{η∈Rq×p} [ Σ_{i=1}^m ∥(1/√δ)ui − η⊤φi∥² + δ∥η∥²F ].

Proof. We first obtain the analytical solution η* = (1/√δ)(Φ⊤Φ + δI)⁻¹Φ⊤U, where the variational representation reaches its unique minimum. Then, plugging it back into the function (the last summand below is δ∥η*∥²F), we have

tr( (1/δ)U⊤U − (2/√δ)U⊤Φη* + η*⊤Φ⊤Φη* + U⊤Φ(Φ⊤Φ + δI)⁻²Φ⊤U )
= (1/δ) tr( U⊤U − U⊤Φ(Φ⊤Φ + δI)⁻¹Φ⊤U )
= tr( (ΦΦ⊤ + δI)⁻¹UU⊤ ),

where the last step is derived by applying the Woodbury matrix identity. Again, we note that the upper bound is a convex function of η, and consists of a sum of local costs over data examples.

3.4 An Equivalent Learning Framework

Combining the previous results, we obtain the convex upper bound for the kernel regularization Eq. (5), which amounts to a sum of costs over examples under some regularization:

Ω(θ) ≤ L(η, Ψ, θ) = Σ_{i=1}^m [ ∥(1/√δ)ui − η⊤φi∥² + φi⊤Ψφi ] + δ∥η∥²F + δ · tr(Ψ) − log det(Ψ),

where we omit all the terms irrelevant to η, Ψ and θ. L(η, Ψ, θ) is convex w.r.t. η and Ψ, and has the unique minimum Ω(θ); hence we can replace Ω(θ) by instead minimizing the upper bound and formulate an equivalent learning problem

min_{β,η,Ψ,θ} [ L(β, η, Ψ, θ) = L(β, θ) + γ L(η, Ψ, θ) ]. (7)

Clearly this new optimization can be solved by SGD. When applying the SGD method, each step based on one example needs to compute the inverse of Ψ.
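Proposition 3.1 can be checked numerically at the closed-form minimizer η*; all sizes below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
m, q, p, delta = 40, 6, 3, 0.2
Phi = rng.normal(size=(m, q))        # rows are the hidden representations phi_i
U = rng.normal(size=(m, p))          # rows are the prior-knowledge vectors u_i

# closed-form minimizer eta* = (1/sqrt(delta)) (Phi^T Phi + delta I)^{-1} Phi^T U
eta = np.linalg.solve(Phi.T @ Phi + delta * np.eye(q), Phi.T @ U) / np.sqrt(delta)

# variational objective: sum_i ||u_i/sqrt(delta) - eta^T phi_i||^2 + delta ||eta||_F^2
obj = (np.linalg.norm(U / np.sqrt(delta) - Phi @ eta) ** 2
       + delta * np.linalg.norm(eta) ** 2)

# the trace term it is claimed to equal
trace_term = np.trace(np.linalg.solve(Phi @ Phi.T + delta * np.eye(m), U @ U.T))
```

Perturbing η away from η* strictly increases the objective, consistent with the stated convexity and uniqueness of the minimum.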
This can be computationally unaffordable when the dimensionality is large (e.g. $q > 1000$); recall that the efficiency of SGD depends on each stochastic update being lightweight. Our next result shows that we can dramatically reduce this complexity from $O(q^3)$ to $O(q)$.

Proposition 3.2. Eq. (5) is equivalent to the convex variational problem

$$\Omega(\theta) = \min_{\eta,\psi}\left[\sum_{i=1}^{m}\Big(\Big\|\tfrac{1}{\sqrt{\delta}}u_i - \eta^\top\phi_i\Big\|^2 + \psi^\top\phi_i^2\Big) + \delta\|\eta\|_F^2 + \delta\,\psi^\top e - \sum_{k=1}^{q}\log\psi_k\right] \quad (8)$$

where $\psi = [\psi_1, \ldots, \psi_q]^\top$ and $e = [1, \ldots, 1]^\top$.

Proof. There is an ambiguity of the solutions up to rotations: if $\{\beta^*, \Phi^*, \eta^*, \Psi^*\}$ is an optimal solution set, then the transformation $\beta^* \leftarrow R\beta^*$, $\Phi^* \leftarrow R\Phi^*$, $\eta^* \leftarrow R\eta^*$, and $\Psi^* \leftarrow R\Psi^*R^\top$ results in the same optimality provided $R^\top R = I$. Since there always exists an $R$ that diagonalizes $\Psi^*$, we can pre-restrict $\Psi$ to be a diagonal positive definite matrix $\Psi = \mathrm{diag}[\psi_1, \ldots, \psi_q]$, which does not change our problem and gives rise to Eq. (8).

We note that the variational form is convex w.r.t. the auxiliary variables $\eta$ and $\psi$. Therefore we can formulate the whole learning problem as

Problem 3.1.
$$\min_{\beta,\eta,\psi,\theta}\left\{L(\beta, \eta, \psi, \theta) = \frac{1}{n}L_1(\beta, \theta) + \frac{\gamma}{mn}L_2(\eta, \theta) + \frac{\gamma}{mn}L_3(\psi, \theta)\right\} \quad (9)$$

where $L_1(\beta, \theta)$ is defined by Eq. (1), and

$$L_2(\eta, \theta) = \sum_{i=1}^{m}\Big\|\tfrac{1}{\sqrt{\delta}}u_i - \eta^\top\phi_i\Big\|^2 + \delta\|\eta\|_F^2$$

$$L_3(\psi, \theta) = \sum_{i=1}^{m}\psi^\top\phi_i^2 + \delta\,\psi^\top e - \sum_{k=1}^{q}\log\psi_k$$

To ensure that the estimator of $\beta$ and $\theta$ is consistent, the effect of regularization should vanish as $n \to \infty$; we therefore intentionally normalize $L_2(\eta, \theta)$ and $L_3(\psi, \theta)$ by $1/m$. The overall loss function is averaged over the $n$ labeled examples and consists of three loss functions: the main classification task $L_1(\beta, \theta)$, an auxiliary least-squares regression problem $L_2(\eta, \theta)$, and an additional regularization term $L_3(\psi, \theta)$, which can itself be interpreted as another least-squares problem. Since each of the loss functions amounts to a summation of local costs caused by individual data examples, the whole learning problem can be conveniently implemented by SGD, as described in Algorithm 1.
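The $O(q)$ claim comes from $L_3$ being separable across the coordinates of $\psi$: each coordinate contributes a linear term plus $-\log\psi_k$, so setting the per-coordinate derivative to zero gives $\psi_k^* = 1/(\sum_i \phi_{ik}^2 + \delta)$, and stochastic updates touch $\psi$ only elementwise. A minimal NumPy check of this coordinate-wise minimizer (sizes and data are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
m, q, delta = 50, 6, 1.0
Phi = rng.standard_normal((m, q))      # stand-in features phi_i, held fixed here

def L3(psi):
    # sum_i psi^T (phi_i ** 2) + delta * psi^T e - sum_k log psi_k
    return np.sum((Phi**2) @ psi) + delta * np.sum(psi) - np.sum(np.log(psi))

# coordinate-wise minimizer: psi_k = 1 / (sum_i phi_ik^2 + delta)
psi_star = 1.0 / (np.sum(Phi**2, axis=0) + delta)

# L3 is convex and separable, so psi_star beats any random positive vector
for _ in range(200):
    psi = np.exp(rng.standard_normal(q))
    assert L3(psi) >= L3(psi_star) - 1e-9
```

In the full algorithm $\psi$ is of course updated by gradient steps jointly with $\theta$ rather than solved in closed form, but each such step costs only $O(q)$.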
In practice, the kernel matrix $\Sigma = UU^\top$ that represents domain knowledge can be obtained in three different ways: (i) in the easiest case, $U$ is directly available as hand-crafted features computed from the input data, which corresponds to a linear kernel function; (ii) $U$ can be the result of unsupervised learning (e.g. self-taught learning [14] based on sparse coding) applied to a large set of unlabeled data; (iii) if a nonlinear kernel function is available, $U$ can be obtained by applying incomplete Cholesky decomposition to an $m \times m$ kernel matrix $\Sigma$. In the third case, when $m$ is so large that the matrix decomposition cannot be computed in main memory, we apply the Nyström method [19]: we first randomly sample $m_1$ examples ($p < m_1 < m$) such that the computed kernel matrix $\Sigma_1$ can be decomposed in memory. Let $VDV^\top$ be the $p$-rank eigenvalue decomposition of $\Sigma_1$; then the $p$-rank decomposition of $\Sigma$ can be approximated by $\Sigma \approx UU^\top$, $U = \Sigma_{:,1}VD^{-\frac{1}{2}}$, where $\Sigma_{:,1}$ is the $m \times m_1$ kernel matrix between all the $m$ examples and the subset of size $m_1$.

Algorithm 1 Stochastic Gradient Descent
repeat
  Generate a number $a$ from the uniform distribution on $[0, 1]$
  if $a < \frac{n}{m+n}$ then
    Randomly pick a sample $i \in \{1, \ldots, n\}$ for $L_1$, and update the parameters by
      $[\beta, \theta] \leftarrow [\beta, \theta] - \epsilon\,\frac{\partial L_1(x_i, \beta, \theta)}{\partial[\beta, \theta]}$
  else
    Randomly pick a sample $i \in \{1, \ldots, m\}$ for $L_2$ and $L_3$, and update the parameters by
      $[\eta, \psi, \theta] \leftarrow [\eta, \psi, \theta] - \frac{\epsilon}{m}\,\frac{\partial[L_2(x_i, \eta, \theta) + L_3(x_i, \psi, \theta)]}{\partial[\eta, \psi, \theta]}$
  end if
until convergence

4 Visual Recognition by Deep Learning with Kernel Regularization

In the following, we apply the proposed strategy to train a class of deep models, convolutional neural networks (CNNs, [11]), for a range of visual recognition tasks, including digit recognition on the MNIST dataset, gender and ethnicity classification on the FRGC face dataset, and object recognition on the Caltech101 dataset.
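The Nyström construction described above can be sketched in a few lines of NumPy. The RBF kernel, the data, and the helper names `rbf_kernel` and `nystrom_factor` below are illustrative assumptions, not part of the paper; as a built-in sanity check we take $p = m_1$, in which case the approximation reproduces the subset block of the kernel exactly:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, idx, p):
    """Rank-p Nystrom factor U with Sigma ~= U U^T, built from subset idx."""
    Sigma_m1 = rbf_kernel(X, X[idx])          # m x m1 cross-kernel Sigma_{:,1}
    Sigma_1 = rbf_kernel(X[idx], X[idx])      # m1 x m1 subset kernel
    d, V = np.linalg.eigh(Sigma_1)            # Sigma_1 = V D V^T
    top = np.argsort(d)[::-1][:p]             # keep the p largest eigenvalues
    D, V = d[top], V[:, top]
    return Sigma_m1 @ V @ np.diag(1.0 / np.sqrt(D))   # U = Sigma_{:,1} V D^{-1/2}

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
idx = rng.choice(200, size=20, replace=False)
U = nystrom_factor(X, idx, p=20)              # p = m1: exact on the subset block
approx = U @ U.T
assert np.allclose(approx[np.ix_(idx, idx)], rbf_kernel(X[idx], X[idx]))
```

With $p < m_1 < m$ as in the paper, the same construction gives a low-rank approximation whose quality depends on the kernel's eigenvalue decay.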
In each of these tasks, we choose a kernel function that has been reported to have state-of-the-art or otherwise good performance in the literature. We will see whether a kernel regularizer can improve the recognition accuracy of the deep models, and how it compares with a support vector machine (SVM) using exactly the same kernel.

Table 1: Percentage error rates of handwritten digit recognition on MNIST

Training Size          100     600     1000    3000    60000
SVM (RBF)              22.73   8.53    6.58    3.91    1.41
SVM (RBF, Nyström)     24.73   9.15    6.92    5.51    5.16
SVM (Graph)            5.21    3.74    3.46    3.01    2.23
SVM (Graph, Cholesky)  7.17    6.47    5.75    4.28    2.87
CNN                    19.40   6.40    5.50    2.75    0.82
kCNN (RBF)             14.49   3.85    3.40    1.88    0.73
kCNN (Graph)           4.28    2.36    2.05    1.75    0.64
CNN (Pretrain) [15]    −       3.21    −       −       0.64
EmbedO CNN [18]        11.73   3.42    3.34    2.28    −
EmbedI5 CNN [18]       7.75    3.82    2.73    1.83    −
EmbedA1 CNN [18]       7.87    3.82    2.76    2.07    −

Throughout all the experiments, "kCNN" denotes CNNs regularized by nonlinear kernels, processed by either Cholesky or Nyström approximation, with parameters $p = 600$, $m_1 = 5000$, and $m$ the size of each whole data set. The obtained $u_i$ are normalized to unit length. $\lambda$ and $\delta$ are fixed at 1. The remaining two hyperparameters are the learning rate $\epsilon \in \{10^{-3}, 10^{-4}, 10^{-5}\}$ and the kernel regularization weight $\gamma \in \{10^2, 10^3, 10^4, 10^5\}$; their values are set once for each of the 4 recognition tasks based on 5-fold cross validation using 500 labeled examples.

4.1 Handwritten Digit Recognition on MNIST Dataset

The data contains a training set of 60000 examples and a test set of 10000 examples. The CNN employs 50 filters of size 7×7 on 34×34 input images, followed by down-sampling by 1/2, then 128 filters of size 5×5, followed by down-sampling by 1/2, and then 200 filters of size 5×5, giving rise to 200-dimensional features that are fed to the output layer. Two nonlinear kernels are used: (1) the RBF kernel, and (2) a graph kernel on the 10-nearest-neighbor graph [6].
We perform a 600-dimensional Cholesky decomposition on the whole 70000 × 70000 graph kernel, which is feasible because the kernel is very sparse. In addition to using the whole training set, we train the models on 100, 600, 1000 and 3000 random examples from the training set and evaluate the classifiers on the whole test set, repeating each setting 5 times independently. The results are given in Tab. 1. kCNNs effectively improve over CNNs by leveraging the prior knowledge, and also outperform SVMs that use the same kernels. The results are competitive with the state-of-the-art results of [15] and [18], which use a different architecture.

4.2 Gender and Ethnicity Recognition on FRGC Dataset

The FRGC 2.0 dataset [13] contains 14714 face images of 568 individuals under various lighting conditions and backgrounds. Besides person identities, each image is annotated with gender and ethnicity, which we put into 3 classes: "white", "asian", and "other". We fix 114 persons' 3014 images (randomly chosen) as the test set, and randomly select 5%, 10%, 20%, 50%, and "All" images from the remaining 454 individuals' 11700 images. For each training size, we randomize the training data 5 times and report the average error rates. In this experiment, CNNs operate on images represented by R/G/B planes plus horizontal and vertical gradient maps of gray intensities. The 5 input planes of size 140 × 140 are processed by 16 convolution filters of size 16 × 16, followed by max pooling within each disjoint 5 × 5 neighborhood. The obtained 16 feature maps of size 25 × 25 are connected to the next layer by 256 filters of size 6 × 6, with 50% random sparse connections, followed by max pooling within each 5 × 5 neighborhood. The resulting 256 × 4 × 4 features are fed to the output layer. The nonlinear kernel used in this experiment is the RBF kernel computed directly on images, which has demonstrated state-of-the-art accuracy for gender recognition [3]. The results shown in Tab. 2 and Tab.
3 demonstrate that kCNNs significantly boost the recognition accuracy of CNNs for both gender and ethnicity recognition. The difference is most prominent when only small training sets are available.

4.3 Object Recognition on Caltech101 Dataset

Caltech101 [7] contains 9144 images from 101 object categories and a background category. It is considered one of the most diverse object databases available today, and is probably the most popular benchmark for object recognition. We follow the common setting of training on 15 and 30 images per class and testing on the rest. Following [10], we limit the number of test images to 30 per class.

Table 2: Percentage error rates of gender recognition on FRGC

Training Size        5%     10%    20%    50%    All
SVM (RBF)            16.7   13.4   11.3   9.1    8.6
SVM (RBF, Nyström)   20.2   14.3   11.6   9.1    8.8
CNN                  61.5   17.2   8.4    6.6    5.9
kCNN                 17.1   7.2    5.8    5.0    4.4

Table 3: Percentage error rates of ethnicity recognition on FRGC

Training Size        5%     10%    20%    50%    All
SVM (RBF)            22.9   16.9   14.1   11.3   10.2
SVM (RBF, Nyström)   24.7   20.6   15.8   11.9   11.1
CNN                  30.0   13.9   10.0   8.2    6.3
kCNN                 15.6   8.7    7.3    6.2    5.8

The recognition accuracy is normalized by class sizes and evaluated over 5 random data splits. The CNN has the same architecture as the one used in the FRGC experiment. The nonlinear kernel is the spatial pyramid matching (SPM) kernel developed in [10]. Tab. 4 shows our results together with those reported in [12, 15] using deep hierarchical architectures. The task is much more challenging for CNNs than the previous three tasks, because in each category the data size is very small while the visual patterns are highly diverse. Thanks to the regularization by the SPM kernel, kCNN dramatically improves the accuracy of CNN, and outperforms the SVM using the same kernel. This is perhaps the best performance by (trainable and hand-crafted) deep hierarchical models on the Caltech101 dataset. Some filters trained with and without kernel regularization are visualized in Fig.
1, which helps to understand the difference made by kCNN.

5 Related Work, Discussion, and Conclusion

Recent work on deep visual recognition models includes [17, 12, 15]. In [17] and [12] the first layer consists of hard-wired Gabor filters; a large number of patches are then sampled from the second layer and used as the basis of the representation, which is in turn used to train a discriminative classifier. Deep models are powerful in representing complex functions but very difficult to train. Hinton and his coworkers proposed training deep belief networks with layer-wise unsupervised pre-training, followed by supervised fine-tuning [9]. The strategy was subsequently studied for other deep models like CNNs [15] and autoassociators [4], and for document coding [16]. In recent work [18], the authors proposed training a deep model jointly with an unsupervised embedding task, which led to improved results as well. Though it also uses unlabeled data, our work differs from previous work in its emphasis on leveraging prior knowledge, which suggests that it can be combined with those approaches, including neighborhood component analysis [8], to further enhance deep learning. This work is also related to transfer learning [2], which used auxiliary learning tasks to learn a linear feature mapping, and, more directly, to our previous work [1], which created pseudo auxiliary tasks based on hand-crafted image features to train nonlinear deep networks. One may ask: why bother training a kCNN, instead of simply combining two independently trained CNN and SVM systems? The reason is computational speed: kCNN pays an extra cost to exploit a kernel matrix in the training phase, but in the prediction phase the system uses the CNN alone.
(a) CNN-Caltech101 (b) kCNN-Caltech101
Figure 1: First-layer filters on the B channel, learned from Caltech101 (30 examples per class)

Table 4: Percentage accuracy on Caltech101

Training Size        15     30
SVM (SPM) [10]       54.0   64.6
SVM (SPM, Nyström)   52.1   63.1
HMAX [12]            51.0   56.0
CNN (Pretrain) [15]  −      54.0
CNN                  26.5   43.6
kCNN                 59.2   67.4

In our Caltech101 experiment, the SVM (SPM) needed several seconds to process a new image on a PC with a 3.0 GHz processor, while kCNN can process about 40 images per second. The latest record on Caltech101 was based on combining multiple kernels [5]. We conjecture that kCNN could be further improved by using multiple kernels without sacrificing recognition speed. To conclude, we proposed using kernels to improve the training of deep models. The approach was implemented by stochastic gradient descent, and demonstrated excellent performance on a range of visual recognition tasks. Our experiments showed that prior knowledge can significantly improve the performance of deep models when insufficient labeled data are available in hard recognition problems. The trained model is much faster than kernel systems at making predictions.

Acknowledgment: We thank the reviewers and Douglas Gray for helpful comments.

References
[1] A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. P. Xing. Training hierarchical feed-forward visual recognition models using transfer learning from pseudo tasks. European Conference on Computer Vision, 2008.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 2005.
[3] S. Baluja and H. Rowley. Boosting sex identification performance. International Journal of Computer Vision, 2007.
[4] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Neural Information Processing Systems, 2007.
[5] A. Bosch, A. Zisserman, and X. Muñoz.
Image classification using ROIs and multiple kernel learning. 2008. Submitted to International Journal of Computer Vision.
[6] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. Neural Information Processing Systems, 2003.
[7] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. CVPR Workshop, 2004.
[8] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. Neural Information Processing Systems, 2005.
[9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[12] J. Mutch and D. G. Lowe. Multiclass object recognition with sparse, localized features. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[13] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, and W. Worek. Preliminary face recognition grand challenge results. IEEE Conference on Automatic Face and Gesture Recognition, 2006.
[14] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. International Conference on Machine Learning, 2007.
[15] M. Ranzato, F.-J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[16] M. Ranzato and M. Szummer. Semi-supervised learning of compact document representations with deep networks.
International Conference on Machine Learning, 2008.
[17] T. Serre, L. Wolf, and T. Poggio. Object recognition with features inspired by visual cortex. IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[18] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. International Conference on Machine Learning, 2008.
[19] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. Neural Information Processing Systems, 2001.
Learning Transformational Invariants from Natural Movies Charles F. Cadieu & Bruno A. Olshausen Helen Wills Neuroscience Institute University of California, Berkeley Berkeley, CA 94720 {cadieu, baolshausen}@berkeley.edu Abstract We describe a hierarchical, probabilistic model that learns to extract complex motion from movies of the natural environment. The model consists of two hidden layers: the first layer produces a sparse representation of the image that is expressed in terms of local amplitude and phase variables. The second layer learns the higher-order structure among the time-varying phase variables. After training on natural movies, the top layer units discover the structure of phase-shifts within the first layer. We show that the top layer units encode transformational invariants: they are selective for the speed and direction of a moving pattern, but are invariant to its spatial structure (orientation/spatial-frequency). The diversity of units in both the intermediate and top layers of the model provides a set of testable predictions for representations that might be found in V1 and MT. In addition, the model demonstrates how feedback from higher levels can influence representations at lower levels as a by-product of inference in a graphical model. 1 Introduction A key attribute of visual perception is the ability to extract invariances from visual input. In the realm of object recognition, the goal of invariant representation is quite clear: a successful object recognition system must be invariant to image variations resulting from different views of the same object. While spatial invariants are essential for forming a useful representation of the natural environment, there is another, equally important form of visual invariance, namely transformational invariance. A transformational invariant refers to the dynamic visual structure that remains the same when the spatial structure changes. 
For example, the property that a soccer ball moving through the air shares with a football moving through the air is a transformational invariant; it is specific to how the ball moves but invariant to the shape or form of the object. Here we seek to learn such invariants from the statistics of natural movies. There have been numerous efforts to learn spatial invariants [1, 2, 3] from the statistics of natural images, especially with the goal of producing representations useful for object recognition [4, 5, 6]. However, there have been few attempts to learn transformational invariants from natural sensory data. Previous efforts have either relied on using unnatural, hand-tuned stimuli [7, 8, 9], or unrealistic supervised learning algorithms using only rigid translation of an image [10]. Furthermore, it is unclear to what extent these models have captured the diversity of transformations in natural visual scenes or to what level of abstraction their representations produce transformational invariants. Previous work learning sparse codes of image sequences has shown that it is possible to recover local, direction-selective components (akin to translating Gabors) [11]. However, this type of model does not capture the abstract property of motion because each unit is bound to a specific orientation, spatial-frequency and location within the image—i.e., it still suffers from the aperture problem. Here we describe a hierarchical probabilistic generative model that learns transformational invariants from unsupervised exposure to natural movies. A key aspect of the model is the factorization of visual information into form and motion, as compared to simply extracting these properties separately.
The latter approach characterizes most models of form and motion processing in the visual cortical hierarchy [6, 12], but suffers from the fact that information about these properties is not bound together—i.e., it is not possible to reconstruct an image sequence from a representation in which form and motion have been extracted by separate and independent mechanisms. While reconstruction is not the goal of vision, the ability to interact with the environment is key, and thus binding these properties together is likely to be crucial for properly interacting with the world. In the model we propose here, form and motion are factorized, meaning that extracting one property depends upon the other. It specifies not only how they are extracted, but how they are combined to provide a full description of image content. We show that when such a model is adapted to natural movies, the top layer units learn to extract transformational invariants. The diversity of units in both the intermediate layer and top layer provides a set of testable predictions for representations that might be found in V1 and MT. The model also demonstrates how feedback from higher levels can influence representations at lower levels as a by-product of inference in a graphical model. 2 Hierarchical Model In this section we introduce our hierarchical generative model of time-varying images. The model consists of an input layer and two hidden layers as shown in Figure 1. The input layer represents the time-varying image pixel intensities. The first hidden layer is a sparse coding model utilizing complex basis functions, and shares many properties with subspace-ICA [13] and the standard energy model of complex cells [14]. The second hidden layer models the dynamics of the complex basis function phase variables. 
2.1 Sparse coding with complex basis functions

In previous work it has been shown that many of the observed response properties of neurons in V1 may be accounted for in terms of a sparse coding model of images [15, 16]:

$$I(x,t) = \sum_i u_i(t)\,A_i(x) + n(x,t) \quad (1)$$

where $I(x,t)$ is the image intensity as a function of space ($x \in \mathbb{R}^2$) and time, $A_i(x)$ is a spatial basis function with coefficient $u_i$, and the term $n(x,t)$ corresponds to Gaussian noise with variance $\sigma_n^2$ that is small compared to the image variance. The sparse coding model imposes a kurtotic, independent prior over the coefficients, and when adapted to natural image patches the $A_i(x)$ converge to a set of localized, oriented, multiscale functions similar to a Gabor wavelet decomposition of images. We propose here a generalization of the sparse coding model to complex variables that is primarily motivated by two observations of natural image statistics. The first observation is that although the prior is factorial, the actual joint distribution of coefficients, even after learning, exhibits strong statistical dependencies. These are most clearly seen as circularly symmetric, yet kurtotic distributions among pairs of coefficients corresponding to neighboring basis functions, as first described by Zetzsche [17]. Such a circularly symmetric distribution strongly suggests that these pairs of coefficients are better described in polar coordinates rather than Cartesian coordinates—i.e., in terms of amplitude and phase. The second observation comes from considering the dynamics of coefficients through time. As pointed out by Hyvarinen [3], the temporal evolution of a coefficient in response to a movie, $u_i(t)$, can be well described as the product of a smooth amplitude envelope and a quickly changing variable. A similar result from Kording [1] indicates that temporal continuity in amplitude provides a strong cue for learning local invariances.
These results are closely related to the trace learning rule of Foldiak [18] and slow feature analysis [19]. With these observations in mind, we have modified the sparse coding model by utilizing a complex basis function model as follows:

$$I(x,t) = \sum_i \Re\{z_i^*(t)\,A_i(x)\} + n(x,t) \quad (2)$$

where the basis functions now have real and imaginary parts, $A_i(x) = A_i^R(x) + jA_i^I(x)$, and the coefficients are also complex, with $z_i(t) = a_i(t)e^{j\phi_i(t)}$. ($*$ indicates the complex conjugate and the notation $\Re\{.\}$ denotes taking the 'real part' of the argument.) The resulting generative model can also be written as:

$$I(x,t) = \sum_i a_i(t)\big[\cos\phi_i(t)\,A_i^R(x) + \sin\phi_i(t)\,A_i^I(x)\big] + n(x,t) \quad (3)$$

Thus, each pair of basis functions $A_i^R, A_i^I$ forms a 2-dimensional subspace and is controlled by an amplitude $a_i$ and phase $\phi_i$ that determine the position within each subspace. Note that the basis functions are only functions of space; the temporal dynamics within image sequences will therefore be expressed in the temporal dynamics of the amplitude and phase. The prior over the complex coefficients, $z$, is designed so as to enforce circularly symmetric distributions and smooth amplitude dynamics as observed in time-varying natural images:

$$P\big(a_i(t)\,\big|\,a_i(t{-}1)\big) \propto e^{-S_p(a_i(t)) - S_l(a_i(t),\,a_i(t{-}1))} \quad (4)$$

The first term in the exponential imposes a sparse prior on the coefficient amplitudes. Here we use $S_p(a_i(t)) = \lambda a_i(t)$ (we have found other kurtotic priors to yield similar results). Since there is no prior over the phases, this results in circularly symmetric kurtotic distributions over each subspace. The second term in the exponential imposes temporal stability on the time rate of change of the amplitudes and is given by $S_l(a_i(t), a_i(t{-}1)) = (a_i(t) - a_i(t{-}1))^2$.
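Equations (2) and (3) are two ways of writing the same generative model, since $\Re\{a e^{-j\phi}(A^R + jA^I)\} = a(\cos\phi\,A^R + \sin\phi\,A^I)$. A small NumPy check (the numbers of pixels and basis functions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_basis = 64, 10
# complex basis functions A_i = A_i^R + j A_i^I, flattened over space
A = rng.standard_normal((n_basis, n_pix)) + 1j * rng.standard_normal((n_basis, n_pix))
a = rng.random(n_basis)                   # amplitudes a_i >= 0
phi = rng.uniform(0, 2*np.pi, n_basis)    # phases phi_i
z = a * np.exp(1j * phi)                  # z_i = a_i e^{j phi_i}

# form of Eq. (2): I(x) = sum_i Re{ z_i^* A_i(x) }
I1 = np.real(np.conj(z) @ A)

# form of Eq. (3): I(x) = sum_i a_i (cos(phi_i) A_i^R(x) + sin(phi_i) A_i^I(x))
I2 = (a * np.cos(phi)) @ A.real + (a * np.sin(phi)) @ A.imag

assert np.allclose(I1, I2)
```

This also makes concrete why the subspace picture holds: for fixed $a_i$, varying $\phi_i$ traces a circle inside the 2-D span of $A_i^R$ and $A_i^I$.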
For a sequence of images the resulting negative log-posterior for the first hidden layer becomes:

$$E_1 = \sum_t \sum_x \frac{1}{\sigma_N^2}\Big[I(x,t) - \sum_i \Re\{z_i^*(t)\,A_i(x)\}\Big]^2 + \sum_{i,t} S_p(a_i(t)) + \sum_{i,t} S_l(a_i(t), a_i(t{-}1)) \quad (5)$$

While this model by no means captures the full joint distribution of coefficients, it does at least capture the circularly symmetric dependencies among pairs of coefficients, which allows for the explicit representation of amplitude and phase. As we shall see, this representation serves as a staging ground for learning higher-order dependencies over space and time.

2.2 Phase Transformations

Given the decomposition into amplitude and phase variables, we now have a non-linear representation of image content that enables us to learn its structure in another linear generative model. In particular, the dynamics of objects moving in continuous trajectories through the world over short epochs will be encoded in the population activity of the phase variables $\phi_i$. Furthermore, because we have encoded these trajectories with an angular variable, many transformations in the image domain that would otherwise be nonlinear in the coefficients $u_i$ will now be linearized. This linear relationship allows us to model the time rate of change of the phase variables with a simple linear generative model. We thus model the first-order time derivative of the phase variables as follows:

$$\dot\phi_i(t) = \sum_k D_{ik}\,w_k(t) + \nu_i(t) \quad (6)$$

where $\dot\phi_i = \phi_i(t) - \phi_i(t{-}1)$, and $D$ is the basis function matrix specifying how the high-level variables $w_k$ influence the phase shifts $\dot\phi_i$. The additive noise term $\nu_i$ represents uncertainty or noise in the estimate of the phase time rate of change. As before, we impose a sparse, independent distribution on the coefficients $w_k$, in this case with the sparse cost function

$$S_w(w_k(t)) = \beta \log\Big(1 + \big(\tfrac{w_k(t)}{\sigma}\big)^2\Big) \quad (7)$$

The uncertainty over the phase shifts is given by a von Mises distribution: $p(\nu_i) \propto \exp(\kappa \cos(\nu_i))$.
Thus, the log-posterior over the second layer units is given by

$$E_2 = \sum_t \Big[-\sum_{i \in \{a_i(t) > 0\}} \kappa \cos\big(\dot\phi_i - [Dw(t)]_i\big) + \sum_k S_w(w_k(t))\Big] \quad (8)$$

Figure 1: Graph of the hierarchical model showing the relationship among hidden variables.

Because the angle of a variable with 0 amplitude is undefined, we exclude angles whose corresponding amplitude is 0 from our cost function. Note that in the first layer we did not introduce any prior on the phase variables. With our second hidden layer, $E_2$ can be viewed as a log-prior on the time rate of change of the phase variables, $\dot\phi_i(t)$. For example, when $[Dw(t)]_i = 0$, the prior on $\dot\phi_i(t)$ is peaked around 0, i.e. no change in phase. Activating the $w$ variables moves the prior away from $\dot\phi_i(t) = 0$, encouraging certain patterns of phase shifting that will in turn produce patterns of motion in the image domain. The structure of the complete graphical model is shown in Figure 1.

2.3 Learning and inference

A variational learning algorithm is used to adapt the basis functions in both layers. First we infer the maximum a posteriori (MAP) estimate of the variables $a$, $\phi$, and $w$ for the current values of the basis functions. Given the MAP estimate of these variables we then perform a gradient update on the basis functions. The two steps are iterated until convergence. To infer coefficients in both the first and second hidden layers we perform gradient descent on the total cost function ($E_1 + E_2$) with respect to the coefficients. The resulting dynamics for the amplitudes and phases in the first layer are given by

$$\Delta a_i(t) \propto \Re\{b_i(t)\} - S_p'(a_i(t)) - S_l'(a_i(t), a_i(t{-}1)) \quad (9)$$

$$\Delta\phi_i(t) \propto \frac{\Im\{b_i(t)\}}{a_i(t)} - \kappa \sin\big(\dot\phi_i(t) - [Dw(t)]_i\big) + \kappa \sin\big(\dot\phi_i(t{+}1) - [Dw(t{+}1)]_i\big) \quad (10)$$

with

$$b_i(t) = \frac{1}{\sigma_N^2}\,e^{-j\phi_i(t)} \sum_x A_i(x)\Big[I(x,t) - \sum_i \Re\{z_i^*(t)\,A_i(x)\}\Big]$$

($\Im\{.\}$ denotes the imaginary part.)
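As a sanity check on this kind of coupled inference, the gradient of the second-layer energy $E_2$ (Eq. 8) with respect to $w$ can be verified against finite differences; from Eq. (7), $S_w'(w) = 2\beta w/(\sigma^2 + w^2)$. A minimal single-frame NumPy sketch (all dimensions and parameter values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n_phase, n_units = 12, 4
kappa, beta, sigma = 2.0, 0.1, 1.0
D = rng.standard_normal((n_phase, n_units))      # phase-shift components D_ik
phidot = rng.uniform(-np.pi, np.pi, n_phase)     # observed phase derivatives
w = rng.standard_normal(n_units)

def Sw(w):
    # sparse cost of Eq. (7): beta * log(1 + (w / sigma)^2)
    return beta * np.log(1 + (w / sigma)**2)

def E2(w):
    # single-frame second-layer energy (all amplitudes assumed > 0)
    return -kappa * np.sum(np.cos(phidot - D @ w)) + np.sum(Sw(w))

# analytic gradient: dE2/dw_k = -kappa * sum_i sin(phidot_i - [Dw]_i) D_ik + Sw'(w_k)
grad = -kappa * (np.sin(phidot - D @ w) @ D) + 2 * beta * w / (sigma**2 + w**2)

# central finite-difference check
eps = 1e-6
num = np.array([(E2(w + eps * np.eye(n_units)[k]) - E2(w - eps * np.eye(n_units)[k]))
                / (2 * eps) for k in range(n_units)])
assert np.allclose(grad, num, atol=1e-4)
```

Descending this gradient is one step of the coordinate-wise inference scheme; the full model additionally couples $w$ back into the phase updates.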
The dynamics for the second layer coefficients $w_k$ are given by

$$\Delta w_k(t) \propto \sum_{i \in \{a_i(t)>0\}} \kappa \sin\big(\dot\phi_i - [Dw(t)]_i\big)\,D_{ik} + S_w'(w_k(t)) \quad (11)$$

Note that the two hidden layers are coupled, since the inference of $w$ depends on $\phi$, and the inference of $\phi$ in turn depends on $w$, in addition to $I$ and $a$. Thus, the phases are computed from a combination of bottom-up ($I$), horizontal ($a$) and top-down ($w$) influences. The learning rule for the first layer basis functions is given by the gradient of $E_1$ with respect to $A_i(x)$, using the values of the complex coefficients inferred in Eqs. (9) and (10) above:

$$\Delta A_i(x) \propto \frac{1}{\sigma_N^2} \sum_t \Big[I(x,t) - \sum_i \Re\{z_i^*(t)\,A_i(x)\}\Big]\,z_i(t) \quad (12)$$

The learning rule for the second layer basis functions is given by the gradient of $E_2$ with respect to $D$, using the values of $\phi$ and $w$ inferred above:

$$\Delta D_{ik} = \kappa \sum_{t \in \{a_i(t)>0\}} \sin\big(\dot\phi_i - [Dw(t)]_i\big)\,w_k(t) \quad (13)$$

After each gradient update the basis functions are normalized to have unit length.

3 Results

3.1 Simulation procedures

The model was trained on natural image sequences obtained from Hans van Hateren's repository at http://hlab.phys.rug.nl/vidlib/. The movies were spatially lowpass filtered and whitened as described previously [15]. Note that no whitening in time was performed, since the temporal structure will be learned by the hierarchical model. The movies consisted of footage of animals in grasslands along rivers and streams. They contain a variety of motions due to the movements of animals in the scene, camera motion, tracking (which introduces background motion), and motion borders due to occlusion. We trained the first layer of the model on 20x20 pixel image patches, using 400 complex basis functions $A_i$ in the first hidden layer initialized to random values. During this initial phase of learning only the terms in $E_1$ are used to infer the $a_i$ and $\phi_i$. Once the first layer reaches convergence, we begin training the second layer, using 100 bases, $D_i$, initialized to random values.
The second layer bases are initially trained on the MAP estimates of the first layer $\dot\phi_i$ inferred using $E_1$ only. After the second layer begins to converge we infer coefficients in both the first layer and the second layer simultaneously using all terms in $E_1 + E_2$ (we observed that this improved convergence in the second layer). We then continued learning in both layers until convergence. The bootstrapping of the second layer was used to speed convergence, and we did not observe much change in the first layer basis functions after the initial convergence. We have run the algorithm multiple times and have observed qualitatively similar results on each run. Here we describe the results of one run.

3.2 Learned complex basis functions

After learning, the first layer complex basis functions converge to a set of localized, oriented, and bandpass functions with real and imaginary parts roughly in quadrature. The population of filters as a whole tiles the joint spaces of orientation, position, and center spatial frequency. Not surprisingly, this result shares similarities with previous results described in [1] and [3]. Figure 2(a) shows the real part, imaginary part, amplitude, and angle of two representative basis functions as a function of space. Examining the amplitude of the basis function we see that it is localized and has a roughly Gaussian envelope. The angle as a function of space reveals a smooth ramping of the phase in the direction perpendicular to the basis functions' orientation.

[Figure 2 panels (a)-(c): $A_i^R$, $A_i^I$, $|A_i|$, and $\angle A_i$ for basis functions $A_{191}$ and $A_{292}$; time courses $a(t)$, $\phi(t)$; generated sequences $\Re\{A_{191}z_{191}^*\}$ and $\Re\{A_{292}z_{292}^*\}$.]
Figure 2: Learned Complex Basis Functions (for panel (b) see the animation in movie TransInv_Figure2.mov).

A useful way of visualizing what a generative model has learned is to generate images while varying the coefficients. Figure 2(b) displays the resulting image sequences produced by two representative basis functions as the amplitude and phase follow the indicated time courses.
The amplitude has the effect of controlling the presence of the feature within the image, and the phase is related to the position of the edge within the image. Importantly for our hierarchical model, the time derivative, or slope, of the phase through time is directly related to the movement of the edge through time. Figure 2(c) shows how the population of complex basis functions tiles the space of position (left) and spatial frequency (right). Each dot represents a different basis function according to its maximum amplitude in the space domain, or its maximum amplitude in the frequency domain computed via the 2D Fourier transform of each complex pair (which produces a single peak in the spatial-frequency plane). The basis functions uniformly tile both domains. This visualization will be useful for understanding what the phase-shifting components $D$ in the second layer have learned.

3.3 Learned phase-shift components

Figure 3 shows a random sampling of 16 of the learned phase-shift components, $D_i$, visualized in both the space-domain and frequency-domain depictions of the first-layer units. The strength of connection for each component is denoted by hue (red +, blue -, gray 0). Some have a global influence over all spatial positions within the 20x20 input array (e.g., row 1, column 1), while others have influence only over a local region (e.g., row 1, column 6). Those with a linear ramp in the Fourier domain correspond to rigid translation, since the higher spatial frequencies will spin their phases at proportionally higher rates (and negative spatial frequencies will spin in the opposite direction). Some functions we believe arise from aliased temporal structure in the movies (row 1, column 5), and others are unknown (row 2, column 4). We are actively seeking methods to quantify these classes of learned phase-shift components.

[Figure 3: Learned phase-shifting components, shown in both the spatial and frequency domains.]
The phase-shift components generate movements within the image that are invariant to aspects of the spatial structure such as orientation and spatial frequency. We demonstrate this in Figure 4 by showing the generated transforms for 4 representative phase-shift components. The illustrated transformation components produce: (a) global translation, (b) local translation, (c) horizontal dilation and contraction, and (d) local warping. See the caption of Figure 4 for a more detailed description of the generated motions. We encourage the reader to view the accompanying videos.

4 Discussion and conclusions

The computational vision community has spent considerable effort on developing motion models. Of particular relevance to our work is the Motion-Energy model [14], which signals motion via the amplitudes of quadrature-pair filter outputs, similar to the responses of complex neurons in V1. Simoncelli & Heeger have shown how it is possible to extract motion by pooling over a population of such units lying within a common plane in the 3D Fourier domain [12]. It has not been shown how the representations in these models could be learned from natural images. Furthermore, it is unclear how more complicated transformations, other than local translations, would be represented by such a model, or indeed how the entire joint space of position, direction, and speed should be tiled to provide a complete description of time-varying images. Our model addresses each of these problems: it learns from the statistics of natural movies how to best tile the joint domain of position and motion, and it captures complex motion beyond uniform translation. Central to our model is the representation of phase. The use of phase information for computing motion is not new; it was used by Fleet and Jepson [20] to compute optic flow. In addition, as shown in Eero Simoncelli's thesis, one can establish a formal equivalence between phase-based methods and motion-energy models.
Here we argue that phase provides a convenient representation, as it linearizes trajectories in coefficient space and thus allows one to capture the higher-order structure via a simple linear generative model. Whether or how phase is represented in V1 is not known, but it may be worth looking for units that have response properties similar to those of the 'phase units' in our model.

[Figure 4: Visualization of learned transformational invariants (best viewed as animations in movie TransInv Figure4x.mov, x=a,b,c,d). Each phase-shift component produces a pattern of motion that is invariant to the spatial structure contained within the image. Each panel displays the induced image transformations for a different basis function, $D_i$. Induced motions are shown for four different image patches, with the original static patch displayed in the center position. Induced motions are produced by turning on the respective coefficient $w_i$ positively (patches to the left of center) and negatively (patches to the right of center). The final image in each sequence shows the pixel-wise variance of the transformation (white values indicate where image pixels are changing through time, which may be difficult to discern in this static presentation). The example in (a) produces global motion in the direction of 45 deg. The strongly oriented structure within the first two patches clearly moves along the axis of motion. Patches with more complicated spatial structure (4th patch) also show similar motion. The next example (b) produces local vertical motion in the lower portion of the image patch only. Note that in the first patch the strong edge in the lower portion of the patch moves while the edge in the upper portion remains fixed. Again, this component produces similar transformations irrespective of the spatial structure contained in the image. The example in (c) produces horizontal motion in the left part of the image in the opposite direction of the horizontal motion in the right half (the two halves of the image either converge or diverge). Note that the oriented structure in the first two patches becomes more closely spaced in the leftmost patch and more widely spaced in the rightmost patch. This is seen clearly in the third image, where the spacing of the vertical structure is narrowest in the leftmost image and widest in the rightmost image. The example in (d) produces warping in the upper part of the visual field. This example does not lend itself to a simple description, but appears to produce a local rotation of the image patch.]

Our model also has implications for other aspects of visual processing and cortical architecture. Under our model we may reinterpret the hypothesized split between the dorsal and ventral visual streams. Instead of independent processing streams focused on form perception and motion perception, the two streams may represent complementary aspects of visual information: spatial invariants and transformational invariants. Indeed, the pattern-invariant direction tuning of neurons in MT is strikingly similar to that found in our model [21]. Importantly, though, in our model information about form and motion is bound together, since it is computed by a process of factorization rather than by independent mechanisms in separate streams. Our model also illustrates a functional role for feedback between higher visual areas and primary visual cortex, not unlike the proposed inference pathways suggested by Lee and Mumford [22]. The first-layer units are responsive to visual information in a narrow spatial window and narrow spatial-frequency band. However, the top-layer units receive input from a diverse population of first-layer units and can thus disambiguate local information by providing a bias to the time rate of change of the phase variables.
Because the second-layer weights $D$ are adapted to the statistics of natural movies, these biases will be consistent with the statistical distribution of motion occurring in the natural environment. This method can thus deal with artifacts such as noise or temporal aliasing, and can be used to disambiguate local motions confounded by the aperture problem. Our model could be extended in a number of ways. Most obviously, the graphical model in Figure 1 begs the question of what would be gained by modeling the joint distribution over the amplitudes, $a_i$, in addition to the phases. To some degree, this line of approach has already been pursued by Karklin & Lewicki [2], who have shown that the high-level units in this case learn spatial invariants within the image. We are thus eager to combine both of these models into a unified model of higher-order form and motion in images.

References

[1] W. Einhauser, C. Kayser, P. Konig, and K.P. Kording. Learning the invariance properties of complex cells from their responses to natural stimuli. European Journal of Neuroscience, 15(3):475-486, 2002.
[2] Y. Karklin and M.S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17(2):397-423, 2005.
[3] A. Hyvärinen, J. Hurri, and J. Väyrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America A, 20(7):1237-1252, 2003.
[4] G. Wallis and E.T. Rolls. Invariant face and object recognition in the visual system. Progress in Neurobiology, 51(2):167-194, 1997.
[5] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. Computer Vision and Pattern Recognition, 2004.
[6] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust object recognition with cortex-like mechanisms.
IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 411-426, 2007.
[7] S.J. Nowlan and T.J. Sejnowski. A selection model for motion processing in area MT of primates. Journal of Neuroscience, 15(2):1195-1214, 1995.
[8] K. Zhang, M.I. Sereno, and M.E. Sereno. Emergence of position-independent detectors of sense of rotation and dilation with Hebbian learning: An analysis. Neural Computation, 5(4):597-612, 1993.
[9] E.T. Rolls and S.M. Stringer. Invariant global motion recognition in the dorsal visual system: A unifying theory. Neural Computation, 19(1):139-169, 2007.
[10] D.B. Grimes and R.P.N. Rao. Bilinear sparse coding for invariant vision. Neural Computation, 17(1):47-73, 2005.
[11] B.A. Olshausen. Sparse codes and spikes. In Probabilistic Models of Perception and Brain Function, pages 257-272. MIT Press, 2002.
[12] E.P. Simoncelli and D.J. Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743-761, 1998.
[13] A. Hyvärinen and P. Hoyer. Emergence of phase- and shift-invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705-1720, 2000.
[14] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2):284-299, 1985.
[15] B.A. Olshausen and D.J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37:3311-3325, 1997.
[16] A.J. Bell and T. Sejnowski. The independent components of natural images are edge filters. Vision Research, 37:3327-3338, 1997.
[17] C. Zetzsche, G. Krieger, and B. Wegmann. The atoms of vision: Cartesian or polar? Journal of the Optical Society of America A, 16(7):1554-1565, 1999.
[18] P. Foldiak. Learning invariance from transformation sequences. Neural Computation, 3(2):194-200, 1991.
[19] L. Wiskott and T.J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances.
Neural Computation, 14(4):715-770, 2002.
[20] D.J. Fleet and A.D. Jepson. Computation of component image velocity from local phase information. International Journal of Computer Vision, 5:77-104, 1990.
[21] J.A. Movshon, E.H. Adelson, M.S. Gizzi, and W.T. Newsome. The analysis of moving visual patterns. Pattern Recognition Mechanisms, 54:117-151, 1985.
[22] T.S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7):1434-1448, 2003.
On Bootstrapping the ROC Curve

Patrice Bertail, CREST (INSEE) & MODAL'X - Université Paris 10, pbertail@u-paris10.fr
Stéphan Clémençon, Telecom Paristech (TSI) - LTCI UMR Institut Telecom/CNRS 5141, stephan.clemencon@telecom-paristech.fr
Nicolas Vayatis, ENS Cachan & UniverSud - CMLA UMR CNRS 8536, vayatis@cmla.ens-cachan.fr

Abstract

This paper is devoted to thoroughly investigating how to bootstrap the ROC curve, a widely used visual tool for evaluating the accuracy of test/scoring statistics in the bipartite setup. The issue of confidence bands for the ROC curve is considered and a resampling procedure based on a smooth version of the empirical distribution, called the "smoothed bootstrap", is introduced. Theoretical arguments and simulation results are presented to show that the smoothed bootstrap is preferable to a "naive" bootstrap in order to construct accurate confidence bands.

1 Introduction

Since the seminal contribution of [14], so-called ROC curves (ROC standing for Receiver Operating Characteristic) have been extensively used in a wide variety of applications (anomaly detection in signal analysis, medical diagnosis, search engines, credit-risk screening) as a visual tool for evaluating the performance of a test statistic regarding its capacity of discrimination between two populations; see [8]. Whereas the statistical properties of their empirical counterparts have only lately been studied from the asymptotic angle, see [18, 13, 11, 16], ROC curves have also recently received much attention in the machine-learning literature through the development of statistical learning procedures tailored for the ranking problem, see [10, 2]. The latter consists of determining, based on training data, a test statistic s(X) (also called a scoring function) with a ROC curve "as high as possible" at all points of the ROC space.
Given a candidate s(X), it is thus of prime importance to assess its performance by computing a confidence band for the corresponding ROC curve, preferably in a data-driven fashion. Indeed, in such a functional setup, resampling-based procedures should naturally be preferred to those relying on computing/simulating the (Gaussian) limiting distribution, as first observed in [19, 21, 20], where the use of the bootstrap is promoted for building confidence bands in the ROC space. Building on recent works, see [17, 12], it is the purpose of this paper to investigate how the bootstrap approach should be practically implemented, based on a thorough analysis of the asymptotic properties of empirical ROC curves. Beyond the pointwise analysis developed in the studies mentioned above, here we tackle the problem from a functional angle, considering the entire ROC curve or parts of it. This viewpoint indeed appears particularly relevant in scoring applications. Although the asymptotic results established in this paper are of a theoretical nature, they are considerably meaningful from a computational perspective. It turns out indeed that smoothing is the key ingredient for the bootstrap confidence band to be accurate, whereas a naive bootstrap approach would yield bands of low coverage probability in this case and should consequently be avoided by practitioners for analyzing ROC curves. The rest of the paper is organized as follows. In Section 2, notations are first set out and certain key notions of ROC analysis are briefly recalled. The choice of an adequate (pseudo-)metric on the ROC space, a crucial point of the analysis, is also considered. The smoothed bootstrap algorithm is presented in Section 3, together with the theoretical results establishing its asymptotic accuracy, as well as preliminary simulation results illustrating the impact of smoothing on the bootstrap performance.
In Section 4, the gain in terms of convergence rate acquired by the smoothing step is thoroughly discussed. We refer to [1] for technical proofs.

2 Background

Here we briefly recall basic concepts of the bipartite ranking problem, as well as key results related to the statistical estimation of ROC curves. We also set out the notations that shall be needed throughout the paper. Although the results contained in this paper can be formulated without referring to the bipartite ranking framework, for the purpose of motivating the present analysis we intentionally connect them to this major statistical learning problem, which has recently revitalized interest in the problem of assessing the accuracy of empirical ROC curves, see [4].

2.1 Assumptions and notation

In the bipartite ranking problem, the goal is to order all the elements $X$ of a set $\mathcal{X}$ by degree of relevance, when relevance may be observed through some binary indicator variable $Y$. Precisely, one has a system consisting of a binary random output $Y$, taking its values in $\{-1, +1\}$ say, and a random input $X$, taking its values in a (generally high-dimensional) feature space $\mathcal{X}$, which models some observation for predicting $Y$. The probabilistic model is the same as for standard binary classification, but the prediction task is different. In the case of information retrieval for instance, the goal is to order all documents $x$ of the list $\mathcal{X}$ by degree of relevance for a particular request (rather than simply classifying them as relevant or not, as in classification). This amounts to assigning to each document $x$ in $\mathcal{X}$ a score $s(x)$ indicating its degree of relevance for this specific query. The challenge is thus to build a scoring function $s : \mathcal{X} \to \mathbb{R}$ from sampling data, so as to rank the observations $x$ by increasing order of their score $s(x)$ as accurately as possible: the higher the score $s(X)$ is, the more likely one should observe $Y = +1$.

True ROC curves.
A standard way of measuring the ranking performance consists of plotting the ROC curve, namely the graph of the mapping

$$\mathrm{ROC}_s : \alpha \in (0,1) \mapsto 1 - (G_s \circ H_s^{-1})(1-\alpha),$$

where $G_s$ (respectively $H_s$) denotes $s(X)$'s cdf conditioned on $Y=+1$ (resp. conditioned on $Y=-1$), and $F^{-1}(\alpha) = \inf\{x \in \mathbb{R} : F(x) \ge \alpha\}$ is the generalized inverse of any cdf $F$ on $\mathbb{R}$. This boils down to plotting the true positive rate versus the false positive rate when testing the assumption "$H_0 : Y = -1$" based on the statistic $s(X)$. This functional performance measure induces a partial order on the set of scoring functions, according to which it may be shown, by standard Neyman-Pearson arguments, that increasing transforms of the regression function $\eta(x) = P(Y=+1 \mid X=x)$ are the optimal scoring functions (the test statistic $\eta(X)$ is uniformly more powerful, i.e. $\forall \alpha \in (0,1)$, $\mathrm{ROC}_\eta(\alpha) \ge \mathrm{ROC}_s(\alpha)$, for any scoring function $s(x)$).

Empirical ROC curve estimates. Practical learning strategies for selecting a good scoring function are based on training data $D_n = \{(X_i, Y_i)\}_{1 \le i \le n}$ and should thus rely on accurate empirical estimates of the true ROC curves. Let $p = P(Y=+1)$. For any scoring function candidate $s(X)$, an empirical counterpart of $\mathrm{ROC}_s$ is naturally obtained by computing

$$\forall \alpha \in (0,1), \quad \widehat{\mathrm{ROC}}_s(\alpha) = 1 - \widehat{G}_s \circ \widehat{H}_s^{-1}(1-\alpha)$$

from the empirical cdf estimates

$$\widehat{G}_s(x) = \frac{1}{n_+}\sum_{i=1}^n \mathbb{I}\{Y_i=+1\}\, K(x - s(X_i)) \quad \text{and} \quad \widehat{H}_s(x) = \frac{1}{n_-}\sum_{i=1}^n \mathbb{I}\{Y_i=-1\}\, K(x - s(X_i)),$$

where $n_+ = \sum_{i=1}^n \mathbb{I}\{Y_i = +1\} = n - n_-$ is the (random) number of positive instances among the sample (distributed as the binomial $\mathrm{Bin}(n,p)$) and $K(u)$ denotes the step function $\mathbb{I}\{u \ge 0\}$. In order to obtain smoothed versions $\widetilde{G}_s(x)$ and $\widetilde{H}_s(x)$ of the latter cdfs, a typical choice consists of picking instead a function $K(u)$ of the form $\int_{v \ge 0} K_h(u-v)\,dv$, with $K_h(u) = h^{-1}K(h^{-1}u)$, where $K \ge 0$ is a regularizing Parzen-Rosenblatt kernel (i.e. a bounded, square-integrable function such that $\int K(v)\,dv = 1$) and $h > 0$ is the smoothing bandwidth; see Remark 1 for a practical view of smoothing.
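The empirical estimator above is straightforward to compute. The following sketch implements the generalized inverse and the curve $\widehat{\mathrm{ROC}}_s(\alpha) = 1 - \widehat{G}_s(\widehat{H}_s^{-1}(1-\alpha))$ directly from the definitions in the text; function names and the vector of evaluation points are illustrative choices, not the authors' code.

```python
import numpy as np

def emp_cdf_inv(sorted_x, q):
    """Generalized inverse F^{-1}(q) = inf{x : F_hat(x) >= q} of an empirical cdf."""
    n = len(sorted_x)
    k = max(int(np.ceil(q * n)), 1)          # smallest k with k/n >= q
    return sorted_x[min(k - 1, n - 1)]

def empirical_roc(scores, labels, alphas):
    """ROC_hat(alpha) = 1 - G_hat(H_hat^{-1}(1-alpha)), with the step kernel K."""
    neg = np.sort(scores[labels == -1])       # class -1 scores: empirical H
    pos = scores[labels == +1]                # class +1 scores: empirical G
    out = []
    for a in alphas:
        thr = emp_cdf_inv(neg, 1 - a)         # H_hat^{-1}(1-alpha)
        out.append(np.mean(pos > thr))        # 1 - G_hat(thr)
    return np.array(out)
```

Since $\widehat{G}_s$ uses the step kernel $K(u)=\mathbb{I}\{u\ge 0\}$, the term $1-\widehat{G}_s(\mathrm{thr})$ is exactly the fraction of positive scores strictly above the threshold.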
Here and throughout, $\mathbb{I}\{A\}$ denotes the indicator function of any event $A$.

Metrics on the ROC space. When it comes to measuring closeness between curves in the ROC space, various metrics may be used, see [9]. Viewing the ROC space as a subset of the Skorohod space $D([0,1])$ of càdlàg functions $f : [0,1] \to \mathbb{R}$, the standard metric induced by the sup norm $\|\cdot\|_\infty$ appears as a natural choice. As shall be seen below, asymptotic arguments for grounding the bootstrapping of the empirical ROC curve fluctuations, when measured in terms of the sup norm $\|\cdot\|_\infty$, are rather straightforward. However, given the geometry of empirical ROC curves, this metric is not always convenient for our purpose and may produce very wide, and thus uninformative, confidence bands. For analyzing stepwise graphs, such as empirical ROC curves, we shall consider the closely related pseudo-metric defined as follows:

$$\forall (f_1, f_2) \in D([0,1])^2, \quad d_B(f_1, f_2) = \sup_{t \in [0,1]} d_B(f_1, f_2; t),$$

where

$$d_B(f_1, f_2; t) = \min\big\{\,|f_1(t) - f_2(t)|,\; |f_2^{-1} \circ f_1(t) - t|,\; |f_1^{-1} \circ f_2(t) - t|\,\big\}.$$

We clearly have $d_B(f_1, f_2) \le \|f_1 - f_2\|_\infty$. The major advantage of this pseudo-metric is that it provides a control on vertical and horizontal jumps of ROC curves at the same time, treating both types of error in a symmetric fashion. Equipped with this pseudo-metric, two piecewise constant ROC curves may be close to each other even if their jumps do not exactly match. This is clearly appropriate for describing the fluctuations of the empirical ROC curve (and the deviation between the latter and its bootstrap counterpart as well). This way, $d_B$ permits the construction of bands of reasonable size, well adapted to the stepwise shape of empirical ROC curves, with better coverage probabilities. In this respect, the closely related Hausdorff distance (i.e. the distance between the graphs completed by linear segments at jump points) would also be a pertinent choice.
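For intuition, the pseudo-metric $d_B$ can be evaluated numerically for nondecreasing curves sampled on a grid. The sketch below (an illustration under our own discretization assumptions, with generalized inverses computed by binary search) shows the key property claimed in the text: two step curves whose jumps are slightly offset are far apart in sup norm but close in $d_B$.

```python
import numpy as np

def d_B(f1, f2, grid):
    """Pseudo-metric d_B for nondecreasing curves f1, f2 sampled on `grid`.
    Generalized inverse: ginv(f, y) = inf{t : f(t) >= y} on the grid."""
    def ginv(f, y):
        idx = np.searchsorted(f, y, side="left")     # first index with f[idx] >= y
        return grid[np.clip(idx, 0, len(grid) - 1)]
    vert = np.abs(f1 - f2)                           # |f1(t) - f2(t)|
    horiz1 = np.abs(ginv(f2, f1) - grid)             # |f2^{-1} o f1(t) - t|
    horiz2 = np.abs(ginv(f1, f2) - grid)             # |f1^{-1} o f2(t) - t|
    return np.max(np.minimum(vert, np.minimum(horiz1, horiz2)))
```

A step function jumping at 0.50 and one jumping at 0.55 have sup-norm distance 1, while $d_B$ only registers the 0.05 horizontal offset of the jump.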
However, providing a theoretical basis in the case of the Hausdorff distance is very challenging and will not be addressed in this paper, owing to space limitations. As the goal pursued in the present paper is to build, in the ROC space viewed as a subspace of the Skorohod space $D([0,1])$ equipped with a proper (pseudo-)metric, a confidence band for the ROC curve of a given diagnosis test statistic $s(X)$, we shall omit to index by $s$ the quantities considered and denote by $Z$ the r.v. $s(X)$ (and by $Z_i$, $1 \le i \le n$, the $s(X_i)$'s) for notational simplicity. Throughout the paper, we assume that $H(dx)$ and $G(dx)$ are continuous probability distributions, with densities $h(x)$ and $g(x)$ respectively. Finally, denote by $P$ the joint distribution of $(Z,Y)$ on $\mathbb{R} \times \{-1,+1\}$ and by $P_n$ its empirical version based on the sample $D_n = \{(Z_i, Y_i)\}_{1 \le i \le n}$. Equipped with the notations above, one may write

$$P(dz, y) = p\,\mathbb{I}\{y=+1\}\,G(dz) + (1-p)\,\mathbb{I}\{y=-1\}\,H(dz).$$

2.2 Asymptotic law - Gaussian approximation

In the situation described above, the next theorem establishes the strong consistency of the empirical ROC curve in sup norm and provides a strong approximation, at the rate $1/\sqrt{n}$ up to logarithmic factors, for the fluctuation process

$$r_n(\alpha) = \sqrt{n}\,\big(\widehat{\mathrm{ROC}}_n(\alpha) - \mathrm{ROC}(\alpha)\big), \quad \alpha \in [0,1].$$

This (Gaussian) approximation plays a crucial role in understanding the asymptotic behavior of the empirical ROC curve and of its bootstrap counterpart. The following assumptions are required.

H1 The slope of the ROC curve is bounded: $\sup_{\alpha \in [0,1]}\{g(H^{-1}(\alpha))/h(H^{-1}(\alpha))\} < \infty$.

H2 $H$ is twice differentiable on $[0,1]$. Furthermore, $\forall \alpha \in [0,1]$, $h(\alpha) > 0$, and there exists $\gamma > 0$ such that $\sup_{\alpha \in [0,1]}\{\alpha(1-\alpha) \cdot d\log(h \circ H^{-1}(\alpha))/d\alpha\} \le \gamma < \infty$.

Theorem 1 (FUNCTIONAL LIMIT THEOREM) Suppose that H1-H2 are fulfilled. Then,

(i) the empirical ROC curve is strongly consistent:

$$\sup_{\alpha \in [0,1]} |\widehat{\mathrm{ROC}}_n(\alpha) - \mathrm{ROC}(\alpha)| \to 0 \ \text{a.s.}$$
as $n \to \infty$;

(ii) there exists a sequence of pairs of independent Brownian bridges $\{(B_1^{(n)}(\alpha), B_2^{(n)}(\alpha))\}_{\alpha \in [0,1]}$ such that we almost surely have, uniformly over $[0,1]$,

$$r_n(\alpha) = z^{(n)}(\alpha) + o\big((\log\log n)^{\rho_1(\gamma)}(\log n)^{\rho_2(\gamma)}/\sqrt{n}\big), \qquad (1)$$

where

$$z^{(n)}(\alpha) = (1-p)^{-1/2}\,\frac{g(H^{-1}(1-\alpha))}{h(H^{-1}(1-\alpha))}\,B_1^{(n)}(\alpha) + p^{-1/2}\,B_2^{(n)}(\mathrm{ROC}(\alpha))$$

and

$$\begin{cases} \rho_1(\gamma)=0,\ \rho_2(\gamma)=1, & \text{if } \gamma<1, \\ \rho_1(\gamma)=0,\ \rho_2(\gamma)=2, & \text{if } \gamma=1, \\ \rho_1(\gamma)=\gamma,\ \rho_2(\gamma)=\gamma-1+\varepsilon,\ \varepsilon>0, & \text{if } \gamma>1. \end{cases}$$

These results may be immediately derived from classical strong approximations for the empirical and quantile processes, see [5, 18]. Incidentally, we mention that the approximation rate is not always $\log^2(n)/\sqrt{n}$, contrary to what is claimed in [18]. We point out that, owing to the presence of the term $(g/h)(H^{-1}(1-\alpha))$ in it, the Gaussian approximant can hardly be used for constructing ROC confidence bands. To avoid explicit computation of density estimates, bootstrap confidence sets should certainly be preferred in practice.

3 Bootstrapping empirical ROC curves

Beyond consistency of the empirical curve in sup norm and the asymptotic normality of the fluctuation process, we now tackle the question of constructing confidence bands for the true ROC curve via the bootstrap approach introduced by [6], extending pointwise results established in [17]. The latter suggests considering, as an estimate of the law of the fluctuation process $r_n = \{r_n(\alpha)\}_{\alpha \in [0,1]}$, the conditional law given $D_n$ of the bootstrapped fluctuation process

$$r_n^* = \{\sqrt{n}\,(\mathrm{ROC}^*(\alpha) - \widehat{\mathrm{ROC}}(\alpha))\}_{\alpha \in [0,1]}, \qquad (2)$$

where $\mathrm{ROC}^*$ is the ROC curve corresponding to a sample $D_n^* = \{(Z_i^*, Y_i^*)\}_{1 \le i \le n}$ of i.i.d. random pairs with a common distribution $\widetilde{P}_n$ close to $P_n$. We shall also consider

$$d_n^* = \sqrt{n}\, d_B(\mathrm{ROC}^*, \widehat{\mathrm{ROC}}), \qquad (3)$$

whose random fluctuations, given $D_n$, are expected to mimic those of $d_n = \sqrt{n}\, d_B(\widehat{\mathrm{ROC}}, \mathrm{ROC})$. The difficulty is twofold.
Firstly, the target of the bootstrap procedure is here a distribution on a path space, the ROC space being viewed as a subspace of $D([0,1])$ equipped with either $\|\cdot\|_\infty$ or else $d_B(\cdot,\cdot)$. Secondly, both $r_n$ and $d_n$ are functionals of the quantile process $\{\widehat{H}^{-1}(\alpha)\}_{\alpha \in [0,1]}$. It is well known that the naive bootstrap (i.e. resampling from the raw empirical distribution) generally provides bad approximations of the distribution of empirical quantiles in practice: the rate of convergence for a given quantile is indeed of order $O_P(n^{-1/4})$, see [7], whereas the rate of the Gaussian approximation is $n^{-1/2}$. As shall be seen below, the same phenomenon may naturally be observed for ROC curves. In a similar fashion to what is generally recommended for empirical quantiles, we suggest implementing a smoothed version of the bootstrap algorithm in order to improve the approximation rate of $\|r_n\|_\infty$'s distribution, respectively of $d_n$'s distribution. In short, this boils down to resampling the data from a smoothed version of the empirical distribution $P_n$.

3.1 The Algorithm

Here we describe the algorithm for building a confidence band at level $1-\epsilon$ in the ROC space from sampling data $D_n = \{(Z_i, Y_i);\ 1 \le i \le n\}$. Set $n_+ = \sum_{1 \le i \le n} \mathbb{I}\{Y_i=1\} = n - n_-$. It is performed in four steps, as follows.

ALGORITHM - SMOOTHED ROC BOOTSTRAP

1. Based on $D_n$, compute the empirical class cdf estimates $\widehat{G}$ and $\widehat{H}$, as well as their smoothed versions $\widetilde{G}$ and $\widetilde{H}$. Plot the ROC curve estimate: $\widehat{\mathrm{ROC}}(\alpha) = 1 - \widehat{G} \circ \widehat{H}^{-1}(1-\alpha)$, $\alpha \in [0,1]$.

2. From the smooth distribution estimate

$$\widetilde{P}_n(dz, y) = \frac{n_+}{n}\,\mathbb{I}\{y=+1\}\,\widetilde{G}(dz) + \frac{n_-}{n}\,\mathbb{I}\{y=-1\}\,\widetilde{H}(dz),$$

draw a bootstrap sample $D_n^* = \{(Z_i^*, Y_i^*)\}_{1 \le i \le n}$ conditioned on $D_n$.

3. Based on $D_n^*$, compute the bootstrap versions of the empirical class cdf estimates $G^*$ and $H^*$. Plot the bootstrap ROC curve $\mathrm{ROC}^*(\alpha) = 1 - G^* \circ H^{*-1}(1-\alpha)$, $\alpha \in [0,1]$.

4.
Finally, get the bootstrap confidence band at level $1-\epsilon$, defined as the ball of center $\widehat{\mathrm{ROC}}$ and radius $\delta_\epsilon/\sqrt{n}$ in $D([0,1])$, where $\delta_\epsilon$ is defined by $P^*(\|r_n^*\|_\infty \le \delta_\epsilon) = 1-\epsilon$ in the case of the sup norm, or by $P^*(d_n^* \le \delta_\epsilon) = 1-\epsilon$ when considering the $d_B$ distance, denoting by $P^*(\cdot)$ the conditional probability given the original data $D_n$.

Before turning to the theoretical properties of this algorithm and related numerical experiments, a few remarks are in order.

Remark 1 (MONTE-CARLO APPROXIMATION) From a computational angle, the true smoothed bootstrap distribution must be approximated in its turn, using a Monte-Carlo approximation scheme. A convenient way of doing this in practice, while preserving the theoretical advantages of smoothing, consists of drawing $B$ bootstrap samples of size $n$ with replacement from the original data and then perturbing each drawn datum by an independent centered Gaussian random variable of variance $h^2$ (this procedure is equivalent to drawing bootstrap data from a smooth estimate $\widetilde{P}_n(dz, dy)$ computed using a Gaussian kernel $K_h(u) = (2\pi h^2)^{-1/2}\exp(-u^2/(2h^2))$), see [22]. Regarding the choice of the number of bootstrap replications, picking $B = n$ does not modify the rate of convergence. However, choosing $B$ of magnitude comparable to $n$ so that $(1+B)\epsilon$ is an integer may be more appropriate: the $\epsilon$-quantile of the approximate bootstrap distribution is then uniquely defined, and this will not modify the rate of convergence either, see [15].

Remark 2 (ON TUNING PARAMETERS) The primary tuning parameters of the Algorithm are those related to the smoothing stage. When using a Gaussian regularizing kernel, one should typically choose a bandwidth $h_n$ of order $n^{-1/5}$ in order to minimize the mean squared error.

Remark 3 (ON RECENTERING) From the asymptotic analysis viewpoint, it would be fairly equivalent to recenter by a smoothed version of the original empirical curve $\widetilde{\mathrm{ROC}}(\cdot) = 1 - \widetilde{G} \circ \widetilde{H}^{-1}(1-\cdot)$ in the computation of the bootstrap fluctuation process.
However, numerically speaking, computing the sup norm of the estimate (2) is much more tractable, insofar as it solely requires evaluating the distance between piecewise constant curves over the pooled set of jump points. It should also be noticed that smoothing the original curve, as proposed in [17], should be avoided in practice, since it hides the jump locations, which constitute the essential part of the information.

3.2 Asymptotic analysis

We now investigate the accuracy of the bootstrap estimate output by the Algorithm. The result stated in the next theorem extends those established in [17] in the pointwise framework. The functional nature of the approximation result below is essential: it should be emphasized that, in most ranking applications, assessing the uncertainty about the whole estimated ROC curve, or at least some part of it, is what really matters. In the sequel, we assume that the kernel $K$ used in the smoothing step is "pyramidal" (e.g. Gaussian, or of the form $\mathbb{I}\{u \in [-1,+1]\}$).

Theorem 2 (ASYMPTOTIC ACCURACY) Suppose that the hypotheses of Theorem 1 are fulfilled. Assume further that the smoothed versions $\widetilde{G}$ and $\widetilde{H}$ of the cdfs are computed at step 1 using a scaled kernel $K_{h_n}(u)$ with $h_n \downarrow 0$ as $n \to \infty$, in such a way that $nh_n^3 \to \infty$ and $nh_n^5 \log^2 n \to 0$. Then, the bootstrap distribution estimates output by the Algorithm are such that

$$\sup_{t \in \mathbb{R}} |P^*(\|r_n^*\|_\infty \le t) - P(\|r_n\|_\infty \le t)| \quad \text{and} \quad \sup_{t \in \mathbb{R}} |P^*(d_n^* \le t) - P(d_n \le t)|$$

are of order $o_P\big(\log(h_n^{-1})/\sqrt{n h_n}\big)$.

Hence, up to logarithmic factors, choosing $h_n \sim n^{-1/5}\log^{-(2+\eta)} n$ with $\eta > 0$ yields an approximation error of order $n^{-2/5}$ for the bootstrap estimate. Although its rate is slower than that of the Gaussian approximation (1), the smoothed bootstrap method remains very appealing from a computational perspective, the construction of confidence bands from simulated Brownian bridges being very difficult to implement in practice.
As shall be seen below, the rate reached by the smoothed bootstrap distribution is nevertheless a great improvement over the naive bootstrap approach (see the discussion below).

Remark 4 (BOOTSTRAPPING SUMMARY STATISTICS) From Theorem 1 above, asymptotic validity of the smoothed bootstrap method for estimating the distribution of the fluctuations of a functional $\Phi(\widehat{\mathrm{ROC}})$ of the empirical ROC curve may be deduced, as soon as the function $\Phi$ defined on $D([0,1])$ is sufficiently smooth (namely, continuously Hadamard differentiable). For instance, it could be applied to summary statistics involving only a specific piece of the ROC curve, in order to focus on the "best instances" [3], or more classically to the area under the ROC curve (AUC). However, in the latter case, due to the fact that this particular summary statistic is of the form of a U-statistic [2], the naive bootstrap rate is faster than the one we obtained here (of order $n^{-1}$).

3.3 Simulation results

The striking advantage of the smoothed bootstrap is the improved rate of convergence of the resulting estimator. Furthermore, choosing $d_B$ for measuring the order of magnitude of curve fluctuations has an even larger impact on the accuracy of the empirical bands. As an illustration of this theoretical result, we now display simulation results, emphasizing the gain acquired by smoothing and by considering the pseudo-metric $d_B$. We present confidence bands for a single trajectory, and the estimation of the coverage probability of the bands, for a simple binormal model: $Y_i = +1$ if $\beta_0 + \beta_1 X + \varepsilon > 0$, and $Y_i = -1$ otherwise, where $\varepsilon$ and $X$ are independent standard normal r.v.'s. In this example, the scoring function $s(x)$ is the maximum likelihood estimator of the probit model on the training set. We choose here $\beta_0 = \beta_1 = 1$, $n = 1000$, $B = 999$, and $\gamma = 0.95$ for the targeted coverage probability. Coverage probabilities are obtained over 2000 replications of the procedure, using the package ROCR of the statistical software R.
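For such a binormal sample, the four-step Algorithm of Section 3.1 can be sketched directly. The version below is a minimal Monte-Carlo illustration using the Gaussian-jitter form of smoothing from Remark 1 and the sup-norm radius of step 4; the function names, the fixed bandwidth `h`, and the values of `B` and `eps` are our own illustrative choices, not the paper's implementation (and the quantile of the unscaled sup-deviations plays the role of $\delta_\epsilon/\sqrt{n}$).

```python
import numpy as np

rng = np.random.default_rng(0)

def roc_curve(pos, neg, alphas):
    """Empirical ROC_hat(alpha) = 1 - G_hat(H_hat^{-1}(1-alpha)) (a sketch)."""
    alphas = np.asarray(alphas)
    neg_s = np.sort(neg)
    k = np.clip(np.ceil((1 - alphas) * len(neg_s)).astype(int) - 1, 0, len(neg_s) - 1)
    thr = neg_s[k]                                  # generalized inverse H_hat^{-1}(1-alpha)
    return np.array([np.mean(pos > t) for t in thr])

def smoothed_bootstrap_band(pos, neg, alphas, B=200, h=0.1, eps=0.05):
    """Steps 1-4 of the Algorithm, sup-norm version, with Gaussian-jitter smoothing."""
    roc_hat = roc_curve(pos, neg, alphas)           # step 1
    sup_dev = np.empty(B)
    for b in range(B):
        # step 2: resample with replacement, then perturb by N(0, h^2) (Remark 1)
        pos_b = rng.choice(pos, size=len(pos)) + h * rng.standard_normal(len(pos))
        neg_b = rng.choice(neg, size=len(neg)) + h * rng.standard_normal(len(neg))
        # step 3: bootstrap ROC curve; its sup-deviation feeds step 4
        sup_dev[b] = np.max(np.abs(roc_curve(pos_b, neg_b, alphas) - roc_hat))
    delta = np.quantile(sup_dev, 1 - eps)           # step 4: bootstrap radius
    return roc_hat, np.clip(roc_hat - delta, 0, 1), np.clip(roc_hat + delta, 0, 1)
```

Replacing the sup-deviation by the $d_B$ pseudo-distance between the bootstrap and original curves yields the $d_B$ variant of step 4.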
As mentioned before, choosing ||·||_∞ yields very large bands, with coverage probability close to 1! Though still large, bands based on the pseudo-metric d_B are clearly much more informative (see Fig. 1). It should be noted that the coverage improvement obtained by smoothing is clearer in the pointwise estimation setup (here α = 0.2), but much more difficult to evidence for confidence bands.

Table 1: Empirical coverage probabilities for 95% empirical bands/intervals.

  METHOD                              | COVERAGE (%)
  NAIVE BOOTSTRAP, ||r_n||_∞          | 100
  SMOOTHED BOOTSTRAP, ||r_n||_∞       | 100
  NAIVE BOOTSTRAP, d_n                | 90.3
  SMOOTHED BOOTSTRAP, d_n             | 93.1
  NAIVE BOOTSTRAP, r_n(0.2)           | 89.7
  SMOOTHED BOOTSTRAP, r_n(0.2)        | 92.5

[Figure 1: ROC confidence bands. Three panels, each plotting true positive rate against false positive rate: (a) the ||·||_∞ confidence band, (b) the d_B confidence band, and (c) the pointwise smoothed bootstrap confidence interval.]

4 Discussion

Let us now give an insight into the reason why the smoothed bootstrap procedure outperforms the bootstrap without smoothing. In most statistical problems where the nonparametric bootstrap is useful, there is no particular reason to implement it from a smoothed version of the empirical df rather than from the raw empirical distribution itself; see [22]. In the present case, however, smoothing affects the rate of convergence. Suppose indeed that the bootstrap process (2) is built by drawing from the raw cdf's Ĝ and Ĥ instead of their smoothed versions at step 2 of the Algorithm. Then, for any α ∈ ]0, 1[,

  sup_{t∈R} |P*(r*_n(α) ≤ t) − P(r_n(α) ≤ t)| = O_P(n^{−1/4}).
Hence, the naive bootstrap induces an error of order O(n^{−1/4}) which cannot be improved, whereas the rate n^{−2/5} may be shown to be attained by the smoothed bootstrap (in a similar fashion to the functional setup), provided that the amount of smoothing is properly chosen. Heuristically, this is a consequence of the oscillation behavior of the deviation between the bootstrap quantile H*^{−1}(1 − α) and its expected value Ĥ^{−1}(1 − α) given the data D_n, due to the fact that the step cdf Ĥ is not regular around Ĥ^{−1}(1 − α): this point corresponds to a jump with probability one.

Higher-order accuracy. A classical way of improving the pointwise approximation rate consists of bootstrapping a standardized version of the r.v. r_n(α). It is natural to consider, as standardization factor, the square root of an estimate of the asymptotic variance:

  σ²(α) = var(z^{(n)}(α)) = [α(1 − α) / (1 − p)] · [g(H^{−1}(1 − α))² / h(H^{−1}(1 − α))²] + ROC(α)(1 − ROC(α)) / p.  (4)

An estimate σ̂²_n of plug-in type could be considered, obtained by plugging n₊/n, the smoothed ROC estimate and the smoothed density estimators h̃ = H̃′ and g̃ = G̃′ into (4) in place of their (unknown) theoretical counterparts. More interestingly from a computational viewpoint, a bootstrap estimator of the variance could also be used. Following the argument used in [17] for a smoothed original estimate of the ROC curve, one may show that a smoothed bootstrap of the studentized statistic r_n(α)/σ_n(α) yields a better pointwise rate of convergence than the rate 1/√n of the Gaussian approximation in the Central Limit Theorem. Precisely, for a given α ∈ ]0, 1[, if the bandwidth used in the computation of σ²_n(α) is chosen of order n^{−1/3}, we have

  sup_{t∈R} | P*( r*_n(α)/σ*_n(α) ≤ t ) − P( r_n(α)/σ_n(α) ≤ t ) | = O_P(n^{−2/3}),  (5)

denoting by σ*²_n(α) the bootstrap counterpart of σ²_n(α). Notice that the bandwidth used in the standardization step (i.e. for estimating the variance) is not the same as the one used at the resampling stage of the procedure.
This is a key point for achieving second-order accuracy. This time, the smoothed (studentized) bootstrap method widely outperforms the Gaussian approach when it comes to building confidence intervals for the ordinate \widehat{ROC}(α) of the point of abscissa α on the empirical ROC curve. However, it is not yet clear whether this result remains true for confidence bands, i.e. when considering the whole ROC curve (this would actually require establishing an Edgeworth expansion for the supremum ||r_n/σ̂_n||_∞). This will be the scope of further research.

References

[1] P. Bertail, S. Clémençon, and N. Vayatis. On constructing accurate confidence bands for ROC curves through smooth resampling. Technical report, 2008. http://hal.archives-ouvertes.fr/hal-00335232/fr/.
[2] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In Proceedings of COLT 2005, Eds P. Auer and R. Meir, LNAI 3559, Springer, 2005.
[3] S. Clémençon and N. Vayatis. Ranking the best instances. Journal of Machine Learning Research, 5:197–227, 2007.
[4] W. Cohen, R. Schapire, and Y. Singer. Learning to order things. Journal of Artificial Intelligence Research, 10:243–270, 1999.
[5] M. Csörgő and P. Révész. Strong approximations in probability and statistics. Academic Press, 1981.
[6] B. Efron. Bootstrap methods: another look at the jackknife. Annals of Statistics, 7:1–26, 1979.
[7] M. Falk and R. Reiss. Weak convergence of smoothed and nonsmoothed bootstrap quantile estimates. Annals of Probability, 17:362–371, 1989.
[8] T. Fawcett. ROC graphs: Notes and practical considerations for data mining researchers. Technical Report HPL-2003-4, 2003.
[9] P. Flach. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In T. Fawcett and N. Mishra, editors, Proc. 20th International Conference on Machine Learning (ICML'03), pages 194–201. AAAI Press, 2003.
[10] Y. Freund, R. Iyer, R. Schapire, and Y. Singer.
An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[11] P. Ghosal and J. Gu. Bayesian ROC curve estimation under binormality using a partial likelihood based on ranks. Submitted for publication, 2007.
[12] P. Ghosal and J. Gu. Strong approximations for resample quantile process and application to ROC methodology. Submitted for publication, 2007.
[13] A. Girling. ROC confidence bands: An empirical evaluation. Journal of the Royal Statistical Society, Series B, 62:367–382, 2000.
[14] D. Green and J. Swets. Signal detection theory and psychophysics. Wiley, NY, 1966.
[15] P. Hall. On the number of bootstrap simulations required to construct a confidence interval. Annals of Statistics, 14:1453–1462, 1986.
[16] P. Hall and R. Hyndman. Improved methods for bandwidth selection when estimating ROC curves. Statistics and Probability Letters, 64:181–189, 2003.
[17] P. Hall, R. Hyndman, and Y. Fan. Nonparametric confidence intervals for receiver operating characteristic curves. Biometrika, 91:743–750, 2004.
[18] F. Hsieh and B. Turnbull. Nonparametric and semi-parametric statistical estimation of the ROC curve. The Annals of Statistics, 24:25–40, 1996.
[19] S. Macskassy and F. Provost. Confidence bands for ROC curves: methods and an empirical study. In Proceedings of the First Workshop on ROC Analysis in AI (ROCAI-2004) at ECAI-2004, 2004.
[20] S. Macskassy, F. Provost, and S. Rosset. Bootstrapping the ROC curve: an empirical evaluation. In Proceedings of the ICML-2005 Workshop on ROC Analysis in Machine Learning (ROCML-2005), 2005.
[21] S. Macskassy, F. Provost, and S. Rosset. ROC confidence bands: An empirical evaluation. In Proceedings of the 22nd International Conference on Machine Learning (ICML-2005), 2005.
[22] B. Silverman and G. Young. The bootstrap: to smooth or not to smooth? Biometrika, 74:469–479, 1987.
QUIC-SVD: Fast SVD Using Cosine Trees Michael P. Holmes, Alexander G. Gray and Charles Lee Isbell, Jr. College of Computing Georgia Tech Atlanta, GA 30327 {mph, agray, isbell}@cc.gatech.edu Abstract The Singular Value Decomposition is a key operation in many machine learning methods. Its computational cost, however, makes it unscalable and impractical for applications involving large datasets or real-time responsiveness, which are becoming increasingly common. We present a new method, QUIC-SVD, for fast approximation of the whole-matrix SVD based on a new sampling mechanism called the cosine tree. Our empirical tests show speedups of several orders of magnitude over exact SVD. Such scalability should enable QUIC-SVD to accelerate and enable a wide array of SVD-based methods and applications. 1 Introduction The Singular Value Decomposition (SVD) is a fundamental linear algebraic operation whose abundant useful properties have placed it at the computational center of many methods in machine learning and related fields. Principal component analysis (PCA) and its kernel and nonlinear variants are prominent examples, and countless other instances are found in manifold and metric learning, clustering, natural language processing/search, collaborative filtering, bioinformatics and more. Notwithstanding the utility of the SVD, it is critically bottlenecked by a computational complexity that renders it impractical on massive datasets. Yet massive datasets are increasingly common in applications, many of which require real-time responsiveness. Such applications could use SVDbased methods more liberally if the SVD were not so slow to compute. We present a new method, QUIC-SVD, for fast, sample-based SVD approximation with automatic relative error control. This algorithm is based on a new type of data partitioning tree, the cosine tree, that shows excellent ability to home in on the subspaces needed for good SVD approximation. 
We demonstrate several-order-of-magnitude speedups on medium-sized datasets, and verify that approximation error is properly controlled. Based on these results, QUIC-SVD seems able to help address the scale of modern problems and datasets, with the potential to benefit a wide array of methods and applications.

2 Background

For A ∈ R^{m×n}, we write A_(i) for the ith row of A and A^(j) for the jth column. We use O^{m×n} to represent the subset of R^{m×n} of matrices whose columns are orthonormal. Since the columns of V ∈ O^{m×n} are an orthonormal basis, we sometimes use expressions such as "the subspace V" to refer to the subspace spanned by the columns of V. Throughout this paper we assume m ≥ n, so that sampling rows gives a bigger speedup than sampling columns. This is no loss of generality, since whenever m < n we can perform the SVD on the transpose, then swap U and V to get the SVD of the original matrix. Alternatively, row-sampling-based methods have analogous column-sampling versions that can be used in place of transposition; we leave this implicit and develop only the row-sampling version of our algorithm.

Algorithm 1: Optimal approximate SVD within a row subspace V̂ (EXTRACTSVD).
Input: target matrix A ∈ R^{m×n}, subspace basis V̂ ∈ O^{n×k}.
Output: U, Σ, V, the SVD of the best approximation to A within the subspace spanned by V̂'s columns.
1. Compute AV̂, then (AV̂)ᵀAV̂ and its SVD: U′Σ′V′ᵀ = (AV̂)ᵀAV̂.
2. Let V = V̂V′, Σ = (Σ′)^{1/2}, and U = (AV̂)V′Σ^{−1}.
3. Return U, Σ, V.

The singular value decomposition is defined as follows:

Definition 1. Let A be an m × n real matrix of rank ρ. Then there exists a factorization of the form

  A = UΣVᵀ,  (1)

where U and V each have orthonormal columns and are of size m × ρ and n × ρ, respectively, and Σ is diagonal with entries σ₁ ≥ σ₂ ≥ … ≥ σ_ρ > 0. Equivalently, we can write the SVD as a weighted sum of rank-one outer products: A = Σ_{i=1}^{ρ} σᵢuᵢvᵢᵀ, where uᵢ and vᵢ denote the ith columns of U and V.
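Algorithm 1 translates almost line-for-line into NumPy. The sketch below is an illustrative reimplementation (not the authors' code); it assumes the projection AV̂ has full rank k, so that Σ is invertible in step 2:

```python
import numpy as np

def extract_svd(A, V_hat):
    """Best approximate SVD of A restricted to the row subspace spanned by
    the orthonormal columns of V_hat (a sketch of Algorithm 1, EXTRACTSVD).

    Assumes A @ V_hat has full rank k, so no singular value is zero."""
    B = A @ V_hat                      # step 1: project rows onto the subspace
    # SVD of the small k x k matrix B^T B = U' S' U'^T (symmetric PSD)
    Up, Sp, _ = np.linalg.svd(B.T @ B)
    S = np.sqrt(Sp)                    # step 2: singular values of projection
    V = V_hat @ Up
    U = B @ Up / S                     # broadcasting divides column j by S[j]
    return U, S, V
```

The reconstruction U·diag(S)·Vᵀ equals A V̂ V̂ᵀ exactly, matching property 2 of Lemma 1 below; the expensive part is the O(kmn) product AV̂, since the SVD in step 1 only touches a k × k matrix.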
The columns uᵢ and vᵢ are referred to as the left and right singular vectors, while the weights σᵢ are the singular values. Though it is sometimes overkill, the SVD can be used to solve essentially any problem in numerical linear algebra, and instances of such problems abound in machine learning. Given m ≥ n, the exact SVD has O(mn²) runtime (O(n³) for square matrices). This is highly unscalable, rendering exact SVD impractical for large datasets. However, it is often the case that good approximations can be found using subsets of the rows or columns. Of significant interest are low-rank approximations to a matrix. The optimal k-rank approximation, in the sense of minimizing the squared error ||A − Â||_F², is the k-rank truncation of the SVD:

  A_k = Σ_{i=1}^{k} σᵢuᵢvᵢᵀ = U_kΣ_kV_kᵀ.  (2)

A_k is the projection of A's rows onto the subspace spanned by the top k right singular vectors, i.e., A_k = AV_kV_kᵀ. The optimality of A_k implies that the columns of V_k span the subspace of dimension at most k in which the squared error of A's row-wise projection is minimized. This leads us to a formulation of SVD approximation in which we seek a subspace in which A's projection has sufficiently low error, then perform the SVD of A in that subspace. If the subspace is of substantially lower rank/dimension than A, the SVD of the projection can be computed significantly faster than the SVD of the original A (quadratically so, since we will have decreased the n in O(mn²)). An important procedure we will require is the extraction of the best approximate SVD within a given subspace V̂. Algorithm 1 describes this process; portions of this idea appeared in [1] and [2], but without enumeration of its properties. We state some of the key properties as a lemma.

Lemma 1. Given a target matrix A and a row subspace basis stored in the columns of V̂, EXTRACTSVD has the following properties:
1. It returns a full SVD, meaning U and V with orthonormal columns, and Σ diagonal.
2.
UΣVᵀ = AV̂V̂ᵀ, i.e., the extracted SVD reconstructs exactly the projection of A's rows onto the subspace spanned by V̂.
3. UΣVᵀ minimizes the squared-error reconstruction of A among all SVDs whose rows are restricted to the span of V̂.

We omit the fairly straightforward proof. The runtime of the procedure is O(kmn), where k is the rank of V̂. As this SVD extraction will constitute the last and most expensive step of our algorithm, we require a subspace discovery method that finds a subspace of sufficient quality with as low a rank k as possible. This motivates the essential idea of our approach, which is to leverage the geometric structure of a matrix to efficiently derive compact (i.e., minimal-rank) subspaces in which to carry out the approximate SVD.

Table 1: Distinctions between whole-matrix SVD approximation and LRMA.

  Whole-Matrix SVD Approximation   | Low-Rank Matrix Approximation
  True SVD: U, Σ, and V            | Â, or unaligned V̂ & Σ̂ only
  Addresses the full-rank matrix   | Fixed low rank k
  Full-rank relative error bound   | k-rank error bound, additive or relative

Table 2: Distinctions between subspace construction in QUIC-SVD and previous LRMA methods.

  QUIC-SVD                                         | Previous LRMA Methods
  Iterative buildup, fast empirical error control  | One-off computation, loose error bound
  Adaptive sample size minimization                | Fixed a priori sample size (loose)
  Cosine tree sampling                             | Various sampling schemes

Previous Work. A recent vein of work in the theory and algorithms community has focused on using sampling to solve the problem of low-rank matrix approximation (LRMA). The user specifies a desired low rank k, and the algorithms try to output something close to the optimal k-rank approximation. This problem is different from the whole-matrix SVD approximation we address, but a close relationship allows us to draw on some of the LRMA ideas. Table 1 highlights the distinctions between whole-matrix SVD approximation and LRMA.
Table 2 summarizes the differences between our algorithmic approach and the more theoretically oriented approaches taken in the LRMA work. Each LRMA algorithm has a way of sampling to build up a subspace in which the matrix projection has bounded error. Our SVD also samples to build a subspace, so the LRMA sampling methods are directly comparable to our tree-based approach. Three main LRMA sampling techniques have emerged,¹ and we will discuss each from the perspective of iteratively sampling a row, updating a subspace so it spans the new row, and continuing until the subspace captures the input matrix to within a desired error threshold. This is how our method works, and it is similar to the framework used by Friedland et al. [1]. The key to efficiency (i.e., rank-compactness) is for each sampled row to represent well the rows that are not yet well represented in the subspace.

Length-squared (LS) sampling. Rows are sampled with probability proportional to their squared lengths: p_i = ||A_(i)||² / ||A||_F². LS sampling was used in the seminal work of Frieze, Kannan, and Vempala [3], and in much of the follow-on work [4, 5]. It is essentially an importance sampling scheme for the squared error objective. However, it has two important weaknesses. First, a row can have high norm while not being representative of other rows. Second, the distribution is non-adaptive, in that a point is equally likely to be drawn whether or not it is already well represented in the subspace. Both of these lead to wasted samples and needless inflation of the subspace rank.

Residual length-squared (RLS) sampling. Introduced by Deshpande and Vempala [2], RLS modifies the LS probabilities after each subspace update by setting p_i = ||A_(i) − Π_V(A_(i))||² / ||A − Π_V(A)||_F², where Π_V denotes projection onto the current subspace V. By adapting the LS distribution to be over residuals, this method avoids drawing samples that are already well represented in the subspace.
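Both the LS and the RLS distributions just defined are one-liners in NumPy. The sketch below is illustrative (the function names are hypothetical, not the paper's API); `einsum` computes the squared row norms in a single pass:

```python
import numpy as np

def row_sample_ls(A, rng):
    """Length-squared sampling: draw row i with p_i = ||A_(i)||^2 / ||A||_F^2."""
    sq = np.einsum('ij,ij->i', A, A)          # squared row norms
    return rng.choice(A.shape[0], p=sq / sq.sum())

def row_sample_rls(A, V, rng):
    """Residual length-squared sampling: same idea, but row lengths are
    measured after removing the part already captured by the subspace V
    (an n x k matrix with orthonormal columns)."""
    R = A - (A @ V) @ V.T                     # residual of every row w.r.t. V
    sq = np.einsum('ij,ij->i', R, R)
    return rng.choice(A.shape[0], p=sq / sq.sum())
```

The adaptivity of RLS is visible in a tiny example: if V already spans the direction of a dominant row, that row's residual is zero and it can never be drawn again, whereas LS would keep wasting draws on it.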
Unfortunately, there is still nothing to enforce that any sample will be representative of other high-residual samples. Further, updating the residuals requires an expensive s passes through the matrix for every s samples that are added, which significantly limits practical utility.

Random projections (RP). Introduced by Sarlós [6], the idea is to sample linear combinations of rows, with random combination coefficients drawn from a Gaussian. This method is strong where LS and RLS are weak: because all rows influence every sample, each sample is likely to represent a sizeable number of rows. Unfortunately, the combination coefficients are not informed by importance (squared length), and the sampling distribution is non-adaptive. Further, each linear combination requires a full matrix pass, again limiting practicality. Also deserving mention is the randomized sparsification used by Achlioptas et al. [7]. Each of the LRMA sampling methods has strengths we can draw on and weaknesses we can improve upon. In particular, our cosine tree sampling method can be viewed as combining the representativeness of RP sampling with the adaptivity of RLS, which explains its empirically dominant rank efficiency.

¹Note that our summary of related work is necessarily incomplete due to space constraints; our intent is to summarize the essential results from the LRMA literature inasmuch as they pertain to our approach.

Algorithm 2: Cosine tree construction.
CTNODE
Input: A ∈ R^{m×n}.
Output: a cosine tree node containing the rows of A.
1. N ← new cosine tree node
2. N.A ← A
3. N.splitPt ← ROWSAMPLELS(A)  // split point sampled from the length-squared distribution
4. return N

CTNODESPLIT
Input: cosine tree node N.
Output: left and right children obtained by cosine-splitting of N.
1. for each N.A_(i), compute cᵢ = |cos(N.A_(i), N.splitPt)|
2. if ∀i, cᵢ = 1, return nil
3. c_max = max{cᵢ | cᵢ < 1}; c_min = min{cᵢ}
4. A_l ← [ ]; A_r ← [ ]
5.
for i = 1 to N.nRows:
   (a) if c_max − cᵢ ≤ cᵢ − c_min, A_l ← [A_l; N.A_(i)]
   (b) else A_r ← [A_r; N.A_(i)]
6. return CTNODE(A_l), CTNODE(A_r)

3 Our Approach

Rather than a fixed low-rank matrix approximation, our objective is to approximate the whole-matrix SVD with as high a rank as is required to obtain the following whole-matrix relative error bound:

  ||A − Â||_F² ≤ ϵ ||A||_F²,  (3)

where Â = UΣVᵀ is the matrix reconstructed by our SVD approximation. In contrast to the error bounds of previous methods, which are stated in terms of the unknown low-rank A_k, our error bound is in terms of the known A. This enables us to use a fast, empirical Monte Carlo technique to determine with high confidence when we have achieved the error target, and therefore to terminate with as few samples and as compact a subspace as possible. Minimizing subspace rank is crucial for speed, as the final SVD extraction is greatly slowed by excess rank when the input matrix is large. We use an iterative subspace buildup as described in the previous section, with sampling governed by a new spatial partitioning structure we call the cosine tree. Cosine trees are designed to leverage the geometrical structure of a matrix and a partial subspace in order to quickly home in on good representative samples from the regions least well represented. Key to the efficiency of our algorithm is an efficient error checking scheme, which we accomplish by Monte Carlo error estimation at judiciously chosen stages. Such a combination of spatial partitioning trees and Monte Carlo estimation has been used before to good effect [8], and we find it to be a successful pairing here as well.

Cosine Trees for Efficient Subspace Discovery. The ideal subspace discovery algorithm would oracularly choose the singular vectors vᵢ as samples. Each vᵢ is precisely the direction that, added to the subspace spanned by the previous singular vectors, will maximally decrease the residual error over all rows of the matrix.
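The cosine split at the heart of Algorithm 2 amounts to thresholding rows by their absolute cosine with the pivot. A minimal NumPy sketch (illustrative only; it ignores the tree bookkeeping around the split, and the function name is hypothetical):

```python
import numpy as np

def cosine_split(A, pivot):
    """Split the rows of A by |cosine| with a pivot row (CTNODESPLIT sketch).

    Rows whose |cosine| is closer to the maximum go to the left child and the
    rest to the right; if every row is parallel to the pivot, the node cannot
    be split and None is returned (a leaf in the tree)."""
    denom = np.linalg.norm(A, axis=1) * np.linalg.norm(pivot)
    c = np.abs(A @ pivot) / np.where(denom == 0, 1.0, denom)
    if np.all(c >= 1 - 1e-12):
        return None                          # all rows parallel: leaf node
    cmax, cmin = c[c < 1 - 1e-12].max(), c.min()
    left = (cmax - c) <= (c - cmin)          # closer to the high-cosine end
    return A[left], A[~left]
```

For example, with rows clustered near the directions e₁ and e₂ and a pivot along e₁, the split cleanly separates the two clusters, which is exactly the grouping-by-parallelism behavior the narrative above describes.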
This intuition is the guiding idea for cosine trees. A cosine tree is constructed as follows. Starting with a root node, which contains all points (rows), we take its centroid as a representative to include in our subspace span, and randomly sample a point to serve as the pivot for splitting. We sample the pivot from the basic LS distribution, that being the cheapest source of information as to sample importance. The remaining points are sorted by their absolute cosines relative to the pivot point, then split according to whether they are closer to the high or low end of the cosines. The two groups are assigned to two child nodes, which are placed in a queue prioritized by the residual error of each node. The process is then repeated according to the priority order of the queue. Algorithm 2 defines the splitting process.

Why do cosine trees improve sampling efficiency? By prioritizing expansion by the residual error of the frontier nodes, sampling is always focused on the areas with maximum potential for error reduction. Since cosine-based splitting guides the nodes toward groupings with higher parallelism, the residual magnitude of each node is increasingly likely to be well captured along the direction of the node centroid. Expanding the subspace in the direction of the highest-priority node centroid is therefore a good guess as to the direction that will maximally reduce residual error. Thus, cosine tree sampling approximates the ideal of oracularly sampling the true singular vectors.

Algorithm 3: Monte Carlo estimation of the squared error of a matrix projection onto a subspace.
MCSQERROR
Input: A ∈ R^{m×n}, V̂ ∈ O^{n×k}, s ∈ {1 … m}, δ ∈ [0, 1].
Output: sqErr ∈ R such that, with probability at least 1 − δ, ||A − AV̂V̂ᵀ||_F² ≤ sqErr.
1. S = ROWSAMPLESLS(A, s)  // sample s rows from the length-squared distribution
2. for i = 1 to s:  // compute the weighted squared magnitude of each sampled row's projection onto V̂
   (a) wgtMagSq[i] = (1/p_{S(i)}) ||S_(i)V̂||²  // p_{S(i)} is the probability of drawing S_(i) under LS sampling
3. μ̂ = avg(wgtMagSq); σ̂² = var(wgtMagSq); magSqLB = lowBound(μ̂, σ̂², s, δ)
4. return ||A||_F² − magSqLB

Algorithm 4: QUIC-SVD, fast whole-matrix approximate SVD with relative error control.
QUIC-SVD
Input: A ∈ R^{m×n}, ϵ ∈ [0, 1], and δ ∈ [0, 1].
Output: an SVD U, Σ, V such that Â = UΣVᵀ satisfies ||A − Â||_F² ≤ ϵ||A||_F² with probability at least 1 − δ.
1. V = [ ]; mcSqErr = ||A||_F²; N_root = CTNODE(A)
2. Q = EMPTYPRIORITYQUEUE(); Q.insert(N_root, 0)
3. do until mcSqErr ≤ ϵ||A||_F²:
   (a) N = Q.pop(); C = CTNODESPLIT(N)  // C = {N_l, N_r}, the children of N
   (b) remove N's contributed basis vector from V
   (c) for each N_c ∈ C:
       i. V = [V  MGS(V, N_c.centroid)]  // MGS = modified Gram-Schmidt orthonormalization
   (d) for each N_c ∈ C:
       i. errC = MCSQERROR(N_c.A, V, O(log[N_c.nRows]), δ)
       ii. Q.insert(N_c, errC)
   (e) mcSqErr = MCSQERROR(A, V, O(log m), δ)
4. return EXTRACTSVD(A, V)

3.1 QUIC-SVD

Strong error control. Algorithm 4, QUIC-SVD (QUantized Iterative Cosine tree),² specifies a way to leverage cosine trees in the construction of an approximate SVD while providing a strong probabilistic error guarantee. The algorithm builds a subspace by expanding a cosine tree as described above, checking the residual error after each expansion. Once the residual error is sufficiently low, we return the SVD of the projection into the subspace. Note that exact error checking would require an expensive O(k²mn) total cost, where k is the final subspace rank, so we instead use a Monte Carlo error estimate as specified in Algorithm 3. We also employ Algorithm 3 for the error estimates used in node prioritization. With Monte Carlo instead of exact error computations, the total cost of error checking decreases to O(k²n log m), a significant practical reduction.

²Quantized alludes to each node being represented by a single point that is added to the subspace basis.
The other main contributions to the runtime are: 1) the k cosine tree node splits, for a total of O(kmn); 2) O(k) single-vector Gram-Schmidt orthonormalizations at O(km) each, for a total of O(k²m); and 3) the final SVD extraction, at O(kmn). The total runtime is therefore O(kmn), with the final projection onto the subspace being the costliest step, since the O(kmn) from node splitting is a very loose worst-case bound. We now state the QUIC-SVD error guarantee.

Theorem 1. Given a matrix A ∈ R^{m×n} and ϵ, δ ∈ [0, 1], the algorithm QUIC-SVD returns an SVD U, Σ, V such that Â = UΣVᵀ satisfies ||A − Â||_F² ≤ ϵ||A||_F² with probability at least 1 − δ.

Proof sketch. The algorithm terminates after mcSqErr ≤ ϵ||A||_F² with a call to EXTRACTSVD. From Lemma 1 we know that EXTRACTSVD returns an SVD that reconstructs to A's projection onto V (i.e., Â = AVVᵀ). Thus, we have only to show that mcSqErr in the terminal iteration is an upper bound on the error ||A − Â||_F² with probability at least 1 − δ. Note that the intermediate error checks do not affect the success probability, since they only ever tell us to continue expanding the subspace, which is never a failure. From the Pythagorean theorem, ||A − AVVᵀ||_F² = ||A||_F² − ||AVVᵀ||_F², and, since rotations do not affect lengths, ||AVVᵀ||_F² = ||AV||_F². The call to MCSQERROR (step 3(e)) performs a Monte Carlo estimate of ||AV||_F² in order to estimate ||A||_F² − ||AV||_F². It is easily verified that the length-squared-weighted sample mean used by MCSQERROR produces an unbiased estimate of ||AV||_F². By using a valid confidence interval to generate a 1 − δ lower bound on ||AV||_F² from the sample mean and variance (e.g., Theorem 1 of [9] or similar), MCSQERROR is guaranteed to return an upper bound on ||A||_F² − ||AV||_F² with probability at least 1 − δ, which establishes the theorem.

Relaxed error control.
Though the QUIC-SVD procedure specified in Algorithm 4 provides a strong error guarantee, in practice its error checking routine is overconservative and is invoked more frequently than necessary. For practical usage, we therefore approximate the strict error checking of Algorithm 4 by making three modifications:
1. Set mcSqErr to the mean, rather than the lower bound, of the MCSQERROR estimate.
2. At each error check, estimate mcSqErr with several repeated Monte Carlo evaluations (i.e., calls to MCSQERROR), terminating only if they all result in mcSqErr ≤ ϵ||A||_F².
3. In each iteration, use a linear extrapolation from past decreases in error to estimate the number of additional node splits required to achieve the error target. Perform this projected number of splits before checking the error again, thus eliminating needless intermediate error checks.

Although these modifications forfeit the strict guarantee of Theorem 1, they are principled approximations that accelerate the computation more aggressively while still keeping the error well under control (this will be demonstrated empirically). Changes 1 and 2 are based on the fact that, because mcSqErr is an unbiased estimate generated by a sample mean, it obeys the Central Limit Theorem and thus approaches a normal distribution centered on the true squared error. Under such a symmetric distribution, the probability that a single evaluation of mcSqErr will exceed the true error is 0.5. The probability that, in a series of x evaluations, at least one of them will exceed the true error is approximately 1 − 0.5^x (one minus the probability that they all come in below the true error). The probability that at least one of our mcSqErr evaluations results in an upper bound on the true error (i.e., the probability that our error check is correct) thus goes quickly to 1. In our experiments, we use x = 3, corresponding to a success probability of approximately 0.9 (i.e., δ ≈ 0.1).
Change 3 exploits the fact that the rate at which the error decreases is typically monotonically non-increasing. Thus, extrapolating the rate of error decrease from past error evaluations yields a conservative estimate of the number of splits required to achieve the error target. Naturally, we have to impose limits to guard against outlier cases where the estimated number is unreasonably high. Our experiments limit the size of the split jumps to be no more than 100.

4 Performance

[Figure 1: Relative squared error vs. subspace rank for the various subspace discovery methods, on (a) madelon kernel (2000 × 2000) and (b) declaration (4656 × 3923). LS is length-squared, RLS is residual length-squared, RP is random projection, and CT is cosine tree; "Opt" marks the optimal error of the exact SVD at each rank.]

We report the results of two sets of experiments, one comparing the sample efficiency of cosine trees to previous LRMA sampling methods, and the other evaluating the composite speed and error performance of QUIC-SVD. Due to space considerations we give results for only two datasets, and
Figure 1 shows results for the various sampling methods on two matrices, one a 2000 × 2000 Gaussian kernel matrix produced by the Madelon dataset from the NIPS 2003 Workshop on Feature Extraction (madelon kernel), and the other a 4656 × 3923 scan of the US Declaration of Independence (declaration). Plotted is the relative squared error of the input matrix’s projection onto the subspaces generated by each method at each subspace rank. Also shown is the optimal error produced by the exact SVD at each rank. Both graphs show cosine trees dominating the other methods in terms of rank efficiency. This dominance has been confirmed by many other empirical results we lack space to report here. It is particularly interesting how closely the cosine tree error can track that of the exact SVD. This would seem to give some justification to the principle of grouping points according to their degree of mutual parallelism, and validates our use of cosine trees as the sampling mechanism for QUIC-SVD. Speedup and error. In the second set of experiments we evaluate the runtime and error performance of QUIC-SVD. Figure 2 shows results for the madelon kernel and declaration matrices. On the top row we show how speedup over exact SVD varies with the target error . Speedups range from 831 at  = 0.0025 to over 3,600 at  = 0.023 for madelon kernel, and from 118 at  = 0.01 to nearly 20,000 at  = 0.03 for declaration. On the bottom row we show the actual error of the algorithm in comparison to the target error. While the actual error is most often slightly above the target, it nevertheless hugs the target line quite closely, never exceeding the target by more than 10%. Overall, the several-order-of-magnitude speedups and controlled error shown by QUIC-SVD would seem to make it an attractive option for any algorithm computing costly SVDs. 
5 Conclusion

We have presented a fast approximate SVD algorithm, QUIC-SVD, and demonstrated several-order-of-magnitude speedups with controlled error on medium-sized datasets. This algorithm differs from previous related work in that it addresses the whole-matrix SVD, not low-rank matrix approximation, it uses a new efficient sampling procedure based on cosine trees, and it uses empirical Monte Carlo error estimates to adaptively minimize needed sample sizes, rather than fixing a loose sample size a priori. In addition to theoretical justifications, the empirical performance of QUIC-SVD argues for its effectiveness and utility.

We note that a refined version of QUIC-SVD is forthcoming. The new version is greatly simplified, and features even greater speed with a deterministic error guarantee. More work is needed to explore the SVD-using methods to which QUIC-SVD can be applied, particularly with an eye to how the introduction of controlled error in the SVD will affect the quality of the methods using it. We expect there will be many opportunities to enable new applications through the scalability of this approximation.

[Figure 2: Speedup and actual relative error vs. ε for QUIC-SVD on madelon kernel and declaration: (a) speedup, madelon kernel; (b) speedup, declaration; (c) relative error, madelon kernel; (d) relative error, declaration.]
2008
63
3,552
A Massively Parallel Digital Learning Processor

Hans Peter Graf (hpg@nec-labs.com), Srihari Cadambi (cadambi@nec-labs.com), Igor Durdanovic (igord@nec-labs.com), Venkata Jakkula (Jakkula@nec-labs.com), Murugan Sankardadass (murugs@nec-labs.com), Eric Cosatto (cosatto@nec-labs.com), Srimat Chakradhar (chak@nec-labs.com)
NEC Laboratories America, 4 Independence Way, Suite 200; Princeton, NJ 07738, USA

Abstract

We present a new, massively parallel architecture for accelerating machine learning algorithms, based on arrays of vector processing elements (VPEs) with variable-resolution arithmetic. Groups of VPEs operate in SIMD (single instruction multiple data) mode, and each group is connected to an independent memory bank. The memory bandwidth thus scales with the number of VPEs, while the main data flows are local, keeping power dissipation low. With 256 VPEs, implemented on two FPGA (field-programmable gate array) chips, we obtain a sustained speed of 19 GMACS (billion multiply-accumulates per sec.) for SVM training, and 86 GMACS for SVM classification. This performance is more than an order of magnitude higher than that of any FPGA implementation reported so far. The speed on one FPGA is similar to the fastest speeds published on a graphics processor for the MNIST problem, despite a clock rate that is an order of magnitude lower. Tests with Convolutional Neural Networks show similar compute performance. This massively parallel architecture is particularly attractive for embedded applications, where low power dissipation is critical.

1 Introduction

Machine learning demands higher and higher compute performance, but serial processors are not improving much anymore - at least not as quickly as they used to. Mainstream processor development is moving to multi-core systems, using shared memory technology to hide the parallel nature of the processors. But shared memory technology does not scale to hundreds or thousands of cores.
In order to reach such levels of parallelization, alternative approaches have to be developed. Massively parallel general-purpose computers have had limited success so far, because of the difficulty of programming these machines, and they remain a niche market, mostly in high-performance computing. Yet processors specialized for certain application domains, such as graphics processors or routing processors (e.g. the Nvidia Quadro FX 5600 graphics processor and the Cisco CRS-1 routing processor), have been parallelized to several hundred cores and are successful mass products. They improve performance over general-purpose processors by focusing on a few key algorithmic elements, yet still maintain enough flexibility that they can be programmed for a variety of applications. We explore in this paper whether a similar approach can lead to efficient machine learning processors.

Several processors optimized for machine learning, in particular for neural networks, were developed during the 1980s and 90s. Examples are the Synapse-1 architecture [1] and the Connectionist Network Supercomputer, CNS-1 [2]. Recently there has been less activity in this field, but some accelerators are sold today for specific applications, such as the Axeon [3] processor for power train control of cars. Besides digital processors, a large number of analog circuits were built, emulating neural network structures. Extremely high performance with low power dissipation is achievable, see e.g. [4][5], but these networks have little flexibility. SVM implementations on FPGAs have been demonstrated in recent years [6-8], yet reached only low compute performance. All machine learning processors have had only limited success so far, indicating how difficult it is to find a good combination of performance, flexibility, price and ease of use. An important consideration is that many applications of machine learning, such as video analysis, data mining, or personalization of services, show the most promise in embedded systems.
Embedded learning requires high compute performance while dissipating little power, a combination that is difficult to achieve, and has so far required application-specific ICs (ASICs). Our aim is to develop architectures that meet the requirements for embedded learning, but are programmable and therefore can be used in a wide range of applications. With the goal of analyzing different architectures, we designed a development and testing environment where the parallel computation is mapped onto FPGAs. Initially this system was intended only for experimentation, but its performance is so high that the platform is useful in its own right as an accelerator for high-performance systems. While the experiments shown here emphasize high performance, the architecture has been designed from the start for low power dissipation. The main features for achieving this goal are: low-resolution arithmetic, keeping the main data flow local, low operating frequencies, and a modular design, so that unused parts can be powered down dynamically. All results shown here are from the test platform; migration to low-power FPGA or chip designs will be done at a later stage.

2 Algorithms - Arithmetic - Architecture

For a substantial improvement over a general-purpose processor, the algorithms, the arithmetic units, and the architecture have to be optimized simultaneously. This is not just an exercise in hardware design; algorithms and their software implementations have to be developed concurrently. Most machine learning algorithms have not been developed with parallelization in mind. Therefore, we first need to find good parallel versions, identify their performance bottlenecks, and then extract common computational patterns that can be mapped into accelerator hardware.

2.1 Algorithms

Characteristic of machine learning is that large amounts of data need to be processed, often with predictable data access patterns and no dependency between operations over large segments of the computation.
This is why data-parallelization can often provide good accelerations on multi-core chips, clusters of machines, or even on loosely coupled networks of machines. Using MapReduce, speedups linear with the number of processors have been reported in [9] for several machine learning algorithms. Up to 16 cores were tested, and simulations indicate good scaling to more processors in some cases. Many algorithms, such as KNN, K-means clustering, LVQ, and Neural Networks can be reduced to forms where the computation is dominated by vector-matrix multiplications, which are easily parallelizable. For Convolutional Neural Networks (CNN) the data flow can be complex, yet the core of the computation is a convolution, an operation which has been studied extensively for parallel implementations. For Support Vector Machines (SVM), several parallel algorithms were described, but most saturate quickly for more than 16 processors. Scaling to larger numbers of processors has been demonstrated, applying MapReduce on a graphics processor with 128 cores [10]. Another implementation on a cluster of 48 dual-core machines (with 384 MMX units) [11] scales even super-linearly, and, according to simulations, scales to thousands of cores. Based on this analysis it is clear that vector-matrix and matrix-matrix multiplications for large vector dimensionalities and large numbers of vectors must be handled efficiently. Yet this alone is not sufficient since data access patterns vary greatly between algorithms. We analyze this here in more detail for SVM and CNN. These algorithms were chosen, because they are widely used for industrial applications and cover a broad range of computation, I/O, and memory requirements. The characteristics of the SVM training are summarized in Table 1. We use an approach similar to the one described in [11] to split different parts of the computation between a host CPU and the FPGA accelerator. 
For large dimensions d of the vectors, the calculation of the columns of the kernel matrix dominates by far. This is needed to update the gradients, and in the present implementation only this part is mapped onto the FPGA. If the dimensionality d is smaller than around 100, operations 2 and 5 can become bottlenecks and should also be mapped onto the accelerator. A challenge is that for each kernel computation a new data vector has to be loaded into the processor, leading to very high I/O requirements. We consider here dimensions of 10 - 10^4 and numbers of training data of 10^5 - 10^7, easily resulting in Gigabytes that need to be transferred to the processors at each iteration.

Table 1: Compute- and IO-requirements of each step for SVM training (SMO algorithm). n: number of training data; d: dimension of the vectors; G: gradients; α: support vector factors; I: number of iterations. The last column indicates whether the execution happens on the host CPU or the accelerator FPGA. It is assumed that the kernel computation requires a dot product between vectors (e.g. rbf, polynomial, tanh kernels).

  Operation                             Computation     IO              Unit
  1 Initialize all α_x, G_x             2n              2n              CPU
  Do:
  2   Find working set α_i, α_j         I * 2n          I * 2n          CPU
  3   Update α_i, α_j                   I * 10          I * 2           CPU
  4   Get 2 columns of kernel matrix    I * 2nd         I * (2d + 2dn)  FPGA
  5   Update gradients G_x              I * n           I * n           CPU
  6 While not converged

Neural network algorithms are essentially sequences of vector-matrix multiplications, but networks with special connectivity patterns, such as convolutional networks, have very different IO characteristics than fully connected networks. Table 2 shows the computation and IO requirements for scanning several convolution kernels over one input plane. A full network requires multiples of these operations for one layer, with nonlinearities between layers. We map all operations onto the FPGA accelerator, since intermediate results are re-used right away.
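To make the dominance of step 4 of Table 1 above concrete, a back-of-envelope calculation of its per-iteration counts (a hypothetical helper of our own; symbols follow the table):

```python
# Per-iteration compute and IO counts for step 4 of Table 1 (illustrative helper;
# n = number of training vectors, d = vector dimension).

def kernel_column_cost(n, d):
    """Counts for fetching 2 kernel-matrix columns in one SMO iteration."""
    compute = 2 * n * d       # two dot products against every training vector
    io = 2 * d + 2 * d * n    # load the 2 pivot vectors, stream all n vectors
    return compute, io

compute, io = kernel_column_cost(n=60_000, d=784)
# The kernel step is d times costlier than the O(n) working-set step (step 2),
# while its compute/IO ratio is ~1, which is why this step is IO bound.
assert compute == 94_080_000
assert compute / io < 1
```

Dividing either count by the O(n) terms of steps 2 and 5 shows the gap grows linearly in d, matching the statement that only for d below roughly 100 do the CPU steps become bottlenecks.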
The most significant difference between the SVM and the CNN is the compute/IO ratio: SVM: ~ 1; CNN: ~ L*k^2 > 100. Therefore the requirements of these two algorithms are very different, and handling both cases efficiently is quite a challenge for an architecture design.

Table 2: Compute- and IO-requirements for CNN computation (forward pass), where L kernels of size k*k are scanned simultaneously over an input plane of size n*m. This is representative of implementations with kernel unrolling (kernel pixels processed in parallel). Internal shifts, computation of the non-linearity, and border effects are not shown.

  Operation                  Computation        IO         Unit
  1 Load L kernels                              L * k^2    FPGA
  For all input pixels:                                    FPGA
  2   Shift in new pixel                        n * m      FPGA
  3   Multiply kernels       n * m * L * k^2               FPGA
  4   Shift out result                          n * m      FPGA

2.2 Arithmetic

Hardware can be built much more compactly and run with lower power dissipation if it uses fixed-point instead of floating-point operations. Fortunately, many learning algorithms tolerate a low resolution in most of the computations. This has been investigated extensively for neural networks [12][13], but less so for other learning algorithms. Learning from data is inherently a noisy process, because we see only a sparse sampling of the true probability distributions. A different type of noise is introduced in gradient descent algorithms, when only a few training data are used at a time to move the optimization forward iteratively. This noise is particularly pronounced for stochastic gradient descent. There is no point in representing noisy variables with high resolution, and it is therefore a property inherent to many algorithms that low-resolution computation can be used. It is important not to confuse this tolerance to low resolution with the resolution required to avoid numeric instabilities. Some of the computations have to be performed with a high resolution, in particular for variables that are updated incrementally.
They maintain the state of the optimization and may change in very small steps. But usually by far the largest part of the computation can be executed at a low resolution. The key is that the hardware is flexible enough to take advantage of reduced resolution while handling high resolution where necessary.

Table 3: Comparison of the results of SVM training when the kernels are represented with floating-point numbers (32 or 64 bits) and with 16-bit fixed point. The last column shows the results when the resolution of the training data is reduced from 8 bit to 4 bit. For NORB this reduces the accuracy; all other differences in accuracy are not significant. All are two-class problems. Adult: n=32,562, d=122; Forest: n=522,000, d=54 (2 against the rest); MNIST: n=60,000, d=784 (odd-even); NORB: n=48,560, d=5,184.

             Kernel: float                  Kernel: 16-bit fixed point
  Problem    Obj. f.     # SV    F-score    Obj. f.     # SV    F-score    F-score (4b in)
  Adult      31,930.77   11,486  77.58      31,930.1    11,490  77.63      NA
  Forest     653,170.7   49,333  98.29      652,758     49,299  98.28      NA
  MNIST      4,960.13    6,172   99.12      4,959.64    6,166   99.11      99.11
  NORB       1,243.71    3,077   93.34      1,244.76    3,154   93.26      92.78

We developed a simulator that allows running the training algorithms with various resolutions in each of the variables. A few examples for SVM training are shown in Table 3. Reducing the resolution of the kernel values from double or float to 16-bit fixed-point representations does not affect the accuracy for any of the problems. Therefore all the multiplications in the dot products for the kernel computation can be done at low resolution (4-16 bit in the factors), but the accumulator needs sufficient resolution to avoid over/underflow (48 bit). Once the calculation of the kernel value is completed, it can be reduced to 16 bit. A low resolution of 16 bit is also tolerable for the α values, but a high resolution is required for the gradients (double).
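A minimal simulation in the spirit of that resolution study: quantize rbf kernel values to a signed 16-bit fixed-point grid and check how little they move. The scaling scheme below is our own assumption for illustration, not the paper's simulator.

```python
import numpy as np

def to_fixed(x, frac_bits=15):
    """Round to a signed 16-bit fixed-point grid with frac_bits fractional bits."""
    scale = 2 ** frac_bits
    q = np.clip(np.round(np.asarray(x) * scale), -2**15, 2**15 - 1)
    return q / scale

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
# rbf kernel matrix; values lie in (0, 1], well matched to the fixed-point range
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=2)
K = np.exp(-0.5 * d2 / 50)
K16 = to_fixed(K)

# Quantization moves each kernel value by at most one least-significant bit.
assert np.max(np.abs(K - K16)) <= 2 ** -15
```

An error of at most 2^-15 per kernel entry is far below the noise level of the learning process, consistent with the unchanged objective values and F-scores reported in Table 3.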
For Neural Networks, including CNNs, several studies have confirmed that states and gradients can be kept at low resolution (<16 bit), but the weights must be maintained at a high resolution (float) (see e.g. [12]). In our own evaluations, 24 bits in the weights tend to be sufficient. Once the network is trained, low resolutions can be used for the weights in classification as well (<16 bit).

2.3 Architecture

[Figure 1: Left: Schematic of the architecture with the main data flows; on one FPGA, 128 VPE are configured into four SIMD groups; L-S: load-store units. Right: Picture of an FPGA board; in our experiments one or two of them are used, connected via PCI bus to a host CPU.]

Based on the analysis above, it is clear that the architecture must be optimized for processing massive amounts of data with relatively low precision. Most of the time, data access patterns are predictable and data are processed in blocks that can be stored contiguously. This type of computation is well suited for vector processing, and simple vector processing elements (VPE) with fixed-point arithmetic can handle the operations. Since typically large blocks of data are processed with the same operation, groups of VPE can work in SIMD (single instruction multiple data) mode. Algorithms must then be segmented to map the high-volume, low-precision parts onto the vector accelerators and the parts requiring high-precision arithmetic onto the CPU. The most important design decision is the organization of the memory. Most memory accesses are done in large blocks, so that the data can be streamed, making complex caching unnecessary. This is fortunate, since the amounts of data to be loaded onto the processor are so large that conventional caching strategies would be overwhelmed anyway. Because the blocks tend to be large, a high data bandwidth is crucial, but latency for starting a block transfer is less critical. Therefore we can use regular DDR memories and still get high IO rates.
This led to the design shown schematically in Figure 1, where independent memory banks are connected via separate IO ports to each group of 32 VPE. By connecting multiple of the units shown in Figure 1 to a CPU, this architecture scales to larger numbers of VPE. Parallel data IO and parallel memory access scale simultaneously with the number of parallel cores, and we therefore refer to this as the P3 (P-cube) architecture. Notice also that the main data flow is only local, between a group of VPE and its own memory block. Avoiding movements of data over long distances is crucial for low power dissipation. How far this architecture can reasonably scale with one CPU depends on the algorithms, the amount of data and the vector dimensionality (see below). A few hundred VPE per CPU have provided good accelerations in all our tests, and much higher numbers are possible with multi-core CPUs and faster CPU-FPGA connections.

3 Implementation of the P3 Architecture

This architecture fits surprisingly well onto some of the recent FPGA chips that are available with several hundred Digital Signal Processor (DSP) units and over 1,000 IO pins for data transfers. The boards used here each contain one Xilinx Virtex 5 LX330T-2 FPGA coupled to 4 independent DDR2 SDRAMs with a total of 1 GB, and 2 independent 4 MB SSRAM memory banks (commercial boards from AlphaData). One FPGA chip contains 192 DSPs with a maximum speed of 550 MHz, which corresponds to a theoretical compute performance of 105.6 GMACS (18-bit and 25-bit operands). There is a total of 14 Mbit of on-chip memory, and the chip incorporates 960 pins for data IO. Due to routing overhead, not all DSP units can be used, and the actual clock frequencies tend to be considerably lower than what is advertised for such chips (typically 230 MHz or less for our designs). Nevertheless, we obtain high performance because we can use a large number of DSP units for executing the main computation.
The main architecture features are:

• Parallel processing (on one chip): 128 VPE (hardware DSP) are divided into 4 blocks of 32, each group controlled by one sequencer with a vector instruction set.
• Custom Precision: Data are represented with 1 to 16 bit resolution. Higher resolutions are possible by operating multiple DSP as one processor.
• Overlapping Computation and Communication: CPU-FPGA communication is overlapped with the FPGA computation.
• Overlap Memory Operations with Computation: All loads and stores from the FPGA to off-chip memory are performed concurrently with computations.
• High Off-chip Memory Bandwidth: 6 independent data ports, each 32 bits wide, access banked memories concurrently (12GB/s per chip).
• Streaming Data Flow, Simple Access Patterns: Load/store units are tailored for streaming input and output data, and for simple, bursty access patterns. Caching is done under application control with dual-port memory on chip.
• Load/store with (de)compression: For an increase of effective IO bandwidth the load/store units provide compression and decompression in hardware.

Figure 2 shows the configuration of the VPEs for the vector dot-product computation used for SVM training and classification. For training, the main computation is the calculation of one column of the kernel matrix. One vector is pre-fetched and stored in on-chip memory. All other vectors are streamed in from off-chip memory banks 1-4. Since this is a regular and predictable access pattern, we can utilize burst-mode, achieving a throughput of close to one memory word per cycle. But the speed is nevertheless IO bound. When several vectors can be stored on-chip, as is the case for classification, then the speed becomes compute-bound.

Figure 2: Architecture for vector dot-product computation. The left side shows a high-level schematic with the main data flow.
The data are streamed from memory banks 1-4 to the VPE arrays, while memory banks 5 and 6 alternately receive results or stream them back to the host. The right side shows how a group of VPE is pipelined to improve clock speed.

The operation for SVM training on the FPGA corresponds to a vector-matrix multiplication and the one for classification to a matrix-matrix multiplication. Therefore the configuration of Figure 2 is useful for many other algorithms as well, where operations with large vectors and matrices are needed, such as Neural Networks. We implemented a specialized configuration for Convolutional Neural Networks, for more efficiency and lower power dissipation. The VPE are daisy-chained and operate as a systolic array. In this way we can take advantage of the high computation-to-IO ratio (Table 2) to reduce the data transfers from memory.

4 Evaluations

We evaluated SVM training and classification with the NORB and MNIST problems, the latter with up to 2 million training samples (data from [11]). Both are benchmarks with vectors of high dimensionality, representative of applications in image and video analysis. The computation is split between CPU and FPGA as indicated by Table 1. The DDR2 memory banks are clocked at 230 MHz, providing double that rate for data transfers. The data may be compressed to save IO bandwidth. On the FPGA they are decompressed first and distributed to the VPE. In our case, a 32-bit word contains eight 4-bit vector components. Four 32-bit words are needed to feed all 32 VPEs of a group; therefore clocking the VPE faster than 115 MHz does not improve performance. A VPE executes a multiply plus add operation in one clock cycle, resulting in a theoretical maximum of 14.7 GMACS per chip. The sustained compute rate is lower, about 9.4 GMACS, due to overhead (see Table 4). The computation on the host CPU overlaps with that on the FPGA, and has no effect on the speed in the experiments shown here.
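The quoted peak rate is simply the VPE count times the clock rate; a quick sanity check of the paper's numbers (our own arithmetic):

```python
# Peak multiply-accumulate rate in GMACS: one MAC per VPE per cycle by default.

def peak_gmacs(num_vpe, clock_hz, macs_per_cycle=1):
    return num_vpe * clock_hz * macs_per_cycle / 1e9

# 128 VPEs at the IO-limited 115 MHz training clock give the ~14.7 GMACS ceiling.
assert abs(peak_gmacs(128, 115e6) - 14.7) < 0.1
```

The same formula reproduces the other quoted ceilings, e.g. the 100-VPE convolver blocks at 115 MHz yield 11.5 GMACS.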
For the classification the VPE can be clocked higher, at 230 MHz. By using 4-bit operands we can execute 2 multiply-accumulates simultaneously on one DSP, resulting in a speed that is more than four times higher and a sustained 43.0 GMACS, limited by the number and speed of the VPE. Adding a second FPGA card doubles the speed, showing little saturation effect yet, but for more FPGAs per CPU there will be saturation (see Fig. 3). The compute speed in GMACS obtained for NORB is almost identical.

Table 4: Training times and average compute speed for SVM training. Systems tested: CPU, Opteron, 2.2 GHz; CPU using MMX; CPU with one FPGA; CPU with two FPGA boards. Results are shown for training sizes of 60k and 2M samples. Compute speed is in GMACS (just kernel computations). Training algorithm: SMO with second-order working set selection.

                         CPU             CPU+MMX            CPU+FPGA           CPU+2 FPGA
       # Iterations    time     speed    time       speed   time       speed   time       speed
  60k      8,000       754 s    0.5      240 s      1.57    40 s       9.42    21 s       17.9
  2M     266,900       --       --       531,534 s  1.58    88,589 s   9.48    48,723 s   17.2

Parallelizations of SVM training have been reported recently for a GPU [10] and for a cluster [11], both using the MNIST data. In [10] different bounds for stopping were used than here and in [11]. Nevertheless, a comparison of the compute performance is possible, because based on the number of iterations we can compute the average GMACS for the kernel computations. As can be seen in Table 5, a single FPGA is similar in speed to a GPU with 128 stream processors, despite a clock rate that is about 5.5 times lower for I/O and 11 times lower for the VPE. The cluster with 384 MMX units is about 6 times faster than one FPGA with 128 VPE, but dissipates about two orders of magnitude more electric power. For the FPGA this calculation includes only the computation of the kernel values, while the part on the CPU is neglected.
This is justified for this study, because the rest of the calculations can be mapped onto the FPGA as well and will increase the power dissipation only minimally.

Table 5: Comparison of performances for SVM training (MNIST data). GPU: Nvidia 8800 GTX. Cluster: 48 dual-core CPUs (Athlon), 384 MMX units. The GPU was trained with 60k samples ([10], table 2, second order); the cluster trained with 2 million samples.

  Processor          Number of cores   Clock speed   Operand type   Power dissipation   Average compute speed
  CPU (Opteron)      1                 2.2 GHz       float          40 W                0.5 GMACS
  GPU (from [10])    128               1.35 GHz      float          80 W                7.4 GMACS
  Cluster (from [11]) 384              1.6 GHz       byte           > 1 kW              54 GMACS
  FPGA               128               0.12 GHz      4-bit nibble   9 W                 9.4 GMACS

[Figure 3: Acceleration of SVM training as a function of the number of VPE. MNIST: n=2,000,000, d=784; NORB: n=48,560, d=5,184. The points for 128 and 256 VPE are experimental, the higher ones are simulations. Curves MNIST, NORB: multiple FPGAs are attached to one CPU. Curve MNIST C: each FPGA is attached to a separate host CPU.]

Scaling of the acceleration with the number of VPEs is shown in Figure 3. The reference speed is that of one FPGA attached to a CPU. The evaluation has been done experimentally for 128 and 256 VPEs, and beyond that with a simulator. The onset of saturation depends on the dimensionality of the vectors, but to a much lesser extent on the number of training vectors (up to the limit of the memory on the FPGA card). MNIST saturates for more than two FPGAs because then the CPU and FPGA computation times become comparable. For the larger vectors of NORB (d=5,184) this saturation starts to be noticeable for more than 4 FPGAs. Alternatively, a system can be scaled by grouping multiple CPUs, each with one attached FPGA accelerator. Then the scaling follows a linear or even super-linear acceleration (MNIST C) to several thousand VPE. If the CPUs are working in a cluster arrangement, the scaling is similar to the one described in [11].
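Dividing Table 5's last two columns gives the energy efficiency that the power argument rests on (our own arithmetic on the table's numbers; the cluster's power is only a lower bound, so its efficiency is an upper bound):

```python
# Sustained GMACS per watt for each platform in Table 5.

platforms = {                 # (average GMACS, power dissipation in W)
    'CPU (Opteron)': (0.5, 40),
    'GPU':           (7.4, 80),
    'Cluster':       (54.0, 1000),   # power is "> 1 kW", so this is optimistic
    'FPGA':          (9.4, 9),
}
efficiency = {name: gmacs / watts for name, (gmacs, watts) in platforms.items()}

# The FPGA delivers roughly 1 GMACS/W, over an order of magnitude above the GPU.
assert max(efficiency, key=efficiency.get) == 'FPGA'
assert efficiency['FPGA'] / efficiency['GPU'] > 10
```

This is the quantitative form of the claim that the cluster is ~6x faster than one FPGA while dissipating about two orders of magnitude more power.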
For convolutional neural networks, the architecture of Figure 2 is modified to allow a block of VPE to operate as a systolic array. In this way convolutions can be implemented with minimal data movement. In addition to the convolution, sub-sampling and non-linear functions, plus the logistics to handle multiple layers with arbitrary numbers of kernels in each layer, are done on the FPGA. Four separate blocks of such convolvers are packed onto one FPGA, using 100 VPE. Clocked at 115 MHz, this architecture provides a maximum of 11.5 GMACS. Including all the overhead, the sustained speed is about 10 GMACS.

5 Conclusions

By systematically exploiting characteristic properties of machine learning algorithms, we developed a new massively parallel processor architecture that is very efficient and can be scaled to thousands of processing elements. The implementation demonstrated here is more than an order of magnitude higher in performance than previous FPGA implementations of SVMs or CNNs. For the MNIST problem it is comparable to the fastest GPU implementations reported so far. These results underline the importance of flexibility over raw compute speed for massively parallel systems. The flexibility of the FPGA allows more efficient routing and packing of the data and the use of computations with the lowest resolution an algorithm permits. The results of Table 5 indicate the potential of this architecture for low-power operation in embedded applications.

References

[1] Ramacher, et al. (1995) Synapse-1: A high-speed general purpose parallel neurocomputer system. In Proc. 9th Intl. Symposium on Parallel Processing (IPPS'95), pp. 774-781.
[2] Asanovic, K., Beck, Feldman, J., Morgan, N. & Wawrzynek, J. (1994) A Supercomputer for Neural Computation. Proc. IEEE Intl. Joint Conference on Neural Networks, pp. 5-9, Orlando, Florida.
[3] Neil, P. (2005) Combining hardware with a powerful automotive MCU for powertrain applications. In Industrial Embedded Resource Guide, p. 88.
[4] Korekado, et al. (2003) A Convolutional Neural Network VLSI for Image Recognition Using Merged/Mixed Analog-Digital Architecture. In Proc. 7th KES 2003, Oxford, pp. 169-176.
[5] Murasaki, M., Arima, Y. & Shinohara, H. (1993) A 20 Tera-CPS Analog Neural Network Board. In Proc. Int. Joint Conf. Neural Networks, pp. 3027-3030.
[6] Pedersen, R. & Schoeberl, M. (2006) An Embedded Support Vector Machine. WISE 2006.
[7] Dey, S., Kedia, M., Agarwal, N. & Basu, A. Embedded Support Vector Machine: Architectural Enhancements and Evaluation. In Proc. 20th Int. Conf. VLSI Design.
[8] Anguita, D., Boni, A. & Ridella, S. (2003) A Digital Architecture for Support Vector Machines: Theory, Algorithm, and FPGA Implementation. IEEE Trans. Neural Networks, 14/5, pp. 993-1009.
[9] Chu, C., Kim, S., Lin, Y., Yu, Y., Bradski, G., Ng, A. & Olukotun, K. (2007) Map-Reduce for Machine Learning on Multicore. Advances in Neural Information Processing Systems 19, MIT Press.
[10] Catanzaro, B., Sundaram, N. & Keutzer, K. (2008) Fast Support Vector Machine Training and Classification on Graphics Processors. Proc. 25th Int. Conf. Machine Learning, pp. 104-111.
[11] Durdanovic, I., Cosatto, E. & Graf, H. (2007) Large Scale Parallel SVM Implementation. In L. Bottou, O. Chapelle, D. DeCoste, J. Weston (eds.), Large Scale Kernel Machines, pp. 105-138, MIT Press.
[12] Simard, P. & Graf, H. (1994) Backpropagation without Multiplication. In J. Cowan, G. Tesauro, J. Alspector (eds.), Neural Information Processing Systems 6, pp. 232-239, Morgan Kaufmann.
[13] Savich, A., Moussa, M. & Areibi, S. (2007) The Impact of Arithmetic Representation on Implementing MLP-BP on FPGAs: A Study. IEEE Trans. Neural Networks, 18/1, pp. 240-252.
Characterizing response behavior in multi-sensory perception with conflicting cues Rama Natarajan1 Iain Murray1 Ladan Shams2 Richard S. Zemel1 1Department of Computer Science, University of Toronto, Canada {rama,murray,zemel}@cs.toronto.edu 2Department of Psychology, University of California Los Angeles, USA ladan@psych.ucla.edu Abstract We explore a recently proposed mixture model approach to understanding interactions between conflicting sensory cues. Alternative model formulations, differing in their sensory noise models and inference methods, are compared based on their fit to experimental data. Heavy-tailed sensory likelihoods yield a better description of the subjects’ response behavior than standard Gaussian noise models. We study the underlying cause for this result, and then present several testable predictions of these models. 1 Introduction A natural scene contains several multi-modal sensory cues to the true underlying values of its physical properties. There is substantial evidence that the brain deals with the sensory information from multiple modalities simultaneously, to form a coherent and unified percept of the world and to guide action. A major focus of multi-sensory perceptual studies has been in exploring the synergistic as well as modulatory interactions between individual sensory cues. The perceptual consequences of these interactions can be effectively explored in cases where the cues are in conflict with each other, resulting in potentially illusory percepts such as the “ventriloquism effect” [1]. A well-tested hypothesis with regard to multi-sensory cue interaction is that the individual sensory estimates are combined in a linear fashion, weighted by their relative reliabilities. Most studies that expound this linear approach assume that sensory noise in the different modalities is independent across modalities, and that the sensory likelihoods can be well approximated by Gaussian distributions. 
Under these assumptions, the maximum-likelihood estimator of the underlying physical variable is an affine combination of the sensory estimates weighted in proportion to their precisions. This linear model predicts that the variance of the posterior distribution is always lower than that of individual cues. However, data from several psychophysical studies contradict this prediction, necessitating non-linear computational strategies to deal with the inputs. Recent studies [2; 3; 4; 5] have proposed a particular form of mixture model to address response behavior in situations with a large conflict between sensory stimuli. Conflicts arise when corresponding cues suggest very different estimates of an underlying variable. The basic intuition behind these models is that large stimulus disparities might be a consequence of the stimuli having resulted from multiple underlying causal factors. We evaluate the different formulations in their ability to model experimental data [6] that exhibit very interesting non-linear response behavior under conflicting stimulus conditions. The formulations differ in how perceptual estimates are derived from sensory data. We demonstrate some inadequacies of the current models and propose an alternative formulation that employs heavy-tailed sensory likelihoods. The proposed model not only achieves better fits to non-linear response behavior in the experimental data but also makes several quantitatively testable predictions. 2 A Mixture Model for Evaluating Cue Interactions In this section, we present an overview of a recently proposed mixture model approach [3] to dealing with conflicting sensory inputs. We describe two approaches to inference under this model — causal averaging and causal selection — and analyze the model predictions on our simulation of an auditory localization task [6]. The environmental variables of interest are the spatial locations of an auditory and visual stimulus, denoted by sa and sv respectively. 
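The precision-weighted linear combination prescribed by the standard linear model can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and example values are ours:

```python
import numpy as np

def linear_combination(x_a, x_v, sigma_a, sigma_v):
    """Precision-weighted (maximum-likelihood) fusion of two Gaussian cues.

    Each cue is weighted by its precision (inverse variance); the posterior
    variance is always below the variance of either individual cue, which is
    the prediction contradicted by the psychophysical data discussed above.
    """
    w_a = 1.0 / sigma_a**2   # precision of the auditory cue
    w_v = 1.0 / sigma_v**2   # precision of the visual cue
    s_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    var_post = 1.0 / (w_a + w_v)
    return s_hat, var_post

# Example: a reliable visual cue at 0 deg pulls an unreliable auditory cue at 10 deg
s_hat, var_post = linear_combination(x_a=10.0, x_v=0.0, sigma_a=6.0, sigma_v=2.5)
```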
Information about the stimuli is provided by noisy sensory cues xa and xv. The model evaluates sensory cues under two discrete hypotheses (C = {1, 2}) regarding the causal structure underlying the generation of the stimuli. The hypotheses are that the two stimuli could arise from the same (C = 1) or different (C = 2) causal events. This mixture model instantiates a simple idea: if there is a common cause, cues are combined; otherwise they are segregated. The model is characterized by (i) the sensory likelihoods P(xv|sv) and P(xa|sa), (ii) the prior distributions P(sv, sa) over true stimulus positions and (iii) the prior over hypotheses P(C).

2.1 Generating sensory data

The standard model assumes Gaussian sensory likelihoods and prior distributions. The true auditory and visual stimulus positions are assumed to be the same for C = 1, i.e., sa = sv = s, drawn from a zero-mean Gaussian prior distribution s ~ N(0, σp²), where σp is the standard deviation of the distribution. The noisy sensory evidence xa is a sample from a Gaussian distribution with mean sa = s and standard deviation σa: xa ~ N(xa; sa = s, σa²). Similarly for the visual evidence: xv ~ N(xv; sv = s, σv²). When there are C = 2 underlying causes, they are drawn independently from the zero-mean Gaussian prior distribution: sv ~ N(0, σp²); sa ~ N(0, σp²). Then xv ~ N(xv; sv, σv²) and xa ~ N(xa; sa, σa²). The belief in each hypothesis given the cues xa and xv is defined by the posterior distribution:

P(C|xv, xa) = P(xv, xa|C) P(C) / P(xv, xa)    (1)

When the hypotheses are discrete, C = {1, 2}, the normalization constant is P(xv, xa) = P(xv, xa|C = 1) P(C = 1) + P(xv, xa|C = 2) (1 − P(C = 1)). Given this particular causal generative model, the conditional likelihoods in Equation 1 are defined as P(xv, xa|C = 1) = ∫ P(xv|sv = s) P(xa|sa = s) P(s) ds and P(xv, xa|C = 2) = ∫ P(xv|sv) P(sv) dsv ∫ P(xa|sa) P(sa) dsa. The conditional sensory likelihoods are specified as: P(xv, xa|sv, sa, C) = P(xv|sv) P(xa|sa). 
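Under the Gaussian assumptions, both conditional likelihoods in Equation 1 have closed forms: when C = 1 the cues share a source and are jointly Gaussian with cross-covariance σp², and when C = 2 they are independent. A minimal sketch of the posterior over the common-cause hypothesis follows; the function name is ours, and the parameter values in the example are merely illustrative:

```python
import numpy as np

def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """P(C = 1 | x_v, x_a) for the Gaussian generative model (Equation 1).

    Under C = 1, x_v = s + noise_v and x_a = s + noise_a with a shared
    s ~ N(0, sigma_p^2), so (x_v, x_a) is bivariate Gaussian with
    cross-covariance sigma_p^2.  Under C = 2 the sources are independent
    and the marginal likelihoods factorize.
    """
    var_v = sigma_v**2 + sigma_p**2
    var_a = sigma_a**2 + sigma_p**2
    # C = 1: bivariate Gaussian density with cross-covariance sigma_p^2
    det1 = var_v * var_a - sigma_p**4
    quad1 = (x_v**2 * var_a - 2 * x_v * x_a * sigma_p**2 + x_a**2 * var_v) / det1
    like1 = np.exp(-0.5 * quad1) / (2 * np.pi * np.sqrt(det1))
    # C = 2: product of the two independent marginals
    like2 = (np.exp(-0.5 * (x_v**2 / var_v + x_a**2 / var_a))
             / (2 * np.pi * np.sqrt(var_v * var_a)))
    num = like1 * p_common
    return num / (num + like2 * (1.0 - p_common))
```

As expected, coincident cues yield a high posterior belief in a common cause, and widely separated cues drive it toward zero.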
2.2 Inference methods

2.2.1 Causal averaging

The conditional posterior over stimulus variables is calculated for each hypothesis as P(sv, sa|xv, xa, C = 1) and P(sv, sa|xv, xa, C = 2). The standard approach to computing the full posterior distribution of interest P(sa, sv|xa, xv) is by integrating the evidence over both hypotheses weighted by the posterior distribution over C (Equation 1). Such a model averaging approach to causal inference is specified by the following identity:

Pavg(sv, sa|xv, xa) = Σ_C P(sv, sa|xv, xa, C) P(C|xv, xa)    (2)
                    = Σ_C P(xv, xa|sv, sa, C) P(sv, sa|C) P(C|xv, xa) / P(xv, xa|C)    (3)

Here, P(C = 1|xv, xa) = πc is the posterior mixing proportion and (1 − πc) = P(C = 2|xv, xa).

2.2.2 Causal selection

An alternative approach is to calculate an approximate posterior distribution by first selecting the hypothesis C* that maximizes the posterior distribution P(C|xv, xa). Under this model selection approach, subsequent inference is based on the selected hypothesis alone:

C* = argmax_{C={1,2}} P(C|xv, xa)    (4)

Then the posterior distribution over stimulus location is approximated as follows:

Psel(sv, sa|xv, xa) ≈ P(sv, sa|xv, xa, C = C*)    (5)
                    = P(xv, xa|sv, sa, C = C*) P(sv, sa|C = C*) / P(xv, xa|C = C*)    (6)

2.3 Evaluating the models on experimental data

Here, we evaluate the causal averaging and selection models on an auditory localization task [6] where visual and auditory stimuli were presented at varying spatial and temporal disparities. In addition to reporting the location of the auditory target, subjects were also asked to report on whether they perceived the two stimuli to be perceptually unified. The variables examined were the bias and variance of the subjects’ estimates for each stimulus condition. The data exhibit very interesting non-linear response behavior (solid lines in Figures 1A and 1D). In our simulation of the task, the auditory target was presented at locations {0°, 5°, 10°} left or right of fixation. 
Although the real experiment varied the fixation location from trial to trial, it was found to have no effect on subsequent analyses and data were collapsed across all fixation locations. Hence, we assume the fixation point to be at the center of space (0°). The visual stimuli were assumed to be temporally coincident with the auditory stimuli and presented at varying spatial disparities {0°, 5°, 10°, 15°, 20°, 25°} left or right of the sound. Sensory evidence xa and xv was corrupted by Gaussian noise as described earlier. Each stimulus combination {sa, sv} was presented with equal probability 2000 times. The spatial axis ranged from −25° to 25° and was divided into bins of 1° width. On each trial, the model computes a posterior probability distribution over stimulus locations conditioned on the noisy cues xa and xv according to Equation 3 or 6. It then estimates the visual and auditory locations ŝa and ŝv as the peak of the posterior distribution (maximum a posteriori estimate): ŝa = argmax_{sa} P(sa, sv|xa, xv). We have simulated estimators using other criteria, such as minimizing the squared error of the estimates (i.e., the expected value of the posterior distribution); the results were very similar across the different estimators. Percent bias is given by (ŝa − sa)/(sv − sa) × 100. Goodness of fit was computed using squared-error loss to quantify the amount by which model estimates differed from the behavioral data. For analysis, the trials were dichotomized into unity and non-unity trials based on the perception of spatial unity. A trial was classified as unity if the posterior probability P(C = 1|xv, xa) was greater than some threshold ρ, and non-unity otherwise. The simulation results (i.e., the estimates ŝa and ŝv) were averaged across trials in each category. The parameters of the model are: 1) the stimulus location variance σp²; 2–3) the observation variances σa² and σv²; 4) the prior mixture proportion ω = P(C = 1); and 5) the unity perception threshold ρ. 
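The per-trial computation just described (grid posterior under each hypothesis, causal averaging or selection, MAP estimate, unity classification) can be sketched as follows. This is our reading of the procedure, not the authors' code; the grid range and parameter values follow the text, while the function name is ours:

```python
import numpy as np

def map_estimate(x_a, x_v, sigma_a, sigma_v, sigma_p, omega, rho, select=True):
    """One simulated trial on a 1-degree grid over [-25, 25].

    select=True implements causal selection (Equation 6);
    select=False implements causal averaging (Equation 3).
    Returns the MAP auditory location and the unity judgment.
    """
    grid = np.arange(-25.0, 26.0)  # candidate source locations (degrees)

    def norm(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

    prior = norm(grid, 0.0, sigma_p)
    # C = 1: a single source s explains both cues
    post1 = norm(x_a, grid, sigma_a) * norm(x_v, grid, sigma_v) * prior
    like1 = post1.sum()          # grid approximation to the integral
    post1 /= like1
    # C = 2: independent sources; the posterior over s_a ignores x_v
    post2 = norm(x_a, grid, sigma_a) * prior
    like2_a = post2.sum()
    post2 /= like2_a
    like2 = like2_a * (norm(x_v, grid, sigma_v) * prior).sum()
    pc1 = like1 * omega / (like1 * omega + like2 * (1.0 - omega))
    if select:                   # causal selection: pick the MAP hypothesis
        post = post1 if pc1 > 0.5 else post2
    else:                        # causal averaging: mix the two posteriors
        post = pc1 * post1 + (1.0 - pc1) * post2
    s_hat = grid[np.argmax(post)]
    unity = pc1 > rho            # unity judgment against threshold rho
    return s_hat, unity
```

Percent bias for a trial is then (s_hat − s_a)/(s_v − s_a) × 100, as in the text.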
The parameter values were estimated to fit the experimental data and are provided in the figure captions.

2.4 Simulation results for the Gaussian model

Figure 1 presents predictions made by both theoretical models. The behavioral data [6] (solid lines in all plots) range from spatial disparities −15° to 15°; error bars represent standard errors across 5 subjects. Model predictions (dashed lines) extend to a wider range of −25° to 25°. Some of the predicted trends are similar to the behavioral data. Regardless of stimulus disparity, whenever visual and auditory stimuli were perceived as unity, the predicted response bias was very high (dashed gray; Figure 1A). This means that the auditory location was perceived to be very near to the visual stimulus. When the stimuli appeared to not be unified, the auditory location was biased away from the visual stimulus — increasingly so as disparity decreased (dashed black; Figure 1A).

Figure 1: Simulation results - Gaussian sensory likelihoods: In this, and all subsequent figures, solid lines plot the actual behavioral data reported in [6] and dashed lines are the model predictions. (A) Localization biases in the data, plotted alongside predictions from both models. 
(B) Causal averaging model, response variability: σa = 8, σv = 0.05, ω = 0.15. (C) Causal selection model: σa = 6, σv = 2.5, ω = 0.2. For both models: σp = 100, ρ = 0.5. (D) Distribution of localization errors in the data, for sv − sa = 0; reprinted with permission from [6]. (E,F) Localization errors predicted by the causal averaging and causal selection models respectively.

However, both models exhibit one or more significant differences from the experimental observations. The predicted curves for unity trials (dashed gray; Figures 1B,C) are all concave, whereas they were actually observed to be convex (solid gray lines). On non-unity trials too, the predicted response variabilities (dashed black lines) are an inadequate fit to the real data (solid black lines). An additional test for the appropriateness of the models is the predictions they make with regard to the distribution of localisation errors. An analysis of the behavioral data derived from the spatially coincident stimulus conditions (sv − sa = 0) revealed a distinct pattern (Figure 1D). On unity trials, localization error was 0°, implying that the responses were clustered around the auditory target. On non-unity trials, the errors were bi-modally distributed and failed the test for normality [6]. Causal selection predicts a qualitatively similar distribution of errors (Figure 1F), suggesting that it may be the most appropriate inference strategy under the given task and model assumptions.

3 An Alternative Model for Sensory Likelihoods

3.1 Heavy-tailed likelihood formulation

In this section, we re-formulate the sensory likelihoods P(xa|sa) and P(xv|sv) as a mixture of Gaussian and uniform distributions. This mixture creates a likelihood function with heavy tails:

xv ~ π N(xv; sv, σv²) + (1 − π)/rl ;  xa ~ π N(xa; sa, σa²) + (1 − π)/rl    (7)

3.2 Simulation results with heavy-tailed sensory likelihoods

Figure 2 presents predictions made by the theoretical models based on heavy-tailed likelihoods. 
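The heavy-tailed likelihood of Equation 7 can be sketched as a small helper; the mixing weight shown here is only illustrative, while rl = 180° follows the Figure 2 caption:

```python
import numpy as np

def heavy_tailed_likelihood(x, s, sigma, pi_mix=0.9, r_l=180.0):
    """Mixture of a Gaussian and a uniform over an r_l-degree range (Equation 7).

    Far from the source s the Gaussian term vanishes, so the likelihood
    floors at (1 - pi_mix) / r_l instead of decaying to zero.  It is this
    floor that keeps the two cue likelihoods overlapping at large
    disparities, producing the roughly constant non-unity variability.
    """
    gauss = np.exp(-0.5 * ((x - s) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))
    return pi_mix * gauss + (1.0 - pi_mix) / r_l
```

Setting pi_mix = 1 recovers the purely Gaussian likelihood of Section 2.1.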
Both models now provide a much better fit to bias and variance, compared to their Gaussian counterparts.

Figure 2: Simulation results - heavy-tailed likelihoods: (A) Localization biases in the data, plotted alongside model predictions. (B) Causal averaging model, response variability: σa = 3.5, σv = 2. (C) Causal selection model: σa = 5, σv = 2.5. In both models, σp = 100, ω = 0.2, ρ = 0.5, rl = 180°. (D) Distribution of localization errors in data, for sv − sa = 0. (E,F) Localization errors predicted by the heavy-tailed causal averaging and causal selection models.

The heavy-tailed causal averaging model (Figure 2B) makes reasonable predictions with regards to variability. However, both the amount and the trend of predicted biases for non-unity trials (dotted line; 2A) do not match observations. Here too, the best-fitting model is causal selection (dashed line; Figures 2A,C). The localization error distribution (Figure 2F) very closely matches the true observations (Figure 2D) in how the unity responses are uni-modally distributed about the target location sa, and non-unity responses are bi-modally distributed on either side of the target. 
Visually, this is a better prediction of the true distribution of errors, compared to the prediction made by the Gaussian causal selection model (Figure 1F); we are unable to make a quantitative comparison for want of access to the raw data. Compared with the results in Figure 1, our models make very different bias and variance predictions for spatial disparities not tested. This is discussed in detail in Section 4. The heavy-tailed likelihood model has two more free parameters (the range rl and the mixing proportion π; Equation 7) than the Gaussian, which is essentially a subset of the heavy-tailed mixture when π = 1. Although the Gaussian model may be preferred for its computational simplicity, it is a demonstrably poor fit to the data and the heavy-tailed model is a worthwhile improvement.

3.3 Analyzing the likelihood models

The existence of heavy tails in the likelihood function seems to be a critical feature that supports the non-linear behavior in the data. We substantiate this suggestion using Figure 3, and attempt to give some intuition behind the qualitative differences in variability and bias between Figures 1 and 2. The discussion below focuses on 3 disparity conditions. The congruent case |sv − sa| = 0 is chosen for reference; |sv − sa| = 10 and |sv − sa| = 25 are chosen since the Gaussian and heavy-tailed models tend to differ most in their predictions at these disparities. Let us first consider the unity case. In general, most of the samples on unity trials are from the region of space where both the auditory and visual likelihoods overlap. When the true disparity |sv − sa| = 0, the two likelihoods overlap maximally (Figures 3Aii and 3Cii). Hence, regardless of the form of the likelihood, variability on unity trials at |sv − sa| = 0 should be roughly between σv and σa. This can be verified in Figures 1C and 2C. 
Figure 3: Analyzing the likelihood models: Results from the causal selection models. Panels: (A) Gaussian likelihoods, unity; (B) Gaussian likelihoods, non-unity; (C) heavy-tailed likelihoods, unity; (D) heavy-tailed likelihoods, non-unity; sub-panels (i)–(iii) correspond to sv − sa = 10, 0, −25. In all plots, light-gray histograms are samples xv from the visual likelihood distribution; dark-gray histograms plot xa. Black histograms are built only from samples xa on which either a unity (A,C) or non-unity (B,D) judgment was made. Each panel corresponds to one of three chosen disparities; histograms in the panel plot samples from all stimulus conditions that correspond to that particular disparity.

Now one of the biggest differences between the likelihood models is what happens to this variability as |sv − sa| increases. In the case of the Gaussian, the amount of overlap between the two likelihoods decreases (Figures 3Ai, 3Aiii). Consequently, the samples come from a somewhat smaller region in space and hence the variability also decreases. This corresponds to the concave curves predicted by the Gaussian model (Figure 1C; dashed gray). 
For the heavy-tailed likelihood, in contrast, the overlapping regions roughly increase with increasing disparity, due to the long tails (Figures 3Ci, 3Ciii). This is reflected in the gradually increasing variability on unity trials, corresponding to the better-matching convex curves predicted by the heavy-tailed model (Figure 2C). On the non-unity trials, most of the samples are from non-overlapping regions of space. Here, the biggest difference between the likelihood models is that in the Gaussian case, after a certain spatial limit, the variability tends to increase with increasing |sv − sa|. We also see this trend in the simulation results presented in [2; 4]. This is because as disparity increases, the degree of overlap between the two likelihoods decreases and variability approaches σa (Figures 3Bi, 3Biii). However, the behavior in the real data suggests that variability remains constant. With heavy-tailed likelihoods, the tails of the two likelihoods continue to overlap even as disparity increases; hence the variability is roughly constant (Figures 3Di, 3Diii).

4 Model Predictions

Quantitative predictions — variance and bias: Our heavy-tailed causal selection model makes two predictions with regards to variability and bias for stimulus conditions not yet tested. One prediction is that on non-unity trials, as spatial disparity sv − sa increases, the localisation variability remains constant at roughly a value equivalent to the standard deviation of the auditory likelihood (Figure 2C; black dashed plot). However, response percent bias approaches zero (Figure 2A; black dashed plot), indicating that when spatial disparity is very high and the stimuli are perceived as being independent, the auditory localisation response is consistent with auditory dominance. A second prediction is that percent bias gradually decreases with increasing disparity on unity trials as well. 
This suggests that even when highly disparate stimuli are perceived as being unified, perception may be dominated by the auditory cues. Our results also predict that the variability in this case continues to increase very gradually with increasing disparity up to some spatial limit (|sv − sa| = 20° in our simulations), after which it begins to decrease. This accords with intuition, since for very large disparities, the number of trials in which the stimuli are perceived as being unified will be very small.

Qualitative prediction — distribution of localization errors: Our model also makes a qualitative prediction concerning the distribution of localisation errors for incongruent (sv − sa ≠ 0) stimulus conditions. In both Figures 4A and 4B, localization error on unity trials is equivalent to the stimulus disparity sv − sa = 10°, indicating that even at this high disparity, responses are clustered closer to the visual stimulus location. On non-unity trials, the error is about 5° here; responses are more broadly distributed and the bias is highly reduced. The Gaussian and heavy-tailed predictions differ in how quickly the error distributions go to zero.

Figure 4: Model predictions: (A,B) Localization error distributions predicted by the Gaussian and heavy-tailed causal selection models. Plots correspond to stimulus condition sv = 20, sa = 10. 
(C,D) Response variability and bias predicted by the heavy-tailed causal averaging and selection models on a simulation of an audio-visual localization task [3].

Specificity to experimental task: In the experimental task we have examined here [6], subjects were asked to first indicate the perceived location of the sound on each trial and then to report their judgement of unity. The requirement to explicitly make a unity judgement may incur an experimental bias towards the causal selection model. To explore the potential influence of task instructions on subjects’ inference strategy, we tested our models on a simulation of a different audio-visual spatial localisation task [3]. Here, subjects were asked to report both visual and auditory stimulus locations and were not explicitly instructed to make unity judgements. The authors employed model averaging to explain the results [3] and the data were found to have a very high likelihood under their model. However, they do not analyse variability in the subjects’ responses, and this aspect of behavior as a function of spatial disparity is not readily obvious in their published data. We evaluated both our heavy-tailed causal averaging and causal selection models on a simulation of this experiment. The two models make very different predictions. Causal averaging predicts that response variability will monotonically increase with increasing disparity, while selection predicts a less straightforward trend (Figure 4C). Both models predict a similar amount of response bias, and that it will decrease with increasing disparity (Figure 4D). This particular prediction is confirmed by the response bias in the behavioral data plot made available in [3]. Considering the paradigmatic differences between the two studies ([6] and [3]) and the wide range in bias, applying both inference methods and likelihood models to these data could be very informative. 
Adaptation of the prior: One interesting aspect of inference under this generative model is that as the value of ω = P(C = 1) increases, the variability also increases for both unity and non-unity trials across all disparities. However, the response bias remains unchanged. Given this correlation between response variability and the prior over hypotheses, our approach may be used to understand whether and how subjects’ priors change during the course of an experimental session. Considering that the best value across all trials for this prior is quite small (ω ∼0.2), we hypothesize that this value will be quite high at the start of an experiment, and gradually reduce. This hypothesis leads to a prediction that variability decreases during an experimental session. 5 Discussion In this paper, we ventured to understand the computational mechanisms underlying sensory cue interactions that give rise to a particular pattern of non-linear response behavior [6], using a mixture of two different models that could have generated the sensory data. We proposed that the form of the sensory likelihood is a critical feature that drives non-linear behavior, especially at large stimulus disparities. In particular, a heavy-tailed likelihood function more accurately fits subjects’ bias and variance in a cue combination task. Heavy-tailed distributions have been used previously in modeling cue interactions [7; 8]. In this paper, we went further by comparing the ability of heavy-tailed and Gaussian likelihood models to describe behavior. Qualitative fits of summarised statistics such as bias and variance are insufficient to make any strong claims about human perceptual processes; nevertheless, this work provides some insight into the potential functional role of sensory noise. Another significant contribution in this paper is the critical evaluation of model selection versus averaging approaches to inference. 
These two inference methods may predict different variances in their estimates, as a function of stimulus conflict. As suggested in Section 4, having these different models at hand allows one to examine how task instructions affect subject behavior. We noted in Section 3.2 that the heavy-tailed model is more complex than the Gaussian model. Although we have not included any complexity penalty, this formulation was supported by two aspects: (i) it was relatively insensitive to parameter settings, providing a better fit to the data than the Gaussian model for a wide range of parameter values; (ii) optimizing the fit of the Gaussian model required implausible values for the parameters σa, σv (Figure 1B), whereas the parameters for the heavy-tailed model accorded well with published data. One downside of our results is that even though the model bias for unity trials captures the slightly increasing trend as disparity decreases, it is not as large as in the behavioral data (close to 100%) or as large as that predicted by the Gaussian models. This does not seem to be a consequence of the parameter values chosen. One interpretation provided by [6] of the large bias in the data is that a perceptual decision (unity or non-unity) determines a sensorimotor action (localization response). One response strategy might then be to ignore the posterior probability P(sa|xv, xa) once unity is judged and simply set ŝa = ŝv; although this results in a prediction of higher bias, the strategy is not Bayes-optimal. Yet another potential limitation of our approach is that the only form of noise we consider is sensory; we do not yet take into account any motor component that may drive target localization. Currently, we have access to only an estimate of the average variance in subjects’ auditory target location estimates. 
On the computational side, one interesting avenue for future work would be to evaluate the model averaging and selection hypotheses based on a likelihood model derived directly from the raw data. On the experimental side, one of the major inadequacies of most experimental paradigms is that the only (approximate) measure of a subject’s perceptual uncertainty involves measuring the response variability across a large number of trials. An alternative paradigm that allows measurement of the perceptual uncertainty on a single trial could provide important constraints on computational models of the perceptual phenomena. At the neural level, a key step entails exploring biologically plausible neural implementations of the mixture model approach.

Acknowledgments

The authors would like to thank the Natural Sciences and Engineering Research Council of Canada and the Canadian Institute For Advanced Research (RN and RZ), the government of Canada (IM), the UCLA Faculty Grants Program and UCLA Faculty Career Development (LS).

References

[1] I P Howard and W B Templeton. Human spatial orientation. Wiley, New York, 1966.
[2] Konrad P Körding and Joshua B Tenenbaum. Causal inference in sensorimotor integration. In NIPS, pages 737–744. MIT Press, 2006.
[3] Konrad P Körding, Ulrik Beierholm, Wei Ji Ma, Steven Quartz, Joshua B Tenenbaum, and Ladan Shams. Causal inference in multisensory perception. PLoS ONE, 2(9), 2007.
[4] Y Sato, T Toyoizumi, and K Aihara. Bayesian inference explains perception of unity and ventriloquism aftereffect. Neural Comp., 19:3335–55, 2007.
[5] Alan Stocker and Eero Simoncelli. A Bayesian model of conditioned perception. In NIPS 20, pages 1409–1416. MIT Press, Cambridge, MA, 2008.
[6] MT Wallace, GE Roberson, WE Hairston, BE Stein, JW Vaughan, and JA Schirillo. Unifying multisensory signals across time and space. Exp Brain Res., 158(2):252–8, 2004.
[7] David C Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):1–24, 2007.
[8] Alan A Stocker and Eero P Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nat. Neurosci., 9:578–585, 2006.
The Gaussian Process Density Sampler Ryan Prescott Adams∗ Cavendish Laboratory University of Cambridge Cambridge CB3 0HE, UK rpa23@cam.ac.uk Iain Murray Dept. of Computer Science University of Toronto Toronto, Ontario. M5S 3G4 murray@cs.toronto.edu David J.C. MacKay Cavendish Laboratory University of Cambridge Cambridge CB3 0HE, UK mackay@mrao.cam.ac.uk Abstract We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task. 1 Introduction We present the Gaussian Process Density Sampler (GPDS), a generative model for probability density functions, based on a Gaussian process. We are able to draw exact and exchangeable data from a fixed density drawn from the prior. Given data, this generative prior allows us to perform inference of the unnormalized density. We perform this inference by expressing the generative process in terms of a latent history, then constructing a Markov chain Monte Carlo algorithm on that latent history. 
The central idea of the GPDS is to allow nonparametric Bayesian density estimation where the prior is specified via a Gaussian process covariance function that encodes the intuition that "similar data should have similar probabilities." One way to perform Bayesian nonparametric density estimation is to use a Dirichlet process to define a distribution over the weights of the components in an infinite mixture model, using a simple parametric form for each component. Alternatively, Neal [1] generalizes the Dirichlet process itself, introducing a spatial component to achieve an exchangeable prior on discrete or continuous density functions with hierarchical characteristics. Another way to define a nonparametric density is to transform a simple latent distribution through a nonlinear map, as in the Density Network [2] and the Gaussian Process Latent Variable Model [3]. Here we use the Gaussian process to define a prior on the density function itself.

2 The prior on densities

We consider densities on an input space X that we will call the data space. In this paper, we assume without loss of generality that X is the d-dimensional real space $\mathbb{R}^d$. We first construct a Gaussian process prior with the data space X as its input and the one-dimensional real space $\mathbb{R}$ as its output. The Gaussian process defines a distribution over functions from X to $\mathbb{R}$. We define a mean function $m(\cdot): \mathcal{X} \to \mathbb{R}$ and a positive definite covariance function $K(\cdot,\cdot): \mathcal{X} \times \mathcal{X} \to \mathbb{R}$.* We assume that these functions are together parameterized by a set of hyperparameters θ. Given these two functions and their hyperparameters, for any finite subset of X with cardinality N there is a multivariate Gaussian distribution on $\mathbb{R}^N$ [4]. We will take the mean function to be zero.

*http://www.inference.phy.cam.ac.uk/rpa23/

Figure 1: Four samples from the GPDS prior are shown, with 200 data samples. The contour lines show the approximate unnormalized densities. In each case the base measure is the zero-mean spherical Gaussian with unit variance. The covariance function was the squared exponential, $K(x, x') = \alpha \exp\big(-\tfrac{1}{2}\sum_i \ell_i^{-2}(x_i - x'_i)^2\big)$, with parameters varied as labeled in each subplot: (a) $\ell_x = 1$, $\ell_y = 1$, $\alpha = 1$; (b) $\ell_x = 1$, $\ell_y = 1$, $\alpha = 10$; (c) $\ell_x = 0.2$, $\ell_y = 0.2$, $\alpha = 5$; (d) $\ell_x = 0.1$, $\ell_y = 2$, $\alpha = 5$. $\Phi(\cdot)$ is the logistic function in these plots.

Probability density functions must be everywhere nonnegative and must integrate to unity. We define a map from a function $g(x): \mathcal{X} \to \mathbb{R}$, $x \in \mathcal{X}$, to a proper density f(x) via

$$f(x) = \frac{1}{Z_\pi[g]}\,\Phi(g(x))\,\pi(x) \quad (1)$$

where π(x) is an arbitrary base probability measure on X, and $\Phi(\cdot): \mathbb{R} \to (0, 1)$ is a nonnegative function with upper bound 1. We take $\Phi(\cdot)$ to be a sigmoid, e.g. the logistic function or the cumulative normal distribution function. We use the bold notation g to refer to the function g(x) compactly as a vector of (infinite) length, versus its value at a particular x. The normalization constant is a functional of g(x):

$$Z_\pi[g] = \int dx'\, \Phi(g(x'))\,\pi(x'). \quad (2)$$

Through the map defined by Equation 1, a Gaussian process prior becomes a prior distribution over normalized probability density functions on X. Figure 1 shows several sample densities from this prior, along with sample data.

3 Generating exact samples from the prior

We can use rejection sampling to generate samples from a common density drawn from the prior described in Section 2. A rejection sampler requires a proposal density that provides an upper bound for the unnormalized density of interest. In this case, the proposal density is π(x) and the unnormalized density of interest is Φ(g(x))π(x). If g(x) were known, rejection sampling would proceed as follows: first generate proposals $\{\tilde{x}_q\}$ from the base measure π(x). The proposal $\tilde{x}_q$ would be accepted if a variate $r_q$ drawn uniformly from (0, 1) was less than $\Phi(g(\tilde{x}_q))$.
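To make the map of Equations 1 and 2 concrete, here is a minimal numpy sketch (ours, not the authors' code) that draws one g from a GP with a unit squared-exponential covariance on a 1-D grid, squashes it through the logistic Φ, multiplies by a standard normal base measure π, and normalizes numerically in place of the integral $Z_\pi[g]$:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200
x = np.linspace(-3.0, 3.0, n)

# Squared-exponential covariance on the grid (amplitude 1, lengthscale 1).
d2 = (x[:, None] - x[None, :]) ** 2
K = np.exp(-0.5 * d2) + 1e-8 * np.eye(n)      # jitter for numerical stability

g = rng.multivariate_normal(np.zeros(n), K)   # one draw g ~ GP(0, K)

phi = 1.0 / (1.0 + np.exp(-g))                # logistic squashing Phi(g)
base = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)  # standard normal base pi(x)

unnorm = phi * base                           # Phi(g(x)) * pi(x), Equation 1's numerator
dx = x[1] - x[0]
f = unnorm / (unnorm.sum() * dx)              # grid stand-in for Z_pi[g] (Equation 2)
```

Each fresh draw of g yields a different random density f, which is exactly what "a prior over densities" means here.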
These samples would be exact in the sense that they were not biased by the starting state of a finite Markov chain. However, in the GPDS, g(x) is not known: it is a random function drawn from a Gaussian process prior. We can nevertheless use rejection sampling by "discovering" g(x) as we proceed, at just the places we need to know it, by sampling from the prior distribution of the latent function. As it is necessary only to know g(x) at the $\{\tilde{x}_q\}$ to accept or reject these proposals, the samples are still exact. This retrospective sampling trick has been used in a variety of other MCMC algorithms for infinite-dimensional models [5, 6]. The generative procedure is shown graphically in Figure 2. In practice, we generate the samples sequentially, as in Algorithm 1, so that we may be assured of having as many accepted samples as we require. In each loop, a proposal is drawn from the base measure π(x) and the function g(x) is sampled from the Gaussian process at this proposed coordinate, conditional on all the function values already sampled. We will call these data the conditioning set for the function g(x), and will denote the conditioning inputs X and the conditioning function values G.

Figure 2: These figures show the procedure for generating samples from a single density drawn from the GP-based prior. (a): Draw Q samples $\{\tilde{x}_q\}$ from the base measure π(x), which in this case is uniform on [0, 1]. (b): Sample the function g(x) at the randomly chosen locations, generating the set $\{\tilde{g}_q = g(\tilde{x}_q)\}$. The squashed function Φ(g(x)) is shown. (c): Draw a set of variates $\{r_q\}$ uniformly beneath the bound in the vertical coordinate. (d): Accept only the points whose uniform draws are beneath the squashed function value, i.e. $r_q < \Phi(\tilde{g}_q)$. (e): The accepted points $(\tilde{x}_q, r_q)$ are uniformly drawn from the shaded area beneath the curve, and the marginal distribution of the accepted $\tilde{x}_q$ is proportional to Φ(g(x))π(x).
After the function is sampled, a uniform variate is drawn from beneath the bound and compared to the Φ-squashed function at the proposal location. The sequential procedure is exchangeable, which means that the probability of the data is identical under reordering. First, the base measure draws are i.i.d. Second, conditioned on the proposals from the base measure, the Gaussian process is a simple multivariate Gaussian distribution, which is exchangeable in its components. Finally, conditioned on the draw from the Gaussian process, the acceptance/rejection steps are independent Bernoulli samples, and the overall procedure is exchangeable. This property is important because it ensures that the sequential procedure generates data from the same distribution as the simultaneous procedure described above. More broadly, exchangeable priors are useful in Bayesian modeling because we may consider the data conditionally independent, given the latent density.

Algorithm 1: Generate P exact samples from the prior
Purpose: Draw P exact samples from a common density on X drawn from the prior in Equation 1
Inputs: GP hyperparameters θ, number of samples to generate P
1: Initialize empty conditioning sets for the Gaussian process: X = ∅ and G = ∅
2: repeat
3:   Draw a proposal from the base measure: x̃ ∼ π(x)
4:   Sample the function from the Gaussian process at x̃: g̃ ∼ GP(g | X, G, x̃, θ)
5:   Draw a uniform variate on [0, 1]: r ∼ U(0, 1)
6:   if r < Φ(g̃) (acceptance rule) then
7:     Accept x̃
8:   else
9:     Reject x̃
10:  end if
11:  Add x̃ and g̃ to the conditioning sets: X = X ∪ x̃ and G = G ∪ g̃
12: until P samples have been accepted

4 Inference

We have N data $D = \{x_n\}_{n=1}^N$ which we model as having been drawn independently from an unknown density f(x). We use the GPDS prior from Section 2 to specify our beliefs about f(x), and we wish to generate samples from the posterior distribution over the latent function g(x) corresponding to the unknown density.
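Algorithm 1 can be sketched directly in numpy (a sketch of ours, not the paper's code; the squared-exponential kernel and Gaussian base measure are illustrative choices). The key point is that the GP conditional at each proposal is computed against the growing conditioning sets:

```python
import numpy as np

def se_kern(A, B, ls=1.0):
    # Squared-exponential covariance between two sets of points.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gpds_prior_samples(P, base_sampler, rng, jitter=1e-8):
    """Algorithm 1: draw P exact samples from one density drawn from the
    GPDS prior, discovering the latent g lazily at each proposal."""
    X, G, accepted = [], [], []
    while len(accepted) < P:
        x = base_sampler(rng)                         # step 3: proposal from pi(x)
        xa = np.array([x])
        if X:                                         # step 4: g(x) | conditioning sets
            Xa = np.array(X)
            K = se_kern(Xa, Xa) + jitter * np.eye(len(X))
            k = se_kern(Xa, xa)[:, 0]
            sol = np.linalg.solve(K, k)
            mu = sol @ np.array(G)                    # conditional mean
            var = max(se_kern(xa, xa)[0, 0] - k @ sol, 0.0)  # conditional variance
        else:
            mu, var = 0.0, se_kern(xa, xa)[0, 0]
        g = mu + np.sqrt(var) * rng.standard_normal()
        if rng.uniform() < 1.0 / (1.0 + np.exp(-g)):  # steps 5-6: logistic Phi
            accepted.append(x)                        # step 7
        X.append(x)                                   # step 11: grow conditioning sets
        G.append(g)
    return np.array(accepted)

rng = np.random.default_rng(1)
samples = gpds_prior_samples(25, lambda r: r.standard_normal(2), rng)
```

Because every proposal, accepted or rejected, joins the conditioning sets, later draws of g stay consistent with earlier ones, which is what makes the resulting samples exact.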
We may also wish to generate samples from the predictive distribution or perform hierarchical inference of the prior hyperparameters. By using the GPDS prior to model the data, we are asserting that the data can be explained as the result of the procedure described in Section 3. We do not, however, know what rejections were made en route to accepting the observed data. These rejections are critical to defining the latent function g(x). One might think of defining a density as analogous to putting up a tent: pinning the canvas down with pegs is just as important as putting up poles. In density modeling, defining regions with little probability mass is just as important as defining the areas with significant mass. Although the rejections are not known, the generative procedure provides a probabilistic model that allows us to traverse the posterior distribution over possible latent histories that resulted in the data. If we define a Markov chain whose equilibrium distribution is the posterior distribution over latent histories, then we may simulate plausible explanations of every step taken to arrive at the data. Such samples capture all the information available about the unknown density, and with them we may ask additional questions about g(x) or run the generative procedure further to draw predictive samples. This approach is related to that described by Murray [7], who performed inference on an exactly-coalesced Markov chain [8], and by Beskos et al. [5]. We model the data as having been generated exactly as in Algorithm 1, with P = N, i.e. run until exactly N proposals were accepted. The state space of the Markov chain on latent histories in the GPDS consists of: 1) the values of the latent function g(x) at the data, denoted $G_N = \{g_n\}_{n=1}^N$; 2) the number of rejections M; 3) the locations of the M rejected proposals, denoted $\mathcal{M} = \{x_m\}_{m=1}^M$; and 4) the values of the latent function g(x) at the M rejected proposals, denoted $G_M = \{g_m = g(x_m)\}_{m=1}^M$.
We will address hyperparameter inference in Section 4.3. We perform Gibbs-like sampling of the latent history by alternating between modification of the number of rejections M and block updating of the rejection locations $\mathcal{M}$ and latent function values $G_M$ and $G_N$. We will maintain an explicit ordering of the latent rejections for reasons of clarity, although this is not necessary due to exchangeability. We will also assume that Φ(·) is the logistic function, i.e. $\Phi(z) = (1 + \exp\{-z\})^{-1}$.

4.1 Modifying the number of latent rejections

We propose a new number of latent rejections $\hat{M}$ by drawing it from a proposal distribution $q(\hat{M} \leftarrow M)$. If $\hat{M}$ is greater than M, we must also propose new rejections to add to the latent state. We take advantage of the exchangeability of the process to generate the new rejections: we imagine these proposals were made after the last observed datum was accepted, and our proposal is to call them rejections and move them before the last datum. If $\hat{M}$ is less than M, we do the opposite by proposing to move some rejections to after the last acceptance. When proposing additional rejections, we must also propose times for them among the current latent history. There are $\binom{\hat{M}+N-1}{\hat{M}-M}$ such ways to insert these additional rejections into the existing latent history, such that the sampler terminates after the Nth acceptance. When removing rejections, we must choose which ones to place after the data, and there are $\binom{M}{M-\hat{M}}$ possible sets. Upon simplification, the proposal ratios for both addition and removal of rejections are identical:

$$\underbrace{\frac{q(M \leftarrow \hat{M})\,\binom{\hat{M}+N-1}{\hat{M}-M}}{q(\hat{M} \leftarrow M)\,\binom{\hat{M}}{\hat{M}-M}}}_{\hat{M} > M} \;=\; \underbrace{\frac{q(M \leftarrow \hat{M})\,\binom{M}{M-\hat{M}}}{q(\hat{M} \leftarrow M)\,\binom{M+N-1}{M-\hat{M}}}}_{\hat{M} < M} \;=\; \frac{q(M \leftarrow \hat{M})\,M!\,(\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\,\hat{M}!\,(M+N-1)!}.$$

When inserting rejections, we propose the locations of the additional proposals, denoted $\mathcal{M}^+$, and the corresponding values of the latent function, denoted $G_M^+$. We generate $\mathcal{M}^+$ by making $\hat{M} - M$ independent draws from the base measure.
We draw $G_M^+$ jointly from the Gaussian process prior, conditioned on all of the current latent state, i.e. $(\mathcal{M}, G_M, D, G_N)$. The joint probability of this state is

$$p(D, \mathcal{M}, \mathcal{M}^+, G_N, G_M, G_M^+) = \left[\prod_{n=1}^{N} \pi(x_n)\,\Phi(g_n)\right] \left[\prod_{m=1}^{M} \pi(x_m)\,(1-\Phi(g_m))\right] \left[\prod_{m=M+1}^{\hat{M}} \pi(x_m)\right] \times \mathrm{GP}(G_M, G_N, G_M^+ \mid D, \mathcal{M}, \mathcal{M}^+). \quad (3)$$

The joint in Equation 3 expresses the probability of all the base measure draws, the values of the function draws from the Gaussian process, and the acceptance or rejection probabilities of the proposals, excluding the newly generated points. When we make an insertion proposal, exchangeability allows us to shuffle the ordering without changing the probability; the only change is that now we must account for labeling the new points as rejections. In the acceptance ratio, all terms except for the "labeling probability" cancel. The reverse proposal is similar; however, we denote the removed proposal locations as $\mathcal{M}^-$ and the corresponding function values as $G_M^-$. The overall acceptance ratios for insertions or removals are

$$a = \begin{cases} \dfrac{q(M \leftarrow \hat{M})\,M!\,(\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\,\hat{M}!\,(M+N-1)!} \displaystyle\prod_{g \in G_M^+} (1-\Phi(g)) & \text{if } \hat{M} > M \\[2ex] \dfrac{q(M \leftarrow \hat{M})\,M!\,(\hat{M}+N-1)!}{q(\hat{M} \leftarrow M)\,\hat{M}!\,(M+N-1)!} \displaystyle\prod_{g \in G_M^-} (1-\Phi(g))^{-1} & \text{if } \hat{M} < M. \end{cases} \quad (4)$$

4.2 Modifying rejection locations and function values

Given the number of latent rejections M, we propose modifying their locations $\mathcal{M}$, their latent function values $G_M$, and the values of the latent function at the data, $G_N$. We will denote these proposals as $\hat{\mathcal{M}} = \{\hat{x}_m\}_{m=1}^M$, $\hat{G}_M = \{\hat{g}_m = \hat{g}(\hat{x}_m)\}_{m=1}^M$ and $\hat{G}_N = \{\hat{g}_n = \hat{g}(x_n)\}_{n=1}^N$, respectively. We make simple perturbative proposals of $\mathcal{M}$ via a proposal density $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$. For the latent function values, however, perturbative proposals will be poor, as the Gaussian process typically defines a narrow mass. To avoid this, we propose modifications to the latent function that leave the prior invariant. We make joint proposals of $\hat{\mathcal{M}}$, $\hat{G}_M$ and $\hat{G}_N$ in three steps. First, we draw new rejection locations from $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$.
Second, we draw a set of M intermediate function values from the Gaussian process at $\hat{\mathcal{M}}$, conditioned on the current rejection locations and their function values, as well as the function values at the data. Third, we propose new function values at $\hat{\mathcal{M}}$ and the data D via an underrelaxation proposal of the form

$$\hat{g}(x) = \alpha\, g(x) + \sqrt{1-\alpha^2}\; h(x)$$

where h(x) is a sample from the Gaussian process prior and α is in [0, 1). This is a variant of the overrelaxed MCMC method discussed by Neal [9]. This procedure leaves the Gaussian process prior invariant, but makes conservative proposals if α is near one. After making a proposal, we accept or reject via the ratio of the joint distributions:

$$a = \frac{q(\mathcal{M} \leftarrow \hat{\mathcal{M}}) \left[\prod_{m=1}^{M} \pi(\hat{x}_m)(1-\Phi(\hat{g}_m))\right] \left[\prod_{n=1}^{N} \Phi(\hat{g}_n)\right]}{q(\hat{\mathcal{M}} \leftarrow \mathcal{M}) \left[\prod_{m=1}^{M} \pi(x_m)(1-\Phi(g_m))\right] \left[\prod_{n=1}^{N} \Phi(g_n)\right]}.$$

4.3 Hyperparameter inference

Given a sample from the posterior on the latent history, we can also perform a Metropolis–Hastings step in the space of hyperparameters. Parameters θ governing the covariance function and mean function of the Gaussian process provide common examples of hyperparameters, but we might also introduce parameters φ that control the behavior of the base measure π(x). We denote the proposal distributions for these parameters as $q(\hat{\theta} \leftarrow \theta)$ and $q(\hat{\phi} \leftarrow \phi)$, respectively. With priors p(θ) and p(φ), the acceptance ratio for a Metropolis–Hastings step is

$$a = \frac{q(\theta \leftarrow \hat{\theta})\, q(\phi \leftarrow \hat{\phi})\, p(\hat{\theta})\, p(\hat{\phi})\, \mathcal{N}(\{G_M, G_N\} \mid \mathcal{M}, D, \hat{\theta})}{q(\hat{\theta} \leftarrow \theta)\, q(\hat{\phi} \leftarrow \phi)\, p(\theta)\, p(\phi)\, \mathcal{N}(\{G_M, G_N\} \mid \mathcal{M}, D, \theta)} \left[\prod_{m=1}^{M} \frac{\pi(x_m \mid \hat{\phi})}{\pi(x_m \mid \phi)}\right] \left[\prod_{n=1}^{N} \frac{\pi(x_n \mid \hat{\phi})}{\pi(x_n \mid \phi)}\right].$$

4.4 Prediction

The predictive distribution is the one that arises on the space X when the posterior on the latent function g(x) (and perhaps hyperparameters) is integrated out. It is the expected distribution of the next datum, given the ones we have seen and taking into account our uncertainty.
In the GPDS we sample from the predictive distribution by running the generative process of Section 3, initialized to the current latent history sample from the Metropolis–Hastings procedure described above. It may also be desirable to estimate the actual value of the predictive density. We use the method of Chib and Jeliazkov [10], and observe by detailed balance of a Metropolis–Hastings move:

$$p(x \mid g, \theta, \phi)\,\pi(x')\,\min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\} = p(x' \mid g, \theta, \phi)\,\pi(x)\,\min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}.$$

Figure 3: These figures show the sequence of proposing new rejection locations, new function values at those locations, and new function values at the data. (a): The current state, with rejections labeled $\mathcal{M} = \{x_m\}$ on the left, along with the values of the latent function $G_M = \{g_m\}$. On the right side are the data $D = \{x_n\}$ and the corresponding values of the latent function $G_N = \{g_n\}$. (b): New rejections $\hat{\mathcal{M}} = \{\hat{x}_m\}$ are proposed via $q(\hat{\mathcal{M}} \leftarrow \mathcal{M})$, and the latent function is sampled at these points. (c): The latent function is perturbed at the new rejection locations and at the data via an underrelaxed proposal.

We find the expectation of each side under the posterior of g and the hyperparameters θ and φ:

$$\int d\theta \int d\phi\, p(\theta, \phi \mid D) \int dg\, p(g \mid \theta, D) \int dx'\, p(x \mid g, \theta, \phi)\,\pi(x')\,\min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\}$$
$$= \int d\theta \int d\phi\, p(\theta, \phi \mid D) \int dg\, p(g \mid \theta, D) \int dx'\, p(x' \mid g, \theta, \phi)\,\pi(x)\,\min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}.$$

This gives an expression for the predictive density:

$$p(x \mid D) = \frac{\int d\theta \int d\phi \int dg \int dx'\; p(\theta, \phi, g, x' \mid D)\,\pi(x)\,\min\left\{1, \frac{\Phi(g(x))}{\Phi(g(x'))}\right\}}{\int d\theta \int d\phi \int dg \int dx'\; p(\theta, \phi, g \mid x, D)\,\pi(x')\,\min\left\{1, \frac{\Phi(g(x'))}{\Phi(g(x))}\right\}} \quad (5)$$

Both the numerator and the denominator in Equation 5 are expectations that can be estimated by averaging over the output from the GPDS Metropolis–Hastings sampler. The denominator requires sampling from the posterior distribution with the data augmented by x.
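The correctness of the underrelaxed update of Section 4.2 rests on a simple Gaussian fact: a combination $\hat{g} = \alpha g + \sqrt{1-\alpha^2}\,h$ of two independent N(0, K) draws has covariance $\alpha^2 K + (1-\alpha^2)K = K$, so the prior is preserved. A quick numerical sketch of ours (the two-input covariance below is an arbitrary stand-in for a GP covariance matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

K = np.array([[1.0, 0.6],
              [0.6, 1.0]])              # stand-in GP prior covariance at two inputs
L = np.linalg.cholesky(K)
alpha = 0.9                             # near one => conservative proposals

g = L @ rng.standard_normal((2, 100000))         # current states, g ~ N(0, K)
h = L @ rng.standard_normal((2, 100000))         # fresh, independent prior draws
g_hat = alpha * g + np.sqrt(1.0 - alpha**2) * h  # underrelaxed proposal

# Invariance: the empirical covariance of g_hat matches K up to Monte Carlo error.
emp_cov = np.cov(g_hat)
```

Here the proposal moves only a distance controlled by 1 − α, yet stationarity under the prior holds for every α in [0, 1).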
5 Results

We examined the GPDS prior and the latent history inference procedure on a toy data set and on a skull reconstruction task. We compared the approach described in this paper to a kernel density estimate (Parzen windows), an infinite mixture of Gaussians (iMoG), and Dirichlet diffusion trees (DFT). The kernel density estimator used a spherical Gaussian with the bandwidth set via ten-fold cross validation. Neal's Flexible Bayesian Modeling (FBM) Software [1] was used for the implementation of both iMoG and DFT. The toy data problem consisted of 100 uniform draws from a two-dimensional ring with radius 1.5, with zero-mean Gaussian noise added with σ = 0.2. The test data were 50 additional samples, and comparison used the mean log probability of the test set. Each of the three Bayesian methods improved on the Parzen window estimate by two or more nats, with the DFT approach being the most successful. A bar plot of these results is shown in Figure 5. We also compared the methods on a real-data task. We modeled the joint density of ten measurements of linear distances between anatomical landmarks on 228 rhesus macaque (Macaca mulatta) skulls. These linear distances were generated from three-dimensional coordinate data of anatomical landmarks taken by a single observer from dried skulls using a digitizer [11]. Linear distances are commonly used in morphological studies as they are invariant under rotation and translation of the objects being compared [12]. Figure 4 shows a computed tomography (CT) scan reconstruction of a macaque skull, along with the ten linear distances used. Each skull was measured three times in different trials, and these were modeled separately. 200 randomly-selected skulls were used as a training set and 28 were used as a test set. To be as fair as possible, the data was logarithmically transformed and whitened as a preprocessing step, to have zero sample mean and spherical sample covariance.
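The toy problem and the Parzen baseline are easy to reproduce in outline. The sketch below is ours, not the authors' setup: it generates the noisy ring data, builds a spherical-Gaussian Parzen estimate in log space, and picks the bandwidth on a single held-out split as a stand-in for the paper's ten-fold cross validation:

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_data(n, radius=1.5, sigma=0.2):
    # Uniform draws from a ring of the given radius plus isotropic Gaussian noise.
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    ring = radius * np.column_stack([np.cos(t), np.sin(t)])
    return ring + sigma * rng.standard_normal((n, 2))

def parzen_log_density(train, test, h):
    # Spherical-Gaussian Parzen window estimate, evaluated in log space.
    d2 = ((test[:, None, :] - train[None, :, :]) ** 2).sum(-1)
    log_k = -0.5 * d2 / h**2 - np.log(2.0 * np.pi * h**2)  # 2-D Gaussian kernel
    return np.logaddexp.reduce(log_k, axis=1) - np.log(len(train))

train, test = ring_data(100), ring_data(50)

# Bandwidth selection on a held-out split of the training set.
fit, val = train[:80], train[80:]
bandwidths = np.linspace(0.05, 0.5, 10)
h = max(bandwidths, key=lambda b: parzen_log_density(fit, val, b).mean())
mean_lp = parzen_log_density(train, test, h).mean()  # mean log prob of the test set
```

The resulting mean log probability is the baseline quantity against which the Bayesian methods are scored in nats.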
Each of the Bayesian approaches outperformed the Parzen window technique in mean log probability of the test set, with comparable results for each. This result is not surprising, as flexible nonparametric Bayesian models should have roughly similar expressive capabilities. These results are shown in Figure 5.

Figure 4: The macaque skull data are linear distances calculated between three-dimensional coordinates of anatomical landmarks. These are superior and inferior views of a computed tomography (CT) scan of a male macaque skull, with the ten linear distances superimposed. The anatomical landmarks are based on biological relevance and repeatability across individuals.

Figure 5: This bar plot shows the improvement of the GPDS, infinite mixture of Gaussians (iMoG), and Dirichlet diffusion trees (DFT) in mean log probability (base e) of the test set over cross-validated Parzen windows on the toy ring data and the macaque data. The baseline log probability of the Parzen method for the ring data was −2.253, and for the macaque data it was −15.443, −15.742, and −15.254 for each of the three trials.

6 Discussion

Valid MCMC algorithms for fully Bayesian kernel regression methods are well-established. This work introduces the first such prior that enables tractable density estimation, complementing alternatives such as Dirichlet diffusion trees [1] and infinite mixture models. Although the GPDS has similar motivation to the logistic Gaussian process [13, 14, 15, 16], it differs significantly in its applicability and practicality. All known treatments of the logistic GP require a finite-dimensional proxy distribution. This proxy distribution is necessary both for tractability of inference and for estimation of the normalization constant.
Due to the complexity constraints of both the basis-function approach of Lenk [15] and the lattice-based approach of [16], these have only been implemented on single-dimensional toy problems. The GPDS construction we have presented here not only avoids numerical estimation of the normalization constant, but allows infinite-dimensional inference both in theory and in practice.

6.1 Computational complexity

The inference method for the GPDS prior is "practical" in the sense that it can be implemented without approximations, but it has potentially steep computational costs. To compare two latent histories in a Metropolis–Hastings step we must evaluate the marginal likelihood of the Gaussian process. This requires a matrix decomposition whose cost is $O((N+M)^3)$. The model explicitly allows M to be any nonnegative integer, and so this cost is unbounded. The expected cost of an M–H step is determined by the expected number of rejections M. For a given g(x), the expected M is $N(Z_\pi[g]^{-1} - 1)$. This expression is derived from the observation that π(x) provides an upper bound on the function Φ(g(x))π(x), and the ratio of acceptances to rejections is determined by the proportion of the mass of π(x) contained by Φ(g(x))π(x). We are optimistic that more sophisticated Markov chain Monte Carlo techniques may realize constant-factor performance gains over the basic Metropolis–Hastings scheme presented here, without compromising the correctness of the equilibrium distribution. Sparse approaches to Gaussian process regression that improve the asymptotically cubic behavior may also be relevant to the GPDS, but it is unclear that these will be an improvement over other approximate GP-based schemes for density modeling.

6.2 Alternative inference methods

In developing inference methods for the GPDS prior, we have also explored the use of exchange sampling [17, 7].
Exchange sampling is an MCMC technique explicitly developed for the situation where there is an intractable normalization constant that prevents exact likelihood evaluation, but exact samples may be generated for any particular parameter setting. Undirected graphical models such as the Ising and Potts models provide common examples of cases where exchange sampling is applicable via coupling from the past [8]. Using the exact sampling procedure of Section 3, it is applicable to the GPDS as well. Exchange sampling for the GPDS, however, requires more evaluations of the function g(x) than the latent history approach. In practice the latent history approach of Section 4 does perform better.

Acknowledgements

The authors wish to thank Radford Neal and Zoubin Ghahramani for valuable comments. Ryan Adams' research is supported by the Gates Cambridge Trust. Iain Murray's research is supported by the government of Canada. The authors thank the Caribbean Primate Research Center, the University of Puerto Rico, Medical Sciences Campus, Laboratory of Primate Morphology and Genetics, and the National Institutes of Health (Grant RR03640 to CPRC) for support.

References
[1] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0104, Department of Statistics, University of Toronto, 2001.
[2] D. J. C. MacKay. Bayesian neural networks and density networks. Nuclear Instruments and Methods in Physics Research, Section A, 354(1):73–80, 1995.
[3] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, 2005.
[4] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[5] A. Beskos, O. Papaspiliopoulos, G. O. Roberts, and P. Fearnhead. Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). Journal of the Royal Statistical Society: Series B, 68:333–382, 2006.
[6] O. Papaspiliopoulos and G. O. Roberts. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95(1):169–186, 2008.
[7] I. Murray. Advances in Markov chain Monte Carlo methods. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, London, 2007.
[8] J. G. Propp and D. B. Wilson. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures and Algorithms, 9(1&2):223–252, 1996.
[9] R. M. Neal. Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation, 1998.
[10] S. Chib and I. Jeliazkov. Marginal likelihood from the Metropolis–Hastings output. Journal of the American Statistical Association, 96(453):270–281, 2001.
[11] K. E. Willmore, C. P. Klingenberg, and B. Hallgrimsson. The relationship between fluctuating asymmetry and environmental variance in rhesus macaque skulls. Evolution, 59(4):898–909, 2005.
[12] S. R. Lele and J. T. Richtsmeier. An invariant approach to statistical analysis of shapes. Chapman and Hall/CRC Press, London, 2001.
[13] T. Leonard. Density estimation, stochastic processes and prior information. Journal of the Royal Statistical Society, Series B, 40(2):113–146, 1978.
[14] D. Thorburn. A Bayesian approach to density estimation. Biometrika, 73(1):65–75, 1986.
[15] P. J. Lenk. Towards a practicable Bayesian nonparametric density estimator. Biometrika, 78(3):531–543, 1991.
[16] S. T. Tokdar and J. K. Ghosh. Posterior consistency of logistic Gaussian process priors in density estimation. Journal of Statistical Planning and Inference, 137:34–42, 2007.
[17] I. Murray, Z. Ghahramani, and D. J. C. MacKay. MCMC for doubly-intractable distributions. In Proceedings of the 22nd Annual Conference on Uncertainty in Artificial Intelligence (UAI), pages 359–366, 2006.
2008
66
3,555
Cyclizing Clusters via Zeta Function of a Graph

Deli Zhao and Xiaoou Tang (Department of Information Engineering, Chinese University of Hong Kong, Hong Kong, China, {dlzhao,xtang}@ie.cuhk.edu.hk)

Abstract

Detecting underlying clusters from large-scale data plays a central role in machine learning research. In this paper, we tackle the problem of clustering complex data of multiple distributions and multiple scales. To this end, we develop an algorithm named Zeta l-links (Zell) which consists of two parts: Zeta merging with a similarity graph, and an initial set of small clusters derived from local l-links of samples. More specifically, we propose to structurize a cluster using cycles in the associated subgraph. A new mathematical tool, the Zeta function of a graph, is introduced for the integration of all cycles, leading to a structural descriptor of a cluster in determinantal form. The popularity character of a cluster is conceptualized as the global fusion of variations of such a structural descriptor by means of the leave-one-out strategy in the cluster. Zeta merging proceeds, in the hierarchical agglomerative fashion, according to the maximum incremental popularity among all pairwise clusters. Experiments on toy data clustering, imagery pattern clustering, and image segmentation show the competitive performance of Zell. An accuracy of 98.1%, in the sense of normalized mutual information (NMI), is obtained on the FRGC face data of 16,028 samples and 466 facial clusters.

1 Introduction

Pattern clustering is a classic topic in pattern recognition and machine learning. In general, algorithms for clustering fall into two categories: partitional clustering and hierarchical clustering. Hierarchical clustering proceeds by merging small clusters (agglomerative) or dividing large clusters into small ones (divisive). The key point of agglomerative merging is the measurement of structural affinity between clusters.
This paper is devoted to handling the problem of data clustering via hierarchical agglomerative merging.

1.1 Related work

The representative algorithms for partitional clustering are the traditional K-means and the latest Affinity Propagation (AP) [1]. It is known that K-means is sensitive to the selection of the initial K centroids. The AP algorithm addresses this issue as follows: each sample is initially viewed as an exemplar, and then exemplar-to-member and member-to-exemplar messages are competitively transmitted among all samples until a group of good exemplars and their corresponding clusters emerges. Besides its superiority in finding good clusters, AP exhibits a surprising ability to handle large-scale data. However, AP is computationally expensive when the number of clusters is set in advance. Both K-means and AP encounter difficulty on data mixed from multiple manifolds. The classic algorithms for agglomerative clustering include three kinds of linkage algorithms: the single, complete, and average linkages. Linkages are free from restrictions on data distributions, but are quite sensitive to local noisy links. A novel agglomerative clustering algorithm was recently developed by Ma et al. [2] using the lossy coding theory of multivariate mixed data. The core of their algorithm is to characterize the structures of clusters by the variational coding length of coding two merged clusters jointly against coding them individually. The coding-length-based algorithm exhibits exceptional performance for clustering multivariate Gaussian data or subspace data. However, it is not suitable for manifold-valued data. Spectral clustering algorithms are another group of popular algorithms developed in recent years.

Figure 1: A small graph with four vertices and five edges can be decomposed into three cycles. The complexity of the graph can be characterized by the collective dynamics of these basic cycles.
The Normalized Cuts (Ncuts) algorithm [3] was developed for image segmentation and data clustering. Ng et al.'s algorithm [4] is mainly for data clustering, and Newman's work [5] is applied to community detection in complex networks. Spectral clustering can handle complex data of multiple distributions. However, it is sensitive to noise and to the variation of local data scales. In general, the following four factors pertaining to data are still problematic for most clustering algorithms: 1) mixtures of distributions such as multivariate Gaussians of different derivations, subspaces of different dimensions, or globally curved manifolds of different dimensions; 2) multiple scales; 3) global sampling densities; and 4) noise. To attack these problems, it is worthwhile to develop new approaches that are conceptually different from existing ones.

1.2 Our work

To address the issues of complex data clustering, we develop a new clustering approach called Zeta l-links, or Zell. The core of the algorithm is a new cluster descriptor that is essentially the integration of all cycles in the cluster by means of the Zeta function of the corresponding graph. The Zeta function leads to a rational form of the cyclic interactions of members in the cluster, where cycles are employed as primitive structures of clusters. With the cluster descriptor, the popularity of a cluster is quantified as the global fusion of variations of the structural descriptor by the leave-one-out strategy in the cluster. This definition of the popularity is expressible in terms of the diagonal of a matrix inverse. Structural inference between clusters may be performed with this popularity character. Based on this novel popularity character, we propose a clustering method, named Zeta merging, that proceeds in the hierarchical agglomerative fashion. This method makes no additional assumptions on data distributions and data scales.
As a subsidiary procedure for Zeta merging, we present a simple method, called l-links, to find the initial set of clusters given as input to Zeta merging. The Zell algorithm is the combination of Zeta merging and l-links. The construction of a directed graph is derived from l-links.

2 Cyclizing a cluster with the Zeta function

Our ideas are mainly inspired by recent progress in the study of the collective dynamics of complex networks. Experiments have validated that the stochastic states of a neuronal network are partially modulated by cyclically transmitted information [6], and that the proportion of cycles in a network is strongly related to its level of complexity [7]. Recent studies [8], [9] unveil that short cycles and Hamilton cycles in graphs play a critical role in the structural connectivity and community structure of a network. This progress inspires us to formalize the structural complexity of a cluster by means of the cyclic interactions of its members. As illustrated in Figure 1, the relationships between samples can be characterized by the combination of all cycles in the graph, so the structural complexity of the graph can be conveyed by the collective dynamics of these basic cycles. Therefore, we may characterize a cluster by the global combination of structural cycles in the associated graph. To do so, we need to model cycles of different lengths and combine them into a structural descriptor.

2.1 Modeling cycles of equal length

We model cycles using sum-product codes to capture the structure of a cluster. Formally, let C = {x_1, . . . , x_n} denote the set of sample vectors in a cluster. Suppose that W is the weighted adjacency matrix of the graph associated with C. Each vertex of the graph represents a member of C. For generality, the graph is assumed to be directed, meaning that W may be asymmetric. Let γ_ℓ = {p_1 → p_2 → · · · → p_{ℓ−1} → p_ℓ, p_ℓ → p_1} denote any cycle γ_ℓ of length ℓ defined on W.
We apply factorial codes to retrieve the structural information of the cycle γ_ℓ, defining ν_{γ_ℓ} = W_{p_ℓ→p_1} ∏_{k=1}^{ℓ−1} W_{p_k→p_{k+1}}, where W_{p_k→p_{k+1}} is the (p_k, p_{k+1}) entry of W. The value ν_{γ_ℓ} provides a degree measure of the interactions among the vertices on γ_ℓ. For the set K_ℓ of all cycles of length ℓ, the sum-product code ν_ℓ is written as:

ν_ℓ = Σ_{γ_ℓ∈K_ℓ} ν_{γ_ℓ} = Σ_{γ_ℓ∈K_ℓ} W_{p_ℓ→p_1} ∏_{k=1}^{ℓ−1} W_{p_k→p_{k+1}}.   (1)

The value ν_ℓ may be viewed as a quantified indication of the global interactions among C at the ℓ-cycle scale. The structural complexity of the graph is measured by these quantities over cycles of all lengths, i.e., {ν_1, . . . , ν_ℓ, . . . , ν_∞}. We then need to perform a functional integration of these individual measures; the Zeta function of a graph can play this role.

2.2 Integrating cycles using the Zeta function

Zeta functions are widely applied in pure mathematics as tools for performing statistics in number theory, computing algebraic invariants in algebraic geometry, and measuring complexity in dynamical systems. The forms of Zeta functions are diverse; the one we use here is defined as:

ζ_z = exp( Σ_{ℓ=1}^∞ ν_ℓ z^ℓ / ℓ ),   (2)

where z is a real-valued variable. Here ζ_z may be viewed as a functional organization of all the cycles in {K_1, . . . , K_ℓ, . . . , K_∞} in a global sense. What is interesting is that ζ_z admits a rational form [10], which makes the intractable manipulations arising in (1) tractable.

Theorem 1. ζ_z = 1/det(I − zW), where zρ(W) < 1 and ρ(W) denotes the spectral radius of the matrix W.

From Theorem 1, we see that the global interaction of the elements in C is quantified by a quite simple expression in determinantal form.

2.3 Modeling popularity

The popularity of a group of samples measures the degree to which the samples in the group are perceived to form a whole cluster. To model the popularity, we need to formalize the complexity descriptor of the cluster C.
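Theorem 1 lends itself to a quick numerical sanity check, under the assumption that ν_ℓ counts all closed walks of length ℓ so that ν_ℓ = tr(W^ℓ), the interpretation under which exp(Σ_ℓ tr(W^ℓ) z^ℓ/ℓ) = 1/det(I − zW) holds. A minimal sketch; the row-stochastic matrix below is an arbitrary example, not data from the paper:

```python
import numpy as np

# Hypothetical example: a small row-stochastic weight matrix (so rho(W) = 1),
# not data from the paper.
rng = np.random.default_rng(0)
W = rng.random((5, 5))
W /= W.sum(axis=1, keepdims=True)
z = 0.1  # must satisfy z * rho(W) < 1 for the series to converge

# Truncated series from (2): ln zeta_z = sum_l nu_l z^l / l with nu_l = tr(W^l)
series = sum(np.trace(np.linalg.matrix_power(W, l)) * z**l / l
             for l in range(1, 200))

# Closed form from Theorem 1: ln zeta_z = -ln det(I - zW)
closed = -np.log(np.linalg.det(np.eye(5) - z * W))

assert abs(series - closed) < 1e-10
```

The truncated series and the determinantal form agree to machine precision whenever zρ(W) < 1.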
With the cyclic integration ζ_z of the preceding section, the complexity of the cluster can be measured by the polynomial entropy ε_C in logarithmic form:

ε_C = ln ζ_z = Σ_{ℓ=1}^∞ ν_ℓ z^ℓ / ℓ = −ln det(I − zW).   (3)

The entropy ε_C will be employed to model the popularity of C. As analyzed at the beginning of Section 2, cycles are strongly associated with the structural communities of a network. To model the popularity, therefore, we investigate the variational information of cycles obtained by successively leaving one member of C out. More precisely, let χ_C denote the popularity character of C. Then χ_C is defined as the averaged sum of the reductive entropies:

χ_C = (1/n) Σ_{p=1}^n (ε_C − ε_{C\x_p}) = ε_C − (1/n) Σ_{p=1}^n ε_{C\x_p}.   (4)

Let ⊤ denote matrix transpose and let e_p be the p-th standard basis vector, whose p-th element is 1 and whose other elements are 0. We have the following theorem.

Theorem 2. χ_C = (1/n) ln ∏_{p=1}^n e_p^⊤ (I − zW)^{−1} e_p.

By an analysis of inequalities, one may show that χ_C is bounded as 0 < χ_C ≤ ε_C/n. The popularity measure χ_C is a structural character of C, which can be exploited to handle learning problems such as clustering, ranking, and classification. The computation of χ_C involves the inverse of (I − zW). In general, computing (I − zW)^{−1} has complexity O(n³). However, χ_C depends only on the diagonal of (I − zW)^{−1} rather than the full dense matrix. This property reduces the computation of χ_C to O(n^{1.5}) using a specialized algorithm for computing the diagonal of the inverse of a sparse matrix [11].

2.4 Structural affinity measurement

Given a set of initial clusters Cc = {C_1, . . . , C_m} and the adjacency matrix P of the corresponding samples, the affinities between clusters or data groups can be measured via the corresponding popularity character χ_C.
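The identity in Theorem 2 follows from Cramer's rule, since each diagonal entry of (I − zW)^{-1} equals det(I − zW_{C\x_p}) / det(I − zW). It can be checked numerically with a minimal sketch (the matrix W below is a hypothetical row-normalized similarity matrix, not from the paper):

```python
import numpy as np

def eps(W, z):
    # Entropy of (3): eps_C = -ln det(I - zW)
    return -np.log(np.linalg.det(np.eye(len(W)) - z * W))

def popularity(W, z):
    # Theorem 2: chi_C = (1/n) * sum_p ln [ (I - zW)^{-1} ]_{pp}
    M = np.linalg.inv(np.eye(len(W)) - z * W)
    return np.log(np.diag(M)).mean()

def popularity_loo(W, z):
    # Definition (4): chi_C = eps_C - (1/n) * sum_p eps_{C \ x_p}
    n = len(W)
    loo = [eps(np.delete(np.delete(W, p, 0), p, 1), z) for p in range(n)]
    return eps(W, z) - np.mean(loo)

# Hypothetical row-normalized similarity matrix
rng = np.random.default_rng(1)
W = rng.random((6, 6))
W /= W.sum(axis=1, keepdims=True)

assert np.isclose(popularity(W, 0.01), popularity_loo(W, 0.01))
```

The leave-one-out form of (4) and the mean log-diagonal of Theorem 2 coincide exactly, which is the property that makes χ_C computable from the diagonal of a single matrix inverse.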
Under our framework, an intuitive inference is that the two clusters sharing the largest reciprocal popularity have the most consistent structures, meaning that they are the most relevant from a structural point of view. Formally, for two given data groups C_i and C_j from Cc, the criterion of reciprocal popularity may be written as

δχ_{C_i∪C_j} = δχ_{C_i} + δχ_{C_j} = (χ_{C_i|C_i∪C_j} − χ_{C_i}) + (χ_{C_j|C_i∪C_j} − χ_{C_j}),   (5)

where the conditional popularity χ_{C_i|C_i∪C_j} is defined as χ_{C_i|C_i∪C_j} = (1/|C_i|) ln ∏_{x_p∈C_i} e_p^⊤ (I − zP_{C_i∪C_j})^{−1} e_p and P_{C_i∪C_j} is the submatrix of P corresponding to the samples in C_i and C_j. The incremental popularity δχ_{C_i} embodies the information gain of C_i after being merged with C_j. The larger the value of δχ_{C_i∪C_j}, the more likely the two data groups C_i and C_j are perceived to be one cluster. Therefore, δχ_{C_i∪C_j} may be exploited to measure the structural affinity between two groups of samples drawn from a whole set of samples.

3 Zeta merging

We now develop the clustering algorithm using the structural character χ_C. The automatic detection of the number of clusters is also taken into consideration.

3.1 Algorithm of Zeta merging

With the criterion of structural affinity in Section 2.4, it is straightforward to write the procedure for clustering in a hierarchical agglomerative way. The algorithm proceeds from the pair {C_i, C_j} that has the largest incremental popularity δχ_{C_i∪C_j}, i.e., {C_i, C_j} = arg max_{i,j} δχ_{C_i∪C_j}. We name the method Zeta merging; its procedure is provided in Algorithm 1. In general, Zeta merging proceeds smoothly if the damping factor z is bounded as 0 < z < 1/(2∥P∥).¹

Algorithm 1 Zeta merging
Inputs: the weighted adjacency matrix P, the m initial clusters Cc = {C_1, . . . , C_m}, and the number m_c (m_c ≤ m) of resulting clusters.
Set t = m.
while 1 do
  if t = m_c then break; end if
  Search for two clusters C_i and C_j such that {C_i, C_j} = arg max_{{C_i,C_j}∈Cc} δχ_{C_i∪C_j}.
  Cc ← {Cc \ {C_i, C_j}} ∪ {C_i ∪ C_j}; t ← t − 1.
end while

The merits of Zeta merging are that it is free of restrictions on data distributions and is less affected by multiple scales in the data. Affinity propagation in Zeta merging proceeds on the graph according to cyclic associations, requiring no assumptions about data distributions. Moreover, the popularity character χ_C of each cluster is obtained from the averaged amount of variational information conveyed by ε_C, so the size of a cluster has little influence on the value δχ_{C_i∪C_j}. Most importantly, cycles rooted at each point in C interact globally with all other points. Thus the global descriptor ε_C and the popularity character χ_C are not sensitive to the local data scale at each point, making Zeta merging robust against variations in data scale.

3.2 Number of clusters in Zeta merging

In some circumstances, it is necessary to automatically detect the number of underlying clusters in given data. This functionality can be realized in Zeta merging if each cluster corresponds to a diagonal block structure in P, up to permutation. The principle is that the minimum δχ_{C_i∪C_j} will be zero when a set of separable clusters emerges; behind this is the mathematical fact that inverting a block-diagonal matrix is equivalent to inverting the matrices on its diagonal blocks. In practice, however, the minimum δχ_{C_i∪C_j} shows a jump on the stable part of its curve rather than exactly reaching zero, due to the perturbation of the interlinks between clusters. The number of clusters then corresponds to the step at the jump point.

4 The Zell algorithm

An issue arising in Zeta merging is the determination of the initial set of clusters. Here we give a method that performs local single linkages (message passing by minimum distances). The method of graph construction is also discussed. Figure 2: Schematic illustration of l-links.
From left to right: data with two seed points (red markers), 2-links grown from the two seed points, and 2-links grown from four seed points. Points in the same cluster are denoted by markers and edges of the same color.

4.1 Detecting l-links

Given the sample set Cy = {y_1, . . . , y_{m_o}}, we first obtain the set S_i^{2K} of the 2K nearest neighbors of each point y_i. Then, starting from y_i, messages are passed among S_i^{2K} in the sense of minimum distances (or general dissimilarities), locally forming an acyclic directed subgraph at each point. We call such an acyclic directed subgraph an l-links, where l is the number of message-passing steps among S_i^{2K}. In general, l is a small integer, e.g., l ∈ {2, 3, 4, . . . }. l-links that share common vertices are then merged. A simple schematic example is shown in Figure 2, and the specific procedure is provided in Algorithm 2.

Algorithm 2 Detecting l-links
Inputs: the sample set Cy = {y_1, . . . , y_{m_o}}, the number l of l-links, and the number K of nearest neighbors for each point, where l < K.
Initialization: Cc = {C_i | C_i = {y_i}, i = 1, . . . , m_o} and q = 1.
for i from 1 to m_o do
  Search the 2K nearest neighbors of y_i to form S_i^{2K}.
  Iteratively perform C_i ← C_i ∪ {y_j}, where y_j = arg min_{y_j∈S_i^{2K}} min_{y∈C_i} distance(y, y_j), until |C_i| ≥ l.
  If |C_i ∩ C_j| > 0 for some j = 1, . . . , q, perform C_j ← C_i ∪ C_j, Cc ← Cc \ C_i, and q ← q + 1.
end for

4.2 Graph construction

The directional connectivity of l-links leads us to build a directed graph in which each vertex y_i points to its K nearest neighbors. The method of graph construction is presented in Algorithm 3. The free parameter σ in (6) is estimated according to the criterion that the geometric mean of all similarities between each point and its three nearest neighbors equals a given value a ∈ (0, 1]. It is easy to verify that ρ(P) < 1 here.

Algorithm 3 Directed graph construction
Inputs: the sample set Cy, the number K of nearest neighbors, and a free parameter a ∈ (0, 1].
Estimate the parameter σ by

σ² = −(1/(m_o ln a)) Σ_{y_i∈Cy} Σ_{y_j∈S_i^3} [distance(y_i, y_j)]².

Define the entry in the i-th row and j-th column of the weighted adjacency matrix P as

P_{i→j} = exp(−[distance(y_i, y_j)]² / σ²) if y_j ∈ S_i^K, and 0 otherwise.   (6)

Perform the row-wise sum-to-one normalization, i.e., P_{i→j} ← P_{i→j} / Σ_{j=1}^{m_o} P_{i→j}.

4.3 Zeta l-links (Zell)

Our algorithm for data clustering in effect performs Zeta merging on an initial set of small clusters derived from l-links, so we name it Zeta l-links, or Zell. The complete Zell algorithm consecutively performs Algorithm 3, Algorithm 2, and Algorithm 1. In practice, the steps of Algorithm 3 and Algorithm 2 are operated together to enhance the efficiency of Zell. Zeta merging may also be combined with K-means and Affinity Propagation: these two algorithms work well for producing small clusters, so they can be employed to generate initial clusters as the input of Zeta merging.

¹Interested readers may refer to the full version of this paper for proofs.

Figure 3: Clustering on toy data. (a) Generated data of 12 clusters; the size of each cluster is shown in the figure. The data are of different distributions, consisting of multiple manifolds (two circles and a hyperbola), subspaces (two pieces of lines and a piece of a rectangular strip), and six Gaussians. The densities of the clusters are diverse, the differences between cluster sizes are large, and the scales of the data vary. For each cluster in the manifold and subspace data, the points are randomly generated with different deviations. (b) Clusters yielded by Zell (given the number of clusters); different colors denote different clusters. (c) Clusters automatically detected by Zell on the data composed of the six Gaussians and the short line. (d) Curve of the minimum Delta popularity (δχ). (e) Enlarged part of (d) and the curve of its first-order differences; the point marked by the square is the detected jump point. (f) The block structures of P corresponding to the data in (c).

5 Experiment

Experiments are conducted on clustering toy data, hand-written digits, and cropped faces from captured images, and on segmenting images, to test the performance of Zell. The quantitative performance of the algorithms is measured by the normalized mutual information (NMI) [12], which is widely used in the learning community. The NMI quantifies the normalized statistical information shared between two distributions: the larger the NMI, the better the clustering performance. Four representative algorithms are taken into comparison: K-centers, (average) Linkage, Affinity Propagation (AP), and Normalized Cuts (Ncuts). We use K-centers instead of K-means because it can handle distances that are not measured by Euclidean norms. For fair comparison, we run Ncuts on a graph whose parameters are set the same as the graph used by Zell. The parameters for Zell are set as z = 0.01, a = 0.95, K = 20, and l = 2.

5.1 On toy data

We first perform an experiment on a group of toy data with diverse distributions, multiple densities, multiple scales, and significantly different cluster sizes. As shown in Figures 3 (b) and (c), the Zell algorithm accurately detects the underlying clusters. In particular, Zell is capable of simultaneously differentiating the cluster with five members and the cluster with 1500 members.
This functionality is critically important for finding genes from microarray expression data in bioinformatics. Figures 3 (d) and (e) show the curves of the minimum variational δχ (for the data in Figure 3 (c)), where the number of clusters is determined at the largest gap in the stable part of the curve. However, the method presented in Section 3.2 fails to automatically detect the number of clusters for the data in Figure 3 (a), because the corresponding P matrix has no clear diagonal block structure.

Table 1: Imagery data. MNIST and USPS: digit databases. ORL and FRGC: face databases. The last row shows the numbers of clusters automatically detected by Zell on the five data sets.

Data set                      MNIST       USPS      ORL         sFRGC     FRGC
Number of samples             5139        11000     400         11092     16028
Number of clusters            5           10        40          186       466
Average size of each cluster  1027 ± 64   1100 ± 0  10 ± 0      60 ± 14   34 ± 24
Dimension of each sample      784         256       2891        2891      2891
Detected number of clusters   11          8         85 (K = 5)  229       511

Table 2: Quantitative clustering results (NMI) on imagery data. The 'pref' value is the preference used in Affinity Propagation for clustering with a given number of clusters. K = 5 for the ORL data set.

Data set  K-centers  Linkage  Ncuts  Affinity Propagation (pref)  Zell
MNIST     0.228      0.496    0.737  0.451 (-871906470)           0.865
USPS      0.183      0.095    0.443  0.313 (-417749850)           0.772
ORL       0.393      0.878    0.939  0.877 (-6268)                0.940
sFRGC     0.106      0.934    0.953  0.899 (-16050)               0.988
FRGC      0.187      0.950    0.924  0.906 (-7877)                0.981

5.2 On imagery data

The imagery patterns we adopt are the hand-written digits in the MNIST and USPS databases and the facial images in the ORL and FRGC (Face Recognition Grand Challenge, http://www.frvt.org/FRGC/) databases. The MNIST and USPS data sets were downloaded from Sam Roweis's homepage (http://www.cs.toronto.edu/˜roweis). For MNIST, we select all the images of digits 0 to 4 in the testing set for the experiment.
For FRGC, we use the facial images in the target set of Experiment 4 in FRGC version 2. Besides the whole target set, we also select a subset (sFRGC): persons with no fewer than forty faces each are selected to form another group of clusters. Details of the data sets are provided in Table 1. For the digit patterns, the Frobenius norm is employed to measure distances between digit pairs, without feature extraction. For the face patterns, we extract visual features of each face by means of the local binary pattern algorithm and compute distances with the Chi-square metric, defined as

distance(ŷ, y̌) = Σ_i (ŷ_i − y̌_i)² / (ŷ_i + y̌_i).

The quantitative results are given in Table 2. Zell consistently outperforms the other algorithms across the five data sets. In particular, the performance of Zell is encouraging on the FRGC data set, which has the largest numbers of clusters and samples. As reported in [1], AP significantly outperforms K-centers. However, AP performs unsatisfactorily on the digit data, where manifold structures may arise because the styles of digits vary significantly; average Linkage exhibits the same behavior. The results achieved by Ncuts are also competitive, but Ncuts is unstable overall, for example yielding low accuracy on the USPS data. The results in Table 3 confirm the stability of Zell under variations of its free parameters. Only a large l degrades the performance of Zell, because it may induce incorrect initial clusters.

5.3 Image segmentation

We show several examples of applying Zell to image segmentation on the Berkeley segmentation database. The weighted adjacency matrix P is defined as P_{i→j} = exp(−(I_i − I_j)² / σ²) if I_j ∈ N_i^8 and 0 otherwise, where I_i is the intensity value of pixel i and N_i^8 denotes the set of pixels in the 8-neighborhood of pixel i.
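A minimal sketch of this pixel-graph construction (dense and unnormalized for clarity; a real implementation would use a sparse matrix and apply the row normalization of Algorithm 3; the helper name `image_adjacency` is ours, not from the paper):

```python
import numpy as np

def image_adjacency(I, sigma):
    """P[i, j] = exp(-(I_i - I_j)^2 / sigma^2) for pixel j in the
    8-neighborhood of pixel i, with pixels indexed in row-major order."""
    h, w = I.shape
    P = np.zeros((h * w, h * w))
    for r in range(h):
        for c in range(w):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < h and 0 <= cc < w:
                        P[r * w + c, rr * w + cc] = np.exp(
                            -(I[r, c] - I[rr, cc]) ** 2 / sigma ** 2)
    return P

# Tiny 2x2 example: each corner pixel has exactly 3 neighbors
I = np.array([[0.0, 1.0], [1.0, 0.0]])
P = image_adjacency(I, sigma=1.0)
```

For an h × w image this yields an (hw) × (hw) matrix with at most 8 nonzeros per row, which is why sparse-inverse-diagonal methods [11] keep the χ_C computations tractable.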
Figure 4 displays segmentation results with different numbers of segments for each image. Overall, attentional regions are merged by Zell. Note that small attentional regions take priority over large ones in merging; therefore, Zell yields many small attentional regions as final clusters.

6 Conclusion

An algorithm, named Zell, has been developed for data clustering. The cyclization of a cluster is the fundamental principle of Zell. The key point of the algorithm is the integration of structural cycles via the Zeta function of a graph. A popularity character measuring the compactness of a cluster is defined via the Zeta function, and the core of Zell's agglomerative clustering is based on it. An approach for finding initial small clusters is presented, based on the merging of local links among samples. The directed graph used in this paper is derived from the directionality of l-links. Experimental results on toy data, hand-written digits, facial images, and image segmentation show the competitive performance of Zell. We hope that Zell brings a new perspective on complex data clustering.

Table 3: Results yielded by Zell under variations of the free parameters on the sFRGC data. The initial setting is {z = 0.01, a = 0.95, K = 20, l = 3}; when one parameter varies, the others are kept fixed.

Parameter  z                    a                         K                  l
Range      10^{-1,-2,-3,-4}     0.2 × {1, 2, 3, 4, 4.75}  10 × {2, 3, 4, 5}  {2, 3, 4}
NMI        0.988 ± 0            0.988 ± 0.00019           0.987 ± 0.0015     0.988 ± 0.0002

Figure 4: Image segmentation by Zell on the Berkeley segmentation database.

Acknowledgement

We thank Yaokun Wu and Sergey Savchenko for their continuing help on algebraic graph theory. We are also grateful for interesting discussions with Yi Ma and John Wright on clustering and classification. We thank Feng Li and Xiaodi Hou for their kind help. The reviewers' insightful comments and suggestions are also greatly appreciated.

References

[1] Frey, B.J. & Dueck, D.
(2007) Clustering by passing messages between data points. Science 315:972-976.
[2] Ma, Y., Derksen, H., Hong, W. & Wright, J. (2007) Segmentation of multivariate mixed data via lossy data coding and compression. IEEE Trans. on Pattern Analysis and Machine Intelligence 29:1546-1562.
[3] Shi, J.B. & Malik, J. (2000) Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence 22(8):888-905.
[4] Ng, A.Y., Jordan, M.I. & Weiss, Y. (2001) On spectral clustering: analysis and an algorithm. Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press.
[5] Newman, M.E.J. (2006) Finding community structure in networks using the eigenvectors of matrices. Physical Review E 74(3).
[6] Destexhe, A. & Contreras, D. (2006) Neuronal computations with stochastic network states. Science 314(6):85-90.
[7] Sporns, O., Tononi, G. & Edelman, G.M. (2000) Theoretical neuroanatomy: relating anatomical and functional connectivity in graphs and cortical connection matrices. Cerebral Cortex 10:127-141.
[8] Bagrow, J., Bollt, E. & Costa, L.F. (2007) On short cycles and their role in network structure. http://arxiv.org/abs/cond-mat/0612502.
[9] Bianconi, G. & Marsili, M. (2005) Loops of any size and Hamilton cycles in random scale-free networks. Journal of Statistical Mechanics, P06005.
[10] Savchenko, S.V. (1993) The zeta-function and Gibbs measures. Russ. Math. Surv. 48(1):189-190.
[11] Li, S., Ahmed, S., Klimeck, G. & Darve, E. (2008) Computing entries of the inverse of a sparse matrix using the FIND algorithm. Journal of Computational Physics 227:9408-9427.
[12] Strehl, A. & Ghosh, J. (2002) Cluster ensembles — a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research 3:583-617.
Integrating locally learned causal structures with overlapping variables

Robert E. Tillman, Carnegie Mellon University, Pittsburgh, PA 15213, rtillman@andrew.cmu.edu
David Danks, Clark Glymour, Carnegie Mellon University & Institute for Human & Machine Cognition, Pittsburgh, PA 15213, {ddanks,cg09}@andrew.cmu.edu

Abstract

In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there have been no such reliable procedures for distributed data with overlapping variables. We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form. We evaluate its performance on synthetic and real-world data against causal discovery algorithms applied to single datasets and against Structural EM, a heuristic DAG structure learning procedure for data with missing values, applied to the concatenated data.

1 Introduction

In many domains, researchers are interested in predicting the effects of interventions, or manipulations of variables, on other observed variables. Such predictions require knowledge of the causal relationships between the observed variables. There are existing asymptotically correct algorithms for learning such relationships from data, possibly with missing values and hidden variables [1][2][3], but these algorithms all assume that every variable is measured in a single study. Datasets for such studies are not always readily available, often due to privacy, ethical, financial, and practical concerns.
However, given the increasing availability of large amounts of data, it is often possible to obtain several similar studies that individually measure subsets of the variables a researcher is interested in and that together include all such variables. For instance, models of the United States and United Kingdom economies share some but not all variables, due to different financial recording conventions; fMRI studies with similar stimuli may record different variables, since the images vary according to magnet strength, data reduction procedures, etc.; and U.S. states report some of the same educational testing variables, but also report state-specific variables. In these cases, if each dataset shares variables with at least one other dataset, e.g. if two datasets D1 and D2, which measure variable sets V1 and V2, respectively, have at least one variable in common (V1 ∩ V2 ≠ ∅), then we should be able to learn many of the causal relationships between the observed variables using this set of datasets. The existing algorithms, however, cannot in general be directly applied to such cases, since they may require joint observations for variables that are not all measured in a single dataset. While this problem has been discussed in [4] and [5], there are no general, useful algorithms for learning causal relationships from data of this form. A typical response is to concatenate the datasets to form a single common dataset with missing values for the variables that are not measured in each of the original datasets. Statistical matching [6] or multiple imputation [7] procedures may then be used to fill in the missing values by assuming an underlying model (or a small class of models), estimating the model parameters using the available data, and then using this model to interpolate the missing values. While the assumption of some underlying model may be unproblematic in many standard prediction scenarios, e.g.
classification, it is unreliable for causal inference: causal relationships learned from the interpolated dataset between variables that are never jointly measured in a single dataset will only be correct if the corresponding relationships in the assumed model happen to be causal relationships in the correct model. The Structural EM algorithm [8] avoids this problem by iteratively updating the assumed model using the current interpolated dataset and then re-estimating values for the missing data to form a new interpolated dataset, until the model converges. The Structural EM algorithm is only justified, however, when missing data are missing at random (or indicator variables can be used to make them so) [8]; the pattern of missing values in the concatenated datasets described above is highly structured. Furthermore, Structural EM is a heuristic procedure and may converge to local maxima. While this may not be problematic in practice for prediction, it is problematic when learning causal relationships. Our experiments in Section 4 show that Structural EM performs poorly in this scenario. We present a novel, asymptotically correct algorithm, the Integration of Overlapping Networks (ION) algorithm, for learning causal relationships (or, more properly, the complete set of possible causal DAG structures) from data of this form. Section 2 provides the relevant background and terminology. Section 3 discusses the algorithm. Section 4 presents experimental evaluations of the algorithm using synthetic and real-world data. Finally, Section 5 provides conclusions.

2 Formal preliminaries

We now introduce some terminology. A directed graph G = ⟨V, E⟩ is a set of nodes V, which represent variables, and a set of directed edges E connecting distinct nodes. If two nodes are connected by an edge then the nodes are adjacent. For a pair of nodes {X, Y} ⊆ V, X is a parent (child) of Y if there is a directed edge from X to Y (Y to X) in E.
A trail in G is a sequence of nodes such that each consecutive pair of nodes in the sequence is adjacent in G and no node appears more than once in the sequence. A trail is a directed path if every edge between consecutive pairs of nodes points in the same direction. X is an ancestor (descendant) of Y if there is a directed path from X to Y (Y to X). G is a directed acyclic graph (DAG) if for every pair {X, Y} ⊆ V, X is not both an ancestor and a descendant of Y (no directed cycles). A collider (v-structure) is a triple of nodes ⟨X, Y, Z⟩ such that X and Z are parents of Y. A trail is active given C ⊆ V if (i) for every collider ⟨X, Y, Z⟩ in the trail, either Y ∈ C or some descendant of Y is in C, and (ii) no other node in the trail is in C. For disjoint sets of nodes X, Y, and Z, X is d-separated (d-connected) from Y given Z if and only if there are no (at least one) active trails between any X ∈ X and any Y ∈ Y given Z. A Bayesian network B is a pair ⟨G, P⟩, where G = ⟨V, E⟩ is a DAG and P is a joint probability distribution over the variables represented by the nodes in V such that P decomposes as

P(V) = ∏_{V∈V} P(V | Parents(V)).

For B = ⟨G, P⟩, if X is d-separated from Y given Z in G, then X is conditionally independent of Y given Z in P [9]. For disjoint sets of nodes X, Y, and Z in V, P is faithful to G if X is d-separated from Y given Z in G whenever X is conditionally independent of Y given Z in P [1]. B is a causal Bayesian network if an edge from X to Y indicates that X is a direct cause of Y relative to V. Most algorithms for causal discovery, or learning causal relationships from nonexperimental data, assume that the distribution P over the observed variables decomposes according to a DAG G and is faithful to G. The goal is to learn G using data from P. Most causal discovery algorithms return a set of possible DAGs that entail the same d-separations and d-connections, i.e., the Markov equivalence class, rather than a single DAG.
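For small graphs, the d-separation criterion above can be checked directly by enumerating all trails and testing whether any is active; a brute-force sketch (exponential in graph size, for illustration only; the helper names are ours, not from the paper):

```python
def d_separated(parents, x, y, z):
    """Brute-force d-separation check on a tiny DAG.

    `parents` maps each node to its set of parents; `z` is the conditioning
    set. X and Y are d-separated given Z iff no trail between them is active.
    """
    nodes = set(parents)
    children = {v: {w for w in nodes if v in parents[w]} for v in nodes}
    adjacent = {v: parents[v] | children[v] for v in nodes}

    def descendants(v):
        out, stack = set(), [v]
        while stack:
            for c in children[stack.pop()]:
                if c not in out:
                    out.add(c)
                    stack.append(c)
        return out

    def active(trail):
        for a, b, c in zip(trail, trail[1:], trail[2:]):
            if a in parents[b] and c in parents[b]:      # collider at b
                if b not in z and not (descendants(b) & z):
                    return False                          # blocked collider
            elif b in z:                                  # non-collider in Z
                return False
        return True

    def trails(v, visited):
        if v == y:
            yield visited
        else:
            for w in adjacent[v] - set(visited):
                yield from trails(w, visited + [w])

    return not any(active(t) for t in trails(x, [x]))

# Collider X -> Y <- Z: X and Z are d-separated given the empty set,
# but d-connected once the collider Y is conditioned on.
collider = {'X': set(), 'Z': set(), 'Y': {'X', 'Z'}}
assert d_separated(collider, 'X', 'Z', set())
assert not d_separated(collider, 'X', 'Z', {'Y'})
```

Real implementations use the linear-time reachability ("Bayes ball") formulation instead of trail enumeration, but the logic per trail is exactly conditions (i) and (ii) above.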
The DAGs in this set have the same adjacencies but only some of the same directed edges. The directed edges common to every DAG represent causal relationships that are learned from the data. If we admit the possibility that there may be unobserved (latent) common causes between observed variables, then this set of possible DAGs is usually larger. A partial ancestral graph (PAG) represents the set of DAGs in a particular Markov equivalence class when latent common causes may be present. Nodes in a PAG correspond to observed variables. Edges are of four types: −◮, ◦−◮, ◦−◦ and ◭−◮, where a ◦ indicates either an ◮ or a − orientation, bidirected edges indicate the presence of a latent common cause, and fully directed edges (−◮) indicate that the directed edge is present in every DAG, i.e. a causal relationship. For {X, Y} ⊆ V, a possibly active trail between X and Y given Z ⊆ V/{X, Y} is a trail in a PAG between X and Y such that some orientation of the ◦'s on edges between consecutive nodes in the trail, to either − or ◮, makes the trail active given Z.

3 Integration of Overlapping Networks (ION) algorithm

The ION algorithm uses conditional independence information to discover the complete set of PAGs over a set of variables V that are consistent with a set of datasets over subsets of V which have overlapping variables. ION accepts as input a set of PAGs which correspond to each of these datasets. A standard causal discovery algorithm that checks for latent common causes, such as FCI [1] or GES [3] with latent variable postprocessing steps¹, must first be applied to each of the original datasets to learn the PAGs that will be input to ION. Expert domain knowledge can also be encoded in the input PAGs, if available. The ION algorithm is shown as algorithm 1 and described below.

Input : PAGs Gi ∈ G with nodes Vi ⊆ V for i = 1, . . . , k
Output: PAGs Hi ∈ H with nodes Vi = V for i = 1, . . . , m
 1  K ← the complete graph over V with ◦'s at every endpoint
 2  A ← ∅
 3  Transfer nonadjacencies and endpoint orientations from each Gi ∈ G to K and propagate the changes in K using the rules described in [10]
 4  PAT({X, Y}, Z) ← all possibly active trails between X and Y given Z, for all {X, Y} ⊆ V and Z ⊆ V/{X, Y} such that X and Y are d-separated given Z in some Gi ∈ G
 5  PC ← all minimal hitting sets of changes to K such that all PATi ∈ PAT are not active
 6  for PCi ∈ PC do
 7      Ai ← K after making and propagating the changes PCi
 8      if Ai is consistent with every Gi ∈ G then add Ai to A
 9  end
10  for Ai ∈ A do
11      Remove Ai from A
12      Mark all edges in Ai as '?'
13      For each {X, Y} ⊆ V such that X and Y are adjacent in Ai, if X and Y are d-connected given ∅ in some Gi ∈ G, then remove '?' from the edge between X and Y in Ai
14      PR ← every combination of removing or not removing '?'-marked edges from Ai
15      for PRi ∈ PR do
16          Hi ← Ai after making and propagating the changes PRi
17          if Hi is consistent with every Gi ∈ G then add Hi to H
18      end
19  end
Algorithm 1: The Integration of Overlapping Networks (ION) algorithm

The algorithm begins with the complete graph over V with all ◦ endpoints and transfers nonadjacencies and endpoint orientations from each Gi ∈ G at line 3, e.g. if X and Y are not adjacent in Gi then the edge between X and Y is removed, and if X is directed into Y in Gi then the endpoint at Y on the edge between X and Y is set to ◮. Once these orientations and edge removals are made, the changes to the complete graph are propagated using the rules in [10], which provably make every change that is entailed by the current changes made to the graph. Lines 4-9 find every possibly active trail for every {X, Y} ⊆ V given Z ⊆ V/{X, Y} such that X and Y are d-separated given Z in some Gi ∈ G. The constructed set PC includes all minimal hitting sets of graphical changes, i.e. unique sets of minimal changes that are not subsets of other sets of changes, which make these trails no longer active.
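Line 5 of Algorithm 1 requires all minimal hitting sets: the minimal sets of graphical changes that intersect every set of trail-blocking changes. ION relies on Reiter-style hitting-set computation with the correction of [11]; the brute-force sketch below is only our illustration of the abstract minimal-hitting-set computation, enumerating candidates by increasing size.

```python
from itertools import combinations

def minimal_hitting_sets(sets):
    """Enumerate every minimal hitting set of a collection of nonempty sets,
    by increasing size (brute force; the problem is NP-hard in general)."""
    universe = sorted(set().union(*sets))
    found = []
    for k in range(1, len(universe) + 1):
        for cand in combinations(universe, k):
            c = set(cand)
            # Skip supersets of an already-found hitting set: not minimal.
            if any(h <= c for h in found):
                continue
            if all(c & s for s in sets):
                found.append(c)
    return found

# Each input set stands in for "the changes that would block one trail";
# a hitting set touches every one of them.
pc = minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}])
```

Enumerating by size guarantees minimality: any candidate containing a previously found hitting set is discarded before it can be reported.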
For each minimal hitting set, a new graph is constructed by making the changes in the set and propagating these changes. If the graph is consistent with each Gi ∈ G, i.e. the graph does not imply a d-separation for some {X, Y} ⊆ V given Z ⊆ V/{X, Y} such that X and Y are d-connected in some Gi ∈ G, then this graph is added to the current set of possible graphs. Lines 10-19 attempt to discover any additional PAGs that may be consistent with each Gi ∈ G after deleting edges from PAGs in the current set and propagating the changes. If some pair of nodes {X, Y} ⊆ V that are adjacent in a current PAG are d-connected given ∅ in some Gi ∈ G, then we do not consider sets of edge removals which remove this edge. The ION algorithm is provably sound in the sense that the output PAGs are consistent with every Gi ∈ G, i.e. no Hi ∈ H entails a d-separation or d-connection that contradicts a d-separation or d-connection entailed by some Gi ∈ G. This property follows from the fact that d-separation and d-connection are mutually exclusive, exhaustive relations.

Theorem 3.1 (soundness). If X and Y are d-separated (d-connected) given Z in some Gi ∈ G, then X and Y are d-separated (d-connected) given Z in every Hi ∈ H.

Proof Sketch. Every structure Ai constructed at line 7 provably entails every d-separation entailed by some Gi ∈ G. Such structures are only added to A if they do not entail a d-separation corresponding to a d-connection in some Gi ∈ G. The only changes made in lines 10-19 (other than changes resulting from propagating other changes, which are provably correct by [10]) are edge removals, which can only create new d-separations. If a new d-separation is created which corresponds to a d-connection in some Gi ∈ G, then the PAG entailing this new d-separation is not added to H.

¹We use the standard GES algorithm to learn a DAG structure from the data and then use the FCI rules to check for possible latent common causes.
The ION algorithm is provably complete in the sense that if there is some structure Hi over the variables V that is consistent with every Gi ∈ G, then Hi ∈ H.

Theorem 3.2 (completeness). Let Hi be a PAG over the variables V such that for every pair {X, Y} ⊆ V, if X and Y are d-separated (d-connected) given Z ⊆ V/{X, Y} in some Gi ∈ G, then X and Y are d-separated (d-connected) given Z in Hi. Then, Hi ∈ H.

Proof Sketch. Every change made at line 3 is provably necessary to ensure soundness. At least one graph added to A at line 8 provably has every adjacency in Hi (possibly more) and no non-◦ endpoint orientations that are not also present in Hi. Some sequence of edge removals will provably produce Hi at line 16, and it will be added to the output set since it is consistent with every Gi ∈ G.

Thus, by theorems 3.1 and 3.2, ION is an asymptotically correct algorithm for learning the complete set of PAGs over V that are consistent with a set of datasets over subsets of V with overlapping variables, provided the input PAGs are discovered using an asymptotically correct algorithm that detects the presence of latent common causes, e.g. FCI, with each of these datasets. Finding all minimal hitting sets is an NP-complete problem [11]. Since learning a DAG structure from data is also NP-complete [12], the ION algorithm, as given above, requires a superexponential (in |V|) number of operations and is often computationally intractable even for small |V|. In practice, however, we can break the minimal hitting set problem into a sequence of smaller subproblems and use a branch and bound approach that is tractable in many cases and still results in an asymptotically correct algorithm. We tested several such strategies.
The method which most effectively balanced the time and space complexity tradeoffs was to first find all minimal hitting sets which make all possibly active trails of length 2 corresponding to d-separations in some Gi ∈ G not active, then find the structures resulting from making and propagating these changes that are consistent with every Gi ∈ G, and iteratively do the same for each of these structures, increasing the length of the possibly active trails considered until trails of all lengths have been considered.

4 Experimental results

We first used synthetic data to evaluate the performance of ION with known ground truth. In the first experiment, we generated 100 random 4-node DAGs using the MCMC algorithm described in [13] with random discrete parameters (conditional probability tables for the factors in the decomposition shown in section 2). For each DAG, we then randomly chose two subsets of size 2 or 3 of the nodes in the DAG such that the union of the subsets included all 4 nodes and at least one overlapping variable between the two subsets was present. We used forward sampling to generate two i.i.d. samples of sizes N = 50, N = 100, N = 500, N = 1000 and N = 2500 from the DAGs for only the variables in each subset. We used both FCI and GES with latent variable postprocessing to generate PAGs for each of these samples, which were input to ION.

Figure 1: (a) edge omissions, (b) edge commissions, (c) orientation errors, and (d) runtimes
To evaluate the accuracy of ION, we counted the number of edge omission, edge commission, and orientation errors (◮ instead of −) for each PAG in the ION output set and averaged the results. These results were then averaged across all of the 100 4-node structures. Figure 1 shows the averaged results for these methods along with 3 other methods we included for comparison. ION-FCI and ION-GES refer to the performance of ION when the input PAGs are obtained using the FCI algorithm and the GES algorithm with latent variable postprocessing, respectively. For Structural EM, we took each of the datasets over subsets of the nodes in each DAG and formed a concatenated dataset, as described in section 1, which was input to the Structural EM algorithm.² For FCI-baseline and GES-baseline, we used forward sampling to generate another i.i.d. sample of sizes N = 50, N = 100, N = 500, N = 1000 and N = 2500 for all of the variables in each DAG and used these datasets as input to the FCI and GES with latent variable postprocessing algorithms, respectively, to obtain a measure of how well these algorithms perform when no data is missing. The average runtimes for each method are also reported in figure 1. Error bars show 95% confidence intervals. We first note the performance of Structural EM. Almost no edge omission errors are made, but more edge commission errors are made than by any of the other methods, and the edge commission errors do not decrease as the sample size increases. When we looked at the results, we found that Structural EM always returned either the complete graph or a graph that was almost complete, indicating that Structural EM is not a reliable method for causal discovery in this scenario, where there is a highly structured pattern to the missing data. Furthermore, the runtime for Structural EM was considerably higher than that of any of the other methods.
For the larger sample sizes (where more missing values need to be estimated at each iteration), a single run required several hours in some instances. Due to its significant computation time, we were unable to use Structural EM with larger DAG structures, so it is excluded from the experiments below.

²We ran Structural EM with 5 random restarts and chose the model with the highest BDeu score to avoid converging to local maxima. Random "chains" of nodes were used as the initial models. Structural EM was never stopped before convergence.

Figure 2: (a) edge omissions, (b) edge commissions, and (c) orientation errors
Figure 3: (a) edge omissions, (b) edge commissions, and (c) orientation errors

The FCI-baseline and GES-baseline methods performed similarly to previous simulations of them. The ION-FCI and ION-GES methods performed similarly to the FCI-baseline and GES-baseline methods but made slightly more errors and showed slower convergence (due to the missing data). Very few edge commission errors were made. Slightly more edge omission errors were made, but these errors decrease as the sample size increases. Some edge orientation errors were made even for the larger sample sizes. This is due to the fact that each of the algorithms returns an equivalence class of DAGs rather than a single DAG.
Even if the correct equivalence class is discovered, errors result from comparing the ground truth DAG to every DAG in the equivalence class and averaging. We also note that there are fewer orientation errors for the GES-baseline and ION-GES methods on the two smallest sample sizes than on all of the other sample sizes. While this may seem surprising, it is simply a result of the fact that more edge omission errors are made in these cases. We repeated the above experiment for 3 similar cases where we used 6-node DAG structures rather than 4-node DAG structures: (i) two i.i.d. samples were generated for random subsets of sizes 2-5 with only 1 variable that is not overlapping between the two subsets; (ii) two i.i.d. samples were generated for random subsets of sizes 2-5 with only 2 variables that are not overlapping between the two subsets; (iii) three i.i.d. samples were generated for random subsets of sizes 2-5 with only 1 variable that is not overlapping between any pair of subsets. Figures 2, 3, and 4 show edge omission, edge commission, and orientation errors for each of these cases, respectively. In general, the performance in each case is similar to the performance in the 4-node case. We also tested the performance of ION-FCI using a real world dataset measuring IQ and various neuroanatomical and other traits [14]. We divided the variables into two subsets with overlapping variables on domain grounds: (a) variables that might be included in a study on the relationship between neuroanatomical traits and IQ; and (b) variables for a study on the relationship between IQ, sex, and genotype, with brain volume and head circumference included as possible confounders. Figures 5a and 5b show the FCI output PAGs when only the data for each of these subsets of the variables is provided as input, respectively. Figure 5c shows the output PAG of ION-FCI when these two resulting PAGs are used as input. We also ran FCI on the complete dataset for comparison.
Figure 5d shows this PAG.

Figure 4: (a) edge omissions, (b) edge commissions, and (c) orientation errors

Figure 5: (a) FCI output PAG for the variables in subset a, (b) FCI output PAG for the variables in subset b, (c) ION output PAG when using the FCI output PAGs for subsets a and b as input, and (d) FCI output PAG for all variables

In this particular case, the output of ION-FCI consists of only a single PAG, which is identical to the result when FCI is given the complete dataset as input. This case shows that in some instances, ION-FCI can recover as much information about the true DAG structure as FCI, even when less information can be extracted from the ION-FCI input. We note that the graphical structure of the complete PAG (figures 5c and 5d) is the union of the structures shown in figures 5a and 5b. While visually this may appear to be a trivial example for ION, where all of the relevant information can be extracted in the first steps, much processing is in fact required in the later stages of the algorithm to determine the structure around the non-overlapping variables.

5 Conclusions

In practice, researchers are often unable to find or construct a single, complete dataset containing every variable they may be interested in (or doing so is very costly).
We thus need some way of integrating information about causal relationships that can be discovered from a collection of datasets with related variables [5]. Standard causal discovery algorithms cannot be used, since they take only a single dataset as input. To address this open problem, we proposed the ION algorithm, an asymptotically correct algorithm for discovering the complete set of causal DAG structures that are consistent with such data. While the results presented in section 4 indicate that ION is useful in smaller domains when the branch and bound approach described in section 3 is used, a number of issues must be addressed before ION or a similar algorithm is useful for higher dimensional datasets. Probably the most significant problem is resolving contradictory information about overlapping variables in different input PAGs, e.g. X is a parent of Y in one PAG and a child of Y in another PAG, resulting from statistical errors or from input samples that are not identically distributed. ION currently ignores such information rather than attempting to resolve it. This increases uncertainty and thus the size of the resulting output set of PAGs. Furthermore, simply ignoring such information does not always avoid conflicts. In some such cases, ION will not discover any PAGs which entail the correct d-separations and d-connections, and thus no output PAGs are returned. When performing conditional independence tests or evaluating score functions, statistical errors occur more frequently as the dimensionality of a dataset increases, unless the sample size also increases at an exponential rate (the so-called curse of dimensionality). Thus, until reliable methods for resolving conflicting information from input PAGs are developed, ION and similar algorithms will not in general be useful for higher dimensional datasets.
Furthermore, while the branch and bound approach described in section 3 is a significant improvement over the other methods we tested for computing minimal hitting sets, its memory requirements are still considerable in some instances. Other algorithmic strategies should be explored in future research.

Acknowledgements

We thank Joseph Ramsey, Peter Spirtes, and Jiji Zhang for helpful discussions and pointers. We thank Frank Wimberley for implementing the version of Structural EM we used. R.E.T. was supported by the James S. McDonnell Foundation Causal Learning Collaborative Initiative. C.G. was supported by a grant from the James S. McDonnell Foundation.

References

[1] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
[2] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507-554, 2002.
[4] D. Danks. Learning the causal structure of overlapping variable sets. In Discovery Science: Proceedings of the 5th International Conference, 2002.
[5] D. Danks. Scientific coherence and the fusion of experimental results. The British Journal for the Philosophy of Science, 56:791-807, 2005.
[6] S. Rässler. Statistical Matching. Springer, 2002.
[7] D. B. Rubin. Multiple Imputation for Nonresponse in Surveys. Wiley & Sons, 1987.
[8] N. Friedman. The Bayesian structural EM algorithm. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 1998.
[9] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 1988.
[10] J. Zhang. A characterization of Markov equivalence classes for causal models with latent variables. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007.
[11] R. Greiner, B. A. Smith, and R. W. Wilkerson.
A correction to the algorithm in Reiter's theory of diagnosis. Artificial Intelligence, 41:79-88, 1989.
[12] D. M. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of the 5th International Workshop on Artificial Intelligence and Statistics, 1995.
[13] G. Melançon, I. Dutour, and M. Bousquet-Mélou. Random generation of DAGs for graph drawing. Technical Report INS-R0005, Centre for Mathematics and Computer Sciences, Amsterdam, 2000.
[14] M. J. Tramo, W. C. Loftus, R. L. Green, T. A. Stukel, J. B. Weaver, and M. S. Gazzaniga. Brain size, head size, and IQ in monozygotic twins. Neurology, 50:1246-1252, 1998.
A mixture model for the evolution of gene expression in non-homogeneous datasets

Gerald Quon1, Yee Whye Teh2, Esther Chan3, Timothy Hughes3, Michael Brudno1,3, Quaid Morris3
1Department of Computer Science, and 3Banting and Best Department of Medical Research, University of Toronto, Canada; 2Gatsby Computational Neuroscience Unit, University College London, United Kingdom
{gerald.quon,quaid.morris}@utoronto.ca

Abstract

We address the challenge of assessing conservation of gene expression in complex, non-homogeneous datasets. Recent studies have demonstrated the success of probabilistic models in studying the evolution of gene expression in simple eukaryotic organisms such as yeast, for which measurements are typically scalar and independent. Models capable of studying expression evolution in much more complex organisms such as vertebrates are particularly important given the medical and scientific interest in species such as human and mouse. We present Brownian Factor Phylogenetic Analysis, a statistical model that makes a number of significant extensions to previous models to enable characterization of changes in expression among highly complex organisms. We demonstrate the efficacy of our method on a microarray dataset profiling diverse tissues from multiple vertebrate species. We anticipate that the model will also be invaluable in the study of gene expression patterns in other diverse organisms, such as worms and insects.

1 Introduction

High-throughput functional data is emerging as an indispensable resource for generating a complete picture of genome-wide gene and protein function. Currently, gene function is often inferred through sequence comparisons with genes of known function in other species, though sequence similarity is no guarantee of shared biological function. Gene duplication, one of the primary forces of genomic evolution, often gives rise to genes with high sequence similarity but distinct biological roles [1].
Differences in temporal and spatial gene expression patterns have also been posited to explain phenotypic differences among animals despite a surprisingly large degree of gene sequence similarity [2]. This observation, and the increasingly wide availability of genome-wide gene expression profiles from related organisms, has motivated us to develop statistical models to study the evolution of gene expression along phylogenies, in order to identify lineages where gene expression, and therefore gene function, is likely to be conserved or diverged. Comparing gene expression patterns between distantly related multi-cellular organisms is challenging because it is difficult to collect a wide range of functionally matching tissue samples. In some cases, matching samples simply may not exist because some organismal functions have been redistributed among otherwise homologous organs. For example, processes such as B-cell development are performed by both distinct and overlapping sets of tissues: primarily bone marrow in mammals; the Bursa of Fabricius and bone marrow in birds; and likely kidney, spleen, and/or thymus in teleost fish (which lack bone marrow) [3]. Matching samples can also be hard to collect because the anatomical arrangements of some of the queried organisms make isolation of specific tissues virtually impossible. For example, in frog, the kidneys are immediately adjacent to the ovaries and are typically covered in oocytes. By allowing tissue samples to be mixed and heterogeneous, though functionally related, it becomes possible to compare expression patterns describing a much larger range of functions across a much larger range of organisms. Current detailed statistical models of expression data assume measurements from matched samples in each organism.
As such, comparative studies of gene expression to date have either resorted to simple, non-phylogenetic measures to compare expression patterns [4], or restricted their comparisons to single-cellular organisms [5] or clearly homologous tissues in mammals [6]. Here, we present Brownian Factor Phylogenetic Analysis (BFPA), a new model of gene expression evolution that removes the earlier limitation of matched samples, therefore allowing detailed comparisons of expression patterns from widely diverged multi-cellular organisms. Our model takes as input expression profiles of orthologous genes in multiple present-day organisms and a phylogenetic tree connecting those organisms, and simultaneously reconstructs the expression profiles for the ancestral nodes in the phylogenetic tree while detecting links in the phylogeny where rapid change of the expression profile has occurred. We model the expression data from related organisms using a mixture of Gaussians model related to a mixture of constrained factor analyzers [7]. In our model, each mixture component represents a different pattern of conservation and divergence of gene expression along each link of the phylogenetic tree. We assume a constrained linear mapping between the heterogeneous samples in different organisms and fit this mapping using maximum likelihood. We show that by expanding the amount of expression data that can be compared between species, our model generates more useful information for predicting gene function and is also better able to reconstruct the evolutionary history of gene expression, as evidenced by its increased accuracy in reconstructing gene expression levels.

2 Previous work

Recent evolutionary models of gene expression treat it as a quantitative (i.e. real-valued) trait and model evolutionary change in expression levels as a Brownian motion process [8, 9].
Assuming Brownian motion, a given gene's expression level x_s in a child species s after a divergence time t_s from an ancestral species π(s) is predicted to be Gaussian distributed with mean x_{π(s)}, the gene's expression level in the ancestor, and variance σ²t_s:

x_s ∼ N(x_{π(s)}, σ²t_s)   (1)

where σ² represents the expected rate of change per unit time. The ancestor-child relationships are specified using a phylogeny, such as that shown in Figure 1a for the vertebrates. The leaves of the phylogeny are associated with present-day species and the internal branch points with shared ancestors. The exact position of the root of the phylogeny (not shown in the figure, but somewhere along branch "T") cannot be established without additional information, and the outgroup species "T" is often used in place of the root of the tree. Nonetheless, the rooted phylogeny can be interpreted as a directed Gaussian graphical model, e.g. Figure 1b, whose nodes are variables representing expression levels in the corresponding species and whose directed edges point from immediate ancestors to their children species. The conditional probability distribution (CPD) at each node is given by Equation 1. Typical uses of these evolutionary models are to compare different hypotheses about divergence times [8] or the structure of the phylogeny [9] by calculating the likelihood of the present-day expression levels under various hypotheses. To avoid assigning a prior over the root node and thus introducing bias [10], Felsenstein developed a method called restricted maximum likelihood (REML) [11], which specifies a distribution over the observed differences between present-day expression levels rather than the expression levels themselves.
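Equation 1 defines a generative process that is straightforward to simulate down a phylogeny. The sketch below (our illustration) does so for a scalar trait; the tree encoding and the branch times are hypothetical, chosen only to mirror the shape of the vertebrate phylogeny in Figure 1a.

```python
import random

def simulate_brownian(children, root, root_value, sigma2, seed=0):
    """Simulate a scalar trait down a phylogeny under Eq. 1:
    x_s ~ N(x_parent(s), sigma2 * t_s).
    children maps each node to a list of (child, branch_time) pairs."""
    rng = random.Random(seed)
    values = {root: root_value}
    stack = [root]
    while stack:
        node = stack.pop()
        for child, t in children.get(node, []):
            # Gaussian drift with variance proportional to branch time.
            values[child] = rng.gauss(values[node], (sigma2 * t) ** 0.5)
            stack.append(child)
    return values

# Hypothetical branch times on a vertebrate-like phylogeny (cf. Figure 1a);
# "HMCF" plays the role of the shared ancestor used as the root here.
tree = {"HMCF": [("HMC", 1.0), ("F", 3.0)],
        "HMC": [("HM", 1.0), ("C", 2.0)],
        "HM": [("H", 1.0), ("M", 1.0)]}
x = simulate_brownian(tree, "HMCF", 0.0, sigma2=0.5)
```

Evaluating the likelihood of observed leaf values under the same model is the reverse operation: each edge contributes the Gaussian density of the child's value given its parent's.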
3 Brownian Factor Phylogenetic Analysis: A model of expression evolution

In the following section, we propose changes to the Brownian motion model that not only allow for unmatched tissue samples, but also leverage the change observed in expression levels across multiple genes in order to classify genes into different patterns of expression evolution. We use x_s^i to indicate the hidden expression profile of the i-th gene (out of N ortholog groups) in species s.

Figure 1: Our statistical model and associated species phylogenies. (a) The phylogeny of the species measured in our dataset of human (H), mouse (M), chicken (C), frog (F), and tetraodon (T), as well as an example phylogeny of three hypothetical species x1, x2, and x3 used to illustrate our model. (b) Our statistical model showing how the outgroup species x3 and its corresponding observed expression levels x̂3 are used as a gene expression prior. Edge weights on the graph depict scaling factors applied to the variance terms Σ, which are specified by each conservation pattern c. 1 denotes no scaling on that branch, whereas ρ > 1 depicts a longer, and thus unconserved, branch. This particular conservation pattern represents a phylogeny where all species have conserved expression. The scale on the bottom shows hypothetical values for x1, x2, and x3, as well as the inferred value for x12. (c) The same model except applied to a conservation pattern where species x3 is determined to exhibit significantly different expression levels (rapid change).

The input to our model is a set of vectors of tissue-specific expression levels {x̂_s^i}_{i=1}^N for N genes over present-day species s ∈ {P ∪ o}; we distinguish the chosen outgroup species o from the rest of the present-day species P. x̂_s^i ∈ ℝ^{d_s}, where d_s is the number of tissues in species s.
The goal of our model is to infer each gene's corresponding pattern of gene expression evolution (conservation pattern) {c^i}_{i=1}^N and latent expression levels {x_s^i}_{i=1}^N for all species s ∈ {P ∪ o ∪ A}, where A represents the internal ancestral species in the phylogenetic tree (Figure 1). The likelihood function L = P({x̂_P^i, x_{P∪o∪A}^i, c^i}_{i=1}^N | {x̂_o^i}_{i=1}^N, θ) is shown below, where π(s) refers to the parent species of s, θ = (Λ, Σ, β, ρ, γ) are the model parameters, and N(x; µ, Σ) is the density of x under a multivariate normal distribution with mean µ and covariance Σ:

L = ∏_i [ ∏_{s∈P∪A} P(x_s^i | x_{π(s)}^i, c^i, θ) × ∏_{s∈P} P(x̂_s^i | x_s^i, β) ] P(x_o^i | x̂_o^i, β) P(c^i | γ)

P(x_s^i | x_{π(s)}^i, c^i = K_j, θ) = N(x_s^i; Λ_s x_{π(s)}^i, ρ_s^{K_{j,s}} Σ_s)   (2)
P(x̂_s^i | x_s^i, β) = N(x̂_s^i; x_s^i, β_s)   (3)
P(c^i = K_j | γ) = γ_j   (4)

Modeling branch lengths. Equation 2 reflects the central assumption of the Brownian motion models [8, 9, 10] described in Equation 1, which BFPA extends in two directions. First, we constrain all variances Σ_s to be diagonal in order to estimate tissue-specific drift rates, as tissues are known to vary widely in expression divergence rates [12]. Second, we note that in studying a diverse lineage such as the vertebrates, we expect to see large changes in expression for genes that have diverged in function, as compared to genes of conserved function. We therefore model the drift of a gene's expression levels along each branch of the tree as following one of two rates: a slow rate, reflecting a functional constraint, and a fast rate, reflecting neutral or selected change. Correspondingly, for each branch of the phylogenetic tree above the species s, we define two rate parameters, ρ_s^2 and ρ_s^1, termed a short and a long branch respectively (ρ_s^2 < ρ_s^1). We fix ρ_s^2 = 1.0 and initialize ρ_s^1 to a much larger value to maintain this relationship during learning, thus modeling fast-moving genes as outliers.
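A one-dimensional sketch of the branch conditional in Equation 2 shows how the two rate parameters behave: a long branch (large ρ) assigns much higher likelihood to a large expression jump than a short branch (ρ = 1). This is our illustration only; the values of λ, σ², and ρ below are made-up, not fitted parameters from the paper.

```python
import math

def normal_logpdf(x, mean, var):
    """Log density of a univariate Gaussian N(mean, var)."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def branch_loglik(x_child, x_parent, lam, sigma2, rho):
    """One-dimensional version of Eq. 2:
    P(x_s | x_parent) = N(x_s; lam * x_parent, rho * sigma2),
    with rho = 1 on a short (conserved) branch and rho >> 1 on a long one."""
    return normal_logpdf(x_child, lam * x_parent, rho * sigma2)

# A 5-unit jump in expression is far better explained by a long branch.
jump = branch_loglik(5.0, 0.0, lam=1.0, sigma2=1.0, rho=10.0)
stay = branch_loglik(5.0, 0.0, lam=1.0, sigma2=1.0, rho=1.0)
```

Classifying a gene's conservation pattern (Equations 4-5) amounts to comparing such per-branch log-likelihoods, summed over the tree, across the candidate patterns K_j.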
Our method of modeling constrained and unconstrained change as scalar multiples of a common variance is similar to the discrete gamma method [13].

Linear relationship between ancestral and child tissues. We model tissues of child species as linear combinations of ancestral tissues. The matrix of coefficients Λ_s that relates expression levels in the child species' tissues to those of its parent species is heavily constrained to leverage our prior understanding of the relationships of specific tissues [14]. To construct Λ_s, pairs of tissues that were clearly homologous (e.g. the heart) had their corresponding entry in Λ_s fixed at 1, and all other entries in the same row set to zero. For the remaining tissues, literature searches were conducted to determine which groups of tissues had broadly related function (e.g. immune tissues), and those entries were allowed to vary from zero. All other entries were constrained to be zero.

Distinguishing intra- and inter-species variation. Equation 3 relates the observed expression levels of present-day species to the noiseless, inferred expression levels of the corresponding hidden nodes of each observed species. The variance factor β_s is an estimate of the variation expected due to noise in the array measurements, and is estimated via maximum likelihood using multiple identical probes present on each microarray.

Conservation pattern estimation. Our goal is to identify different types of expression evolution, including punctuated evolution, fully conserved expression, and rapid change along all branches of the phylogeny. We model the problem as a mixture model of conservation patterns, in which each conservation pattern specifies either constrained or fast change along each branch of the tree. Each conservation pattern K_j ∈ {1, 2}^{|P∪A|} specifies a configuration of ρ_s^1 or ρ_s^2 for each species s (K_{j,s} ∈ {1, 2} selects ρ_s^{K_{j,s}}). However, not all 2^{|P∪A|} possible patterns of short and long branches can be uniquely considered.
In particular, a tree containing at least one ancestor incident to two long branches and one short branch is ambiguous, because it cannot be distinguished from the same tree with that ancestor incident to three long branches. As a post-processing step, we consider short branches in those cases to be long, and sum over such ambiguous trees, leaving a total of $J$ possible conservation patterns. Each pattern $K_j$ is assigned a prior probability $P(K_j) = \gamma_j$ that is learned, as reflected in Equation 4.

4 Inference

Because our graphical model contains no cycles, we can apply belief propagation to perform exact inference and obtain the posterior distributions $P(c^i = K_j \mid \hat{x}^i, \theta)$ for all $i, j$:

$$\delta_{ij} = P(c^i = K_j \mid \hat{x}^i, \theta) \propto \int P(x^i_{P \cup o \cup A}, \hat{x}^i_P, c^i = K_j \mid \hat{x}^i_o, \theta)\, \partial x^i_{P \cup o \cup A} \quad (5)$$

We can also estimate the distributions over expression levels of a species $s'$ as

$$P(x^i_{s'} \mid \hat{x}^i, \theta) \propto \sum_j \int P(x^i_{P \cup o \cup A}, \hat{x}^i_P, c^i = K_j \mid \hat{x}^i_o, \theta)\, \partial x^i_{P \cup o \cup A \setminus s'} \quad (6)$$

5 Learning

Applying the expectation maximization (EM) algorithm yields the following maximum likelihood estimates of the model parameters, where $E_{s,s|K_j} = E[x^i_s x^{iT}_s \mid \hat{x}^i, c^i = K_j]$, $E_{s,\pi(s)|K_j} = E[x^i_s x^{iT}_{\pi(s)} \mid \hat{x}^i, c^i = K_j]$, and $E_{\pi(s),\pi(s)|K_j} = E[x^i_{\pi(s)} x^{iT}_{\pi(s)} \mid \hat{x}^i, c^i = K_j]$:

$$\hat{\Lambda}_s = \Big( \sum_{i=1}^N \sum_{j=1}^J \frac{\delta_{ij}}{\rho^{K_{j,s}}_s} E_{s,\pi(s)|K_j} \Big) \Big( \sum_{i=1}^N \sum_{j=1}^J \frac{\delta_{ij}}{\rho^{K_{j,s}}_s} E_{\pi(s),\pi(s)|K_j} \Big)^{-1} \quad (7)$$

$$\hat{\Sigma}_s = \frac{1}{N}\, \mathrm{diag}\Big( \sum_{i=1}^N \sum_{j=1}^J \frac{\delta_{ij}}{\rho^{K_{j,s}}_s} \big( E_{s,s|K_j} - 2 \Lambda_s E^T_{s,\pi(s)|K_j} + \Lambda_s E_{\pi(s),\pi(s)|K_j} \Lambda^T_s \big) \Big)$$

$$\hat{\rho}^k_s = \Big( \sum_i \sum_j [K_{j,s} = k]\, \delta_{ij}\, \mathrm{dim}(x^i_s) \Big)^{-1} \sum_i \sum_j [K_{j,s} = k]\, \delta_{ij} \Big( \mathrm{tr}\big[E_{s,s|K_j} \Sigma^{-1}_s\big] + \mathrm{tr}\big[ \Lambda^T_s \Sigma^{-1}_s (-2 E_{s,\pi(s)|K_j} + \Lambda_s E_{\pi(s),\pi(s)|K_j}) \big] \Big)$$

$$\hat{\gamma}_j = \frac{\sum_{i=1}^N \delta_{ij}}{N} \quad (8)$$

Although we have rooted the phylogeny using a present-day species rather than placing a hypothetical root as has been done in previous Brownian motion models, the two models are equivalent under the condition that all samples are matched.
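The M-step update for the pattern priors in Equation 8 is simply an average of the posterior responsibilities. A minimal sketch, with random numbers standing in for the belief-propagation posteriors of Equation 5:

```python
import numpy as np

rng = np.random.default_rng(1)
N, J = 100, 5   # genes x conservation patterns (toy sizes)

# Hypothetical unnormalized pattern scores standing in for the integrals
# of Eq. 5; normalizing per gene gives delta_ij = P(c_i = K_j | data).
unnorm = rng.random((N, J))
delta = unnorm / unnorm.sum(axis=1, keepdims=True)

# M-step update for the pattern priors (Eq. 8)
gamma_hat = delta.sum(axis=0) / N
```

Because each row of delta sums to one, the updated prior gamma_hat is automatically a valid probability distribution over the J patterns.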
First, note that in traditional Brownian motion models, the location of the root is arbitrary if one assumes a constant, improper prior over the root expression levels, since any choice of root would give rise to the same probability distribution over the expression levels. By using a present-day species with observed expression levels as the root node, we avoid integrating over this improper prior. Because the root node prior is constant, the likelihood of the other present-day species conditioned on this present-day root expression level is a constant times the likelihood of all present-day species expression levels. Our conditional model therefore assigns the same likelihoods and marginals as REML.

6 Results

We present the results of applying our model to a novel dataset consisting of gene expression measurements of 4770 genes with unique, unambiguous orthology, i.e., each of the 4770 genes is present in only a single copy, across the following five present-day organisms: human, mouse, chicken, frog, and tetraodon. The phylogeny relating these species is shown in Figure 1 with nodes labelled by the first letter of the species name. We set Tetraodon as the root, so o = T and P = {H, M, C, F}, and we label the internal ancestors by concatenating the labels of their present-day descendants, so A = {HM, HMC, HMCF}. Replicate microarray probe intensity measurements were taken for the 4770 genes across a total of 161 tissues (i.e., 322 microarrays in total) in the five organisms: 46 tissues from human, 55 from mouse, and 20 from each of the other three organisms. We applied a standard pre-processing pipeline to the array set to remove experimental artifacts and to transform the probe intensity measurements on each array to a common, variance-stabilized scale. Each array was first spatially detrended as described in [15].
Within a species, all arrays share the same probe set, so we applied VSN [16] to the arrays from each species to estimate an array-specific affine transform that maps the probe intensities to species-specific units. We next applied an arcsinh transform to the probe intensities to make the variance of the noise independent of the intensity measurement. For the final two preprocessing steps, we placed the transformed intensity measurements into a matrix for each species. The rows of this matrix correspond to genes and the columns are the measured tissues. First, to remove probe bias in the transformed intensities, we subtracted the row median from each element. Then, to bring measurements from different species onto a common scale, we subtracted the column mean from each element and divided by the column length. We first investigate the stability of our conservation pattern estimates by using parameters trained on different random subsamples of our genes. We then evaluate the predictive value of our algorithm BFPA using two tasks: a) predicting gene expression profiles in a new species given expression profiles in other species, and b) predicting Gene Ontology annotation using the conservation pattern inferred by our model. To perform the stability experiments, we first randomly split the dataset into five subsets, and used each subset individually to train the model using 100 iterations of EM. We then estimated $P(c^i \mid \hat{x}^i, \theta)$ for the four other subsets of genes, and classified each gene into its most likely conservation pattern. Hence, each gene is classified four times by non-overlapping training sets. Figure 2 shows that the classifications are quite stable and that most genes are classified into few conservation patterns. Most genes that were uniquely classified into a single conservation pattern were classified either as fully (all) conserved or as completely unconserved, resulting in relatively few high-confidence lineage-specific genes.
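The final preprocessing steps described above (arcsinh transform, row-median centering, column centering and scaling) can be sketched as follows. The matrix sizes and the gamma-distributed toy intensities are invented for illustration; the VSN and spatial-detrending steps are assumed to have run already.

```python
import numpy as np

def preprocess(intensities):
    """Sketch of the final preprocessing steps from the text.
    `intensities` is a genes x tissues matrix for one species (post-VSN)."""
    X = np.arcsinh(intensities)                        # variance stabilization
    X = X - np.median(X, axis=1, keepdims=True)        # remove per-probe (row) bias
    X = X - X.mean(axis=0, keepdims=True)              # center each tissue column
    X = X / np.linalg.norm(X, axis=0, keepdims=True)   # divide by column length
    return X

rng = np.random.default_rng(2)
X = preprocess(rng.gamma(2.0, 50.0, size=(200, 20)))   # toy intensity matrix
```

After these steps every tissue column has zero mean and unit length, putting measurements from different species on a comparable scale.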
6.1 Functional associations of co-transcriptionally evolving genes

Pairs of genes exhibiting correlated expression also tend to perform similar functions. This guilt-by-association principle is often used to initially assign putative functions to genes. For example, a popular method for analyzing gene expression datasets is to cluster genes based on the pairwise Pearson correlation coefficient (PCC), then measure the enrichment of these clusters in Gene Ontology (GO) function and process annotations [17].

Figure 2: Stability of conservation pattern assignments to genes. (left) Each gene was placed into one of four bins, denoting the number of unique patterns it was classified into. Most genes were consistently classified into one conservation pattern for all four of their independent classifications. (right) For all genes uniquely classified into a single conservation pattern, the number of present-day species adjacent to conserved links was computed. Most genes were either classified as fully (all) conserved or completely unconserved.

In this section, we introduce the evolutionary correlation coefficient (ECC), a simple modification of PCC that integrates model predictions, and examine whether genes with the same annotated function are more similar in rank according to the ECC or PCC measures. ECC scales the positively-transformed PCC by the marginal probability of the genes following the same expression evolution, assuming independent evolution:

$$ECC(\hat{x}^i, \hat{x}^k) = \big(1 + PCC(\hat{x}^i, \hat{x}^k)\big) \sum_j P(c^i = j \mid \hat{x}^i, \theta)\, P(c^k = j \mid \hat{x}^k, \theta)$$

ECC can be applied using the output of either BFPA or the Brownian model. For the Brownian model, we trained and made predictions using only those samples matched in all five species. Those ten samples are the central nervous system (CNS), intestine, heart, kidney, liver, eye, muscle, spleen, stomach, and testis.
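The ECC for a single gene pair can be sketched directly from the definition above, assuming the pattern posteriors are already available; the expression profiles and posteriors below are toy values.

```python
import numpy as np

def ecc(x_i, x_k, post_i, post_k):
    """Evolutionary correlation coefficient: (1 + PCC) scaled by the marginal
    probability that both genes follow the same conservation pattern,
    assuming independent evolution."""
    pcc = np.corrcoef(x_i, x_k)[0, 1]
    same_pattern = float(np.dot(post_i, post_k))  # sum_j P(c_i=j) P(c_k=j)
    return (1.0 + pcc) * same_pattern

# Toy example: two well-correlated profiles with identical pattern posteriors
x_i = np.array([1.0, 2.0, 3.0, 4.0])
x_k = np.array([1.1, 2.0, 2.9, 4.2])
post = np.array([0.7, 0.2, 0.1])      # hypothetical P(c = j | data)
score = ecc(x_i, x_k, post, post)
```

Since the positively-transformed PCC lies in [0, 2] and the pattern-agreement term lies in [0, 1], the ECC is bounded by [0, 2] and shrinks toward zero when the two genes likely evolved under different conservation patterns.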
We also introduce ECC-sequence, designed to measure the value of evolutionary information derived from sequence. First, the protein sequences of each gene were aligned using default parameters of MUSCLE [18]. These alignments were then provided as input to PAML [19] together with the species tree shown in Figure 1 to estimate branch lengths. The PCC measure for each pair of genes was then scaled by the Pearson correlation coefficient of the branch lengths estimated by PAML to produce ECC-sequence. For all models, we first used the ECC/PCC similarity metric for each gene to rank all other genes in order of expression similarity. We then applied the Wilcoxon rank sum test to evaluate whether genes with the same GO annotations, as annotated for the mouse ortholog, are significantly higher in rank than all other genes. For this analysis, we only considered GO Process categories which have at least one of the 4770 genes annotated in that category. We also removed all genes which were not annotated in any category, resulting in a total of 3319 genes and 4246 categories. Figure 3 illustrates the distribution of smallest p-values achieved by each gene over all of their annotated functions. PCC is used as a baseline performance measure as it does not consider evolutionary information. We see that all expression-based evolutionary models outperform PCC, ranking genes with similar function much closer on average. ECC-sequence, in contrast, performs worse than PCC, suggesting that expression-based evolutionary metrics may provide information not available in those based on sequence. The relative performance of BFPA versus Brownian reflects an overall significant performance gap between our models and the existing ones. A control measure ECC-random is shown, which is computed by randomizing the gene labels of the data in each of the five organisms before learning.
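The ranking evaluation can be sketched with a normalized Mann-Whitney U statistic, the quantity underlying the Wilcoxon rank sum test used in the text; the similarity scores below are hypothetical.

```python
import numpy as np

def rank_auc(group, rest):
    """Normalized Mann-Whitney U: the fraction of (same-GO, other) gene pairs
    in which the same-GO gene has the higher similarity score; 0.5 = chance.
    A numpy stand-in for the Wilcoxon rank sum test used in the text."""
    group, rest = np.asarray(group, float), np.asarray(rest, float)
    wins = sum((g > rest).sum() + 0.5 * (g == rest).sum() for g in group)
    return wins / (group.size * rest.size)

same_go = [0.9, 0.8, 0.7]        # hypothetical ECC scores, shared annotation
others = [0.5, 0.4, 0.6, 0.3]    # hypothetical scores against all other genes
auc = rank_auc(same_go, others)
```

A value near 1.0 means genes sharing an annotation consistently outrank the rest under the chosen similarity metric, which is the behavior the Wilcoxon test quantifies with a p-value.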
Finally, Brown+prior measures the performance of the Brownian model when the conservation pattern priors are allowed to be estimated, and performs better than the Brownian model but worse than BFPA, as expected. All differences between the distributions are statistically significant, as all pairwise p-values computed by the Kolmogorov-Smirnov test are less than $10^{-6}$.

Figure 3: Model performance. (left) A reverse cumulative distribution plot of p-values obtained from applying the Wilcoxon rank sum test using either a PCC- or ECC-based similarity metric. The smallest p-value achieved for each gene across all its annotated functions is used in the distribution. Position (x, y) indicates that for y genes, the p-value was less than $10^{-x}$. Higher lines on the graph translate into stronger associations between expression levels and gene function, which we interpret as better performance. (right) The difference in the total number of expression values for which a particular method achieves the lowest error, sorted by species.

6.2 Reconstruction of gene expression levels

Here we report the performance of our model in predicting the expression level of a gene in each of human, mouse, chicken, and frog, given its expression levels in the other species. Tetraodon is not predicted because it acts as the outgroup in our model. The model was trained using 100 EM iterations on half of the dataset, which was then used to predict the expression levels for each gene in each species in the other half of the dataset, and vice versa. To create a baseline performance measure, we computed the error when using the average of the four other species to predict the expression level of a gene in the fifth species.
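The baseline predictor described above is just a cross-species average over the matched tissues; a sketch with toy data (the species labels follow the paper, the expression values are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
species = ["H", "M", "C", "F", "T"]
# One gene's expression over the ten matched tissues in each species (toy data)
expr = {s: rng.normal(size=10) for s in species}

def baseline_predict(target):
    """Baseline from the text: average the four other species' profiles."""
    return np.mean([expr[s] for s in species if s != target], axis=0)

pred = baseline_predict("H")
mse = np.mean((pred - expr["H"]) ** 2)
```

Any model that exploits the phylogeny should beat this average, since the average ignores both branch lengths and tissue relationships.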
We only compute predictions for the ten matched samples across all species so that we can compare errors made by our model against those of Brownian and the baseline, which require matched samples. Figure 3 shows that with the exception of the comparison against Brownian in chicken, BFPA achieves lower error than both Brownian and baseline in predicting expression measurements. 7 Discussion We have presented a new model for the simultaneous evolution of gene expression levels across multiple tissues and organs. Given expression data from present-day species, our model can be used to simultaneously infer the ancestral expression levels of orthologous genes as well as determine where in the phylogeny the gene expression levels underwent substantial change. BFPA extends previous Brownian models [8, 9] by introducing a constrained factor analysis framework to account for complex tissue relationships between different species and by adapting the discrete gamma method [13] to model quantitative gene expression data. Our model performs better than other Brownian models in functional association and expression prediction experiments, demonstrating that the evolutionary history we infer better recovers the function of the gene. We have shown that this is in large part due to our ability to consider species-specific tissue measurements, a feature not implemented in any existing model to the best of our knowledge. We also showed that gene expression-based phylogenetic data may provide information not contained in sequence-based phylogenetic data in terms of helping predict the functional association of genes. Our model has a number of other applications outside of using it to study the evolutionary history of gene expression. 
Our ability to identify genes with conserved expression across multiple species will help in the inference of gene function from annotated to non-annotated species, because unconserved expression patterns indicate a likely change in the biological function of a gene. We also expect that by identifying species that share a conserved expression pattern, our model will aid in the identification of transcriptional cis-regulatory elements by focusing the search for cis-elements to those species identified as conserved in expression. While we have taken different profiled samples as representing different tissues, our methodology can be easily expanded to study evolutionary change in gene expression in response to different growth conditions or environmental stresses, as with those studied in [5]. Our methodology is also easily extensible to other model organisms for which there are genomes and expression data for multiple closely related species (e.g. yeast, worm, fly, plants). We anticipate that the results obtained will be invaluable in the study of genome evolution and identification of cis-regulatory elements, whose phylogeny should reflect that of the gene expression patterns. All data used in this publication can be obtained by a request to the authors.

References

[1] Li, W., Yang, J., Gu, X. (2005) Expression divergence between duplicate genes. Trends Genet., 21, 602-607.
[2] Chen, K., Rajewsky, N. (2007) The evolution of gene regulation by transcription factors and microRNAs. Nature Rev. Genet., 8, 93-103.
[3] Yergeau, D.A. et al. (2005) bloodthirsty, an RBCC/TRIM gene required for erythropoiesis in zebrafish. Dev. Biol., 283, 97-112.
[4] Stuart, J.M., Segal, E., Koller, D., Kim, S.K. (2003) A gene-coexpression network for global discovery of conserved genetic modules. Science, 302, 249-255.
[5] Tirosh, I., Weinberger, A., Carmi, M., Barkai, N. (2006) A genetic signature of interspecies variations in gene expression. Nat. Genet., 38, 830-834.
[6] Khaitovich, P. et al. (2005) A neutral model of transcriptome evolution. PLoS Biol., 2, 682-689.
[7] Ghahramani, Z., Hinton, G.E. (1996) The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-2, University of Toronto.
[8] Gu, X. (2004) Statistical framework for phylogenomic analysis of gene family expression profiles. Genetics, 167, 531-542.
[9] Oakley, T.H. et al. (2005) Comparative methods for the analysis of gene-expression evolution: an example using yeast functional genomic data. Mol. Biol. Evol., 22, 40-50.
[10] Felsenstein, J. (2004) Inferring Phylogenies. Sunderland, MA: Sinauer Associates.
[11] Felsenstein, J. (1981) Evolutionary trees from gene-frequencies and quantitative characters - finding maximum likelihood estimates. Evolution, 35, 1229-1242.
[12] Khaitovich, P. et al. (2006) Evolution of primate gene expression. Nat. Rev. Genet., 7, 693-702.
[13] Yang, Z. (1994) Maximum likelihood phylogenetic estimation from DNA sequences with variable rates over sites: approximate methods. J. Mol. Evol., 39, 306-314.
[14] Kardong, K.V. (2006) Vertebrates: Comparative Anatomy, Function, Evolution. McGraw-Hill.
[15] Zhang, W., Morris, Q.D. et al. (2004) The functional landscape of mouse gene expression. J. Biol., 3, 21.
[16] Huber, W. et al. (2002) Variance stabilization applied to microarray data calibration and to the quantification of differential expression. Bioinformatics, 18, S96-104.
[17] The Gene Ontology Consortium. (2000) Gene Ontology: tool for the unification of biology. Nature Genet., 25, 25-29.
[18] Edgar, R.C. (2004) MUSCLE: multiple sequence alignment with high accuracy and high throughput. Nucleic Acids Res., 32, 1792-1797.
[19] Yang, Z. (2007) PAML 4: phylogenetic analysis by maximum likelihood. Mol. Biol. Evol., 24, 1586-1591.
Analyzing human feature learning as nonparametric Bayesian inference

Joseph L. Austerweil, Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, Joseph.Austerweil@gmail.com
Thomas L. Griffiths, Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, tom_griffiths@berkeley.edu

Abstract

Almost all successful machine learning algorithms and cognitive models require powerful representations capturing the features that are relevant to a particular problem. We draw on recent work in nonparametric Bayesian statistics to define a rational model of human feature learning that forms a featural representation from raw sensory data without pre-specifying the number of features. By comparing how the human perceptual system and our rational model use distributional and category information to infer feature representations, we seek to identify some of the forces that govern the process by which people separate and combine sensory primitives to form features.

1 Introduction

Most accounts of the processes underlying human learning, decision-making, and perception assume that stimuli have fixed sets of features. For example, traditional accounts of category learning start with a set of features (e.g., is furry and barks), which are used to learn categories (e.g., dogs). In a sense, features are the basic atoms for these processes. Although the model's features may be combined in particular ways to create new features, the basic primitives are assumed to be fixed. While this assumption has been useful in investigating many cognitive functions, it has been attacked on empirical [1] and theoretical [2] grounds. Experts identify parts of objects in their domain of expertise vastly differently than novices (e.g., [3]), and evidence for flexible feature sets has been found in many laboratory experiments (see [2] for a review).
In this paper, we present an account of how flexible feature sets could be induced from raw sensory data without requiring the number of features to be prespecified. From early work demonstrating XOR is only learnable by a linear classifier with the right representation [4] to the so-called “kernel trick” popular in support vector machines [5], forming an appropriate representation is a fundamental issue for applying machine learning algorithms. We draw on the convergence of interest from cognitive psychologists and machine learning researchers to provide a rational analysis of feature learning in the spirit of [6], defining an “ideal” feature learner using ideas from nonparametric Bayesian statistics. Comparing the features identified by this ideal learner to those learned by people provides a way to understand how distributional and category information contribute to feature learning. We approach the problem of feature learning as one of inferring hidden structure from observed data – a problem that can be solved by applying Bayesian inference. By using methods from nonparametric Bayesian statistics, we can allow an unbounded amount of structure to be expressed in the observed data. For example, nonparametric Bayesian clustering models allow observations to be assigned to a potentially infinite number of clusters, of which only a finite number are represented at any time. When such a model is presented with a new object that it cannot currently explain, it increases the complexity of its representation to accommodate the object. This flexibility gives nonparametric Bayesian models the potential to explain how people infer rich latent structure from the world, and such models have recently been applied to a variety of aspects of human cognition (e.g., [6, 7]).
While nonparametric Bayesian models have traditionally been used to solve problems related to clustering, recent work has resulted in new models that can infer a set of features to represent a set of objects without limiting the number of possible features [8]. These models are based on the Indian Buffet Process (IBP), a stochastic process that can be used to define a prior on the features of objects. We use the IBP as the basis for a rational model of human perceptual feature learning. The plan of the paper is as follows. Section 2 summarizes previous empirical findings from the human perceptual feature learning literature. Motivated by these results, Section 3 presents a rational analysis of feature learning, focusing on the IBP as one component of a nonparametric Bayesian solution to the problem of finding an optimal representation for some set of observed objects. Section 4 compares human learning and the predictions of the rational model. Section 5 concludes the paper.

2 Human perceptual feature learning

One main line of investigation of human feature learning concerns the perceptual learning phenomena of unitization and differentiation. Unitization occurs when two or more features that were previously perceived as distinct features merge into one feature. In a visual search experiment by Shiffrin and Lightfoot [9], after learning that the features that generated the observed objects co-vary in particular ways, participants represented each object as its own feature instead of as three separate features. In contrast, differentiation occurs when a fused feature splits into new features. For example, color novices cannot distinguish between a color's saturation and brightness; however, people can be trained to make these distinctions [10]. Although general conditions for when differentiation or unitization occur have been outlined, there is no formal account for why and when these processes take place.
In Shiffrin and Lightfoot's visual search experiment [9], participants were trained to find one of the objects shown in Figure 1(a) in a scene where the other three objects were present as distractors. Each object is composed of three features (single line segments) inside a rectangle. The objects can thus be represented by the feature ownership matrix shown in Figure 1(a), with $Z_{ik} = 1$ if object $i$ has feature $k$. After prolonged practice, human performance drastically and suddenly improved, and this advantage did not transfer to other objects created from the same feature set. They concluded that the human perceptual system had come to represent each object holistically, rather than as being composed of its more primitive features. In this case, the fact that the features tended to co-occur only in the configurations corresponding to the four objects provides a strong cue that they may not be the best way to represent these stimuli. The distribution of potential features over objects provides one cue for inferring a feature representation; however, there can be cases where multiple feature representations are equally good. For example, Pevtzow and Goldstone [11] demonstrated that human perceptual feature learning is affected by category information. In the first part of their experiment, they trained participants to categorize eight “distorted” objects into one of three groups using one of two categorization schemes. The objects were distorted by the addition of a random line segment. The category membership of four of the objects, A-D, depended on the training condition, as shown in Figure 1(b). Participants in the horizontal categorization condition had objects A and B categorized into one group and objects C and D into the other. Those in the vertical categorization condition learned that objects A and C were categorized into one group and objects B and D into the other.
The nature of this categorization affected the features learned by participants, providing a basis for selecting one of the two featural representations for these stimuli that would otherwise be equally well-justified based on distributional information. Recent work has supplemented these empirical results with computational models of human feature learning. One such model is a neural network that incorporates categorization information as it learns to segment objects [2]. Although the inputs to the model are the raw pixel values of the stimuli, the number of features must be specified in advance. This is a serious issue for an analysis of human feature learning because it does not allow us to directly compare different feature set sizes – a critical factor in capturing unitization and differentiation phenomena. Other work has investigated how the human perceptual system learns to group objects that seem to arise from a common cause [12].

Figure 1: Inferring representations for objects. (a) Stimuli and feature ownership matrix from Shiffrin and Lightfoot [9]; the matrix rows are x1 = (1 1 1 0 0 0), x2 = (0 1 0 1 0 1), x3 = (0 0 1 1 1 0), and x4 = (1 0 0 0 1 1). (b) Four objects (A-D) and inferred features depending on categorization scheme from Pevtzow and Goldstone [11].

This work uses a Bayesian model that can vary the number of causes it identifies, but assumes indifference to the spatial position of the objects and that the basic objects themselves are already known, with a binary variable representing the presence of an object in each scene being given to the model as the observed data. This model is thus given the basic primitives rather than extracting them from raw sensory data, and does not provide an account of how the human perceptual system identifies these primitives. In the remainder of the paper, we develop a rational model of human feature learning that applies to raw sensory data and does not assume a fixed number of features in advance.
3 A Rational Analysis of Feature Learning

Rational analysis is a technique for understanding a cognitive process by comparing it to the optimal solution to an underlying computational problem [6], with the goal of understanding how the structure of this problem influences human behavior. By formally analyzing the problem of inferring featural representations from raw sensory data of objects, we can determine how distributional and category information should influence the features used to represent a set of objects.

3.1 Inferring Features from Percepts

Our goal is to form the most probable feature representation for a set of objects given our observations of those objects. Formally, we can represent the features of a set of objects with a feature ownership matrix Z like that shown in Figure 1, where rows correspond to objects, columns correspond to features, and $Z_{ik} = 1$ indicates that object $i$ possesses feature $k$. We can then seek to identify the most likely feature ownership matrix Z given the observed properties of a set of objects X by a simple application of Bayes' theorem:

$$\hat{Z} = \arg\max_Z P(Z \mid X) = \arg\max_Z \frac{P(X \mid Z)\,P(Z)}{\sum_{Z'} P(X \mid Z')\,P(Z')} = \arg\max_Z P(X \mid Z)\,P(Z) \quad (1)$$

This separates the problem of finding the best featural representation given a set of data into two subproblems: finding a representation that is in general probable, as expressed by the prior P(Z), and finding a representation that generates the observed properties of the objects with high probability, as captured by the likelihood P(X|Z). We consider how these distributions are defined in turn.

3.2 A Prior on Feature Ownership Matrices

Although in principle any distribution on binary matrices P(Z) could be used as a prior, we use one particular nonparametric Bayesian prior, the Indian Buffet Process (IBP) [8].
The IBP has several nice properties: it allows for multiple features per object, possessing one feature does not make possessing another feature less likely, and it generates binary matrices of unbounded dimensionality. This allows the IBP to use an appropriate, possibly different, number of features for each object and makes it possible for the size of the feature set to be learned from the objects. The IBP defines a distribution over binary matrices with a fixed number of rows and an infinite number of columns, of which only a finite number are expected to have non-zero elements. The distribution thus permits tractable inference of feature ownership matrices without specifying the number of features ahead of time. The probability of a feature ownership matrix under the IBP is typically described via an elaborate metaphor in which objects are customers and features are dishes in an Indian buffet, with the choice of dishes determining the features of the object, but reduces to

$$P(Z) = \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N - 1} K_h!} \exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N - m_k)!\,(m_k - 1)!}{N!} \quad (2)$$

where $N$ is the number of objects, $K_h$ is the number of features with history $h$ (the history is the column of the feature interpreted as a binary number), $K_+$ is the number of columns with non-zero entries, $H_N$ is the $N$-th harmonic number, $\alpha$ affects the number of features objects own, and $m_k$ is the number of objects that have feature $k$.

3.3 Two Likelihood Functions for Perceptual Data

To define the likelihood, we assume $N$ objects with $D$ observed dimensions (e.g., pixels in an image) are grouped in a matrix $X$ ($X = [x_1^T, \ldots, x_N^T]$, where $x_i \in \mathbb{R}^D$). The feature ownership matrix Z marks the commonalities and contrasts between these objects, and the likelihood P(X|Z) expresses how these relationships influence their observed properties. Although in principle many forms are possible for the likelihood, two have been used successfully with the IBP in the past: the linear-Gaussian [8] and noisy-OR [13] models.
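Equation 2 can be evaluated directly for any given binary matrix. The sketch below computes its logarithm and, as an example, scores the Shiffrin and Lightfoot feature ownership matrix from Figure 1(a); the function name and the choice of α = 1.0 are our own.

```python
import numpy as np
from math import lgamma, log
from collections import Counter

def ibp_log_prob(Z, alpha):
    """Log of Eq. 2: the IBP prior probability of a binary feature matrix Z."""
    N = Z.shape[0]
    Z = Z[:, Z.sum(axis=0) > 0]                      # non-empty columns only (K_+)
    K_plus = Z.shape[1]
    H_N = sum(1.0 / n for n in range(1, N + 1))      # N-th harmonic number
    histories = Counter(tuple(col) for col in Z.T)   # K_h: columns per history
    lp = K_plus * log(alpha) - alpha * H_N
    lp -= sum(lgamma(c + 1) for c in histories.values())   # log prod K_h!
    for m_k in Z.sum(axis=0):                        # per-feature object counts
        lp += lgamma(N - m_k + 1) + lgamma(m_k) - lgamma(N + 1)
    return lp

# Feature ownership matrix from Figure 1(a) (Shiffrin & Lightfoot stimuli)
Z = np.array([[1, 1, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0],
              [1, 0, 0, 0, 1, 1]])
lp = ibp_log_prob(Z, alpha=1.0)
```

Note that the probability depends only on the column counts and histories, so it is invariant to reordering the objects (rows), as exchangeability requires.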
The linear-Gaussian model assumes that $x_i$ is drawn from a Gaussian distribution with mean $z_i A$ and covariance matrix $\Sigma_X = \sigma_X^2 I$, where $z_i$ is the binary vector defining the features of object $x_i$ and $A$ is a matrix whose row $k$ gives the weights of feature $k$ on each of the $D$ raw data dimensions:

$$p(X \mid Z, A, \sigma_X) = \frac{1}{(2\pi\sigma_X^2)^{ND/2}} \exp\Big\{ -\frac{1}{2\sigma_X^2} \mathrm{tr}\big((X - ZA)^T (X - ZA)\big) \Big\} \quad (3)$$

Although A actually represents the weights of each feature (which combine with each other to determine the raw pixel values of each object), it is integrated out so that the conditional probability of X depends only on Z and hyperparameters corresponding to the variance in X and A (see [8] for details). The result of using this model is a set of images representing the perceptual features corresponding to the matrix Z, expressed in terms of the posterior distribution over the weights A. For the noisy-OR model [13], the raw visual data is reduced to binary pixel values. This model assumes that the pixel values X are generated from a noisy-OR distribution, where Z defines the features that each object has and Y defines which pixels should be on for each feature:

$$p(x_{i,d} = 1 \mid Z, Y, \lambda, \epsilon) = 1 - (1 - \lambda)^{z_{i,:} y_{:,d}} (1 - \epsilon) \quad (4)$$

where the hyperparameters $\epsilon$ and $\lambda$ represent the probability a pixel is turned on without a cause and the probability a feature succeeds in turning on a pixel, respectively. Additionally, Y is assumed to have a Bernoulli prior with hyperparameter $p$ representing the probability that an entry of Y is one, with $p(Y) = \prod_{k,d} p^{y_{k,d}} (1 - p)^{1 - y_{k,d}}$. The result of using this model is a distribution over binary arrays indicating the pixels associated with the features identified by Z, expressed via the posterior distribution on Y.

3.4 Summary

The prior and likelihood defined in the preceding sections provide the ingredients necessary to use Bayesian inference to identify the features of a set of objects from raw sensory data.
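The noisy-OR likelihood of Equation 4 can be sketched over a whole binary image set as follows. The tiny Z, Y, and X below are invented for illustration, and the default λ and ε values mirror the Gibbs sampler initialization reported later in Section 4.

```python
import numpy as np

def noisy_or_likelihood(Z, Y, X, lam=0.975, eps=0.025):
    """P(X | Z, Y) under Eq. 4: each pixel is on with probability
    1 - (1 - lam)^(z_{i,:} y_{:,d}) * (1 - eps)."""
    counts = Z @ Y                                    # active features per pixel
    p_on = 1.0 - (1.0 - lam) ** counts * (1.0 - eps)  # noisy-OR "on" probability
    return float(np.where(X == 1, p_on, 1.0 - p_on).prod())

Z = np.array([[1, 0], [0, 1]])          # 2 objects x 2 features
Y = np.array([[1, 1, 0], [0, 1, 1]])    # feature -> pixel ownership
X = np.array([[1, 1, 0], [0, 1, 1]])    # observed binary images
lik = noisy_or_likelihood(Z, Y, X)
```

Images consistent with the feature-to-pixel assignments receive high likelihood, while inconsistent images are heavily penalized through the small leak (ε) and failure (1 − λ) probabilities.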
[Figure 2: Inferring feature representations using distributional information from Shiffrin and Lightfoot [9]. On the left, bias features; on the right, the four objects as learned features. The rational model justifies the human perceptual system's unitization of the objects as features.]

The result is a posterior distribution on feature ownership matrices Z, indicating how a set of objects could be represented, as well as an indication of how the features identified by this representation are expressed in the sensory data. While computing this posterior distribution exactly is intractable, we can use existing algorithms developed for probabilistic inference in these models. Although we used Gibbs sampling (a form of Markov chain Monte Carlo that produces samples from the posterior distribution on Z) for all of our simulations, Reversible Jump MCMC and particle filtering inference algorithms have also been derived for these models [8, 13, 14].

4 Comparison with Human Feature Learning

The nonparametric Bayesian model outlined in the previous section provides an answer to the question of how an ideal learner should represent a set of objects in terms of features. In this section we compare the representations discovered by this ideal model to human inferences. First, we demonstrate that the representation discovered by participants in Shiffrin and Lightfoot's experiment [9] is optimal under this model. Second, we illustrate that both the IBP and the human perceptual system incorporate category information appropriately. Finally, we present simulations that show the flexibility of the IBP to learn different featural representations depending on the distributional information of the actual features used to generate the objects, and discuss how this relates to the phenomena of unitization and differentiation more generally.

4.1 Using Distributional Information

When should whole objects or line segments be learned as features?
It is clear which features should be learned when all of the line segments occur independently and when the line segments in each object always occur together (the line segments and the objects, respectively). However, in the intermediate cases of non-perfect co-occurrence, what should be learned? Without a formal account of feature learning, there is no basis for determining when object "wholes" or "parts" should be learned as features. Our rational model provides an answer: when there is enough statistical evidence for the individual line segments to be features, then each line segment should be differentiated into features. Otherwise, the collection of line segments should be learned as one unitized feature. The stimuli constructed by Shiffrin and Lightfoot [9] constitute one of the intermediate cases between the extremes of total independence and perfect correlation, and are thus a context in which formal modeling can be informative. Figure 2 presents the features learned by applying the model with a noisy-OR likelihood to this object set. The features on the left are the bias, and the four features on the right are the four objects from their study. The learned features match the representation formed by people in the experiment. Although there is imperfect co-occurrence between the features in each object, there is not enough statistical evidence to warrant representing each object as a combination of features. These results were obtained with an object set consisting of five copies of each of the four objects with added noise that flips a pixel's value with probability 1/75. The results were obtained by running the Gibbs sampler with initialization p = 0.2, \alpha = 1.0, \epsilon = 0.025, and \lambda = 0.975. Inference is robust to different initializations as long as they are near these values.

[Figure 3: Inferring feature representations using category information from Pevtzow and Goldstone [11].]
(a)-(b) Features learned using the rational model with the noisy-OR likelihood when 10 distorted copies of objects A-D comprise the object set, with the (a) horizontal and (b) vertical categorization schemes (c = 35), respectively. The features inferred by the model match those learned by participants in the experiment. (c)-(d) Features learned using the same model with the full object set of 10 distorted copies of each object, with the (c) horizontal and (d) vertical categorization schemes (c = 75), respectively. The first two features learned by the model match those learned by participants in the experiment. The third feature represents the intersection of the third category (Pevtzow and Goldstone did not test whether participants learned this feature).

4.2 Using Category Information

To model the results of Pevtzow and Goldstone [11], we applied the rational model with the noisy-OR likelihood to the stimuli used in their experiment. Although this model does not incorporate category information directly, we included it indirectly by postpending c bits per category to the end of each image. Figure 3 (a) and (b) show the features learned by the model when trained on distorted objects A-D using both categorization schemes. The categorization information is used appropriately by the model and mirrors the different feature representations inferred by the two participant groups. Figure 3 (c) and (d) show the features learned by the model when given ten distorted copies of all eight objects. Like the human perceptual system, the model infers different, otherwise indistinguishable, feature sets by using categorization information appropriately. Although the neural network model of feature learning presented in [2] also inferred correct representations with the four-object set, this model did not produce correct results for the eight-object set. Inference is susceptible to local minima given poor initializations of the hyperparameters.
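The indirect encoding of category labels described above can be sketched as follows; the helper name and exact bit layout are hypothetical, chosen only to illustrate the idea of postpending c indicator bits per category to each binarized image:

```python
def append_category_bits(image_bits, category, n_categories, c):
    """Postpend c bits per category: the c bits belonging to the
    object's own category are set to 1, all other category bits to 0."""
    tail = []
    for cat in range(n_categories):
        tail.extend([1 if cat == category else 0] * c)
    return list(image_bits) + tail
```

Because the appended bits co-occur perfectly within a category, they give the model a statistical incentive to share features among category members, which is how category information enters indirectly.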
The features shown in Figure 3 used the following initialization: p = 0.125, \alpha = 1.5, \lambda = 0.99, and \epsilon = 0.01.[1]

4.3 Unitization and Differentiation

The results presented in this section show that our rational model reproduces human inferences for particular datasets, suggesting that the model might be useful more generally in identifying conditions under which the human perceptual system should unitize or differentiate sensory primitives. The Shiffrin and Lightfoot results demonstrated one case where whole objects should be learned as features even though each object was created from features that did not perfectly co-occur. The IBP confirms the intuitive explanation that there is not enough statistical evidence to break (differentiate) the objects into individual features, and thus the unitization behavior of the participants is justified. However, this leaves open the comparison case in which, with the same underlying feature set, the statistical evidence does warrant differentiation, so that the individual features should be learned as features. To illustrate the importance of distributional information for the inferred featural representation, we designed a simulation with cases where the objects, and cases where the actual features used to generate the objects, should be learned as the features. Figure 4 (a) shows the bias (on the left) and the set of six features used in the simulations. Figure 4 (b) is an artificially generated set of observed objects for which there is not enough statistical evidence to warrant differentiation. This is the same underlying feature membership matrix as in the Shiffrin and Lightfoot result (unitization set). Figure 4 (c) is an artificially generated object set in which the observed objects should be differentiated. Here, the features used to generate the objects occur independently of each other, and thus the underlying feature membership matrix used to generate the observed objects consists of all \binom{6}{3} possible objects (differentiation set). Figure 4 (d) and (e) show the results of applying the rational model with a noisy-OR likelihood to these two object sets. When the underlying features occur independently of each other, the model represents the objects in terms of these features. When the features often co-occur, the model forms a representation which consists simply of the objects themselves. For each simulation, 40 objects from the appropriate set (repeating as necessary) were presented to the model. Each object was perturbed by added noise that flipped a pixel's value with probability 1/75. The hyperparameters were inferred with Metropolis-Hastings steps during Gibbs sampling and were initialized to \alpha = 1, \sigma_X^2 = 2.25, and \sigma_A^2 = 0.5. These simulations demonstrate that even when the same underlying features create two object sets, different representations should be inferred depending on the distributional information, suggesting that this kind of information can be a powerful driving force behind unitization and differentiation.

[1] The features inferred by the model in each figure have the highest probability given the images it observed.

[Figure 4: Inferring different feature representations depending on the distributional information. (a) The bias (on the left) and the six features used to generate both object sets. (b)-(c) The feature membership matrices (binary arrays) for the (b) unitization and (c) differentiation sets, respectively. (d)-(e) The feature representations inferred by the model for the (d) unitization and (e) differentiation sets, respectively.]

5 Discussion and Future Directions

The flexibility of human featural representations and the power of representation in machine learning make a formal account of how people derive representations from raw sensory information tremendously important.
We have outlined one approach to this problem, drawing on ideas from nonparametric Bayesian statistics to provide a rational account of how the human perceptual system uses distributional and category information to infer representations. First, we showed that in one circumstance where it is ambiguous whether parts or objects should form the featural representation of the objects, this model performs similarly to the human perceptual system (both learn the objects themselves as the basic features). Second, we demonstrated that the IBP and the human perceptual system both use categorization information to make the same inductions as appropriate for the given categorization scheme. Third, we further investigated how the distributional information of the features that create the object set affects the inferred representation. These results begin to sketch a picture of human feature learning as a rational combination of different sources of information about the structure of a set of objects. There are two main future directions for our work. First, we intend to perform further analysis of how the human perceptual system uses statistical cues. Specifically, we plan to investigate whether the feature sets identified by the perceptual system are affected by the distributional information it is given (as our simulations would suggest). Second, we hope to use hierarchical nonparametric Bayesian models to investigate the interplay between knowledge effects and perceptual input. Recent work has identified a connection between the IBP and the Beta process [15], making it possible to define hierarchical Bayesian models in which the IBP appears as a component. Such models would provide a more natural way to capture the influence of category information on feature learning, extending the analyses that we have performed here.
Acknowledgements

We thank Rob Goldstone, Karen Schloss, Stephen Palmer, and the Computational Cognitive Science Lab at Berkeley for discussions, and the Air Force Office of Scientific Research for support.

References

[1] P. G. Schyns, R. L. Goldstone, and J. Thibaut. Development of features in object concepts. Behavioral and Brain Sciences, 21:1-54, 1998.
[2] R. L. Goldstone. Learning to perceive while perceiving to learn. In Perceptual organization in vision: Behavioral and neural perspectives, pages 233-278. 2003.
[3] I. Biederman and M. M. Shiffrar. Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13:640-645, 1987.
[4] M. L. Minsky and S. A. Papert. Perceptrons. MIT Press, Cambridge, MA, 1969.
[5] B. Scholkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2001.
[6] J. R. Anderson. Is human cognition adaptive? Behavioral and Brain Sciences, 14:471-517, 1991.
[7] A. N. Sanborn, T. L. Griffiths, and D. J. Navarro. A more rational model of categorization. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, 2006.
[8] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems 18, 2006.
[9] R. M. Shiffrin and N. Lightfoot. Perceptual learning of alphanumeric-like characters. In The psychology of learning and motivation, volume 36, pages 45-82. Academic Press, San Diego, 1997.
[10] R. L. Goldstone. Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123:178-200, 1994.
[11] R. Pevtzow and R. L. Goldstone. Categorization and the parsing of objects. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, pages 712-722, Hillsdale, NJ, 1994. Lawrence Erlbaum Associates.
[12] G. Orban, J. Fiser, R. N. Aslin, and M. Lengyel. Bayesian model learning in human visual perception. In Advances in Neural Information Processing Systems 18, 2006.
[13] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[14] F. Wood and T. L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems 19, 2007.
[15] R. Thibaux and M. I. Jordan. Hierarchical Beta processes and the Indian buffet process. Technical Report 719, Department of Statistics, University of California, Berkeley, 2006.
An Homotopy Algorithm for the Lasso with Online Observations

Pierre J. Garrigues, Department of EECS, Redwood Center for Theoretical Neuroscience, University of California, Berkeley, CA 94720, garrigue@eecs.berkeley.edu
Laurent El Ghaoui, Department of EECS, University of California, Berkeley, CA 94720, elghaoui@eecs.berkeley.edu

Abstract

It has been shown that the problem of \ell_1-penalized least-square regression, commonly referred to as the Lasso or Basis Pursuit DeNoising, leads to solutions that are sparse and therefore achieves model selection. We propose in this paper RecLasso, an algorithm to solve the Lasso with online (sequential) observations. We introduce an optimization problem that allows us to compute an homotopy from the current solution to the solution after observing a new data point. We compare our method to Lars and Coordinate Descent, and present an application to compressive sensing with sequential observations. Our approach can easily be extended to compute an homotopy from the current solution to the solution that corresponds to removing a data point, which leads to an efficient algorithm for leave-one-out cross-validation. We also propose an algorithm to automatically update the regularization parameter after observing a new data point.

1 Introduction

Regularization using the \ell_1-norm has attracted a lot of interest in the statistics [1], signal processing [2], and machine learning communities. The \ell_1 penalty indeed leads to sparse solutions, which is a desirable property for achieving model selection, data compression, or interpretable results. In this paper, we focus on the problem of \ell_1-penalized least-square regression commonly referred to as the Lasso [1]. We are given a set of training examples or observations (y_i, x_i) \in R \times R^m, i = 1, \ldots, n. We wish to fit a linear model to predict the response y_i as a function of x_i and a feature vector \theta \in R^m,

y_i = x_i^T \theta + \nu_i,

where \nu_i represents the noise in the observation.
The Lasso optimization problem is given by

\min_\theta \ \frac{1}{2} \sum_{i=1}^n (x_i^T \theta - y_i)^2 + \mu_n \|\theta\|_1,    (1)

where \mu_n is a regularization parameter. The solution of (1) is typically sparse, i.e. the solution \theta has few entries that are non-zero, and therefore identifies which dimensions in x_i are useful to predict y_i. The \ell_1-regularized least-square problem can be formulated as a convex quadratic program (QP) with linear equality constraints. The equivalent QP can be solved using standard interior-point methods (IPM) [3], which can handle medium-sized problems. A specialized IPM for large-scale problems was recently introduced in [4]. Homotopy methods have also been applied to the Lasso to compute the full regularization path as \lambda varies [5][6][7]. They are particularly efficient when the solution is very sparse [8]. Other methods to solve (1) include iterative thresholding algorithms [9][10][11], feature-sign search [12], bound optimization methods [13], and gradient projection algorithms [14].

We propose an algorithm to compute the solution of the Lasso when the training examples (y_i, x_i)_{i=1...N} are obtained sequentially. Let \theta^{(n)} be the solution of the Lasso after observing n training examples and \theta^{(n+1)} the solution after observing a new data point (y_{n+1}, x_{n+1}) \in R \times R^m. We introduce an optimization problem that allows us to compute an homotopy from \theta^{(n)} to \theta^{(n+1)}. Hence we use the previously computed solution as a "warm start", which makes our method particularly efficient when the supports of \theta^{(n)} and \theta^{(n+1)} are close. In Section 2 we review the optimality conditions of the Lasso, which we use in Section 3 to derive our algorithm. In Section 4 we test our algorithm numerically, and show applications to compressive sensing with sequential observations and leave-one-out cross-validation. We also propose an algorithm to automatically select the regularization parameter each time we observe a new data point.
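For reference, problem (1) can also be solved by generic batch methods; the following is a minimal cyclic coordinate-descent sketch (in the spirit of [11], but our own simplified code, not the paper's homotopy algorithm):

```python
import numpy as np

def soft_threshold(a, b):
    """Scalar soft-thresholding operator: sign(a) * max(|a| - b, 0)."""
    return np.sign(a) * max(abs(a) - b, 0.0)

def lasso_cd(X, y, mu, n_iter=200):
    """Minimize (1/2)||X theta - y||^2 + mu * ||theta||_1 by cycling
    over coordinates; assumes no column of X is identically zero."""
    n, m = X.shape
    theta = np.zeros(m)
    for _ in range(n_iter):
        for j in range(m):
            # residual with the contribution of coordinate j removed
            r = y - X @ theta + X[:, j] * theta[j]
            theta[j] = soft_threshold(X[:, j] @ r, mu) / (X[:, j] @ X[:, j])
    return theta
```

When X has orthonormal columns each coordinate update is exact, so the solver converges in a single sweep; in general it iterates until the active set stabilizes.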
2 Optimality conditions for the Lasso

The objective function in (1) is convex and non-smooth, since the \ell_1 norm is not differentiable when \theta_i = 0 for some i. Hence \theta is a global minimum if and only if the subdifferential of the objective function at \theta contains the 0-vector. The subdifferential of the \ell_1-norm at \theta is the set

\partial\|\theta\|_1 = \{ v \in R^m : v_i = \mathrm{sgn}(\theta_i) \text{ if } |\theta_i| > 0, \ v_i \in [-1, 1] \text{ if } \theta_i = 0 \}.

Let X \in R^{n \times m} be the matrix whose i-th row is equal to x_i^T, and y = (y_1, \ldots, y_n)^T. The optimality conditions for the Lasso are given by

X^T (X\theta - y) + \mu_n v = 0, \quad v \in \partial\|\theta\|_1.

We define the active set as the indices of the elements of \theta that are non-zero. To simplify notation we assume that the active set appears first, i.e. \theta^T = (\theta_1^T, 0^T) and v^T = (v_1^T, v_2^T), where v_{1i} = \mathrm{sgn}(\theta_{1i}) for all i, and -1 \le v_{2j} \le 1 for all j. Let X = (X_1 \ X_2) be the partitioning of X according to the active set. If the solution is unique it can be shown that X_1^T X_1 is invertible, and we can rewrite the optimality conditions as

\theta_1 = (X_1^T X_1)^{-1} (X_1^T y - \mu_n v_1)
-\mu_n v_2 = X_2^T (X_1 \theta_1 - y).

Note that if we know the active set and the signs of the coefficients of the solution, then we can compute it in closed form.

3 Proposed homotopy algorithm

3.1 Outline of the algorithm

Suppose we have computed the solution \theta^{(n)} of the Lasso with n observations, and that we are given an additional observation (y_{n+1}, x_{n+1}) \in R \times R^m. Our goal is to compute the solution \theta^{(n+1)} of the augmented problem. We introduce the following optimization problem:

\theta(t, \mu) = \arg\min_\theta \ \frac{1}{2} \left\| \begin{pmatrix} X \\ t x_{n+1}^T \end{pmatrix} \theta - \begin{pmatrix} y \\ t y_{n+1} \end{pmatrix} \right\|_2^2 + \mu \|\theta\|_1.    (2)

We have \theta^{(n)} = \theta(0, \mu_n) and \theta^{(n+1)} = \theta(1, \mu_{n+1}). We propose an algorithm that computes a path from \theta^{(n)} to \theta^{(n+1)} in two steps:

Step 1 Vary the regularization parameter from \mu_n to \mu_{n+1} with t = 0. This amounts to computing the regularization path between \mu_n and \mu_{n+1} as done in Lars. The solution path is piecewise linear and we do not review it in this paper (see [15][7][5]).
Step 2 Vary the parameter t from 0 to 1 with \mu = \mu_{n+1}. We show in Section 3.2 how to compute this path.

3.2 Algorithm derivation

We show in this section that \theta(t, \mu) is a piecewise smooth function of t. To make notation lighter we write \theta(t) := \theta(t, \mu). We saw in Section 2 that the solution to the Lasso can be easily computed once the active set and the signs of the coefficients are known. This information is available at t = 0, and we show that the active set and signs remain the same for t in an interval [0, t^*) on which the solution \theta(t) is smooth. We call a point where the active set changes a "transition point" and show how to compute it analytically. At t^* we update the active set and signs, which remain valid until t reaches the next transition point. This process is iterated until we know the active set and signs of the solution at t = 1, and can therefore compute the desired solution \theta^{(n+1)}. We suppose as in Section 2, and without loss of generality, that the solution at t = 0 is such that \theta(0) = (\theta_1^T, 0^T) and v^T = (v_1^T, v_2^T) \in \partial\|\theta(0)\|_1 satisfy the optimality conditions.

Lemma 1. Suppose \theta_{1i} \ne 0 for all i and |v_{2j}| < 1 for all j. There exists t^* > 0 such that for all t \in [0, t^*), the solution of (2) has the same support and the same signs as \theta(0).

PROOF. The optimality conditions of (2) are given by

X^T (X\theta - y) + t^2 x_{n+1} (x_{n+1}^T \theta - y_{n+1}) + \mu w = 0,    (3)

where w \in \partial\|\theta\|_1. We show that there exists a solution \theta(t)^T = (\theta_1(t)^T, 0^T) and w(t)^T = (v_1^T, w_2(t)^T) \in \partial\|\theta(t)\|_1 satisfying the optimality conditions for t sufficiently small. We partition x_{n+1}^T = (x_{n+1,1}^T, x_{n+1,2}^T) according to the active set. We rewrite the optimality conditions as

X_1^T (X_1 \theta_1(t) - y) + t^2 x_{n+1,1} (x_{n+1,1}^T \theta_1(t) - y_{n+1}) + \mu v_1 = 0
X_2^T (X_1 \theta_1(t) - y) + t^2 x_{n+1,2} (x_{n+1,1}^T \theta_1(t) - y_{n+1}) + \mu w_2(t) = 0.

Solving for \theta_1(t) using the first equation gives

\theta_1(t) = (X_1^T X_1 + t^2 x_{n+1,1} x_{n+1,1}^T)^{-1} (X_1^T y + t^2 y_{n+1} x_{n+1,1} - \mu v_1).    (4)

We can see that \theta_1(t) is a continuous function of t.
Since \theta_1(0) = \theta_1 and the elements of \theta_1 are all non-zero, there exists t_1^* such that for t < t_1^* the elements of \theta_1(t) do not change signs. We also have

-\mu w_2(t) = X_2^T (X_1 \theta_1(t) - y) + t^2 x_{n+1,2} (x_{n+1,1}^T \theta_1(t) - y_{n+1}).    (5)

Similarly, w_2(t) is a continuous function of t, and since w_2(0) = v_2, there exists t_2^* such that for t < t_2^* all elements of w_2(t) are strictly smaller than 1 in absolute value. Taking t^* = \min(t_1^*, t_2^*) gives the desired result.

The solution \theta(t) will therefore be smooth until t reaches a transition point, where either a component of \theta_1(t) becomes zero or a component of w_2(t) reaches one in absolute value. We now show how to compute the value of the transition point. Let

\tilde{X} = \begin{pmatrix} X \\ x_{n+1}^T \end{pmatrix}, \quad \tilde{y} = \begin{pmatrix} y \\ y_{n+1} \end{pmatrix},

and partition \tilde{X} = (\tilde{X}_1 \ \tilde{X}_2) according to the active set. Using the Sherman-Morrison formula, we rewrite (4) as

\theta_1(t) = \tilde{\theta}_1 - \frac{(t^2 - 1)\bar{e}}{1 + \alpha(t^2 - 1)} u,

where \tilde{\theta}_1 = (\tilde{X}_1^T \tilde{X}_1)^{-1}(\tilde{X}_1^T \tilde{y} - \mu v_1), \bar{e} = x_{n+1,1}^T \tilde{\theta}_1 - y_{n+1}, \alpha = x_{n+1,1}^T (\tilde{X}_1^T \tilde{X}_1)^{-1} x_{n+1,1}, and u = (\tilde{X}_1^T \tilde{X}_1)^{-1} x_{n+1,1}. Let t_{1i} be the value of t such that \theta_{1i}(t) = 0. We have

t_{1i} = \left( 1 + \left( \frac{\bar{e} u_i}{\tilde{\theta}_{1i}} - \alpha \right)^{-1} \right)^{1/2}.

We now examine the case where a component of w_2(t) reaches one in absolute value. We first notice that

x_{n+1,1}^T \theta_1(t) - y_{n+1} = \frac{\bar{e}}{1 + \alpha(t^2 - 1)}
\tilde{X}_1 \theta_1(t) - \tilde{y} = \tilde{e} - \frac{(t^2 - 1)\bar{e}}{1 + \alpha(t^2 - 1)} \tilde{X}_1 u,

where \tilde{e} = \tilde{X}_1 \tilde{\theta}_1 - \tilde{y}. We can rewrite (5) as

-\mu w_2(t) = \tilde{X}_2^T \tilde{e} + \frac{\bar{e}(t^2 - 1)}{1 + \alpha(t^2 - 1)} (x_{n+1,2} - \tilde{X}_2^T \tilde{X}_1 u).

Let c_j be the j-th column of \tilde{X}_2, and x^{(j)} the j-th element of x_{n+1,2}. The j-th component of w_2(t) will become 1 in absolute value as soon as

\left| c_j^T \tilde{e} + \frac{\bar{e}(t^2 - 1)}{1 + \alpha(t^2 - 1)} \left( x^{(j)} - c_j^T \tilde{X}_1 u \right) \right| = \mu.

Let t_{2j}^+ (resp. t_{2j}^-) be the value such that w_{2j}(t) = 1 (resp. w_{2j}(t) = -1). We have

t_{2j}^+ = \left( 1 + \left( \frac{\bar{e}(x^{(j)} - c_j^T \tilde{X}_1 u)}{-\mu - c_j^T \tilde{e}} - \alpha \right)^{-1} \right)^{1/2}
t_{2j}^- = \left( 1 + \left( \frac{\bar{e}(x^{(j)} - c_j^T \tilde{X}_1 u)}{\mu - c_j^T \tilde{e}} - \alpha \right)^{-1} \right)^{1/2}.
Hence the transition point will be equal to t' = \min\{\min_i t_{1i}, \min_j t_{2j}^+, \min_j t_{2j}^-\}, where we restrict ourselves to the real solutions that lie between 0 and 1. We now have the necessary ingredients to derive the proposed algorithm.

Algorithm 1 RecLasso: homotopy algorithm for online Lasso
1: Compute the path from \theta^{(n)} = \theta(0, \mu_n) to \theta(0, \mu_{n+1}).
2: Initialize the active set to the non-zero coefficients of \theta(0, \mu_{n+1}) and let v = \mathrm{sign}(\theta(0, \mu_{n+1})). Let v_1 and x_{n+1,1} be the subvectors of v and x_{n+1} corresponding to the active set, and \tilde{X}_1 the submatrix of \tilde{X} whose columns correspond to the active set. Initialize \tilde{\theta}_1 = (\tilde{X}_1^T \tilde{X}_1)^{-1}(\tilde{X}_1^T \tilde{y} - \mu v_1). Initialize the transition point t' = 0.
3: Compute the next transition point t'. If it is smaller than the previous transition point or greater than 1, go to Step 5.
   Case 1 The component of \theta_1(t') corresponding to the i-th coefficient goes to zero: remove i from the active set and update v by setting v_i = 0.
   Case 2 The component of w_2(t') corresponding to the j-th coefficient reaches one in absolute value: add j to the active set. If the component reaches 1 (resp. -1), then set v_j = 1 (resp. v_j = -1).
4: Update v_1, \tilde{X}_1 and x_{n+1,1} according to the updated active set. Update \tilde{\theta}_1 = (\tilde{X}_1^T \tilde{X}_1)^{-1}(\tilde{X}_1^T \tilde{y} - \mu v_1) (rank-1 update). Go to Step 3.
5: Compute the final value at t = 1, where the values of \theta^{(n+1)} on the active set are given by \tilde{\theta}_1.

The initialization amounts to computing the solution of the Lasso when we have only one data point (y, x) \in R \times R^m. In this case, the active set has at most one element. Let i_0 = \arg\max_i |x^{(i)}| and v = \mathrm{sign}(y x^{(i_0)}). We have

\theta^{(1)} = \begin{cases} \frac{1}{(x^{(i_0)})^2}(y x^{(i_0)} - \mu_1 v) e_{i_0} & \text{if } |y x^{(i_0)}| > \mu_1 \\ 0 & \text{otherwise.} \end{cases}

We illustrate our algorithm with a simple numerical example in Figure 1, showing the solution path as the regularization parameter and t are successively varied.

3.3 Complexity

The complexity of our algorithm is dominated by the inversion of the matrix \tilde{X}_1^T \tilde{X}_1 at each transition point.
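The closed-form single-observation solution \theta^{(1)} given above can be sketched directly (a pure-Python illustration with our own function name, not code from the paper):

```python
def lasso_one_obs(y, x, mu):
    """Lasso solution for a single observation (y, x): at most one
    coefficient, at the index i0 where |x^(i)| is largest, is active."""
    i0 = max(range(len(x)), key=lambda i: abs(x[i]))
    theta = [0.0] * len(x)
    if abs(y * x[i0]) > mu:
        v = 1.0 if y * x[i0] > 0 else -1.0
        theta[i0] = (y * x[i0] - mu * v) / x[i0] ** 2
    return theta
```

If the correlation |y x^(i0)| does not exceed the regularization level mu, the solution is identically zero, matching the second branch of the formula.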
The size of this matrix is bounded by q = \min(n, m). As the update to this matrix after a transition point is rank 1, the cost of computing the inverse is O(q^2). Let k be the total number of transition points encountered while varying the regularization parameter from \mu_n to \mu_{n+1} and t from 0 to 1. The complexity of our algorithm is thus O(kq^2). In practice, the size of the active set d is much lower than q, and if it remains of order d throughout the homotopy, the complexity is O(kd^2). It is instructive to compare this with the complexity of recursive least-squares, which corresponds to \mu_n = 0 for all n and n > m. For this problem the solution typically has m non-zero elements, and therefore the cost of updating the solution after a new observation is O(m^2). Hence if the solution is sparse (d small) and the active set does not change much (k small), updating the solution of the Lasso will be faster than updating the solution of the non-penalized least-squares problem. Suppose that we applied Lars directly to the problem with n + 1 observations, without using knowledge of \theta^{(n)}, by varying the regularization parameter from a large value where the size of the active set is 0 down to \mu_{n+1}. Let k' be the number of transition points. The complexity of this approach is O(k'q^2), and we can therefore compare the efficiency of the two approaches by comparing their numbers of transition points.

[Figure 1: Solution path for both steps of our algorithm. We set n = 5, m = 5, \mu_n = 0.1n. All the values of X, y, x_{n+1} and y_{n+1} are drawn at random. On the left is the homotopy when the regularization parameter goes from \mu_n = 0.5 to \mu_{n+1} = 0.6. There is one transition point as \theta_2 becomes inactive. On the right is the piecewise smooth path of \theta(t) as t goes from 0 to 1. We can see that \theta_3 becomes zero, \theta_2 goes from being 0 to being positive, whereas \theta_1, \theta_4 and \theta_5 remain active with their signs unchanged. The three transition points are shown as black dots.]
4 Applications

4.1 Compressive sensing

Let \theta_0 \in R^m be an unknown vector that we wish to reconstruct. We observe n linear projections y_i = x_i^T \theta_0 + \nu_i, where \nu_i is Gaussian noise of variance \sigma^2. In general one needs m such measurements to reconstruct \theta_0. However, if \theta_0 has a sparse representation with k non-zero coefficients, it has been shown in the noiseless case that it is sufficient to use n \propto k \log m measurements. This approach is known as compressive sensing [16][17] and has generated a tremendous amount of interest in the signal processing community. The reconstruction is given by the solution of the Basis Pursuit (BP) problem

\min_\theta \|\theta\|_1 \quad \text{subject to} \quad X\theta = y.

If measurements are obtained sequentially, it is advantageous to start estimating the unknown sparse signal as measurements arrive, as opposed to waiting for a specified number of measurements. Algorithms to solve BP with sequential measurements have been proposed in [18][19], and it has been shown that the change in the active set gives a criterion for how many measurements are needed to recover the underlying signal [19]. In the case where the measurements are noisy (\sigma > 0), a standard approach to recovering \theta_0 is to solve the Basis Pursuit DeNoising problem instead [20]. Hence, our algorithm is well suited for compressive sensing with sequential and noisy measurements. We compare our proposed algorithm to Lars applied to the entire dataset each time we receive a new measurement. We also compare our method to coordinate descent [11] with warm start: when receiving a new measurement, we initialize coordinate descent (CD) at the current solution. We sample measurements from a model where m = 100, the vector \theta_0 used to sample the data has 25 non-zero elements whose values are Bernoulli \pm 1, x_i \sim N(0, I_m), \sigma = 1, and we set \mu_n = 0.1n. The reconstruction error decreases as the number of measurements grows (not plotted).
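The simulation model just described can be sketched as follows (our own code reproducing the stated setup; the function name and seeding are assumptions):

```python
import numpy as np

def sample_cs_model(m=100, k=25, sigma=1.0, n=10, seed=0):
    """Sample the compressive-sensing simulation model: theta0 has k
    Bernoulli +/-1 non-zero entries, x_i ~ N(0, I_m), and
    y_i = x_i^T theta0 + Gaussian noise of std sigma."""
    rng = np.random.default_rng(seed)
    theta0 = np.zeros(m)
    support = rng.choice(m, size=k, replace=False)
    theta0[support] = rng.choice([-1.0, 1.0], size=k)
    X = rng.standard_normal((n, m))       # one measurement vector per row
    y = X @ theta0 + sigma * rng.standard_normal(n)
    return X, y, theta0
```

Feeding the rows of X and entries of y to an online solver one at a time reproduces the sequential-measurement setting of this section.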
The parameter that controls the complexity of Lars and RecLasso is the number of transition points. We see in Figure 2 that this quantity is consistently smaller for RecLasso, and that after 100 measurements, when the support of the solution does not change much, there are typically fewer than 5 transition points for RecLasso. We also show in Figure 2 a timing comparison for the three algorithms, each of which we implemented in Python. We observed that CD requires many iterations to converge to the optimal solution when n < m, and we found it difficult to set a stopping criterion that ensures convergence. Our algorithm is consistently faster than Lars and CD with warm start.

[Figure 2: Compressive sensing results. On the x-axis of the plots are the iterations of the algorithm, where at each iteration we receive a new measurement. On the left is the comparison of the number of transition points for Lars and RecLasso, and on the right is the timing comparison for the three algorithms. The simulation is repeated 100 times and shaded areas represent one standard deviation.]

4.2 Selection of the regularization parameter

We have assumed until now a pre-determined regularization schedule, an assumption that is not practical. The appropriate amount of regularization depends on the variance of the noise present in the data, which is not known a priori. It is therefore not obvious how to determine the amount of regularization. We write \mu_n = n\lambda_n, such that \lambda_n is the weighting factor between the average mean-squared error and the \ell_1-norm. We propose an algorithm that selects \lambda_n in a data-driven manner. The problem with n observations is given by

\theta(\lambda) = \arg\min_\theta \ \frac{1}{2n} \sum_{i=1}^n (x_i^T \theta - y_i)^2 + \lambda \|\theta\|_1.

We have seen previously that \theta(\lambda) is piecewise linear, and we can therefore compute its gradient unless \lambda is a transition point. Let err(\lambda) = (x_{n+1}^T \theta(\lambda) - y_{n+1})^2 be the error on the new observation.
We propose the following update rule to select \lambda_{n+1}:

\log \lambda_{n+1} = \log \lambda_n - \eta \frac{\partial \mathrm{err}}{\partial \log \lambda}(\lambda_n)
\Rightarrow \lambda_{n+1} = \lambda_n \times \exp\left\{ 2n\eta \, x_{n+1,1}^T (X_1^T X_1)^{-1} v_1 \left( x_{n+1,1}^T \theta_1 - y_{n+1} \right) \right\},

where the solution after n observations corresponding to the regularization parameter \lambda_n is given by (\theta_1^T, 0^T), and v_1 = \mathrm{sign}(\theta_1). We therefore use the new observation as a test set, which allows us to update the regularization parameter before introducing the new observation by varying t from 0 to 1. We perform the update in the log domain to ensure that \lambda_n is always positive. We performed simulations using the same experimental setup as in Section 4.1 and \eta = 0.01. We show in Figure 3 a representative example where \lambda converges. We compared this value to the one we would obtain if we had a training and a test set with 250 observations each, fitting the model on the training set for various values of \lambda and selecting the one giving the smallest prediction error on the test set. We obtain a very similar result; understanding the convergence properties of our proposed update rule for the regularization parameter is the object of current research.

4.3 Leave-one-out cross-validation

We suppose in this section that we have access to a dataset (y_i, x_i)_{i=1...n} and that \mu_n = n\lambda. The parameter \lambda is tied to the amount of noise in the data, which we do not know a priori. A standard approach to select this parameter is leave-one-out cross-validation. For a range of values of \lambda, we use n - 1 data points to solve the Lasso with regularization parameter (n - 1)\lambda and then compute the prediction error on the data point that was left out. This is repeated n times, such that each data point serves as the test set. The best value of \lambda is then the one that leads to the smallest mean prediction error. Our proposed algorithm can be adapted to the case where we wish to update the solution of the Lasso after a data point is removed.
To do so, we compute the first homotopy by varying the regularization parameter from $n\lambda$ to $(n-1)\lambda$. We then compute the second homotopy by varying $t$ from 1 to 0, which has the effect of removing the data point that will be used for testing. As the algorithm is very similar to the one we proposed in Section 3.2, we omit the derivation. We sample a model with n = 32 and m = 32. The vector $\theta_0$ used to generate the data has 8 non-zero elements. We add Gaussian noise of variance 0.2 to the observations, and select $\lambda$ over a range of 10 values. We show in Figure 4 the histogram of the number of transition points for our algorithm when solving the Lasso with $n-1$ data points (we solve this problem $10 \times n$ times). Note that in the majority of cases there are very few transition points, which makes our approach very efficient in this setting. Figure 3: Evolution of the regularization parameter when using our proposed update rule. Figure 4: Histogram of the number of transition points when removing an observation. 5 Conclusion We have presented an algorithm to solve ℓ1-penalized least-squares regression with online observations. We use the current solution as a “warm start” and introduce an optimization problem that allows us to compute a homotopy from the current solution to the solution after observing a new data point. The algorithm is particularly efficient if the active set does not change much, and we show a computational advantage over Lars and Coordinate Descent with warm start in applications such as compressive sensing with sequential observations and leave-one-out cross-validation. We have also proposed an algorithm that automatically selects the regularization parameter by using each new measurement as a test set. Acknowledgments We wish to acknowledge support from NSF grant 0835531, and Guillaume Obozinski and Chris Rozell for fruitful discussions.
Adaptive Martingale Boosting Philip M. Long Google plong@google.com Rocco A. Servedio Columbia University rocco@cs.columbia.edu Abstract In recent work, Long and Servedio [LS05] presented a “martingale boosting” algorithm that works by constructing a branching program over weak classifiers and has a simple analysis based on elementary properties of random walks. [LS05] showed that this martingale booster can tolerate random classification noise when it is run with a noise-tolerant weak learner; however, a drawback of the algorithm is that it is not adaptive, i.e. it cannot effectively take advantage of variation in the quality of the weak classifiers it receives. We present an adaptive variant of the martingale boosting algorithm. This adaptiveness is achieved by modifying the original algorithm so that the random walks that arise in its analysis have different step sizes depending on the quality of the weak learner at each stage. The new algorithm inherits the desirable properties of the original [LS05] algorithm, such as random classification noise tolerance, and has other advantages besides adaptiveness: it requires polynomially fewer calls to the weak learner than the original algorithm, and it can be used with confidence-rated weak hypotheses that output real values rather than Boolean predictions. 1 Introduction Boosting algorithms are efficient procedures that can be used to convert a weak learning algorithm (one which outputs a weak hypothesis that performs only slightly better than random guessing for a binary classification task) into a strong learning algorithm (one which outputs a high-accuracy classifier). A rich theory of boosting has been developed over the past two decades; see [Sch03, MR03] for some overviews. Two important issues for boosting algorithms which are relevant to the current work are adaptiveness and noise-tolerance; we briefly discuss each of these issues before describing the contributions of this paper. Adaptiveness.
“Adaptiveness” refers to the ability of boosting algorithms to adjust to different accuracy levels in the sequence of weak hypotheses that they are given. The first generation of boosting algorithms [Sch90, Fre95] required the user to input an “advantage” parameter γ such that the weak learner was guaranteed to always output a weak hypothesis with accuracy at least $1/2 + \gamma$. Given an initial setting of γ, even if the sequence of weak classifiers generated by the runs of the weak learner included some hypotheses with accuracy (perhaps significantly) better than $1/2 + \gamma$, the early boosting algorithms were unable to capitalize on this extra accuracy; thus, these early boosters were not adaptive. Adaptiveness is an important property since it is often the case that the advantage of successive weak classifiers grows smaller and smaller as boosting proceeds. A major step forward was the development of the AdaBoost algorithm [FS97]. AdaBoost does not require a lower bound γ on the minimum advantage, and the error rate of its final hypothesis depends favorably on the different advantages of the different weak classifiers in the sequence. More precisely, if the accuracy of the t-th weak classifier is $\frac{1}{2} + \gamma_t$, then the AdaBoost final hypothesis has error at most $\prod_{t=0}^{T-1}\sqrt{1 - 4\gamma_t^2}$. This error rate is usually upper bounded (see [FS97]) by $$\exp\left(-2\sum_{t=0}^{T-1}\gamma_t^2\right) \quad (1)$$ and indeed (1) is a good approximation if no $\gamma_t$ is too large. Noise tolerance. One drawback of many standard boosting techniques, including AdaBoost, is that they can perform poorly when run on noisy data [FS96, MO97, Die00, LS08]. Motivated in part by this observation, in recent years boosting algorithms that work by constructing branching programs over the weak classifiers (note that this is in contrast with AdaBoost, which constructs a single weighted sum of weak classifiers) have been developed and shown to enjoy some provable noise tolerance.
In particular, the algorithms of [KS05, LS05] have been shown to boost to optimally high accuracy in the presence of random classification noise when run with a random classification noise tolerant weak learner. (Recall that “random classification noise at rate η” means that the true binary label of each example is independently flipped with probability η. This is a very well studied noise model, see e.g. [AL88, Kea98, AD98, BKW03, KS05, RDM06] and many other references.) While the noise tolerance of the boosters [KS05, LS05] is an attractive feature, a drawback of these algorithms is that they do not enjoy the adaptiveness of algorithms like AdaBoost. The MMM booster of [KS05] is not known to have any adaptiveness at all, and the “martingale boosting” algorithm of [LS05] only has the following limited type of adaptiveness. The algorithm works in stages t = 0, 1, . . . where in the t-th stage a collection of t + 1 weak hypotheses are obtained; let $\gamma_t$ denote the minimum advantage of these t + 1 hypotheses obtained in stage t. [LS05] shows that the final hypothesis constructed by martingale boosting has error at most $$\exp\left(-\frac{\left(\sum_{t=0}^{T-1}\gamma_t\right)^2}{2T}\right). \quad (2)$$ (2) is easily seen to always be a worse bound than (1), and the difference can be substantial. Consider, for example, a sequence of weak classifiers in which the advantages decrease as $\gamma_t = 1/\sqrt{t+1}$ (this is in line with the oft-occurring situation, mentioned above, that advantages grow smaller and smaller as boosting progresses). For any $\epsilon > 0$ we can bound (1) from above by $\epsilon$ by taking $T \approx 1/\sqrt{\epsilon}$, whereas for this sequence of advantages the error bound (2) is never less than 0.5 (which is trivial), and in fact (2) approaches 1 as $t \to \infty$. Our contributions: adaptive noise-tolerant boosting. We give the first boosting algorithm that is both adaptive enough to satisfy a bound of $\exp\left(-\Omega\left(\sum_{t=0}^{T-1}\gamma_t^2\right)\right)$ and is provably tolerant to random classification noise.
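By Cauchy-Schwarz, $(\sum_t \gamma_t)^2 \le T\sum_t \gamma_t^2$, so the exponent in (2) never exceeds the exponent in (1) and (2) is always the weaker bound. A quick numerical check of the two bounds for a decreasing advantage sequence (a sketch; the helper names are ours):

```python
import math

def bound_1(gammas):
    """AdaBoost-style bound (1): exp(-2 * sum_t gamma_t^2)."""
    return math.exp(-2.0 * sum(g * g for g in gammas))

def bound_2(gammas):
    """Martingale-boosting bound (2): exp(-(sum_t gamma_t)^2 / (2T))."""
    T = len(gammas)
    return math.exp(-(sum(gammas) ** 2) / (2.0 * T))

# Decreasing advantages gamma_t = 1/sqrt(t+1): bound (1) behaves like
# T^{-2} and can be driven below any fixed epsilon, while bound (2)
# remains bounded away from 0 no matter how large T gets.
gammas = [1.0 / math.sqrt(t + 1) for t in range(10000)]
```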
We do this by modifying the martingale boosting algorithm of [LS05] to make it adaptive; the modification inherits the noise-tolerance of the original [LS05] algorithm. In addition to its adaptiveness, the new algorithm also improves on [LS05] by constructing a branching program with polynomially fewer nodes than the original martingale boosting algorithm (thus it requires fewer calls to the weak learner), and it can be used directly with weak learners that generate confidence-rated weak hypotheses (the original martingale boosting algorithm required the weak hypotheses to be Boolean-valued). Our approach. We briefly sketch the new idea that lets us achieve adaptiveness. Recall that the original martingale booster of Long and Servedio formulates the boosting process as a random walk; intuitively, as a random example progresses down through the levels of the branching program constructed by the [LS05] booster, it can be viewed as performing a simple random walk with step size 1 on the real line, where the walk is biased in the direction (positive or negative) corresponding to the correct classification of the example. (The quantity tracked during the random walk is the difference between the number of positive predictions and the number of negative predictions made by base classifiers encountered in the branching program up to a given point in time.) This means that after enough stages, a random positive example will end up to the right of the origin with high probability, and contrariwise for a random negative example. Thus a high-accuracy classifier is obtained simply by labelling each example according to the sign (+ or −) of its final location on the real line. The new algorithm extends this approach in a simple and intuitive way, by having examples perform a random walk with variable step size: if the weak classifier at a given internal node has large advantage, then the new algorithm makes the examples that reach that node take a large step in the random walk.
This is a natural way to exploit the fact that examples reaching such a large-advantage node usually tend to walk in the right direction. The idea extends straightforwardly to let us handle confidence-rated weak hypotheses (see [SS99]) whose predictions are real values in [−1, 1] as opposed to Boolean values from {−1, 1}. This is done simply by scaling the step size for a given example x from a given node according to the numerical value h(x) that the confidence-rated weak hypothesis h at that node assigns to example x. While using different step sizes at different levels is a natural idea, it introduces some complications. In particular, if a branching program is constructed naively based on this approach, it is possible for the number of nodes to increase exponentially with the depth. To avoid this, we use a randomized rounding scheme together with the variable-step random walk to ensure that the number of nodes in the branching program grows polynomially rather than exponentially in the number of stages in the random walk (i.e. the depth of the branching program). In fact, we actually improve on the efficiency of the original martingale boosting algorithm of [LS05] by a polynomial factor, by truncating “extreme” nodes in the branching program that are “far” from the origin. Our analysis shows that this truncation has only a small effect on the accuracy of the final classifier, while giving a significant asymptotic savings in the size of the final branching program (roughly $1/\gamma^3$ nodes as opposed to the $1/\gamma^4$ nodes of [KS05, LS05]). 2 Preliminaries We make the following assumptions and notational conventions throughout the paper. There is an initial distribution D over a domain of examples X. There is a target function $c : X \to \{-1, 1\}$ that we are trying to learn. Given the target function c and the distribution D, we write $D^+$ to denote the distribution D restricted to the positive examples $\{x \in X : c(x) = 1\}$.
Thus, for any event $S \subseteq \{x \in X : c(x) = 1\}$ we have $\Pr_{D^+}[x \in S] = \Pr_D[x \in S]/\Pr_D[c(x) = 1]$. Similarly, we write $D^-$ to denote D restricted to the negative examples $\{x \in X : c(x) = -1\}$. As usual, our boosting algorithms work by repeatedly passing a distribution D′ derived from D to a weak learner, which outputs a classifier h. The future behavior will be affected by how well h performs on data distributed according to D′. To keep the analysis clean, we will abstract away issues of sampling from D′ and estimating the accuracy of the resulting h. These issues are trivial if D is uniform over a moderate-sized domain (since all probabilities can be computed exactly), and otherwise they can be handled via the same standard estimation techniques used in [LS05]. Martingale boosting. We briefly recall some key aspects of the martingale boosting algorithm of [LS05] which are shared by our algorithm (and note some differences). Both boosters work by constructing a leveled branching program. Each node in the branching program has a location; this is a pair (β, t) where β is a real value (a location on the line) and t ≥ 0 is an integer (the level of the node; each level corresponds to a distinct stage of boosting). The initial node, where all examples start, is at (0, 0). In successive stages t = 0, 1, 2, . . . the booster constructs nodes in the branching program at levels 0, 1, 2, . . .. For a location (β, t) where the branching program has a node, let $D_{\beta,t}$ be the distribution D conditioned on reaching the node at (β, t). We sometimes refer to this distribution $D_{\beta,t}$ as the distribution induced by node (β, t). As boosting proceeds, in stage t, each node (β, t) at level t is assigned a hypothesis which we call $h_{\beta,t}$. Unlike [LS05] we shall allow confidence-rated hypotheses, so each weak hypothesis is a mapping from X to [−1, 1]. Once the hypothesis $h_{\beta,t}$ has been obtained, out-edges are constructed from (β, t) to its child nodes at level t + 1.
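Over a finite domain, the restricted distributions D⁺ and D⁻ are just conditionings of D on the label; a minimal sketch (representing a distribution as a dict is our own choice, not the paper's):

```python
def restrict(D, c, sign):
    """Condition a finite distribution D (dict: example -> probability)
    on the label c(x) == sign, giving D^+ (sign=+1) or D^- (sign=-1):
    Pr_{D^sign}[x] = Pr_D[x] / Pr_D[c(x) = sign]."""
    mass = sum(p for x, p in D.items() if c(x) == sign)
    return {x: p / mass for x, p in D.items() if c(x) == sign}
```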
While the original martingale boosting algorithm of [LS05] had two child nodes at (β − 1, t + 1) and (β + 1, t + 1) from each internal node, as we describe in Section 3 our new algorithm will typically have four child nodes for each node (but may, for a confidence-rated base classifier, have as many as eight). Our algorithm. To fully specify our new boosting algorithm we must describe: (1) How the weak learner is run at each node (β, t) to obtain a weak classifier. This is straightforward for the basic case of “two-sided” weak learners that we describe in Section 3 and somewhat less straightforward in the usual (non-two-sided) weak learner setting. In Section 5.1 we describe how to use a standard weak learner, and how to handle noise – both extensions borrow heavily from earlier work [LS05, KS05]. (2) What function is used to label the node (β, t), i.e. how to route subsequent examples that reach (β, t) to one of the child nodes. It turns out that this function is a randomized version of the weak classifier mentioned in point (1) above. (3) Where to place the child nodes at level t + 1; this is closely connected with (2) above. As in [LS05], once the branching program has been fully constructed down through some level T, the final hypothesis it computes is very simple. Given an input example x, the output of the final hypothesis on x is sgn(β), where (β, T) is the location in level T to which x is ultimately routed as it passes through the branching program. 3 Boosting a two-sided weak learner In this section we assume that we have a two-sided weak learner. This is an algorithm which, given a distribution D, can always obtain hypotheses that have two-sided advantage as defined below: Definition 1 A hypothesis $h : X \to [-1, 1]$ has two-sided advantage γ with respect to D if it satisfies both $E_{x\in D^+}[h(x)] \ge \gamma$ and $E_{x\in D^-}[h(x)] \le -\gamma$.
As we explain in Section 5.1, we may apply methods of [LS05] to reduce the typical case, in which we only receive “normal” weak hypotheses rather than two-sided weak hypotheses, to this case. The branching program starts off with a single node at location (0, 0). Assuming the branching program has been constructed up through level t, we now explain how it is extended in the t-th stage up through level t + 1. There are two basic steps in each stage: weak training and branching. Weak training. Consider a given node at location (β, t) in the branching program. As in [LS05] we construct a weak hypothesis $h_{\beta,t}$ simply by running the two-sided weak learner on examples drawn from $D_{\beta,t}$ and letting $h_{\beta,t}$ be the hypothesis it generates. Let us write $\gamma_{\beta,t}$ to denote $$\gamma_{\beta,t} \stackrel{\text{def}}{=} \min\left\{E_{x\in(D_{\beta,t})^+}[h_{\beta,t}(x)],\; E_{x\in(D_{\beta,t})^-}[-h_{\beta,t}(x)]\right\}.$$ We call $\gamma_{\beta,t}$ the advantage at node (β, t). We do this for all nodes at level t. Now we define the advantage at level t to be $$\gamma_t \stackrel{\text{def}}{=} \min_\beta \gamma_{\beta,t}. \quad (3)$$ Branching. Intuitively, we would like to use $\gamma_t$ as a scaling factor for the “step size” of the random walk at level t. Since we are using confidence-rated weak hypotheses, it is also natural to have the step that example x takes at a given node be proportional to the value of the confidence-rated hypothesis at that node on x. The most direct way to do this would be to label the node (β, t) with the weak classifier $h_{\beta,t}$ and to route each example x to a node at location $(\beta + \gamma_t h_{\beta,t}(x),\, t+1)$. However, there are obvious difficulties with this approach; for one thing a single node at (β, t) could give rise to arbitrarily many (infinitely many, if |X| = ∞) nodes at level t + 1. Even if the hypotheses $h_{\beta,t}$ were all guaranteed to be {−1, 1}-valued, if we were to construct a branching program in this way then it could be the case that by the T-th stage there are $2^{T-1}$ distinct nodes at level T. We get around this problem by creating nodes at level t + 1 only at integer multiples of $\gamma_t/2$.
Note that this “granularity” is different at each level, depending on the advantage at each level (we shall see in the next section that this is crucial for the analysis). This keeps us from having too many nodes in the branching program at level t + 1. Of course, we only actually create those nodes in the branching program that have an incoming edge as described below (later we will give an analysis to bound the number of such nodes). We simulate the effect of having an edge from (β, t) to $(\beta + \gamma_t h_{\beta,t}(x),\, t+1)$ by using two edges from (β, t), to $(i\cdot\gamma_t/2,\, t+1)$ and to $((i+1)\cdot\gamma_t/2,\, t+1)$, where i is the unique integer such that $i\cdot\gamma_t/2 \le \beta + \gamma_t h_{\beta,t}(x) < (i+1)\cdot\gamma_t/2$. To simulate routing an example x to $(\beta + \gamma_t h_{\beta,t}(x),\, t+1)$, the branching program routes x randomly along one of these two edges so that the expected location at which x ends up is $(\beta + \gamma_t h_{\beta,t}(x),\, t+1)$. More precisely, if $\beta + \gamma_t h_{\beta,t}(x) = (i+\rho)\cdot\gamma_t/2$ where $0 \le \rho < 1$, then the rule used at node (β, t) to route an example x is “with probability ρ send x to $((i+1)\cdot\gamma_t/2,\, t+1)$, and with probability $1-\rho$ send x to $(i\cdot\gamma_t/2,\, t+1)$.” Since $|h_{\beta,t}(x)| \le 1$ for all x by assumption, it is easy to see that at most eight outgoing edges are required from each node (β, t). Thus the branching program that the booster constructs uses a randomized variant of each weak hypothesis $h_{\beta,t}$ to route examples along one of (at most) eight outgoing edges. 4 Proof of correctness for boosting a two-sided weak learner The following theorem shows that the algorithm described above is an effective adaptive booster for two-sided weak learners: Theorem 2 Consider running the above booster for T stages. For t = 0, . . . , T − 1 let the values $\gamma_0, \ldots, \gamma_{T-1} > 0$ be defined as described above, so each invocation of the two-sided weak learner on distribution $D_{\beta,t}$ yields a hypothesis $h_{\beta,t}$ that has $\gamma_{\beta,t} \ge \gamma_t$. Then the final hypothesis h constructed by the booster satisfies $$\Pr_{x\in D}[h(x) \ne c(x)] \le \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right).$$
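The randomized-rounding routing rule above can be sketched directly; by construction, the expected landing position equals the target β + γ_t·h(x). Function and variable names are illustrative:

```python
import math
import random

def route(beta, gamma_t, h_val, rng):
    """Randomized-rounding branch: from position beta with step gamma_t * h_val,
    route to one of the two neighboring integer multiples of gamma_t/2 so that
    the expected landing position equals beta + gamma_t * h_val."""
    target = beta + gamma_t * h_val
    step = gamma_t / 2.0
    i = math.floor(target / step)    # i * step <= target < (i+1) * step
    rho = target / step - i          # fractional part in [0, 1)
    if rng.random() < rho:
        return (i + 1) * step        # with probability rho
    return i * step                  # with probability 1 - rho
```

Averaging many routed samples recovers the target position, which is exactly the unbiasedness property the analysis in the next section relies on.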
(4) The algorithm makes at most $M \le O(1)\cdot\sum_{t=0}^{T-1}\frac{1}{\gamma_t}\sum_{j=0}^{t-1}\gamma_j$ calls to the weak learner (i.e. constructs a branching program with at most M nodes). Proof: We will show that $\Pr_{x\in D^+}[h(x) \ne 1] \le \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right)$; a completely symmetric argument shows a similar bound for negative examples, which gives (4). For t = 1, . . . , T we define the random variable $A_t$ as follows: given a draw of x from $D^+$ (the original distribution D restricted to positive examples), the value of $A_t$ is $\gamma_{t-1}h_{\beta,t-1}(x)$, where (β, t − 1) is the location of the node that x reaches at level t − 1 of the branching program. Intuitively, $A_t$ captures the direction and size of the move that we would like x to make during the branching step that brings it to level t. We define $B_t$ to be the random variable that captures the direction and size of the move that x actually makes during the branching step that brings it to level t. More precisely, let i be the integer such that $i\cdot(\gamma_{t-1}/2) \le \beta + \gamma_{t-1}h_{\beta,t-1}(x) < (i+1)\cdot(\gamma_{t-1}/2)$, and let $\rho \in [0,1)$ be such that $\beta + \gamma_{t-1}h_{\beta,t-1}(x) = (i+\rho)\cdot(\gamma_{t-1}/2)$. Then $B_t = (i+1)\cdot(\gamma_{t-1}/2) - \beta$ with probability ρ, and $B_t = i\cdot(\gamma_{t-1}/2) - \beta$ with probability $1-\rho$. We have that $E[B_t]$ (where the expectation is taken only over the ρ-probability in the definition of $B_t$) equals $(i+\rho)\cdot(\gamma_{t-1}/2) - \beta = \gamma_{t-1}h_{\beta,t-1}(x) = A_t$. Let $X_t$ denote $\sum_{i=1}^{t} B_i$, so the value of $X_t$ is the actual location on the real line where x ends up at level t. Fix $1 \le t \le T$ and let us consider the conditional random variable $(X_t \mid X_{t-1})$. Conditioned on $X_{t-1}$ taking any particular value (i.e. on x reaching any particular location (β, t − 1)), we have that x is distributed according to $(D_{\beta,t-1})^+$, and thus we have $$E[X_t \mid X_{t-1}] = X_{t-1} + E_{x\in(D_{\beta,t-1})^+}[\gamma_{t-1}h_{\beta,t-1}(x)] \ge X_{t-1} + \gamma_{t-1}\gamma_{\beta,t-1} \ge X_{t-1} + \gamma_{t-1}^2, \quad (5)$$ where the first inequality follows from the two-sided advantage of $h_{\beta,t-1}$. For t = 0, . . . , T, define the random variable $Y_t = X_t - \sum_{i=0}^{t-1}\gamma_i^2$ (so $Y_0 = X_0 = 0$).
Since conditioning on the value of $Y_{t-1}$ is equivalent to conditioning on the value of $X_{t-1}$, using (5) we get $$E[Y_t \mid Y_{t-1}] = E\left[X_t - \sum_{i=0}^{t-1}\gamma_i^2 \;\middle|\; Y_{t-1}\right] = E[X_t \mid Y_{t-1}] - \sum_{i=0}^{t-1}\gamma_i^2 \ge X_{t-1} - \sum_{i=0}^{t-2}\gamma_i^2 = Y_{t-1},$$ so the sequence of random variables $Y_0, \ldots, Y_T$ is a sub-martingale.¹ To see that this sub-martingale has bounded differences, note that we have $|Y_t - Y_{t-1}| = |X_t - X_{t-1} - \gamma_{t-1}^2| = |B_t - \gamma_{t-1}^2|$. ¹The more common definition of a sub-martingale requires that $E[Y_t \mid Y_0, \ldots, Y_{t-1}] \ge Y_{t-1}$, but the weaker assumption that $E[Y_t \mid Y_{t-1}] \ge Y_{t-1}$ suffices for the concentration bounds that we need (see [ASE92, Hay05]). The value of $B_t$ is obtained by first moving by $\gamma_{t-1}h_{\beta,t-1}(x)$, and then rounding to a neighboring multiple of $\gamma_{t-1}/2$, so $|B_t| \le (3/2)\gamma_{t-1}$, which implies $|Y_t - Y_{t-1}| \le (3/2)\gamma_{t-1} + \gamma_{t-1}^2 \le 2\gamma_{t-1}$. Now recall Azuma’s inequality for sub-martingales: Let $0 = Y_0, \ldots, Y_T$ be a sub-martingale which has $|Y_i - Y_{i-1}| \le c_i$ for each i = 1, . . . , T. Then for any $\lambda > 0$ we have $$\Pr[Y_T \le -\lambda] \le \exp\left(-\frac{\lambda^2}{2\sum_{i=1}^{T} c_i^2}\right).$$ We apply this with each $c_i = 2\gamma_{i-1}$ and $\lambda = \sum_{t=0}^{T-1}\gamma_t^2$. This gives us that the error rate of h on positive examples, $\Pr_{x\in D^+}[h(x) = -1]$, equals $$\Pr[X_T < 0] = \Pr[Y_T < -\lambda] \le \exp\left(-\frac{\lambda^2}{8\sum_{t=0}^{T-1}\gamma_t^2}\right) = \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right). \quad (6)$$ So we have established (4); it remains to bound the number of nodes constructed in the branching program. Let us write $M_t$ to denote the number of nodes at level t, so $M = \sum_{t=0}^{T-1} M_t$. The t-th level of boosting can cause the rightmost (leftmost) node to be at most $2\gamma_{t-1}$ farther away from the origin than the rightmost (leftmost) node at the (t − 1)-st level. This means that at level t, every node is at a position (β, t) with $|\beta| \le 2\sum_{j=0}^{t-1}\gamma_j$. Since nodes are placed at integer multiples of $\gamma_t/2$, we have that $M = \sum_{t=0}^{T-1} M_t \le O(1)\cdot\sum_{t=0}^{T-1}\frac{1}{\gamma_t}\sum_{j=0}^{t-1}\gamma_j$. Remark. Consider the case in which each advantage $\gamma_t$ is just γ and we are boosting to accuracy ε. As usual, taking $T = O(\log(1/\epsilon)/\gamma^2)$ gives an error bound of ε.
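As a sanity check on the bound in (6), one can simulate an idealized version of the variable-step walk for a positive example, using a Boolean base classifier with advantage γ_t at each stage (this ignores the randomized rounding, which only adds mean-zero noise to each step). The setup and names below are ours:

```python
import math
import random

def simulate_error(gammas, n_trials=5000, seed=0):
    """Idealized variable-step walk for a positive example: at stage t the
    base classifier outputs +1 with probability (1+gamma_t)/2 and -1
    otherwise, and the walk moves by gamma_t * h. Returns the fraction of
    walks that end at a non-positive location (the empirical error)."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_trials):
        pos = 0.0
        for g in gammas:
            h = 1 if rng.random() < (1.0 + g) / 2.0 else -1
            pos += g * h
        if pos <= 0:
            errors += 1
    return errors / n_trials

def theorem2_bound(gammas):
    """The error bound exp(-(1/8) * sum_t gamma_t^2)."""
    return math.exp(-sum(g * g for g in gammas) / 8.0)
```

With constant advantage the empirical error sits comfortably below the bound, as it should for a concentration inequality of this form.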
With these parameters we have that $M \le O(\log^2(1/\epsilon)/\gamma^4)$, the same asymptotic bound achieved in [LS05]. In the next section we describe a modification of the algorithm that improves this bound by essentially a factor of $1/\gamma$. 4.1 Improving efficiency by freezing extreme nodes Here we describe a variant of the algorithm from the previous section that constructs a branching program with fewer nodes. The algorithm requires an input parameter ε which is an upper bound on the desired final error of the aggregate classifier. For $t \ge 1$, after the execution of step t − 1 of boosting, when all nodes at level t have been created, each node (α, t) with $$|\alpha| > \sqrt{\left(8\sum_{s=0}^{t-1}\gamma_s^2\right)\left(2\ln t + \ln\frac{4}{\epsilon}\right)}$$ is “frozen.” The algorithm commits to classifying any test examples routed to any such node according to sgn(α), and these nodes are not used to generate weak hypotheses during the next round of training. We have the following theorem about the performance of this algorithm: Theorem 3 Consider running the modified booster for T stages. For t = 0, . . . , T − 1 let the values $\gamma_0, \ldots, \gamma_{T-1} > 0$ be defined as described above, so each invocation of the weak learner on distribution $D_{\beta,t}$ yields a hypothesis $h_{\beta,t}$ that has $\gamma_{\beta,t} \ge \gamma_t$. Then the final output hypothesis h of the booster satisfies $$\Pr_{x\in D}[h(x) \ne c(x)] \le \frac{\epsilon}{2} + \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right). \quad (7)$$ The algorithm makes $O\left(\sqrt{\left(\sum_{t=0}^{T-1}\gamma_t^2\right)\left(\ln T + \ln\frac{1}{\epsilon}\right)}\cdot\sum_{t=0}^{T-1}\frac{1}{\gamma_t}\right)$ calls to the weak learner. Proof: As in the previous proof, it suffices to bound $\Pr_{x\in D^+}[h(x) \ne 1]$. The proof of Theorem 2 gives us that if we never did any freezing, then $\Pr_{x\in D^+}[h(x) \ne 1] \le \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right)$. Now let us analyze the effect of freezing in a given stage t < T. Let $A_t$ be the distance from the origin past which examples are frozen in round t; i.e. $A_t = \sqrt{\left(8\sum_{s=0}^{t-1}\gamma_s^2\right)\left(2\ln t + \ln\frac{4}{\epsilon}\right)}$.
Nearly exactly the same analysis as proves (6) can be used here: for a positive example x to be incorrectly frozen in round t, it must be the case that $X_t < -A_t$, or equivalently $Y_t < -A_t - \sum_{i=0}^{t-1}\gamma_i^2$. Thus our choice of $A_t$ gives us that $\Pr_{x\in D^+}[x \text{ incorrectly frozen in round } t]$ is at most $$\Pr\left[Y_t \le -A_t - \sum_{i=0}^{t-1}\gamma_i^2\right] \le \Pr[Y_t \le -A_t] \le \frac{\epsilon}{4t^2},$$ so consequently we have $\Pr_{x\in D^+}[x \text{ ever incorrectly frozen}] \le \frac{\epsilon}{2}$. From here we may argue as in [LS05]: $\Pr_{x\in D^+}[h(x) \ne 1]$ equals $$\Pr_{x\in D^+}[h(x) \ne 1 \text{ and } x \text{ is frozen}] + \Pr_{x\in D^+}[h(x) \ne 1 \text{ and } x \text{ is not frozen}] \le \frac{\epsilon}{2} + \exp\left(-\frac{1}{8}\sum_{t=0}^{T-1}\gamma_t^2\right),$$ which gives (7). The bound on the number of calls to the weak learner follows from the fact that there are $O(A_t/\gamma_t)$ such calls in each stage of boosting, and the fact that $A_t \le \sqrt{\left(8\sum_{s=0}^{T-1}\gamma_s^2\right)\left(2\ln T + \ln\frac{4}{\epsilon}\right)}$ for all t. It is easy to check that if $\gamma_t = \gamma$ for all t, taking $T = O(\log(1/\epsilon)/\gamma^2)$ the algorithm in this section will construct an ε-accurate hypothesis that is an $O(\log^2(1/\epsilon)/\gamma^3)$-node branching program. 5 Extensions 5.1 Standard weak learners In Sections 3 and 4, we assumed that the boosting algorithm had access to a two-sided weak learner, which is more accurate than random guessing on both the positive and the negative examples separately. To make use of a standard weak learner, which is merely more accurate than random guessing on average, we can borrow ideas from [LS05]. The idea is to force a standard weak learner to provide a hypothesis with two-sided accuracy by (a) balancing the distribution so that positive and negative examples are accorded equal importance, and (b) balancing the predictions of the output of the weak learner so that it doesn’t specialize on one kind of example. Definition 4 Given a probability distribution D over examples, let $\widehat{D}$ be the distribution obtained by rescaling the positive and negative examples so that they have equal weight: i.e., let $\widehat{D}[S] = \frac{1}{2}D^+[S] + \frac{1}{2}D^-[S]$.
Definition 5 Given a confidence-rated classifier $h : X \to [-1, 1]$ and a probability distribution D over X, let the balanced variant of h with respect to D be the function $\hat{h} : X \to [-1, 1]$ defined as follows: (a) if $E_{x\in D}[h(x)] \ge 0$, then, for all $x \in X$, $\hat{h}(x) = \frac{h(x)+1}{E_{x\in D}[h(x)]+1} - 1$; (b) if $E_{x\in D}[h(x)] \le 0$, then, for all $x \in X$, $\hat{h}(x) = \frac{h(x)-1}{-E_{x\in D}[h(x)]+1} + 1$. The analysis is the natural generalization of Section 5 of [LS05] to confidence-rated classifiers. Lemma 6 If D is balanced with respect to c, and h is a confidence-rated classifier such that $E_{x\in D}[h(x)c(x)] \ge \gamma$, then $E_{x\in D}[\hat{h}(x)c(x)] \ge \gamma/2$. Proof. Assume without loss of generality that $E_{x\in D}[h(x)] \ge 0$ (the other case can be handled symmetrically). By linearity of expectation, $$E_{x\in D}[\hat{h}(x)c(x)] = \frac{E_{x\in D}[h(x)c(x)]}{E_{x\in D}[h(x)]+1} + E_{x\in D}[c(x)]\left(\frac{1}{E_{x\in D}[h(x)]+1} - 1\right).$$ Since D is balanced we have $E_{x\in D}[c(x)] = 0$, and hence $E_{x\in D}[\hat{h}(x)c(x)] = \frac{E_{x\in D}[h(x)c(x)]}{E_{x\in D}[h(x)]+1}$, so the lemma follows from the fact that $E_{x\in D}[h(x)] \le 1$. We will use a standard weak learner to simulate a two-sided weak learner as follows. Given a distribution D, the two-sided weak learner will pass $\widehat{D}$ to the standard weak learner, take its output g, and return $h = \hat{g}$. Our next lemma analyzes this transformation. Lemma 7 If $E_{x\in\widehat{D}}[g(x)c(x)] \ge \gamma$, then $E_{x\in D^+}[h(x)] \ge \gamma/2$ and $E_{x\in D^-}[-h(x)] \ge \gamma/2$. Proof: Lemma 6 implies that $E_{x\in\widehat{D}}[h(x)c(x)] \ge \gamma/2$. Expanding the definition of $\widehat{D}$, we have $$E_{x\in D^+}[h(x)] - E_{x\in D^-}[h(x)] \ge \gamma. \quad (8)$$ Since h is the balanced variant of g with respect to $\widehat{D}$, we have $E_{x\in\widehat{D}}[h(x)] = 0$. Once again expanding the definition of $\widehat{D}$, we get that $E_{x\in D^+}[h(x)] + E_{x\in D^-}[h(x)] = 0$, which implies $E_{x\in D^-}[h(x)] = -E_{x\in D^+}[h(x)]$ and $E_{x\in D^+}[h(x)] = -E_{x\in D^-}[h(x)]$. Substituting each of these into (8) completes the proof.
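Definitions 4 and 5 are easy to exercise on a finite distribution; the sketch below implements the balancing transformation ĥ and checks the property $E_D[\hat{h}] = 0$ used in the proof of Lemma 7. Representing distributions and classifier values as dicts is our own choice:

```python
def balance(h_vals, D):
    """Balanced variant h-hat of a confidence-rated classifier with respect
    to a finite distribution D (dict: x -> prob); h_vals maps x -> h(x) in
    [-1, 1]. Follows Definition 5: case (a) when E_D[h] >= 0, else (b).
    In both cases E_D[h-hat] = 0 and the range stays within [-1, 1]."""
    mu = sum(D[x] * h_vals[x] for x in D)   # E_D[h(x)]
    if mu >= 0:
        return {x: (h_vals[x] + 1.0) / (mu + 1.0) - 1.0 for x in D}
    return {x: (h_vals[x] - 1.0) / (1.0 - mu) + 1.0 for x in D}
```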
5.2 Tolerating random classification noise

As in [LS05], noise tolerance is facilitated by the fact that the path through the network is not affected by altering the label of an example. On the other hand, balancing the distribution before passing it to the weak learner, which was needed to use a standard weak learner, may disturb the independence between the event that an example is noisy and the random draw of x. This can be repaired exactly as in [KS05, LS05]; because of space constraints we omit the details.

References

[AD98] J. Aslam and S. Decatur. Specification and simulation of statistical query algorithms for efficiency and noise tolerance. J. Comput. & Syst. Sci., 56:191–208, 1998.
[AL88] Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
[ASE92] N. Alon, J. Spencer, and P. Erdős. The Probabilistic Method (1st ed.). Wiley-Interscience, New York, 1992.
[BKW03] A. Blum, A. Kalai, and H. Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50(4):506–519, 2003.
[Die00] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, 40(2):139–158, 2000.
[Fre95] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[FS96] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[FS97] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS, 55(1):119–139, 1997.
[Hay05] T. P. Hayes. A large-deviation inequality for vector-valued martingales. 2005.
[Kea98] M. Kearns. Efficient noise-tolerant learning from statistical queries. JACM, 45(6):983–1006, 1998.
[KS05] A. Kalai and R. Servedio. Boosting in the presence of noise. JCSS, 71(3):266–290, 2005.
[LS05] P. Long and R. Servedio. Martingale boosting. In Proc.
18th Annual COLT, pages 79–94, 2005.
[LS08] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. In ICML, 2008.
[MO97] R. Maclin and D. Opitz. An empirical evaluation of bagging and boosting. In AAAI/IAAI, pages 546–551, 1997.
[MR03] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In LNAI Advanced Lectures on Machine Learning, pages 118–183, 2003.
[RDM06] L. Ralaivola, F. Denis, and C. Magnan. CN = CPCN. In ICML, pages 265–272, 2006.
[Sch90] R. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[Sch03] R. Schapire. The boosting approach to machine learning: An overview. Springer, 2003.
[SS99] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.
2008
71
3,561
Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method

Dongryeol Lee, Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, dongryel@cc.gatech.edu
Alexander Gray, Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, agray@cc.gatech.edu

Abstract

We propose a new fast Gaussian summation algorithm for high-dimensional datasets with high accuracy. First, we extend the original fast multipole-type methods to use approximation schemes with both hard and probabilistic error. Second, we utilize a new data structure called a subspace tree, which maps each data point in a node to its lower-dimensional representation as determined by any linear dimension reduction method such as PCA. This new data structure is suitable for reducing the cost of each pairwise distance computation, the dominant cost in many kernel methods. Our algorithm guarantees a probabilistic relative error on each kernel sum, and can be applied to high-dimensional Gaussian summations, which are ubiquitous inside many kernel methods as the key computational bottleneck. We provide empirical speedup results on low- to high-dimensional datasets of up to 89 dimensions.

1 Fast Gaussian Kernel Summation

In this paper, we propose new computational techniques for efficiently approximating the following sum for each query point q_i ∈ Q:

Φ(q_i, R) = Σ_{r_j∈R} e^{−||q_i − r_j||²/(2h²)}    (1)

where R is the reference set; each reference point is associated with a Gaussian function with a smoothing parameter h (the "bandwidth"). This form of summation is ubiquitous in many statistical learning methods, including kernel density estimation, kernel regression, Gaussian process regression, radial basis function networks, spectral clustering, support vector machines, and kernel PCA [1, 4]. Cross-validation in all of these methods requires evaluating Equation 1 for multiple values of h.
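For reference, the brute-force evaluation of Equation 1, the O(|Q||R|D) baseline that the rest of the paper accelerates, can be sketched as follows (plain Python; the naming is ours):

```python
import math

def gaussian_sum(q, refs, h):
    """Naive evaluation of Eq. (1): the sum of Gaussian kernel values between
    query point q and every reference point, with bandwidth h."""
    total = 0.0
    for r in refs:
        sq_dist = sum((qi - ri) ** 2 for qi, ri in zip(q, r))
        total += math.exp(-sq_dist / (2.0 * h * h))
    return total

refs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
phi = gaussian_sum((0.0, 0.0), refs, 1.0)   # 1 + 2*exp(-1/2)
```

Running this for every query point is what gives the quadratic cost discussed next.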
Kernel density estimation, for example, requires |R| density estimates, each based on |R| − 1 points, yielding a brute-force computational cost that scales quadratically (that is, O(|R|²)).

Error bounds. Due to its expensive computational cost, many algorithms approximate the Gaussian kernel sums at the expense of reduced precision. It is therefore natural to discuss error bound criteria which measure the quality of the approximations with respect to their corresponding true values. The following error bound criteria are common in the literature:

Definition 1.1. An algorithm guarantees an ε absolute error bound if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes Φ̃(q_i, R) such that |Φ̃(q_i, R) − Φ(q_i, R)| ≤ ε.

Definition 1.2. An algorithm guarantees an ε relative error bound if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes Φ̃(q_i, R) ∈ R such that |Φ̃(q_i, R) − Φ(q_i, R)| ≤ ε |Φ(q_i, R)|.

Bounding the relative error (e.g., the percentage deviation) is much harder because the error bound criterion is in terms of the initially unknown exact quantity. As a result, many previous methods [7] have focused on bounding the absolute error. The relative error bound criterion is preferred to the absolute error bound criterion in statistical applications in which high accuracy is desired. Our new algorithm will enforce the following "relaxed" form of the relative error bound criterion, whose motivation will be discussed shortly.

Definition 1.3. An algorithm guarantees a (1 − α) probabilistic ε relative error bound if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes Φ̃(q_i, R) ∈ R such that, with probability at least 1 − α (where 0 < 1 − α < 1), |Φ̃(q_i, R) − Φ(q_i, R)| ≤ ε |Φ(q_i, R)|.

Previous work. The most successful class of acceleration methods employs "higher-order divide and conquer" or generalized N-body algorithms (GNA) [4]. This approach can use any spatial partitioning tree, such as kd-trees or ball-trees, for both the query set Q and the reference data R, and performs a simultaneous recursive descent on both trees.
GNA with relative error bounds (Definition 1.2) [5, 6, 11, 10] utilized bounding boxes and additional cached sufficient statistics, such as the higher-order moments needed for series expansion. The bounding-box-based error bounds used in [5, 6] tend to be very loose, which resulted in slow empirical performance around suboptimally small and large bandwidths. [11, 10] extended GNA-based Gaussian summations with series expansion, which provided tighter bounds; this showed enormous performance improvements, but only in low-dimensional settings (up to D = 5), since the number of required terms in the series expansion increases exponentially with D. [9] introduced an iterative sampling-based GNA for accelerating the computation of nested sums (a related, easier problem). Its speedup is achieved by replacing the pessimistic error bounds provided by bounding boxes with normal-based confidence intervals from Monte Carlo sampling. [9] demonstrated speedups of many orders of magnitude over the previous state of the art in the context of computing aggregates over the queries (such as the LSCV score for selecting the optimal bandwidth). However, the authors did not discuss the sampling-based approach for computations that require per-query estimates, such as those required for kernel density estimation. None of the previous approaches for kernel summations addresses the issue of reducing the computational cost of each distance computation, which incurs O(D) cost. However, the intrinsic dimensionality d of most high-dimensional datasets is much smaller than the explicit dimension D (that is, d << D). [12] proposed tree structures using a global dimension reduction method, such as random projection, as a preprocessing step for efficient (1 + ε) approximate nearest neighbor search.
Similarly, we develop a new data structure for kernel summations; our new data structure is constructed in a top-down fashion to perform the initial spatial partitioning in the original input space R^D, and performs a local dimension reduction on the localized subset of the data in each node in a bottom-up fashion.

This paper. We propose a new fast Gaussian summation algorithm that enables speedup in higher dimensions. Our approach utilizes: 1) probabilistic relative error bounds (Definition 1.3) on kernel sums provided by Monte Carlo estimates, and 2) a new tree structure called the subspace tree for reducing the computational cost of each distance computation. The former can be seen as relaxing the strict requirement of guaranteeing a hard relative bound on very small quantities, as done in [5, 6, 11, 10]. The latter was mentioned as a possible way of ameliorating the effects of the curse of dimensionality in [14], a pioneering paper in this area.

Notations. Each query point and reference point (a D-dimensional vector) is indexed by natural numbers i, j ∈ N, and denoted q_i and r_j respectively. For any set S, |S| denotes the number of elements in S. The entities related to the left and the right child are denoted with superscripts L and R; an internal node N has the child nodes N^L and N^R.

2 Gaussian Summation by Monte Carlo Sampling

Here we describe the extension needed for the probabilistic computation of kernel summations satisfying Definition 1.3. The main routine for the probabilistic kernel summation is shown in Algorithm 1. The function MCMM takes the query node Q and the reference node R (initially the roots of the query tree and the reference tree, Q_root and R_root) and β (initially set to the value α, which controls the probability guarantee that each kernel sum is within ε relative error).

Algorithm 1 The core dual-tree routine for probabilistic Gaussian kernel summation.
MCMM(Q, R, β)
 1: if CANSUMMARIZEEXACT(Q, R, ε) then
 2:   SUMMARIZEEXACT(Q, R)
 3: else
 4:   if CANSUMMARIZEMC(Q, R, ε, β) then
 5:     SUMMARIZEMC(Q, R, ε, β)
 6:   else
 7:     if Q is a leaf node then
 8:       if R is a leaf node then
 9:         MCMMBASE(Q, R)
10:       else
11:         MCMM(Q, R^L, β/2), MCMM(Q, R^R, β/2)
12:     else
13:       if R is a leaf node then
14:         MCMM(Q^L, R, β), MCMM(Q^R, R, β)
15:       else
16:         MCMM(Q^L, R^L, β/2), MCMM(Q^L, R^R, β/2)
17:         MCMM(Q^R, R^L, β/2), MCMM(Q^R, R^R, β/2)

The idea of Monte Carlo sampling used in the new algorithm is similar to the one in [9], except that the sampling is done per query and we also use approximations that provide hard error bounds (i.e., finite difference, and the exhaustive base case MCMMBASE). This means that the approximation has less variance than the pure Monte Carlo approach used in [9]. Algorithm 1 first attempts approximations with hard error bounds, which are computationally cheaper than sampling-based approximations. For example, the finite-difference scheme [5, 6] can be used for the CANSUMMARIZEEXACT and SUMMARIZEEXACT functions in any general dimension. The CANSUMMARIZEMC function takes two parameters that specify the accuracy (the relative error and its probability guarantee) and decides whether to use Monte Carlo sampling for the given pair of nodes. If the reference node R contains too few points, it may be more efficient to process it using exact methods that use error bounds based on bounding primitives on the node pair, or using exhaustive pairwise evaluations; this is determined by the condition τ · m_initial ≤ |R|, where τ > 1 controls the minimum number of reference points needed for Monte Carlo sampling to proceed. If the reference node does contain enough points, then for each query point q ∈ Q, the SAMPLE routine samples m_initial terms of the summation Φ(q, R) = Σ_{r_{j_n}∈R} K_h(||q − r_{j_n}||), where Φ(q, R) denotes the exact contribution of R to q's kernel sum. Basically, we are interested in estimating Φ(q, R) by Φ̃(q, R) = |R| µ_S, where µ_S is the sample mean of S.
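The per-query sampling loop can be made concrete with a short, runnable sketch (ours, not the authors' code; z = z_{α/2} is passed in directly, the batch schedule is simplified, and the sample-size threshold is the one obtained by rearranging Equation 2):

```python
import math
import random

def m_threshold(z, sigma_S, eps, n_ref, phi_lower, mu_S):
    """Minimum sample count from Eq. (2):
    m >= z^2 sigma_S^2 (|R| + eps|R|)^2 / (eps^2 (Phi^l + |R| mu_S)^2)."""
    return (z * sigma_S * (n_ref + eps * n_ref)) ** 2 / \
           (eps * (phi_lower + n_ref * mu_S)) ** 2

def summarize_mc(q, refs, h, eps, z, m_initial=25, phi_lower=0.0):
    """Per-query Monte Carlo estimate of the 1-D kernel sum: draw batches of
    kernel-value samples (with replacement) until the CLT-based threshold
    derived from Eq. (2) is satisfied, then return |R| * sample_mean."""
    S = []
    need = m_initial
    while need > 0:
        for _ in range(need):
            r = random.choice(refs)
            S.append(math.exp(-abs(q - r) ** 2 / (2.0 * h * h)))
        mu_S = sum(S) / len(S)
        var_S = sum((s - mu_S) ** 2 for s in S) / (len(S) - 1)
        need = math.ceil(m_threshold(z, math.sqrt(var_S), eps,
                                     len(refs), phi_lower, mu_S)) - len(S)
    return len(refs) * (sum(S) / len(S))
```

Note how a larger running lower bound Φ^l (phi_lower) shrinks the threshold, which is why exact prunes make later Monte Carlo prunes cheaper.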
From the Central Limit Theorem, given enough samples m, µ_S ⇝ N(µ, σ_S²/m), where Φ(q, R) = |R|µ (i.e., µ is the average kernel value between q and any reference point r ∈ R); this implies that |µ_S − µ| ≤ z_{β/2} σ_S/√m with probability 1 − β. The pruning rule we have to enforce for each query point for the contribution of R is:

z_{β/2} σ_S/√m ≤ ε Φ(q, R)/|R|

where σ_S is the sample standard deviation of S. Since Φ(q, R) is one of the unknown quantities we want to compute, we instead enforce the following:

z_{β/2} σ_S/√m ≤ ε (Φ^l(q, R) + |R|(µ_S − z_{β/2} σ_S/√m)) / |R|    (2)

where Φ^l(q, R) is the currently running lower bound on the sum computed using exact methods, and |R|(µ_S − z_{β/2} σ_S/√m) is the probabilistic component contributed by R. Denoting Φ^{l,new}(q, R) = Φ^l(q, R) + |R|(µ_S − z_{β/2} σ_S/√|S|), the minimum number of samples for q needed to achieve the target error (the right-hand side of the inequality in Equation 2) with probability at least 1 − β is:

m ≥ z²_{β/2} σ_S² (|R| + ε|R|)² / (ε² (Φ^l(q, R) + |R|µ_S)²)

If the given query node and reference node pair cannot be pruned using either the nonprobabilistic or the probabilistic approximations, then we recurse on smaller subsets of the two sets. In particular, when dividing over the reference node R, we recurse with half of the β value.¹ We now state the probabilistic error guarantee of our algorithm as a theorem.

Theorem 2.1. After calling MCMM with Q = Q_root, R = R_root, and β = α, Algorithm 1 approximates each Φ(q, R) with Φ̃(q, R) such that Definition 1.3 holds.

Proof. For a query/reference pair (Q, R) and 0 < β < 1, MCMMBASE and SUMMARIZEEXACT compute estimates for q ∈ Q such that |Φ̃(q, R) − Φ(q, R)| < ε Φ(q, R)|R|/|R| with probability 1 (> 1 − β). By Equation 2, SUMMARIZEMC computes estimates for q ∈ Q such that |Φ̃(q, R) − Φ(q, R)| < ε Φ(q, R)|R|/|R| with probability 1 − β. We now induct on |Q ∪ R|.
Line 11 of Algorithm 1 divides over the reference; its subcalls compute estimates that satisfy |Φ̃(q, R^L) − Φ(q, R^L)| ≤ ε Φ(q, R)|R^L|/|R| and |Φ̃(q, R^R) − Φ(q, R^R)| ≤ ε Φ(q, R)|R^R|/|R|, each with probability at least 1 − β/2 by the induction hypothesis. For q ∈ Q, Φ̃(q, R) = Φ̃(q, R^L) + Φ̃(q, R^R), which means |Φ̃(q, R) − Φ(q, R)| ≤ ε Φ(q, R)|R|/|R| with probability at least 1 − β. Line 14 divides over the query, and each subcall computes estimates that hold with probability at least 1 − β for q ∈ Q^L and q ∈ Q^R. Lines 16 and 17 divide both over the query and the reference, and correctness can be proven similarly. Therefore, MCMM(Q_root, R_root, α) computes estimates satisfying Definition 1.3.

"Reclaiming" probability. We note that the probability β assigned to a query/reference pair computed with exact bounds (SUMMARIZEEXACT and MCMMBASE) is not used. This portion of the probability can be "reclaimed" in a similar fashion as done in [10] and re-used to prune more aggressively in the later stages of the algorithm. All experiments presented in this paper benefited from this simple modification.

3 Subspace Tree

A subspace tree is basically a space-partitioning tree with a set of orthogonal bases associated with each node N: N.Ω = (µ, U, Λ, d), where µ is the mean, U is a D × d matrix whose columns consist of d eigenvectors, and Λ the corresponding eigenvalues. The orthogonal basis set is constructed using a linear dimension reduction method such as PCA. The tree is constructed in a top-down manner using the PARTITIONSET function, which divides the given set of points into two (in the case of a kd-tree, for example, PARTITIONSET divides along the dimension with the highest variance), with the subspace in each node formed in a bottom-up manner. Algorithm 3 shows the PCA tree building routine (a subspace tree using PCA as the dimension reduction); Figure 1 (left) shows a PCA-tree for a 3-D dataset. The subspace of each leaf node is computed using PCABASE, which can use exact PCA [3] or a stochastic one [2].
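The top-down split with bottom-up subspaces (PARTITIONSET, PCABASE; see Algorithm 3) can be sketched in NumPy. Note that this sketch is ours: for brevity it recomputes PCA at internal nodes instead of implementing the MERGESUBSPACES merge of [8], and the leaf size and median-split rule are arbitrary choices.

```python
import numpy as np

def _pca(P, d):
    """PCABASE stand-in: node mean and top-d principal directions via SVD."""
    mu = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - mu, full_matrices=False)
    return mu, Vt[:d]

class PCATreeNode:
    def __init__(self, P, d=1, leaf_size=4):
        self.P = P
        self.left = self.right = None
        if len(P) > leaf_size:                      # CANPARTITION
            dim = int(np.argmax(P.var(axis=0)))     # PARTITIONSET: widest dim
            order = np.argsort(P[:, dim])
            half = len(P) // 2
            self.left = PCATreeNode(P[order[:half]], d, leaf_size)
            self.right = PCATreeNode(P[order[half:]], d, leaf_size)
        self.mu, self.U = _pca(P, d)                # recompute (not MERGESUBSPACES)
        self.P_proj = (P - self.mu) @ self.U.T      # PROJECT(P, N.S)
```

For data lying exactly on a d-dimensional affine subspace, each node's projection reconstructs the points with zero error, mirroring the reconstruction-error formula below.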
For an internal node, the subspaces of the child nodes, N^L.Ω = (µ^L, U^L, Λ^L, d^L) and N^R.Ω = (µ^R, U^R, Λ^R, d^R), are approximately merged using the MERGESUBSPACES function, which involves solving a (d^L + d^R + 1) × (d^L + d^R + 1) eigenvalue problem [8] and thus runs in O((d^L + d^R + 1)³) << O(D³) time given that the dataset is sparse. In addition, each data point x in each node N is mapped to its new lower-dimensional coordinate using the orthogonal basis set of N: x_proj = U^T(x − µ). The L2 norm reconstruction error is given by: ||x_recon − x||²₂ = ||(U x_proj + µ) − x||²₂.

Monte Carlo sampling using a subspace tree. Consider the CANSUMMARIZEMC function in Algorithm 2. The "outer loop" of this algorithm is over the query set Q, and it makes sense to project each query point q ∈ Q onto the subspace owned by the reference node R. Let U and µ be the orthogonal basis system for R, consisting of d basis vectors. For each q ∈ Q, consider the squared distance ||(q − µ) − r_proj||² (where (q − µ) is q's coordinates expressed in terms of the coordinate system of R), as shown in Figure 1. For the Gaussian kernel, each pairwise kernel value is approximated as:

e^{−||q−r||²/(2h²)} ≈ e^{−||q−q_recon||²/(2h²)} e^{−||q_proj−r_proj||²/(2h²)}    (3)

where q_recon = U q_proj + µ and q_proj = U^T(q − µ).

¹We could also divide β such that the node that may be harder to approximate gets a lower value.

Algorithm 2 Monte Carlo sampling based approximation routines.

SAMPLE(q, R, ε, α, S, m)
  for k = 1 to m do
    r ← random point in R
    S ← S ∪ {K_h(||q − r||)}
  µ_S ← MEAN(S), σ²_S ← VARIANCE(S)
  Φ^{l,new}(q, R) ← Φ^l(q, R) + |R|(µ_S − z_{α/2} σ_S/√|S|)
  m_thresh ← z²_{α/2} σ²_S (|R| + ε|R|)² / (ε²(Φ^l(q, R) + |R|µ_S)²)
  m ← m_thresh − |S|

CANSUMMARIZEMC(Q, R, ε, α)
  return τ · m_initial ≤ |R|

SUMMARIZEMC(Q, R, ε, α)
  for q_i ∈ Q do
    S ← ∅, m ← m_initial
    repeat
      SAMPLE(q_i, R, ε, α, S, m)
    until m ≤ 0
    Φ(q_i, R) ← Φ(q_i, R) + |R| · MEAN(S)
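Equation 3 is exact whenever the reference point lies in the node's subspace, since then q − r decomposes orthogonally into the residual q − q_recon and the in-subspace difference. A small self-contained check (our code; U is supplied directly as a list of orthonormal row vectors rather than computed by PCA):

```python
import math

def approx_gauss_kernel(q, r_proj, U, mu, h):
    """Eq. (3): e^{-||q-r||^2/(2h^2)} approximated via projecting q onto the
    subspace (mu, U) in which reference point r is stored as r_proj."""
    D, d = len(q), len(U)
    qm = [q[i] - mu[i] for i in range(D)]
    q_proj = [sum(U[k][i] * qm[i] for i in range(D)) for k in range(d)]  # U^T (q - mu)
    q_recon = [mu[i] + sum(q_proj[k] * U[k][i] for k in range(d)) for i in range(D)]
    resid_sq = sum((q[i] - q_recon[i]) ** 2 for i in range(D))           # orthogonal part
    proj_sq = sum((q_proj[k] - r_proj[k]) ** 2 for k in range(d))        # in-subspace part
    return math.exp(-resid_sq / (2 * h * h)) * math.exp(-proj_sq / (2 * h * h))
```

For a reference point with nonzero residual, the approximation error is controlled by the ||r_recon − r||² term discussed next.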
For a fixed query point q, e^{−||q−q_recon||²/(2h²)} can be precomputed (at the cost of d dot products between two D-dimensional vectors) and re-used for every distance computation between q and any reference point r ∈ R, whose cost is now O(d) << O(D). Therefore, we can take more samples efficiently. For a total of sufficiently many m samples, the computational cost is O(d(D + m)) << O(D · m) for each query point. Increased variance comes at the cost of inexact distance computations, however. Each distance computation incurs at most a squared L2 norm error of ||r_recon − r||²₂; that is, ||q − r_recon||²₂ − ||q − r||²₂ ≤ ||r_recon − r||²₂. Nevertheless, the sample variance for each query point plus the inexactness τ_S due to dimension reduction can be shown to be bounded for the Gaussian kernel as follows (where each s = e^{−||q−r_recon||²/(2h²)}):

(1/(m−1)) (Σ_{s∈S} s² − m·µ_S²) + τ_S ≤ (1/(m−1)) [ (Σ_{s∈S} s²) min(1, max_{r∈R} e^{||r_recon−r||²₂/h²}) − m (µ_S min_{r∈R} e^{−||r_recon−r||²₂/(2h²)})² ]

Exhaustive computations using a subspace tree. Now suppose we have built subspace trees for both the query and the reference sets. We can project either each query point onto the reference subspace, or each reference point onto the query subspace, depending on which subspace has the smaller dimension and on the number of points in each node. The subspaces formed in the leaf nodes are usually highly numerically accurate, since each leaf contains very few points compared to the extrinsic dimensionality D.

4 Experimental Results

We empirically evaluated the runtime performance of our algorithm on seven real-world datasets, scaled to fit in the [0, 1]^D hypercube, approximating the Gaussian sum at every query point for a range of bandwidths. This experiment is motivated by the many kernel methods that require computing the Gaussian sum at different bandwidth values (according to the standard least-squares cross-validation scores [15]).
Nevertheless, we emphasize that the acceleration results are applicable to other kernel methods that require efficient Gaussian summation. In this paper, the reference set equals the query set. All datasets have 50K points so that the exact exhaustive method can be tractably computed. All times are in seconds and include the time needed to build the trees. Codes are in C/C++ and run on a dual Intel Xeon 3GHz with 8 Gb of main memory. The measurements in the second to eighth columns are obtained by running the algorithms at the bandwidth kh*, where 10⁻³ ≤ k ≤ 10³ is the constant in the corresponding column header. The last column gives the total time needed to run on all seven bandwidth values. Each table has results for five algorithms: the naive algorithm and four approximate algorithms. The algorithms with p = 1 denote the previous state-of-the-art (finite-difference with error redistribution) [10], while those with p < 1 denote our probabilistic version. Each entry has the running time and the percentage of the query points that did not satisfy the relative error ε.

Algorithm 3 PCA tree building routine.

BUILDPCATREE(P)
  if CANPARTITION(P) then
    {P^L, P^R} ← PARTITIONSET(P)
    N ← empty node
    N^L ← BUILDPCATREE(P^L)
    N^R ← BUILDPCATREE(P^R)
    N.S ← MERGESUBSPACES(N^L.S, N^R.S)
  else
    N ← BUILDPCATREEBASE(P)
    N.S ← PCABASE(P)
  N.P_proj ← PROJECT(P, N.S)
  return N

Analysis. Readers should focus on the last columns containing the total time needed for evaluating the Gaussian sum at all points for seven different bandwidth values. This is indicated by boldfaced numbers for our probabilistic algorithm. As expected, on low-dimensional datasets (below 6 dimensions), the algorithm using series-expansion-based bounds gives a two- to three-fold speedup over our approach that uses Monte Carlo sampling.
Multipole moments are an effective form of compression in low dimensions, with analytical error bounds that can be evaluated; our Monte Carlo-based method has an asymptotic error bound which must be "learned" through sampling. As we go from 7 dimensions and beyond, series expansion cannot be done efficiently because of its slow convergence. Our probabilistic algorithm (p = 0.9) using Monte Carlo consistently performs better than the algorithm using exact bounds (p = 1), by at least a factor of two. Compared to the naive method, it achieves a maximum speedup of about nine times on a 16-dimensional dataset; on an 89-dimensional dataset, it is at least three times as fast as the naive method. Note that all the datasets contain only 50K points, and the speedup will be more dramatic as we increase the number of points.

5 Conclusion

We presented an extension of fast multipole methods to use approximation methods with both hard and probabilistic bounds. Our experimental results show speedup over the previous state-of-the-art on high-dimensional datasets. Our future work will include possible improvements inspired by recent work in the FMM community using a matrix-factorization formulation [13].

Figure 1: Left: A PCA-tree for a 3-D dataset. Right: The squared Euclidean distance between a given query point and a reference point projected onto a subspace can be decomposed into two components: the orthogonal component and the component in the subspace.
mockgalaxy-D-1M-rnd (cosmology: positions), D = 3, N = 50000, h* = 0.000768201
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  182    182    182    182    182    182    182    1274
  MCMM (ε=0.1, p=0.9)    3      3      5      10     26     48     2      97
    % failed             1      1      1      1      1      1      5
  DFGT (ε=0.1, p=1)      2      2      2      2      6      19     3      36
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   3      3      4      11     27     58     21     127
    % failed             0      0      1      1      1      1      7
  DFGT (ε=0.01, p=1)     2      2      2      2      7      30     5      50
    % failed             0      0      0      0      0      0      0

bio5-rnd (biology: drug activity), D = 5, N = 50000, h* = 0.000567161
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  214    214    214    214    214    214    214    1498
  MCMM (ε=0.1, p=0.9)    4      4      6      144    149    65     1      373
    % failed             0      0      0      0      1      0      1
  DFGT (ε=0.1, p=1)      4      4      5      24     96     65     2      200
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   4      4      6      148    165    126    1      454
    % failed             0      0      0      0      1      0      1
  DFGT (ε=0.01, p=1)     4      4      5      25     139    126    4      307
    % failed             0      0      0      0      0      0      0

pall7-rnd, D = 7, N = 50000, h* = 0.00131865
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  327    327    327    327    327    327    327    2289
  MCMM (ε=0.1, p=0.9)    3      3      3      3      63     224    <1     300
    % failed             0      0      0      1      1      12     0
  DFGT (ε=0.1, p=1)      10     10     11     14     84     263    223    615
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   3      3      3      3      70     265    5      352
    % failed             0      0      0      1      2      1      8
  DFGT (ε=0.01, p=1)     10     10     11     14     85     299    374    803
    % failed             0      0      0      0      0      0      0

covtype-rnd, D = 10, N = 50000, h* = 0.0154758
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  380    380    380    380    380    380    380    2660
  MCMM (ε=0.1, p=0.9)    11     11     13     39     318    <1     <1     381
    % failed             0      0      0      1      0      0      0
  DFGT (ε=0.1, p=1)      26     27     38     177    390    244    <1     903
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   11     11     13     77     362    2      <1     477
    % failed             0      0      0      1      1      10     0
  DFGT (ε=0.01, p=1)     26     27     38     180    427    416    <1     1115
    % failed             0      0      0      0      0      0      0

CoocTexture-rnd, D = 16, N = 50000, h* = 0.0263958
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  472    472    472    472    472    472    472    3304
  MCMM (ε=0.1, p=0.9)    10     11     22     189    109    <1     <1     343
    % failed             0      0      0      1      8      0      0
  DFGT (ε=0.1, p=1)      22     26     82     240    452    66     <1     889
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   10     11     22     204    285    <1     <1     534
    % failed             0      0      1      1      10     4      0
  DFGT (ε=0.01, p=1)     22     26     83     254    543    230    <1     1159
    % failed             0      0      0      0      0      0      0

LayoutHistogram-rnd, D = 32, N = 50000, h* = 0.0609892
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  757    757    757    757    757    757    757    5299
  MCMM (ε=0.1, p=0.9)    32     32     54     168    583    8      8      885
    % failed             0      0      1      1      1      0      0
  DFGT (ε=0.1, p=1)      153    159    221    492    849    212    <1     2087
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   32     45     60     183    858    8      8      1246
    % failed             0      0      1      6      1      0      0
  DFGT (ε=0.01, p=1)     153    159    222    503    888    659    <1     2585
    % failed             0      0      0      0      0      0      0

CorelCombined-rnd, D = 89, N = 50000, h* = 0.0512583
  scale                  0.001  0.01   0.1    1      10     100    1000   Σ
  Naive                  1716   1716   1716   1716   1716   1716   1716   12012
  MCMM (ε=0.1, p=0.9)    384    418    575    428    1679   17     17     3518
    % failed             0      0      0      1      10     0      0
  DFGT (ε=0.1, p=1)      659    677    864    1397   1772   836    17     6205
    % failed             0      0      0      0      0      0      0
  MCMM (ε=0.01, p=0.9)   401    419    575    437    1905   17     17     3771
    % failed             0      0      0      1      2      0      0
  DFGT (ε=0.01, p=1)     659    677    865    1425   1794   1649   17     7086
    % failed             0      0      0      0      0      0      0

References

[1] Nando de Freitas, Yang Wang, Maryam Mahdaviani, and Dustin Lang. Fast Krylov methods for n-body learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 251–258. MIT Press, Cambridge, MA, 2006.
[2] P. Drineas, R. Kannan, and M. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition, 2004.
[3] G. Golub. Matrix Computations, Third Edition. The Johns Hopkins University Press, 1996.
[4] A. Gray and A. W. Moore. N-Body Problems in Statistical Learning. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13 (December 2000). MIT Press, 2001.
[5] Alexander G. Gray and Andrew W.
Moore. Nonparametric Density Estimation: Toward Computational Tractability. In SIAM International Conference on Data Mining 2003, 2003.
[6] Alexander G. Gray and Andrew W. Moore. Very Fast Multivariate Kernel Density Estimation via Computational Geometry. In Joint Statistical Meeting 2003, 2003. To be submitted to JASA.
[7] L. Greengard and J. Strain. The Fast Gauss Transform. SIAM Journal of Scientific and Statistical Computing, 12(1):79–94, 1991.
[8] Peter Hall, David Marshall, and Ralph Martin. Merging and splitting eigenspace models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):1042–1049, 2000.
[9] Michael Holmes, Alexander Gray, and Charles Isbell. Ultrafast Monte Carlo for statistical summations. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 673–680. MIT Press, Cambridge, MA, 2008.
[10] Dongryeol Lee and Alexander Gray. Faster Gaussian summation: Theory and experiment. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, 2006.
[11] Dongryeol Lee, Alexander Gray, and Andrew Moore. Dual-tree fast Gauss transforms. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 747–754. MIT Press, Cambridge, MA, 2006.
[12] Ting Liu, Andrew W. Moore, and Alexander Gray. Efficient exact k-NN and nonparametric classification in high dimensions. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[13] P. G. Martinsson and Vladimir Rokhlin. An accelerated kernel-independent fast multipole method in one dimension. SIAM J. Scientific Computing, 29(3):1160–1178, 2007.
[14] A. W. Moore, J. Schneider, and K. Deng. Efficient locally weighted polynomial regression predictions. In D.
Fisher, editor, Proceedings of the Fourteenth International Conference on Machine Learning, pages 196–204, San Francisco, 1997. Morgan Kaufmann.
[15] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall/CRC, 1986.
2008
72
3,562
ICA based on a Smooth Estimation of the Differential Entropy

Lev Faivishevsky, School of Engineering, Bar-Ilan University, levtemp@gmail.com
Jacob Goldberger, School of Engineering, Bar-Ilan University, goldbej@eng.biu.ac.il

Abstract

In this paper we introduce the MeanNN approach for estimating the main information-theoretic measures, such as the differential entropy, the mutual information and the divergence. As opposed to other nonparametric approaches, MeanNN results in smooth, differentiable functions of the data samples with a clear geometrical interpretation. We then apply the proposed estimators to the ICA problem and obtain a smooth expression for the mutual information that can be analytically optimized by gradient descent methods. The improved performance of the proposed ICA algorithm is demonstrated on several test examples in comparison with state-of-the-art techniques.

1 Introduction

Independent component analysis (ICA) is the problem of recovering a latent random vector from observations of unknown linear functions of that vector. Assume data S ∈ R^d are generated via d independent sources. We observe X = AS, where A is an unknown square matrix called the mixing matrix. We are given a dataset of repeated observations {x₁, ..., xₙ}, and our goal is to recover the linear transformation A and the sources s₁, ..., sₙ that generated our data, x_i = As_i. Given this minimal statement of the problem, it has been shown [6] that one can recover the original sources up to a scaling and a permutation, provided that at most one of the underlying sources is Gaussian and the rest are non-Gaussian. Upon pre-whitening the observed data, the problem reduces to a search over rotation matrices in order to recover the source and mixing matrix in the sense described above [10]. We will assume henceforth that such pre-processing has been done. Specifying distributions for the components of X, one obtains a parametric model that can be estimated via maximum likelihood [3, 4].
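The pre-whitening assumed henceforth is standard and can be sketched in a few lines of NumPy (our code; the sources, the mixing matrix, and the sample size are arbitrary illustrations, not from the paper). After this transform the data have identity covariance, so the remaining unknown part of the unmixing matrix is an orthogonal (rotation) matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two independent non-Gaussian sources with unit variance.
S = np.vstack([rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), n),
               rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)])
A = np.array([[2.0, 1.0], [1.0, 1.5]])    # unknown mixing matrix (arbitrary here)
X = A @ S                                  # observations

# Whitening: make the empirical covariance of the data the identity.
Xc = X - X.mean(axis=1, keepdims=True)
cov = Xc @ Xc.T / n
vals, vecs = np.linalg.eigh(cov)
W_white = vecs @ np.diag(vals ** -0.5) @ vecs.T
Z = W_white @ Xc                           # whitened data: cov(Z) = I
```

Any contrast-based ICA algorithm then only needs to search over rotations of Z.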
Working with W = A⁻¹ as the parametrization, one readily obtains a gradient or fixed-point algorithm that yields an estimate Ŵ and provides estimates of the latent components via Ŝ = ŴX [10]. In practical applications the distributions of the d components of X are unknown. It is therefore preferable to consider the ICA model as a semiparametric model in which the distributions of the components of X are left unspecified. The problem is then, obviously, to find a suitable contrast function, i.e., a target function to be minimized in order to estimate the ICA model. The earliest ICA algorithms were based on contrast functions defined in terms of expectations of a single fixed nonlinear function, chosen in an ad-hoc manner [5]. More sophisticated algorithms have been obtained by careful choice of a single fixed nonlinear function, such that the expectations of this function yield a robust approximation to the mutual information [9]. Maximizing the likelihood in the semiparametric ICA model is essentially equivalent to minimizing the mutual information between the components of the estimate Ŝ = ŴX [4]. The usage of the mutual information as a contrast function to be minimized in estimating the ICA model is well motivated, quite apart from the link to maximum likelihood [6].

Estimating MI from a given finite sample set is difficult. Several modern approaches rely on k-nearest-neighbor estimates of entropy and mutual information [12, 16]. Recently the Vasicek estimator [17] for the differential entropy of 1D random variables, based on k-nearest-neighbor statistics, was applied to ICA [8, 13]. In addition, ICA was studied with another recently introduced MI estimator [16]. However, the derivatives of estimators that are based on order statistics can hardly be computed, and therefore the optimization of such numerical criteria cannot be based on gradient techniques. Also, the resulting numerical criteria tend to have a non-smooth dependency on the sample values.
The optimization therefore has to involve computing the contrast function on a whole grid of searched parameters. In addition, such estimators do not optimally utilize the whole amount of data included in the samples of random vectors. Therefore they require significant artificial enlargement of data sets by a technique called data augmentation [13], which replaces each data point in the sample with an R-tuple (R is usually 30) of points generated by a statistical procedure with ad-hoc parameters. An alternative is Fourier filtering of the estimated values of the evaluated MI estimators [16]. In the present paper we propose new smooth estimators for the differential entropy, the mutual information and the divergence. The estimators are obtained by a novel approach that averages k-nearest neighbor statistics over all possible values of the order statistic k. The estimators are smooth, and their derivatives can easily be calculated analytically, thus enabling fast gradient optimization techniques. They fully utilize the amount of data contained in a random variable sample, and they provide a novel geometrical interpretation for the entropy. When applied to the ICA problem, the proposed estimator leads to the most precise results for many distributions known at present. The rest of the paper is organized as follows: Section 2 reviews the kNN approach for entropy and divergence estimation, Section 3 introduces the mean estimator for the differential entropy, the mutual information and the divergence, Section 4 describes the application of the proposed estimators to the ICA problem, and Section 5 describes the conducted numerical experiments.

2 kNN Estimators for the Differential Entropy

We review the nearest neighbor technique for Shannon entropy estimation. The differential entropy of X is defined as:

H(X) = −∫ f(x) log f(x) dx    (1)

We describe the derivation of the Shannon differential entropy estimate of [11, 18].
Our aim is to estimate H(X) from a random sample (x_1, ..., x_n) of n realizations of a d-dimensional random variable X with unknown density function f(x). The entropy is the average of −log f(x). If one had unbiased estimators for log f(x_i), one would arrive at an unbiased estimator for the entropy. We estimate log f(x_i) by considering the probability density function P_ik(ε) of the distance between x_i and its k-th nearest neighbor (the probability is computed over the positions of all other n−1 points, with x_i kept fixed). The probability P_ik(ε)dε is equal to the chance that there is one point within distance r ∈ [ε, ε+dε] from x_i, that there are k−1 other points at smaller distances, and that the remaining n−k−1 points have larger distances from x_i. Denote the mass of the ε-ball centered at x_i by p_i(ε), i.e. p_i(ε) = ∫_{‖x−x_i‖<ε} f(x) dx. Applying the trinomial formula we obtain:

P_ik(ε) = [(n−1)! / (1!(k−1)!(n−k−1)!)] (dp_i(ε)/dε) p_i^{k−1} (1−p_i)^{n−k−1}    (2)

It can be easily verified that indeed ∫ P_ik(ε) dε = 1. Hence, the expected value of log p_i(ε) under the distribution P_ik(ε) is:

E_{P_ik(ε)}[log p_i(ε)] = ∫_0^∞ P_ik(ε) log p_i(ε) dε = k binom(n−1, k) ∫_0^1 p^{k−1} (1−p)^{n−k−1} log p dp = ψ(k) − ψ(n)    (3)

where ψ(x) is the digamma function (the logarithmic derivative of the gamma function). To verify the last equality, differentiate the identity ∫_0^1 x^{a−1}(1−x)^{b−1} dx = Γ(a)Γ(b)/Γ(a+b) with respect to the parameter a and recall that Γ′(x) = ψ(x)Γ(x). The expectation is taken over the positions of all other n−1 points, with x_i kept fixed. Assuming that f(x) is almost constant in the entire ε-ball around x_i, we obtain:

p_i(ε) ≈ c_d ε^d f(x_i)    (4)

where d is the dimension of x and c_d is the volume of the d-dimensional unit ball (c_d = π^{d/2}/Γ(1+d/2) for the Euclidean norm). Substituting Eq. (4) into Eq.
(3), we obtain:

−log f(x_i) ≈ ψ(n) − ψ(k) + log c_d + d E(log ε)    (5)

which finally leads to the unbiased kNN estimator for the differential entropy [11]:

H_k(X) = ψ(n) − ψ(k) + log c_d + (d/n) Σ_{i=1}^{n} log ε_i    (6)

where ε_i is the distance from x_i to its k-th nearest neighbor. An alternative proof of the asymptotic unbiasedness and consistency of the kNN estimator can be found in [15]. A similar approach can be used to obtain a kNN estimator for the Kullback-Leibler divergence [19]. The estimator works as follows. Let {x_1, ..., x_n} and {y_1, ..., y_m} be i.i.d. d-dimensional samples drawn independently from the densities p and q respectively. By definition the divergence is given by:

D(p‖q) = ∫ p(x) log (p(x)/q(x)) dx    (7)

The distance of x_i to its nearest neighbor in {x_j}_{j≠i} is defined as

ρ_n(i) = min_{j≠i} d(x_i, x_j)    (8)

We also define the distance of x_i to its nearest neighbor in {y_j}:

ν_m(i) = min_{j=1,...,m} d(x_i, y_j)    (9)

Then the estimator of [19] is given by

D̂_{n,m} = (d/n) Σ_{i=1}^{n} log (ν_m(i)/ρ_n(i)) + log (m/(n−1))    (10)

The authors established asymptotic unbiasedness and mean-square consistency of the estimator (10). The same proofs can be applied to obtain the k-nearest neighbor version of the estimator:

D̂^k_{n,m} = (d/n) Σ_{i=1}^{n} log (ν^k_m(i)/ρ^k_n(i)) + log (m/(n−1))    (11)

Being non-parametric, the kNN estimators (6, 11) rely on order statistics. This makes analytical calculation of the gradient hardly possible, and it leads to a certain lack of smoothness of the estimator value as a function of the sample coordinates. One should also mention that finding the k-th nearest neighbor is a computationally intensive problem; it becomes necessary to use involved approximate nearest neighbor techniques for large data sets.

3 The MeanNN Entropy Estimator

We propose a novel approach for estimating the entropy as a function of the sample coordinates. It is based on the fact that the kNN estimator (6) is valid for every k.
Therefore the differential entropy can also be extracted from the mean of several estimators corresponding to different values of k. We consider all possible values of the order statistic k from 1 to n−1:

H_mean = (1/(n−1)) Σ_{k=1}^{n−1} H_k = log c_d + ψ(n) + (1/(n−1)) Σ_{k=1}^{n−1} ( −ψ(k) + (d/n) Σ_{i=1}^{n} log ε_{i,k} )    (12)

where ε_{i,k} is the distance from x_i to its k-th nearest neighbor. Consider the double-summation last term in Eq. (12). Exchanging the order of summation, this term adds, for each sample point x_i, the sum of the logs of its distances to all its nearest neighbors in the sample, which is of course equivalent to the sum of the logs of its distances to all other points in the sample set. Hence the mean estimator (12) for the differential entropy can be written as:

H_mean = const + (d/(n(n−1))) Σ_{i≠j} log ‖x_i − x_j‖    (13)

where the constant depends only on the sample size and dimensionality. We dub this the MeanNN estimator for differential entropy. It follows that the differential entropy (approximation) has a clear geometric meaning: it is proportional to the log of the product of the distances between every two points in a random i.i.d. sample. This is an intuitive observation, since a higher entropy leads to a larger scattering of the samples; pairwise distances then grow, resulting in a larger product of all distances. Moreover, the MeanNN estimator (13) is a smooth function of the sample coordinates, and its gradient is easily found. The asymptotic unbiasedness and consistency of the estimator follow from the same properties of the kNN estimator (6). Obviously, the same method gives a mean estimator for the mutual information via the well-known identity connecting the mutual information and the marginal and joint entropies:

I_mean(X; Y) = H_mean(X) + H_mean(Y) − H_mean(X, Y)    (14)

We demonstrate the MeanNN estimator for the entropy in the case of an exponentially distributed random variable, f(x; µ) = (1/µ) e^{−x/µ}, x > 0, µ > 0.
In this case the entropy can be analytically calculated as H = log µ + 1. We compared the performance of the MeanNN estimator with the k-nearest neighbor estimator (6) for various values of k. Results are given in Table 1. One may see that the mean square error of the MeanNN estimator is the same as or worse than that of the traditional kNN estimators, but the standard deviation of the estimator values is lowest for the MeanNN estimator. Further on we apply MeanNN to the optimization of criteria based on the entropy. In such cases the most important characteristic of an estimator is its monotonic dependency on the estimated value, while the prediction of the exact value of the entropy is less important. Therefore one may conclude that MeanNN is better suited for the optimization of entropy-based numerical criteria.

                                          1NN     4NN     10NN    MeanNN
Mean square error of entropy estimation   0.0290  0.0136  0.0117  0.0248
STD of estimator values                   0.1698  0.1166  0.1079  0.1029

Table 1: Performance of the MeanNN entropy estimator in comparison with kNN entropy estimators. 100 samples of the random variable, 10 different values of the µ parameter, 100 repetitions.

To obtain the estimator for the divergence we apply the same mean approach to estimator (11), setting m = n − 1:

D̂mean_{n,n−1} = (d/(n(n−1))) Σ_{k=1}^{n−1} Σ_{i=1}^{n} log (ν^k_m(i)/ρ^k_n(i)) = (d/(n(n−1))) ( Σ_{i,j} log d(x_i, y_j) − Σ_{i≠j} log d(x_i, x_j) )    (15)

The mean estimator for the divergence has a clear geometric interpretation: if the product of all distances inside one sample is small in comparison with the product of pairwise distances between the samples, then one concludes that the divergence is large, and vice versa.
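As a concrete illustration of Eq. (13) (not part of the original paper), a minimal pure-Python sketch of the MeanNN estimator; the additive constant, which depends only on n and d, is omitted since it is irrelevant when optimizing entropy-based criteria:

```python
import math

def meannn_entropy(x):
    """MeanNN estimator of Eq. (13), up to its additive constant:
    (d / (n(n-1))) * sum over i != j of log ||x_i - x_j||.
    x is a list of d-dimensional points (lists or tuples)."""
    n, d = len(x), len(x[0])
    # sum of logs of all ordered pairwise Euclidean distances
    s = sum(math.log(math.dist(x[i], x[j]))
            for i in range(n) for j in range(n) if i != j)
    return d * s / (n * (n - 1))
```

Since every pairwise distance scales by a under x → a·x, the estimate shifts by d log a, consistent with the identity H(aX) = H(X) + d log a for the non-constant part.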
4 The MeanNN ICA Algorithm

As many approaches do, we use the contrast function

J(Y) = ∫ q(y_1, ..., y_d) log ( q(y_1, ..., y_d) / Π_{i=1}^{d} q(y_i) ) dµ = D( q(y_1, ..., y_d) ‖ Π_{i=1}^{d} q(y_i) ) = Σ_{i=1}^{d} H(Y_i) − H(Y_1, ..., Y_d)    (16)

Considering Y as a linear function of X, Y = WX, it is easily verified [3, 7, 10] that

J(Y) = Σ_{t=1}^{d} H(Y_t) − H(X_1, ..., X_d) − log |W|    (17)

In particular, the change in the entropy of the joint distribution under a linear transformation is simply the logarithm of the Jacobian of the transformation. As we assume the X's to be pre-whitened, W is restricted to rotation matrices, therefore log |W| = 0 and the minimization of J(Y) reduces to finding

Ŵ = arg min_W H(Y_1) + ... + H(Y_d)    (18)

Denoting the rows of the matrix W by W = (w_1, ..., w_d)^T, we can explicitly write the minimization expression as a function of W:

Ŵ = arg min_W Σ_{t=1}^{d} H(w_t^T X)    (19)

Then we can plug the MeanNN entropy estimator into Eq. (19) to obtain (after omitting irrelevant constants) an explicit contrast function to minimize:

Ŵ = arg min_W S(W) = arg min_W Σ_{t=1}^{d} Σ_{i≠j} log( (w_t^T (x_i − x_j))^2 )    (20)

The gradient of the contrast function S(W) with respect to a rotation matrix W can be found with the assistance of so-called Givens rotations (see e.g. [14]). In this parametrization a rotation matrix W ∈ R^{d×d} is represented by a product of d(d−1)/2 plane rotations:

W = Π_{s=1}^{d−1} Π_{t=s+1}^{d} G_st    (21)

where G_st is a rotation matrix corresponding to a rotation in the st plane by an angle λ_st.
It is the identity matrix except that its elements (s,s), (s,t), (t,s), (t,t) form a two-dimensional (2-D) rotation matrix:

( G_st(s,s)  G_st(s,t) ; G_st(t,s)  G_st(t,t) ) = ( cos(λ_st)  sin(λ_st) ; −sin(λ_st)  cos(λ_st) )    (22)

The gradient of a single rotation matrix G_st with respect to λ_st is a zero matrix except for the elements (s,s), (s,t), (t,s), (t,t), for which

∂/∂λ_st ( G_st(s,s)  G_st(s,t) ; G_st(t,s)  G_st(t,t) ) = ( −sin(λ_st)  cos(λ_st) ; −cos(λ_st)  −sin(λ_st) )    (23)

It can be easily verified that the gradient of the contrast function (20) is given by

∂S/∂λ_st = Σ_{q,r=1}^{d} (∂S/∂w_qr)(∂w_qr/∂λ_st) = 2 Σ_{q,r=1}^{d} Σ_{i≠j} [ (x_ir − x_jr) / (w_q^T (x_i − x_j)) ] [ Π_{u=1}^{d−1} Π_{v=u+1}^{d} G̃_uv ]_qr    (24)

where G̃_uv = ∂G_uv/∂λ_uv if both u = s and v = t, and G̃_uv = G_uv otherwise. The contrast function S(W) and its gradient ∂S/∂λ_st may in theory suffer from discontinuities if a row w_t is perpendicular to a vector x_i − x_j. To overcome this numerical difficulty we utilize a smoothed version S(W, ε) of the contrast function and give the expression for its gradient:

S(W, ε) = Σ_{t=1}^{d} Σ_{i≠j} log( (w_t^T (x_i − x_j))^2 + ε )    (25)

∂S/∂λ_st = Σ_{q,r=1}^{d} (∂S/∂w_qr)(∂w_qr/∂λ_st) = 2 Σ_{q,r=1}^{d} Σ_{i≠j} [ (w_q^T (x_i − x_j)) (x_ir − x_jr) / ( (w_q^T (x_i − x_j))^2 + ε ) ] [ Π_{u=1}^{d−1} Π_{v=u+1}^{d} G̃_uv ]_qr    (26)

For the optimization of the contrast function we apply the conjugate gradient method. The algorithm is summarized in Figure 1.

Input: data vectors x_1, x_2, ..., x_n ∈ R^d, assumed whitened
Output: mixing matrix W
Method:
• Initialize the d(d−1)/2 rotation angles λ_st
• Apply conjugate gradient optimization to the contrast function S(W(λ)) of Eq. (25) to find the optimal angles
• Reconstruct the rotation matrix W from the found angles by Givens rotations, Eq. (21)

Figure 1: The MeanNN ICA algorithm

5 Experiments

First we study the set of 9 problems proposed by [2]. Each problem corresponds to a 1D probability distribution q(x). One thousand pairs of random numbers x and y are mixed as x′ = x cos φ + y sin φ, y′ = −x sin φ + y cos φ, with a random angle φ common to all pairs (i.e.
A is a pure rotation). We applied the conjugate gradient method for the optimization of the contrast function (25) with ε = 1/n = 0.001 in order to recover this rotation matrix. This was repeated 100 times with different angles φ and with different random sets of pairs (x, y). To assess the quality of the estimator Â (or, equivalently, of the back transformation Ŵ = Â^{−1}), we use the Amari performance index P_err from [1]:

P_err = (1/(2d)) Σ_{i,j=1}^{d} ( |p_ij| / max_k |p_ik| + |p_ij| / max_k |p_kj| ) − 1    (27)

where p_ij = (Â^{−1}A)_ij. We compared our method with three state-of-the-art approaches: MILCA [16], RADICAL [13] and KernelICA [2]. We used the official code provided by the authors.¹ For the first two techniques, which rely on different information-theoretic measures assessed by order statistics, it is highly recommended to use dataset augmentation. This is a computationally intensive technique for enlarging the dataset by replacing each data point with a fixed number (usually 30) of new data points randomly generated in a small neighborhood of the original point. The proposed method gives smooth results without any additional augmentation due to its smooth nature (see Eq. (13)).

pdfs  MILCA  MILCA Aug  RADICAL  RADICAL Aug  KernelICA  MeanNN ICA
a     3.3    2.5        3.6      2.8          3.3        2.4
b     3.4    3.0        3.6      3.3          3.0        2.6
c     7.5    4.4        7.6      5.4          4.9        4.2
d     1.8    1.7        1.4      1.6          1.4        1.4
e     1.7    1.6        1.5      1.7          1.5        1.4
f     1.4    1.3        1.6      1.4          1.4        1.4
g     1.4    1.3        1.6      1.4          1.4        1.4
h     1.7    2.0        1.6      1.7          1.4        1.5
i     1.9    2.1        1.8      1.8          1.5        1.8

Table 2: Amari performance (multiplied by 100) for two-component ICA. The distributions are: (a) Student with 3 degrees of freedom; (b) double exponential; (c) Student with 5 degrees of freedom; (d) exponential; (e) mixture of two double exponentials; (f) symmetric mixture of two Gaussians; (g) nonsymmetric mixture of two Gaussians; (h) symmetric mixture of four Gaussians; (i) nonsymmetric mixture of four Gaussians.
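The evaluation protocol above can be sketched in a few lines. The following pure-Python code (an illustration, not the authors' implementation) builds a rotation from Givens angles as in Eq. (21) and scores a candidate product matrix with the Amari index of Eq. (27):

```python
import math

def identity(d):
    return [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def givens(d, s, t, lam):
    """G_st of Eq. (22): identity except for a 2-D rotation in the st plane."""
    g = identity(d)
    g[s][s], g[s][t] = math.cos(lam), math.sin(lam)
    g[t][s], g[t][t] = -math.sin(lam), math.cos(lam)
    return g

def rotation_from_angles(d, angles):
    """W = product over s < t of G_st(lambda_st), Eq. (21).
    `angles` maps a pair (s, t) to the angle lambda_st."""
    w = identity(d)
    for s in range(d - 1):
        for t in range(s + 1, d):
            w = matmul(w, givens(d, s, t, angles[(s, t)]))
    return w

def amari_index(p):
    """Amari performance index of Eq. (27) for P = W_hat * A; it is zero
    exactly when P is a scaled permutation (perfect source recovery)."""
    d = len(p)
    s = 0.0
    for i in range(d):
        for j in range(d):
            s += abs(p[i][j]) / max(abs(p[i][k]) for k in range(d))
            s += abs(p[i][j]) / max(abs(p[k][j]) for k in range(d))
    return s / (2 * d) - 1.0
```

For a pure rotation A and the exact inverse Ŵ = A^T, the product ŴA is the identity and the index is zero; any residual mixing raises it.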
In the explored cases the proposed method achieves state-of-the-art performance. This is well explained by the inherent smoothness of the MeanNN estimator; see Figure 2, which compares contrast functions based on different order statistics estimators over a grid of possible rotation angles for the mixture of two exponentially distributed random variables (case e). The contrast function corresponding to the order statistic k = 10 generally coincides with the MILCA approach, and the contrast function corresponding to k = 30 ≃ √n generally coincides with the RADICAL method. One may see that the MeanNN ICA contrast function leads to a much more robust prediction of the rotation angle. One should mention that gradient-based optimization makes it possible to obtain the global optimum with high precision, as opposed to the MILCA and RADICAL schemes, which utilize subspace grid optimization. Application of gradient-based optimization schemes also leads to a computational advantage: the number of needed function evaluations was limited to 20, as opposed to 150 evaluations for the grid optimization schemes MILCA and RADICAL.

Figure 2: Convergence analysis for a mixture of two exponentially distributed random variables. Contrast function dependence on the rotation angle for different entropy estimators (MeanNN, 10NN, 30NN). 1000 samples, 0.01 radian grid.

We also studied the application of MeanNN ICA to multidimensional problems. For that purpose we chose at random D (generally) different distributions, mixed them by a random rotation, and ran the compared ICA algorithms to recover the rotation matrix. The results are presented in Table 3. MeanNN ICA achieved the best performance.

¹ http://www.klab.caltech.edu/∼kraskov/MILCA/, https://www.cs.umass.edu/∼elm/ICA/, http://www.di.ens.fr/∼fbach/kernel-ica/index.htm
dims  MILCA  MILCA Aug  RADICAL  RADICAL Aug  KernelICA  MeanNN ICA
2     3.0    3.3        3.1      3.0          2.9        2.5
4     2.7    2.7        2.8      2.3          2.6        2.2

Table 3: Amari index (multiplied by 100) for multidimensional ICA. 1000 samples, 10 repetitions.

6 Conclusion

We proposed a novel approach for the estimation of the main information-theoretic measures such as differential entropy, mutual information and divergence. The estimators are smooth differentiable functions with a clear geometrical meaning. This novel estimation technique was then applied to the ICA problem, where the proposed method demonstrated results superior to state-of-the-art ICA methods in the conducted tests. The studied state-of-the-art approaches can be divided into two groups. The first group is based on exact entropy estimation, which usually leads to high performance, as demonstrated by MILCA and RADICAL; the drawback of such estimators is the lack of a gradient and hence numerical difficulties in optimization. The second group applies criteria other than entropy that benefit from an easily calculated gradient (KernelICA); however, such methods may suffer from degraded performance. MeanNN ICA combines the advantages of these two kinds of estimators: it represents a contrast function based on an accurate entropy estimation, and its gradient is given analytically, so it can be readily optimized. Finally, we mention that the proposed estimation method may further be applied to various problems in the field of machine learning and beyond.

References

[1] S. Amari, A. Cichocki, and H. H. Yang. A new learning algorithm for blind signal separation. Advances in Neural Information Processing Systems, 8, 1996.
[2] F. Bach and M. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3, 2002.
[3] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7, 1995.
[4] J.-F. Cardoso. Multidimensional independent component analysis.
Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), 1998.
[5] C. Jutten and J. Hérault. Blind separation of sources, part 1: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 1991.
[6] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3), 1994.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, August 1991.
[8] D. T. Pham and P. Garat. Blind separation of mixtures of independent signals through a quasi-maximum likelihood approach. IEEE Transactions on Signal Processing, 45(7), 1997.
[9] A. Hyvärinen and E. Oja. A fast fixed point algorithm for independent component analysis. Neural Computation, 9(7), 1997.
[10] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. 2001.
[11] L. Kozachenko and N. Leonenko. On statistical estimation of entropy of random vector. Problems Infor. Transmiss., 23(2), 1987.
[12] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69:066138, 2004.
[13] E. Miller and J. Fisher. ICA using spacing estimates of entropy. Proc. Fourth International Symposium on Independent Component Analysis and Blind Signal Separation, Nara, Japan, Apr. 2003, pp. 1047–1052.
[14] J. Peltonen and S. Kaski. Discriminative components of data. IEEE Transactions on Neural Networks, 16(1), 2005.
[15] H. Singh, N. Misra, V. Hnizdo, A. Fedorowicz, and E. Demchuk. Nearest neighbor estimates of entropy. American Journal of Mathematical and Management Sciences, 2003.
[16] H. Stögbauer, A. Kraskov, S. Astakhov, and P. Grassberger. Least-dependent-component analysis based on mutual information. Phys. Rev. E, 70(6):066123, Dec 2004.
[17] O. Vasicek. A test for normality based on sample entropy. J. Royal Stat. Soc. B, 38(1):54–59, 1976.
[18] J. D. Victor. Binless strategies for estimation of information from neural data. Physical Review, 2002.
[19] Q. Wang, S. R.
Kulkarni, and S. Verdú. A nearest-neighbor approach to estimating divergence between continuous random vectors. IEEE Int. Symp. Information Theory, Seattle, WA, 2006.
Support Vector Machines with a Reject Option Yves Grandvalet 1,2, Alain Rakotomamonjy 3, Joseph Keshet 2 and Stéphane Canu 3 — 1 Heudiasyc, UMR CNRS 6599, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne Cedex, France; 2 Idiap Research Institute, Centre du Parc, CP 592, CH-1920 Martigny, Switzerland; 3 LITIS, EA 4108, Université de Rouen & INSA de Rouen, 76801 Saint Etienne du Rouvray, France

Abstract

We consider the problem of binary classification where the classifier may abstain instead of classifying each observation. The Bayes decision rule for this setup, known as Chow's rule, is defined by two thresholds on posterior probabilities. From simple desiderata, namely the consistency and the sparsity of the classifier, we derive the double hinge loss function, which focuses on estimating conditional probabilities only in the vicinity of the threshold points of the optimal decision rule. We show that, for suitable kernel machines, our approach is universally consistent. We cast the problem of minimizing the double hinge loss as a quadratic program akin to the standard SVM optimization problem and propose an active set method to solve it efficiently. We finally provide preliminary experimental results illustrating the interest of our constructive approach to devising loss functions.

1 Introduction

In decision problems where errors incur a severe loss, one may have to build classifiers that abstain from classifying ambiguous examples. Rejecting these examples has been investigated since the early days of pattern recognition. In particular, Chow (1970) analyses how the error rate may be decreased thanks to the reject option. There have been several attempts to integrate a reject option in Support Vector Machines (SVMs), using strategies based on the thresholding of SVM scores (Kwok, 1999) or on a new training criterion (Fumera & Roli, 2002).
These approaches have critical drawbacks, however: the former is not consistent, and the latter incurs considerable computational overhead over the original SVM algorithm while lacking some of its most appealing features, such as convexity and sparsity. We introduce a piecewise-linear and convex training criterion dedicated to the problem of classification with a reject option. Our proposal, inspired by the probabilistic interpretation of SVM fitting (Grandvalet et al., 2006), is a double hinge loss, reflecting the two thresholds in Chow's rule. Hence, we generalize the loss suggested by Bartlett and Wegkamp (2008) to arbitrary asymmetric misclassification and rejection costs. For the symmetric case, our probabilistic viewpoint motivates another decision rule. We then propose the first algorithm specifically dedicated to training SVMs with a double hinge loss. Its implementation shows that our decision rule is at least on par with that of Bartlett and Wegkamp (2008). The paper is organized as follows. Section 2 defines the problem and recalls the Bayes rule for binary classification with a reject option. The proposed double hinge loss is derived in Section 3, together with the decision rule associated with SVM scores. Section 4 addresses implementation issues: it formalizes the SVM training problem and details an active set algorithm specifically designed for training with the double hinge loss. This implementation is tested empirically in Section 5. Finally, Section 6 concludes the paper.

2 Problem Setting and the Bayes Classifier

Classification aims at predicting a class label y ∈ Y from an observed pattern x ∈ X. For this purpose, we construct a decision rule d : X → A, where A is a set of actions that typically consists in assigning a label to x ∈ X.
In binary problems, where the class is tagged either as +1 or −1, the two types of errors are: (i) false positives, where an example labeled −1 is predicted as +1, incurring a cost c−; (ii) false negatives, where an example labeled +1 is predicted as −1, incurring a cost c+. In general, the goal of classification is to predict the true label for an observed pattern. However, patterns close to the decision boundary are misclassified with high probability. This problem becomes especially prominent when the costs c− or c+ are high, such as in medical decision making. In these settings, it may be better to alert the user and abstain from prediction. This motivates the introduction of a reject option for classifiers that cannot predict a pattern with enough confidence. The decision to abstain, denoted by 0, incurs a cost r− or r+ for examples labeled −1 and +1, respectively. The costs pertaining to each possible decision are summarized below:

             y = +1   y = −1
d(x) = +1      0        c−
d(x) =  0      r+       r−
d(x) = −1      c+       0

In what follows, we assume that all costs are strictly positive:

c− > 0, c+ > 0, r− > 0, r+ > 0.    (1)

Furthermore, it should be possible to incur a lower expected loss by choosing the reject option instead of either prediction, that is

c− r+ + c+ r− < c− c+.    (2)

Bayes' decision theory is the paramount framework in statistical decision theory, where decisions are taken so as to minimize expected losses. For classification with a reject option, the overall risk is

R(d) = c+ E_XY[Y = 1, d(X) = −1] + c− E_XY[Y = −1, d(X) = 1] + r+ E_XY[Y = 1, d(X) = 0] + r− E_XY[Y = −1, d(X) = 0],    (3)

where X and Y denote the random variables describing patterns and labels. The Bayes classifier d* is defined as the minimizer of the risk R(d). Since the seminal paper of Chow (1970), this rule is sometimes referred to as Chow's rule:

d*(x) = +1 if P(Y = 1 | X = x) > p+;  −1 if P(Y = 1 | X = x) < p−;  0 otherwise,    (4)

where p+ = (c− − r−) / (c− − r− + r+) and p− = r− / (c+ − r+ + r−).
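As an illustration (not from the paper), Chow's rule (4) with the thresholds above is a one-liner once P(Y = 1 | X = x) is available:

```python
def chow_rule(p1, c_plus, c_minus, r_plus, r_minus):
    """Bayes decision with a reject option, Eq. (4).
    p1 is P(Y = 1 | X = x); returns +1, -1, or 0 (reject)."""
    p_plus = (c_minus - r_minus) / (c_minus - r_minus + r_plus)
    p_minus = r_minus / (c_plus - r_plus + r_minus)
    if p1 > p_plus:
        return +1
    if p1 < p_minus:
        return -1
    return 0  # posterior falls between the two thresholds: abstain
```

With symmetric unit misclassification costs and rejection cost r = 0.2, the thresholds become p+ = 0.8 and p− = 0.2, so only confident posteriors yield a prediction.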
Note that, assuming that (1) and (2) hold, we have 0 < p− < p+ < 1. One of the major inductive principles is empirical risk minimization, where one minimizes the empirical counterpart of the risk (3). In classification, this principle usually leads to an NP-hard problem, which can be circumvented by using a smooth proxy of the misclassification loss. For example, Vapnik (1995) motivated the hinge loss as a "computationally simple" (i.e., convex) surrogate of the classification error. The following section is dedicated to the construction of such a surrogate for classification with a reject option.

3 Training Criterion

One method to get around the hardness of learning decision functions is to replace the conditional probability P(Y = 1 | X = x) with an estimate P̂(Y = 1 | X = x), and then plug this estimate back into (4) to build a classification rule (Herbei & Wegkamp, 2006).

Figure 1: Double hinge loss function ℓ_{p−,p+} for positive (left) and negative examples (right), with p− = 0.4 and p+ = 0.8 (solid: double hinge, dashed: likelihood). Note that the decision thresholds f+ and f− are not symmetric around zero.

One of the most widespread representatives of this line of attack is the logistic regression model, which estimates the conditional probability using the maximum (penalized) likelihood framework. As a starting point, we consider the generalized logistic regression model for binary classification, where

P̂(Y = y | X = x) = 1 / (1 + exp(−y f(x))),    (5)

and the function f : X → R is estimated by the minimization of a regularized empirical risk on the training sample T = {(x_i, y_i)}_{i=1}^{n}:

Σ_{i=1}^{n} ℓ(y_i, f(x_i)) + λ Ω(f),    (6)

where ℓ is a loss function, Ω(·) is a regularization functional, such as the (squared) norm of f in a suitable Hilbert space, Ω(f) = ‖f‖²_H, and λ is a regularization parameter.
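For concreteness (an illustration under the model of Eq. (5), not code from the paper), the probability map and the regularized empirical risk of Eq. (6) for a linear score f(x) = ⟨w, x⟩ with a squared-norm penalty:

```python
import math

def logistic_prob(y, fx):
    """Eq. (5): P(Y = y | X = x) under the generalized logistic model,
    with y in {+1, -1} and fx = f(x)."""
    return 1.0 / (1.0 + math.exp(-y * fx))

def regularized_risk(w, data, loss, lam):
    """Eq. (6) for a linear score f(x) = <w, x> and Omega(f) = ||w||^2.
    `data` is a list of (x, y) pairs; `loss` is any surrogate l(y, f(x))."""
    risk = sum(loss(y, sum(wi * xi for wi, xi in zip(w, x))) for x, y in data)
    return risk + lam * sum(wi * wi for wi in w)
```

A standard choice for `loss` is the logistic negative log-likelihood, log(1 + exp(−y f(x))); swapping in a different surrogate leaves the framework unchanged.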
In the standard logistic regression procedure, ℓ is the negative log-likelihood loss

ℓ(y, f(x)) = log(1 + exp(−y f(x))).

This loss function is convex and decision-calibrated (Bartlett & Tewari, 2007), but it lacks an appealing feature of the hinge loss used in SVMs: it does not lead to sparse solutions. This drawback is the price to pay for the ability to estimate the posterior probability P(Y = 1 | X = x) on the whole range (0, 1) (Bartlett & Tewari, 2007). However, the definition of the Bayes rule (4) clearly shows that the estimate of P(Y = 1 | X = x) does not have to be accurate everywhere, but only in the vicinity of p+ and p−. This motivates the construction of a training criterion that focuses on this goal, without estimating P(Y = 1 | X = x) on the whole range as an intermediate step. Our purpose is to derive such a loss function, without sacrificing sparsity to the consistency of the decision rule. Though not a proper negative log-likelihood, the hinge loss can be interpreted in a maximum a posteriori framework: it can be derived as a relaxed minimization of the negative log-likelihood (Grandvalet et al., 2006). According to this viewpoint, minimizing the hinge loss aims at deriving a loose approximation to the logistic regression model (5) that is accurate only at f(x) = 0, thus allowing one to estimate whether P(Y = 1 | X = x) > 1/2 or not. More generally, one can show that, in order to have a precise estimate of P(Y = 1 | X = x) = p, the surrogate loss should be tangent to the negative log-likelihood at f = log(p/(1 − p)). Following this simple constructive principle, we derive the double hinge loss, which aims at reliably estimating P(Y = 1 | X = x) at the threshold points p+ and p−. Furthermore, to encourage sparsity, we set the loss to zero for all points classified with high confidence. This loss function is displayed in Figure 1.
Formally, for the positive examples, the double hinge loss satisfying the above conditions can be expressed as

ℓ_{p−,p+}(+1, f(x)) = max( −(1 − p−) f(x) + H(p−), −(1 − p+) f(x) + H(p+), 0 ),    (7)

and for the negative examples it can be expressed as

ℓ_{p−,p+}(−1, f(x)) = max( p+ f(x) + H(p+), p− f(x) + H(p−), 0 ),    (8)

where H(p) = −p log(p) − (1 − p) log(1 − p). Note that, unless p− = 1 − p+, there is no simple symmetry with respect to the labels. After training, the decision rule is defined as the plug-in estimate of (4) using the logistic regression probability estimate. Let f+ = log(p+/(1 − p+)) and f− = log(p−/(1 − p−)); the decision rule can be expressed in terms of the function f as follows:

d_{p−,p+}(x; f) = +1 if f(x) > f+;  −1 if f(x) < f−;  0 otherwise.    (9)

The following result shows that the rule d_{p−,p+}(·; f) is universally consistent when f is learned by minimizing the empirical risk based on ℓ_{p−,p+}. Hence, in the limit, learning with the double hinge loss is optimal in the sense that the risk of the learned decision rule converges to the Bayes risk.

Theorem 1. Let H be a functional space that is dense in the set of continuous functions. Suppose that we have a positive sequence {λ_n} with λ_n → 0 and n λ_n² / log n → ∞. We define

f*_n = arg min_{f ∈ H} (1/n) Σ_{i=1}^{n} ℓ_{p−,p+}(y_i, f(x_i)) + λ_n ‖f‖²_H.

Then lim_{n→∞} R(d_{p−,p+}(X; f*_n)) = R(d*(X)) holds almost surely, that is, the classifier d_{p−,p+}(·; f*_n) is strongly universally consistent.

Proof. Our theorem follows directly from (Steinwart, 2005, Corollary 3.15), since ℓ_{p−,p+} is regular (Steinwart, 2005, Definition 3.9). Besides mild regularity conditions that hold for ℓ_{p−,p+}, a loss function is said to be regular if, for every α ∈ [0, 1] and every t_α such that

t_α = arg min_t α ℓ_{p−,p+}(+1, t) + (1 − α) ℓ_{p−,p+}(−1, t),

we have that d_{p−,p+}(t_α, x) agrees with d*(x) almost everywhere. Let f_1 = −H(p−)/p−, f_2 = −(H(p+) − H(p−))/(p+ − p−) and f_3 = H(p+)/(1 − p+) denote the hinge locations in ℓ_{p−,p+}(±1, f(x)).
Note that we have f1 < f−< f2 < f+ < f3, and that tα ∈          (−∞, f1] if 0 ≤α < p− [f1, f2] if α = p− {f2} if p−< α < p+ [f2, f3] if α = p+ [f3, ∞) if p+ < α ≤1 ⇒dp−,p+(tα, x) =          −1 if P(Y = 1|x) < p− −1 or 0 if P(Y = 1|x) = p− 0 if p−< P(Y = 1|x) < p+ 0 or + 1 if P(Y = 1|x) = p+ +1 if P(Y = 1|x) > p+ which is the desired result. Note also that the analysis of Bartlett and Tewari (2007) can be used to show that minimizing ℓp−,p+ cannot provide consistent estimates of P(Y = 1|X = x) = p for p /∈{p−, p+}. This property is desirable regarding sparsity, since sparseness does not occur when the conditional probabilities can be unambiguously estimated . Note on a Close Relative A double hinge loss function has been proposed recently with a different perspective by Bartlett and Wegkamp (2008). Their formulation is restricted to symmetric classification, where c+ = c−= 1 and r+ = r−= r. In this situation, rejection may occur only if 0 ≤r < 1/2, and the thresholds on the conditional probabilities in Bayes’ rule (4) are p−= 1 −p+ = r. For symmetric classification, the loss function of Bartlett and Wegkamp (2008) is a scaled version of our proposal that leads to equivalent solutions for f, but our decision rule differs. While our probabilistic derivation of the double hinge loss motivates the decision function (9), the decision rule of Bartlett and Wegkamp (2008) has a free parameter (corresponding to the threshold f+ = −f−) whose value is set by optimizing a generalization bound. Our decision rule rejects more examples when the loss incurred by rejection is small and fewer examples otherwise. The two rules are identical for r ≃0.24. We will see in Section 5 that this difference has noticeable outcomes. 4 4 SVMs with Double Hinge In this section, we show how the standard SVM optimization problem is modified when the hinge loss is replaced by the double hinge loss. 
The optimization problem is first written using a compact notation, and the dual problem is then derived. 4.1 Optimization Problem Minimizing the regularized empirical risk (6) with the double hinge loss (7–8) is an optimization problem akin to the standard SVM problem. Let C be an arbitrary constant; we define D = C(p+ − p−), Ci = C(1 − p+) for positive examples, and Ci = Cp− for negative examples. With the introduction of slack variables ξ and η, the optimization problem can be stated as

min over f, b, ξ, η of (1/2)∥f∥²_H + Σ_{i=1..n} Ci ξi + D Σ_{i=1..n} ηi
s.t. yi(f(xi) + b) ≥ ti − ξi, i = 1, …, n
     yi(f(xi) + b) ≥ τi − ηi, i = 1, …, n
     ξi ≥ 0, ηi ≥ 0, i = 1, …, n , (10)

where, for positive examples, ti = H(p+)/(1 − p+) and τi = −(H(p−) − H(p+))/(p− − p+), while, for negative examples, ti = H(p−)/p− and τi = (H(p−) − H(p+))/(p− − p+). For functions f belonging to a Hilbert space H endowed with a reproducing kernel k(·, ·), efficient optimization algorithms can be derived from the dual formulation:

min over α, γ of (1/2) γᵀGγ − τᵀγ − (t − τ)ᵀα
s.t. yᵀγ = 0
     0 ≤ αi ≤ Ci, i = 1, …, n
     0 ≤ γi − αi ≤ D, i = 1, …, n , (11)

where y = (y1, …, yn)ᵀ, t = (t1, …, tn)ᵀ and τ = (τ1, …, τn)ᵀ are vectors of Rⁿ and G is the n × n Gram matrix with general entry Gij = yi yj k(xi, xj). Note that (11) is a simple quadratic problem under box constraints. Compared to the standard SVM dual problem, one has an additional vector to optimize, but, with the active set method we developed, we only have to optimize a single vector of Rⁿ. The primal variables f and b are then derived from the Karush-Kuhn-Tucker (KKT) conditions. For f, we have f(·) = Σ_{i=1..n} γi yi k(·, xi), and b is obtained in the optimization process described below. 4.2 Solving the Problem To solve (11), we use an active set algorithm, following a strategy that proved efficient in SimpleSVM (Vishwanathan et al., 2003).
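The per-example constants entering Problems (10)–(11) depend only on C, p− and p+. A small sketch (variable names are ours) makes their structure explicit; in particular, in the symmetric case p− = 1 − p+, the binary entropies coincide and all τi vanish:

```python
import numpy as np

def H(p):
    """Binary entropy, as in Eqs. (7)-(8)."""
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def dual_constants(y, C, p_minus, p_plus):
    """Per-example constants C_i, t_i, tau_i and the scalar D of Problems (10)-(11).
    y is a vector of +/-1 labels."""
    y = np.asarray(y, dtype=float)
    pos = y > 0
    Ci = np.where(pos, C * (1 - p_plus), C * p_minus)
    D = C * (p_plus - p_minus)
    t = np.where(pos, H(p_plus) / (1 - p_plus), H(p_minus) / p_minus)
    tau = np.where(pos,
                   -(H(p_minus) - H(p_plus)) / (p_minus - p_plus),
                   (H(p_minus) - H(p_plus)) / (p_minus - p_plus))
    return Ci, D, t, tau
```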
This algorithm solves the SVM training problem by a greedy approach, in which one solves a series of small problems. First, the partition of training examples into support and non-support vectors is assumed to be known, and the training criterion is optimized considering this partition fixed. This optimization then results in an updated partition of examples into support and non-support vectors. These two steps are iterated until some level of accuracy is reached. Partitioning the Training Set The training set is partitioned into five subsets defined by the activity of the box constraints of Problem (11). The training examples indexed by:
I0, defined by I0 = {i | γi = 0}, are such that yi(f(xi) + b) > ti;
It, defined by It = {i | 0 < γi < Ci}, are such that yi(f(xi) + b) = ti;
IC, defined by IC = {i | γi = Ci}, are such that τi < yi(f(xi) + b) ≤ ti;
Iτ, defined by Iτ = {i | Ci < γi < Ci + D}, are such that yi(f(xi) + b) = τi;
ID, defined by ID = {i | γi = Ci + D}, are such that yi(f(xi) + b) ≤ τi.
When example i belongs to one of the subsets described above, the KKT conditions yield that αi is either equal to γi or constant. Hence, provided that the assignment of examples to the subsets I0, It, IC, Iτ and ID is known, we only have to consider a problem in γ. Furthermore, γi has to be computed only for i ∈ It ∪ Iτ. Updating Dual Variables Assuming a correct partition, Problem (11) reduces to the considerably smaller problem of computing γi for i ∈ IT = It ∪ Iτ:

min over {γi | i ∈ IT} of (1/2) Σ_{i∈IT, j∈IT} γi γj Gij − Σ_{i∈IT} γi si
s.t. Σ_{i∈IT} yi γi + Σ_{i∈IC} Ci yi + Σ_{i∈ID} (Ci + D) yi = 0 , (12)

where si = ti − Σ_{j∈IC} Cj Gji − Σ_{j∈ID} (Cj + D) Gji for i ∈ It, and si = τi − Σ_{j∈IC} Cj Gji − Σ_{j∈ID} (Cj + D) Gji for i ∈ Iτ. Note that the box constraints of Problem (11) do not appear here, because we assumed the partition to be correct.
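The five subsets above can be recovered directly from the values of γ. A small helper (our naming, with a tolerance for floating-point comparisons) is useful when tracking the active set:

```python
def partition_indices(gamma, Ci, D, tol=1e-8):
    """Assign each index to I0, It, IC, Itau or ID according to the
    activity of the box constraints 0 <= gamma_i <= C_i + D of Problem (11)."""
    sets = {'I0': [], 'It': [], 'IC': [], 'Itau': [], 'ID': []}
    for i, g in enumerate(gamma):
        if g <= tol:
            sets['I0'].append(i)            # gamma_i = 0
        elif g < Ci[i] - tol:
            sets['It'].append(i)            # 0 < gamma_i < C_i
        elif g <= Ci[i] + tol:
            sets['IC'].append(i)            # gamma_i = C_i
        elif g < Ci[i] + D - tol:
            sets['Itau'].append(i)          # C_i < gamma_i < C_i + D
        else:
            sets['ID'].append(i)            # gamma_i = C_i + D
    return sets
```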
The solution of Problem (12) is simply obtained by solving the following linear system resulting from the first-order optimality conditions:

Σ_{j∈IT} Gij γj + yi λ = si for i ∈ IT
Σ_{i∈IT} yi γi = − Σ_{i∈IC} Ci yi − Σ_{i∈ID} (Ci + D) yi , (13)

where λ, the (unknown) Lagrange parameter associated with the equality constraint in (12), is computed along with γ. Note that the |IT| equations on the first line of (13) express that, for i ∈ It, yi(f(xi) + λ) = ti and, for i ∈ Iτ, yi(f(xi) + λ) = τi. Hence, the primal variable b is equal to λ. Algorithm The algorithm, described in Algorithm 1, simply alternates updates of the partition of examples into {I0, It, IC, Iτ, ID} and updates of the coefficients γi for the current active set IT. As for standard SVMs, the initialization step consists in either using the solution obtained for a different hyper-parameter, such as a higher value of C, or in picking one or several examples of each class to arbitrarily initialize It to a non-empty set, putting all the others in I0 = {1, …, n} \ It.

Algorithm 1 SVM Training with a Reject Option
  input {xi, yi}1≤i≤n and hyper-parameters C, p+, p−
  initialize γold and the partition IT = It ∪ Iτ, ĪT = I0 ∪ IC ∪ ID
  repeat
    solve linear system (13) → (γi)i∈IT, b = λ
    if any (γi)i∈IT violates the box constraints (11) then
      compute the largest ρ such that, for all i ∈ IT,
        γi_new = γi_old + ρ(γi − γi_old) obeys the box constraints
      let j denote the index of (γi_new)i∈IT at bound
      IT = IT \ {j}, ĪT = ĪT ∪ {j}, γj_old = γj_new
    else
      for all i ∈ IT do γi_new = γi
      if any (yi(f(xi) + b))i∈ĪT violates the primal constraints (10) then
        select i with a violated constraint
        ĪT = ĪT \ {i}, IT = IT ∪ {i}
      else
        exact convergence
      end if
      for all i ∈ IT do γi_old = γi_new
    end if
  until convergence
  output f, b

Exact convergence is obtained when all constraints are fulfilled, that is, when all examples belong to the same subset at the beginning and the end of the main loop.
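The linear system (13) is small (its size is the number of unconstrained support vectors plus one) and can be solved directly as a bordered system. A minimal sketch, assuming the Gram submatrix restricted to IT, the labels, the right-hand sides si, and the scalar on the right of the equality constraint are given; names are ours:

```python
import numpy as np

def solve_active_set(G, y, s, rhs):
    """Solve system (13): G gamma + y*lam = s, y . gamma = rhs.
    Returns (gamma, lam); by the discussion above, lam is the primal offset b."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = G                 # Gram submatrix on the active set
    A[:n, n] = y                  # Lagrange-multiplier column
    A[n, :n] = y                  # equality constraint row
    b = np.concatenate([np.asarray(s, dtype=float), [rhs]])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]
```

In an actual solver one would update a factorization of this matrix as indices enter and leave IT, rather than re-solving from scratch at every iteration.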
However, it is possible to relax the convergence criterion while keeping good control of the precision of the solution by monitoring the duality gap, that is, the difference between the primal and dual objectives, respectively defined in Problems (10) and (11).

Table 1: Performances in terms of average test loss, rejection rate and misclassification rate (rejection is not an error) with r+ = r− = 0.45, for the three rejection methods over four different datasets.

                  Average Test Loss   Rejection rate (%)   Error rate (%)
Wdbc     Naive        2.9 ± 1.6              0.7                2.6
         B&W's        3.5 ± 1.8              3.9                1.8
         Ours         2.9 ± 1.7              1.2                2.4
Liver    Naive       28.9 ± 5.4              3.3               27.4
         B&W's       30.9 ± 4.0             34.5               15.4
         Ours        28.8 ± 5.1              7.9               25.2
Thyroid  Naive        4.1 ± 2.9              0.9                3.7
         B&W's        4.4 ± 2.7              6.1                1.6
         Ours         3.7 ± 2.7              2.1                2.8
Pima     Naive       23.7 ± 1.9              7.5               20.3
         B&W's       24.7 ± 2.1             24.3               13.8
         Ours        23.1 ± 1.3              6.9               20.0

Theorem 2. Algorithm 1 converges in a finite number of steps to the exact solution of (11). Proof. The proof follows those used to prove the convergence of active set methods in general, and of SimpleSVM in particular; see Proposition 1 in (Vishwanathan et al., 2003). 5 Experiments We compare the performances of three different rejection schemes based on SVMs. For this purpose, we selected datasets from the UCI repository related to medical problems, as medical decision making is an application domain for which rejection is of primary importance. Since these datasets are small, we repeated 10 trials for each problem. Each trial consists in randomly splitting the examples into a training set with 80% of the examples and an independent test set. Note that the training examples were normalized to zero mean and unit variance before cross-validation (test sets were of course rescaled accordingly). In a first series of experiments, to compare our decision rule with the one proposed by Bartlett and Wegkamp (2008) (B&W's), we used symmetric costs: c+ = c− = 1 and r+ = r− = r.
We also chose r = 0.45, which corresponds to rather low rejection rates, in order to favour different behaviors between these two decision rules (recall that they are identical for r ≃ 0.24). Besides the double hinge loss, we also implemented a "naive" method that consists in running the standard SVM algorithm (using the hinge loss) and selecting a symmetric rejection region around zero by cross-validation. For all methods, we used Gaussian kernels. Model selection is performed by cross-validation; this includes the selection of the kernel widths and the regularization parameter C for all methods, and additionally of the rejection thresholds for the naive method. Note that B&W's and our decision rules are both based on learning with the double hinge loss. Hence, the results displayed in Table 1 only differ due to the size of the rejection region, and to the disparities in the choice of hyper-parameters that may arise in the cross-validation process (since the decision rules differ, the cross-validation scores differ also). Table 1 summarizes the averaged performances over the 10 trials. Overall, all methods lead to equivalent average test losses, with an insignificant but consistent advantage for our decision rule. We also see that the naive method tends to reject fewer test examples than the consistent methods. This means that, for comparable average losses, the decision rules based on the scores learned by minimizing the double hinge loss tend to classify more accurately the examples that are not rejected, as seen in the last column of the table. For noisy problems such as Liver and Pima, we observed that reducing rejection costs considerably decreases the error rate on classified examples (not shown in the table). The performances of the two learning methods based on the double hinge loss get closer, and there is still no significant gain compared to the naive approach.
Note however that the symmetric setting is favourable to the naive approach, since only a single decision threshold has to be estimated. We are experimenting to see whether the double hinge loss shows more substantial improvements for asymmetric losses and for larger training sets. 6 Conclusion In this paper we proposed a new solution to the general problem of classification with a reject option. The double hinge loss was derived from the simple desideratum of obtaining accurate estimates of posterior probabilities only in the vicinity of the decision boundaries. Our formulation handles asymmetric misclassification and rejection costs and compares favorably with that of Bartlett and Wegkamp (2008). We showed that for suitable kernels, including usual ones such as the Gaussian kernel, training a kernel machine with the double hinge loss provides a universally consistent classifier with reject option. Furthermore, the loss provides sparse solutions, with a limited number of support vectors, similarly to the standard L1-SVM classifier. We presented what we believe to be the first principled and efficient implementation of SVMs for classification with a reject option. Our optimization scheme is based on an active set method, whose complexity is comparable to that of standard SVMs. The dimension of our quadratic program is bounded by the number of examples, and is effectively limited to the number of support vectors. The only computational overhead comes from monitoring five categories of examples, instead of the three considered in standard SVMs (support vector, support vector at bound, inactive example). Our approach for deriving the double hinge loss can be used for other decision problems relying on conditional probabilities at specific values or in a limited range of values. As a first example, one may target the estimation of discretized confidence ratings, such as the ones reported in weather forecasts.
Multi-category classification also belongs to this class of problems, since there, decisions rely on having precise conditional probabilities within a predefined interval. Acknowledgements This work was supported in part by the French national research agency (ANR) through project GD2GS, and by the IST Programme of the European Community through project DIRAC. References
Bartlett, P. L., & Tewari, A. (2007). Sparseness vs estimating conditional probabilities: Some asymptotic results. Journal of Machine Learning Research, 8, 775–790.
Bartlett, P. L., & Wegkamp, M. H. (2008). Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9, 1823–1840.
Chow, C. K. (1970). On optimum recognition error and reject tradeoff. IEEE Trans. on Info. Theory, 16, 41–46.
Fumera, G., & Roli, F. (2002). Support vector machines with embedded reject option. Pattern Recognition with Support Vector Machines: First International Workshop (pp. 68–82). Springer.
Grandvalet, Y., Mariéthoz, J., & Bengio, S. (2006). A probabilistic interpretation of SVMs with an application to unbalanced classification. NIPS 18 (pp. 467–474). MIT Press.
Herbei, R., & Wegkamp, M. H. (2006). Classification with reject option. The Canadian Journal of Statistics, 34, 709–721.
Kwok, J. T. (1999). Moderating the outputs of support vector machine classifiers. IEEE Trans. on Neural Networks, 10, 1018–1031.
Steinwart, I. (2005). Consistency of support vector machines and other regularized kernel classifiers. IEEE Trans. on Info. Theory, 51, 128–142.
Vapnik, V. N. (1995). The nature of statistical learning theory. Springer Series in Statistics. Springer.
Vishwanathan, S. V. N., Smola, A., & Murty, N. (2003). SimpleSVM. Proceedings of the Twentieth International Conference on Machine Learning (pp. 68–82). AAAI.
2008
Generative versus discriminative training of RBMs for classification of fMRI images Tanya Schmah Department of Computer Science University of Toronto Toronto, Canada schmah@cs.toronto.edu Geoffrey E. Hinton Department of Computer Science University of Toronto Toronto, Canada hinton@cs.toronto.edu Richard S. Zemel Department of Computer Science University of Toronto Toronto, Canada zemel@cs.toronto.edu Steven L. Small Department of Neurology The University of Chicago Chicago, USA small@uchicago.edu Stephen Strother The Rotman Research Institute Baycrest Toronto, Canada sstrother@rotman-baycrest.on.ca Abstract Neuroimaging datasets often have a very large number of voxels and a very small number of training cases, which means that overfitting of models for this data can become a very serious problem. Working with a set of fMRI images from a study on stroke recovery, we consider a classification task for which logistic regression performs poorly, even when L1- or L2- regularized. We show that much better discrimination can be achieved by fitting a generative model to each separate condition and then seeing which model is most likely to have generated the data. We compare discriminative training of exactly the same set of models, and we also consider convex blends of generative and discriminative training. 1 Introduction Pattern classification approaches to analyzing functional neuroimaging data have become increasingly popular [12] [3] [4]. These approaches allow one to use well-founded classification methods to test whether the imaging data contains enough information to discriminate between different conditions. They may also lead to insight into underlying neural representations, highlighting brain regions that are most informative with respect to particular experimental variables. 
One difficulty in applying these models is the paucity of data: the number of images available to analyze is typically very small relative to the data dimensionality, particularly if one does not want to restrict a priori the input to subsets of the voxels. Generative models are therefore of great interest, because building a density model of the imaging data itself can often uncover features of the data that are useful for classification as well as for generation. In regimes in which the number of training examples is relatively small, it has been shown that classifiers based on generative models can outperform discriminative classifiers, e.g., naive Bayes classifiers can beat logistic regression [11]. In this paper we investigate ways of using generative models to improve the discrimination of different conditions in functional neuroimaging data. Our primary interest with respect to the imaging data is to elucidate the brain changes that occur during recovery from a stroke. Towards this aim, we define an early-late discrimination task to see if the learning approach can find properties that distinguish pre-recovery from post-recovery scans. 2 Restricted Boltzmann Machines A set of fMRI volumes can be modeled using a two-layer network called a “Restricted Boltzmann Machine” (RBM) [5], in which stochastic “visible units” are connected to stochastic “hidden units” using symmetrically weighted connections. The visible units of the RBM correspond to voxels, while the hidden units can be thought of as feature detectors. In the typical RBM, both visible and hidden units are binary, but we use a version in which the visible units are continuous and have Gaussian marginal distributions [15] [7] [1]. For simplicity, and since we are free to scale the data, we choose unit variance for the marginal distributions of the visible units. 
The energy of the joint configuration (v, h) of the visible and hidden units is

E(v, h) := − Σ_{i,j} vi wij hj − Σ_j cj hj + (1/2) Σ_i (vi − bi)², (1)

where the wij, bi, cj are the parameters of the model. The joint distribution over visible and hidden variables is P(v, h) := (1/Z) exp(−E(v, h)), (2) with partition function Z := ∫ du Σ_g exp(−E(u, g)). The marginal distribution over the visible units can be expressed as P(v) = Σ_h P(v, h) = (1/Z) exp(−F(v)), (3) where F is the free energy:

F(v) = −log( Σ_h exp(−E(v, h)) ) = − Σ_j log(1 + exp(Σ_i vi wij + cj)) + (1/2) Σ_i (vi − bi)². (4)

The marginal distribution over the visible units is typically intractable because of the partition function Z. However, Gibbs sampling can be used to sample from an approximation to the marginal distribution, since the conditional probability distributions P(v|h) and P(h|v) are tractable: P(v|h) = Π_i N(bi + Σ_j wij hj, 1) and P(h|v) = Π_j σ(Σ_i vi wij + cj), where σ is the logistic function, σ(z) := 1/(1 + exp(−z)). Note that the conditional probabilities of the hidden units are the same as for binary-only RBMs. The aim of generative training of an RBM is to model the marginal distribution of the visible units P(v). In maximum likelihood learning, the aim is to minimize the negative log probability of the training data, Lgen = − Σ_{v∈S} log P(v|θ), (5) where S is the training set and θ is the vector of all parameters wij, bi, cj. The gradient of this function is intractable; however, there is an approximation to maximum likelihood learning called Contrastive Divergence (CD), which works well in practice [5]. We use an n-step version of CD, with n equal to either 3 or 6. At each iteration, the parameter increments are: ∆wij = ⟨vi hj⟩0 − ⟨vi hj⟩n, ∆bi = ⟨vi − bi⟩0 − ⟨vi − bi⟩n, ∆cj = ⟨hj⟩0 − ⟨hj⟩n. In these definitions, angle brackets denote an expected value over a certain distribution over the visible units, with the hidden units distributed according to the conditional distribution P(h|v).
A subscript 0 indicates that the data distribution is used, i.e., visible units are given values corresponding to observed fMRI volumes; while a subscript n indicates that n steps of Gibbs sampling have been done, beginning at data points, to give an approximation to an expected value over the true distribution P(v). 3 Classification using multiple RBMs We consider binary classification tasks. The methods clearly generalize to arbitrary numbers of classes. 3.1 Classification via (mostly) generative training We begin by generatively training two independent RBMs, one for each data class. For maximum likelihood learning, the cost function is the negative log probability of the training data: Lgen = LA gen(SA) + LB gen(SB) := − X v∈SA log P(v|θA) − X v∈SB log P(v|θB), (6) where SA and SB are the training data from classes A and B, and θA and θB are the parameter vectors for the two RBMs. In practice we regularize by adding a term to this cost function that corresponds to putting a prior distribution on the weights wij. In general, given probabilistic generative models for each of two classes, A and B, data can be classified by Bayes’ theorem. For brevity, we write “A” for “v is of class A”, and similarly for B. If we assume that v is a priori equally likely to belong to both classes, then P(A|v) = P(v|A) P(v|A) + P(v|B). If the distributions P(v|A) and P(v|B) are defined by RBMs, then they can be expressed in terms of free energies FA and FB and partition functions ZA and ZB, as in Equation (3). Substituting this into Bayes’ theorem gives P(A|v) = σ (FB(v) −FA(v) −T) , (7) where T := log (ZA/ZB). The free energies in this formula can be calculated easily using (4). However the partition functions are intractable for RBMs with large numbers of hidden and visible units. For this reason, we replace the unknown “threshold” T with an independent parameter ∆, and fit it discriminatively. (Thus this method is not pure generative training.) 
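Putting Eqs. (4) and (7) together, the RBM-pair classifier reduces to a difference of free energies passed through a logistic function. A minimal numpy sketch (our naming; `delta` is the discriminatively fitted replacement for log(ZA/ZB)):

```python
import numpy as np

def free_energy(v, W, b, c):
    """Free energy of a Gaussian-Bernoulli RBM, Eq. (4).
    v: visible vector; W: (visible x hidden) weights; b, c: visible/hidden biases."""
    return (-np.sum(np.logaddexp(0.0, v @ W + c))   # -sum_j log(1 + exp(v.W + c))
            + 0.5 * np.sum((v - b) ** 2))

def posterior_class_A(v, rbm_A, rbm_B, delta):
    """P(A | v) = sigma(F_B(v) - F_A(v) - delta), Eq. (7).
    rbm_A and rbm_B are (W, b, c) parameter triples."""
    F_A = free_energy(v, *rbm_A)
    F_B = free_energy(v, *rbm_B)
    return 1.0 / (1.0 + np.exp(-(F_B - F_A - delta)))
```

With identical parameters for the two models and delta = 0, the posterior is exactly 1/2, as expected from the symmetry of Eq. (7).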
The aim of discriminative training is to model the conditional probability of the class labels given the visible units. In maximum likelihood learning, the cost function to be minimized is the negative log conditional probability of the class labels of the training data, Ldisc = − Σ_{v∈S} log P(class of v | v, θA, θB, ∆). (8) 3.2 Classification via discriminative training As an alternative to generative training, the function Ldisc (defined in the previous equation) can be minimized directly, with respect to all parameters simultaneously: the wij, bi and cj of both RBMs and the threshold parameter ∆. We use exactly the same model of P(class of v | v) as before, summarized in Equation (7). By substituting Equations (7) and (4) into Equation (8), the gradient of Ldisc with respect to all parameters can be calculated exactly. Substituting Equation (7) into Equation (8) gives

C = − Σ_{v∈SA} log σ(FB(v) − FA(v) − ∆) − Σ_{v∈SB} log σ(∆ + FA(v) − FB(v)),

where SA and SB are the sets of training data in classes A and B. Since (d/dz) log σ(z) = σ(−z), the partial derivative of C with respect to the threshold parameter is:

∂C/∂∆ = Σ_{v∈SA} σ(∆ + FA(v) − FB(v)) − Σ_{v∈SB} σ(FB(v) − FA(v) − ∆).

The free energies depend on the weights of the RBMs (suppressed in the above equation for ease of notation). If the parameters of the two RBMs are not linked in any way, then any given parameter θ affects either model A or model B but not both, so either ∂FB/∂θ = 0 or ∂FA/∂θ = 0. From (4) it follows that

∂F(v)/∂wij = −pj vi , ∂F(v)/∂cj = −pj , ∂F(v)/∂bi = bi − vi ,

where pj := σ(zj) = P(hj|v). It follows that, setting M(v) := σ(FB(v) − FA(v) − ∆), the derivatives for the parameters of model A are:

∂C/∂wij = Σ_{v∈SA} (1 − M(v)) (−vi pj) + Σ_{v∈SB} M(v) (vi pj),
∂C/∂cj = Σ_{v∈SA} (1 − M(v)) (−pj) + Σ_{v∈SB} M(v) (pj),
∂C/∂bi = Σ_{v∈SA} (1 − M(v)) (bi − vi) + Σ_{v∈SB} M(v) (vi − bi);

where pj := PA(hj|v). The formulae for model B are the same with opposite sign, and with pj := PB(hj|v).
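As a sanity check, the closed-form ∂C/∂∆ above can be verified against a numerical derivative of the cost C. A short sketch (function names are ours; the free energies of the training examples under each model are passed in as precomputed arrays):

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_disc(FA_A, FB_A, FA_B, FB_B, delta):
    """Discriminative cost C. FA_A, FB_A: free energies of class-A examples
    under models A and B; FA_B, FB_B: same for class-B examples."""
    return (-np.sum(np.log(sigma(FB_A - FA_A - delta)))
            - np.sum(np.log(sigma(delta + FA_B - FB_B))))

def dC_ddelta(FA_A, FB_A, FA_B, FB_B, delta):
    """Closed-form partial derivative of C with respect to Delta."""
    return (np.sum(sigma(delta + FA_A - FB_A))
            - np.sum(sigma(FB_B - FA_B - delta)))
```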
Note that there is no need to assume that both RBMs have the same number of hidden units. We note that discriminative training of a single RBM was suggested in [6] and [8]. 4 Experiments on fMRI data 4.1 Data and preprocessing For our numerical experiments, we use the fMRI data from a study of recovery from stroke described in [14]. A stroke permanently damages part of the brain (the "lesion"), resulting in loss of the corresponding function. Some function can be recovered over a period of months or years. Since the lesion is still present, the patient must have learned to use other parts of the brain to compensate. Studying stroke patients during recovery with fMRI can help determine what changes in brain function occur during recovery, and to what degree these changes correlate with the degree of recovery. The study of [14] analysed mean volumes of activation over 4 regions of interest in each hemisphere. The main conclusion of that paper is that patients with good recovery have higher activations (averaged over all sessions) in the ipsilateral cerebellum. Twelve subjects were studied at 1, 2, 3 and 6 months post-stroke. Due to data irregularities, we study only 9 of these subjects in this paper. Each of the four imaging sessions consisted of four continuous recording runs. During each run, the subject alternated two kinds of hand movement: tapping finger and thumb together, or wrist flexion/extension, with rest breaks in between. The movement was paced auditorily at 1 Hz. During a single run, only one hand moved; during the following run, the other hand moved. Within a run, the experimental design was: (3 seconds rest, 6 seconds finger tap, 3 seconds rest, 6 seconds wrist flexion), repeated 8 times. The fMRI images, called "volumes", are recorded every 4 seconds. The volumes are made up of 24 axial (i.e., horizontal) slices of thickness 6 mm, and within each slice the pixel size is 2 mm × 2 mm.
The data for all 9 subjects has been co-registered and motion-corrected using the Automated Image Registration (AIR) package [16]. For computational ease, we retain only 7 horizontal fMRI slices out of the available 24 (slices 2, 3, 4, 5, 21, 22, 23, with 24 being the top of the head), resulting in 10499 voxels. The choice of slices is based on prior assumptions about what parts of the brain are involved in finger and wrist motion. We temporally filter the data by dividing each "active" image (finger or wrist) by the mean of the previous two rest images. We linearly scaled all of the data for each subject in such a way that each voxel has mean 0 and variance approximately 1. So as to avoid the long transients intrinsic to fMRI imaging, we discard the first image from each movement block, and all rest images. 4.2 Classification tasks We have studied two binary classification tasks. The first task is to predict whether a given fMRI volume was recorded "early" in the study, defined as the first or second recording session (1 or 2 months post-stroke), or "late" in the study, defined as the third or fourth recording session (3 or 6 months post-stroke). This task addresses our interest in the long-term changes in brain organisation and function during stroke recovery. The second task is to predict whether the volume was recorded during finger or wrist movement. Both classification tasks are complex, in the sense that each of the two classes is known to be heterogeneous. For example, in the early vs. late task, the "early" group is known to contain volumes in four sub-classes: healthy finger movement, healthy wrist movement, impaired finger movement and impaired wrist movement; and similarly for the "late" group. In addition, there are many sources of variability between volumes that are extraneous to the classification task and that are present in any fMRI study, including physiological noise, fatigue and attention.
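The temporal filtering and image selection described above can be sketched as follows. This is a simplified illustration with an assumed interface (one label per volume), not the authors' code:

```python
import numpy as np

def temporal_filter(volumes, labels):
    """Divide each 'active' volume by the mean of the two most recent rest
    volumes; discard rest volumes and the first volume of each movement block."""
    out, recent_rest, prev = [], [], None
    for vol, lab in zip(volumes, labels):
        if lab == 'rest':
            recent_rest = (recent_rest + [vol])[-2:]   # keep last two rest images
        elif lab == prev:                              # not the first volume of its block
            out.append(vol / np.mean(recent_rest, axis=0))
        # else: first volume of a movement block -> discarded (transients)
        prev = lab
    return np.array(out)
```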
4.3 Classification methods and testing procedures We compared four basic methods: generatively and discriminatively trained pairs of RBMs, logistic regression, and K nearest neighbours. Each method was tested on individual fMRI slices and also on the set of 7 slices described above. For the RBMs, minimization of the cost function was by gradient descent, while for logistic regression we used the conjugate gradient algorithm as implemented by Carl Rasmussen's minimize.m.¹ Data for each subject is treated separately. For each subject, the data is split into three subsets: 75% training, 12.5% validation and 12.5% test. The splitting is done by first partitioning the data into 32 half-runs, each of which contains either all of the finger movement volumes or all of the wrist movement volumes for one run. One half-run contains 8 groups of 5 consecutively recorded volumes. From each of these half-runs, one of the 8 groups was randomly chosen and assigned to the validation set, a second group was randomly assigned to the test set, and the remaining 6 were assigned to the training set. This random splitting of each half-run into training, validation and test sets was done 20 times with different random seeds, leading to 20 uncorrelated splittings. Each classification method is evaluated on each of the 20 different random splits for each subject. Logistic regression was L1-regularized, and the value of the regularization hyperparameter was chosen by validation. The number of nearest neighbours, K, was also chosen by validation. The RBMs were Cauchy-regularized, which we found to be a slight improvement over L1 regularization. When testing on individual fMRI slices (which vary in size), we used 500 hidden units, while 3500 hidden units were used when testing on the larger set of 7 slices, which contained 10499 voxels.
The RBM training algorithm has many variations and hyperparameters, and is very slow to run on data of this size, so rather than doing formal validation, we adjusted the algorithm informally via many experiments on data from two of the subjects, mostly using only slice 4 but sometimes using all 7 slices. These subjects were then included in the test data, so we have not observed a strict separation of training and test data for the RBM-based methods. We note that our implementation of the discriminative gradient inadvertently used a residual variance of 1/2 instead of 1.

1We had originally used conjugate gradient for discriminatively-trained RBMs as well, but we found, late in the study, that gradient descent ran faster and gave better results. We have not investigated this beyond numerical verification of our gradients, but it suggests that care should be taken using conjugate gradient with very high-dimensional data.

We also studied various blends of generative and discriminative training of a pair of RBMs, in which the cost function is a convex combination of the negative log likelihood functions,

Lλ = (1 − λ)Lgen + λLdisc. (9)

4.4 Results

The following two tables show mean misclassification errors, averaged over all 9 subjects and all 20 splittings of the data from each subject. Following each mean error is the standard deviation of the means for the 9 subjects. The first table shows mean misclassification errors for the early vs. late classification task:

              log. reg.     K near. neigh.  discrim. RBM  gen. RBM
Slice 2       28.6% ± 7.6   11.4% ± 6.0     7.6% ± 2.8    3.0% ± 2.0
Slice 3       27.1% ± 6.7   12.3% ± 6.1     10.4% ± 2.2   2.8% ± 1.9
Slice 4       28.1% ± 8.2   11.3% ± 5.4     9.7% ± 2.3    2.4% ± 1.8
Slice 5       26.3% ± 7.0   12.6% ± 5.7     10.0% ± 2.1   4.2% ± 3.2
Slice 21      24.2% ± 8.9   16.8% ± 7.4     10.6% ± 4.4   4.8% ± 3.4
Slice 22      23.7% ± 6.9   15.3% ± 6.5     9.2% ± 3.5    3.7% ± 2.1
Slice 23      24.1% ± 4.7   13.0% ± 4.7     7.7% ± 2.6    5.2% ± 3.7
All 7 slices  20.1% ± 8.7   10.0% ± 4.1     —             0.2% ± 0.2

In all cases, the generatively-trained RBMs outperform all of the other methods tested. We omitted discriminative training of an RBM pair for the large dataset of all 7 slices together, due to the computational expense.2 The next table shows mean error rates for the finger vs. wrist classification task:

              log. reg.     K near. neigh.  discrim. RBM   gen. RBM
Slice 4       17.0% ± 2.8   11.7% ± 1.5     9.7% ± 2.3     21.8% ± 4.6
All 7 slices  7.9% ± 3.0    10.6% ± 2.3     (6.9% ± 2.3)   11.5% ± 1.5

For this task, we did discriminatively train an RBM pair on the entire dataset; however, due to the computational expense, we used only 1000 hidden units instead of the 3500 used in generative training. Experiments on one subject suggest that the results for discriminative training are not very sensitive to the number of hidden units. Figure 1 shows the performance of several convex blends of generative and discriminative training, tested on fMRI Slice 4. Due to the computational intensity, only 5 splittings of the data were tested for each blend. Note that for the early vs. late task, pure generative training outperforms all other blends, while for the finger vs. wrist task, pure discriminative training outperforms all other blends.

5 Discussion

This study shows that generative models, and in particular Restricted Boltzmann Machines, can be very useful in discrimination tasks. It has been shown before that generative training can make use of unlabelled data to improve discriminative performance [7, 6].
The present study, like that of Ng and Jordan [11], shows that generative training can improve discriminative performance even if all data is labelled.

2As noted earlier, we began by using conjugate gradient to train these models, and found it to be extremely slow. Now that we have switched to gradient descent, discriminative training should be of comparable speed to the generative training, which is still very computationally intensive for this dataset.

Figure 1: Misclassification rates for a combination of (1 − λ) times generative training plus λ times discriminative training, as in Equation (9). The λ axis has been warped to emphasize values near 0 and 1. For each λ value, the mean error rate across all subjects is marked with a circle. The smaller dots, joined by a vertical bar, show mean error rates for individual subjects.

We studied two methods of training a pair of RBM models: one almost entirely generative, and one discriminative. To use the terminology of Ng and Jordan, the two algorithms form a generative-discriminative pair, since they use exactly the same models of the input data and differ only in the training criterion. We found that training a pair of RBM models generatively rather than discriminatively yielded better discriminative performance for one of the two tasks studied. This is consistent with the results of Ng and Jordan, who studied the generative-discriminative pair consisting of naive Bayes and logistic regression and found that naive Bayes can outperform logistic regression. Their theoretical and experimental results suggest that generative training is more likely to be superior to discriminative training when the number of training examples is small compared to the dimension of the input data space. Since fMRI studies are in this regime, generative training looks promising for fMRI-based classification tasks.
The two tasks studied in the present work are: (i) classify fMRI volumes as having been recorded in either the earlier or later part of the study; and (ii) classify fMRI volumes as corresponding to either finger or wrist movement. We found that generative training yielded better results for the early vs. late task, while discriminative training was superior for the finger vs. wrist task. Why does the relative performance of the two methods vary so much between the two tasks? One general observation is that generative training is trying to model many different features at once, many of which may be irrelevant to the discrimination task; whereas, by definition, discriminative models always focus on the task at hand. Thus there is a possibility for generative models to be “distracted” (from the point of view of discrimination) by rich structure in the data that is extraneous to the discrimination task. It seems reasonable that the more structure there is in the images that is irrelevant to the discrimination task, the poorer will be the discriminative power of the generative models. We hypothesize that a lot of the complex structure in the fMRI volumes is relevant to early vs. late classification, but that most of it is irrelevant to finger vs. wrist classification. In other words, we hypothesize that the long-term changes during stroke recovery are complex and distributed throughout the brain, and that, by contrast, the differences in brain activation between finger and wrist movements are relatively simple. It is interesting to compare these results with those of [13], which show, using the same data as the present study, that linear classification methods perform better than non-linear ones on the finger vs. wrist classification task, while for the early vs. late classification task the reverse is true.
We have evaluated blends of generative and discriminative training, as other authors have found that a combination can outperform both pure generative and pure discriminative training [9][2]. However, this did not occur in our experiments for either of the classification tasks. From the point of view of neuroscience or medicine, this work has two ultimate aims. The first is to elucidate neural changes that occur during recovery from a stroke. This is why we chose to study the early vs. late task. This classification task may shed light on neural representations, as the regions that change over time will be those that are useful for making this discrimination. The present study identifies a specific method that is very successful at the early vs. late classification task, but does not go on to address the problem of “opening the box”, i.e. shedding light on how the classification method works. Interpreting a set of RBM parameters is known to be more difficult than for linear models, but there are avenues available, such as automatic relevance determination [10], that can indicate which voxels are most significant in the discrimination. The second aim is to find general classification methods that can eventually be applied in clinical studies to classify patients as likely responders or non-responders to certain treatments on the basis of fMRI scans. We have shown that RBM-based models warrant further investigation in this context. In future work we intend to evaluate such models for their power to generalize strongly across subjects and recording sessions.

Acknowledgments

We thank Natasa Kovacevic for co-registering and motion-correcting the fMRI data used in this study. This work was supported by the Brain Network Recovery Group through a grant from the James S. McDonnell Foundation (No. 22002082).

References

[1] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks.
In Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, 2007.
[2] C. M. Bishop and J. Lasserre. Generative or discriminative? Getting the best of both worlds. Bayesian Statistics, 8:3–24, 2007.
[3] L. K. Hansen. Multivariate strategies in functional magnetic resonance imaging. Brain and Language, 102:186–191, August 2007.
[4] S. J. Hanson and Y. O. Halchenko. Brain reading using full brain support vector machines for object recognition: There is no “face” identification area. Neural Computation, 20(2):486–503, 2008.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[6] G. E. Hinton. To recognize shapes, first learn to generate images. In P. Cisek, T. Drew, and J. Kalaska, editors, Computational Neuroscience: Theoretical Insights into Brain Function. Elsevier, 2007.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
[8] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In ICML ’08: Proceedings of the 25th International Conference on Machine Learning. ACM, 2008.
[9] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In AAAI ’06: American Association for Artificial Intelligence National Conference on Artificial Intelligence, 2006.
[10] R. M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag, 1996.
[11] A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems 14: Proceedings of the 2002 [sic] Conference, 2002.
[12] A. J. O’Toole, F. Jiang, H. Abdi, N. Penard, J. P. Dunlop, and M. A. Parent.
Theoretical, statistical, and practical perspectives on pattern-based classification approaches to functional neuroimaging analysis. Journal of Cognitive Neuroscience, 19(11):1735–1752, 2007.
[13] T. Schmah, G. Yourganov, R. S. Zemel, G. E. Hinton, S. L. Small, and S. Strother. A comparison of classification methods for longitudinal fMRI studies. In preparation.
[14] S. L. Small, P. Hlustik, D. C. Noll, C. Genovese, and A. Solodkin. Cerebellar hemispheric activation ipsilateral to the paretic hand correlates with functional recovery after stroke. Brain, 125(7):1544, 2002.
[15] M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems 17. MIT Press, 2005.
[16] R. P. Woods, S. T. Grafton, C. J. Holmes, S. R. Cherry, and J. C. Mazziotta. Automated image registration: I. General methods and intrasubject, intramodality validation. Journal of Computer Assisted Tomography, 22:139–152, 1998.
Particle Filter-based Policy Gradient in POMDPs

Pierre-Arnaud Coquelin, CMAP, Ecole Polytechnique, coquelin@cmapx.polytechnique.fr
Romain Deguest∗, CMAP, Ecole Polytechnique, deguest@cmapx.polytechnique.fr
Rémi Munos, INRIA Lille - Nord Europe, SequeL project, remi.munos@inria.fr

Abstract

Our setting is a Partially Observable Markov Decision Process with continuous state, observation and action spaces. Decisions are based on a Particle Filter for estimating the belief state given past observations. We consider a policy gradient approach for parameterized policy optimization. For that purpose, we investigate sensitivity analysis of the performance measure with respect to the parameters of the policy, focusing on Finite Difference (FD) techniques. We show that the naive FD is subject to variance explosion because of the non-smoothness of the resampling procedure. We propose a more sophisticated FD method which overcomes this problem and establish its consistency.

1 Introduction

We consider a Partially Observable Markov Decision Problem (POMDP) (see e.g. (Lovejoy, 1991; Kaelbling et al., 1998)) defined by a state process (X_t)_{t≥1} ∈ X, an observation process (Y_t)_{t≥1} ∈ Y, a decision (or action) process (A_t)_{t≥1} ∈ A which depends on a policy (a mapping from all possible observation histories to actions), and a reward function r : X → R. Our goal is to find a policy π that maximizes a performance measure J(π), a function of future rewards, for example in a finite horizon setting:

J(π) def= E[ ∑_{t=1}^n r(X_t) ]. (1)

Other performance measures (such as in infinite horizon with discounted rewards) could be handled as well. In this paper, we consider the case of continuous state, observation, and action spaces. The state process is a Markov decision process taking its values in a (measurable) state space X, with initial probability measure µ ∈ M(X) (i.e. X_1 ∼ µ), and which can be simulated using a transition function F and independent random numbers, i.e.
for all t ≥ 1, X_{t+1} = F(X_t, A_t, U_t), with U_t i.i.d. ∼ ν, (2)

where F : X × A × U → X and (U, σ(U), ν) is a probability space. In many practical situations U = [0, 1]^p and U_t is a p-uple of pseudo random numbers. For simplicity, we adopt the notation F(x_0, a_0, u) def= F_µ(u), where F_µ is the first transition function (i.e. X_1 = F_µ(U_0) with U_0 ∼ ν). The observation process (Y_t)_{t≥1} lies in a (measurable) space Y and is linked with the state process by the conditional probability measure P(Y_t ∈ dy_t | X_t = x_t) = g(x_t, y_t) dy_t, where g : X × Y → [0, 1] is the marginal density function of Y_t given X_t. We assume that observations are conditionally independent given the state process. Here also, we assume that we can simulate an observation using a transition function G and independent random numbers, i.e. for all t ≥ 1, Y_t = G(X_t, V_t), where V_t i.i.d. ∼ ν (for the sake of simplicity we consider the same probability space (U, σ(U), ν)). Now, the action process (A_t)_{t≥1} depends on a policy π which assigns to each possible observation history Y_{1:t} (where we adopt the usual notation “1:t” to denote the collection of integers s such that 1 ≤ s ≤ t) an action A_t ∈ A. In this paper we will consider policies that depend on the belief state (also called filtering distribution) conditionally on past observations. The belief state, written b_t, belongs to M(X) (the space of all probability measures on X) and is defined by b_t(dx_t, Y_{1:t}) def= P(X_t ∈ dx_t | Y_{1:t}); it will be written b_t(dx_t), or even b_t for simplicity, when there is no risk of confusion. Because of the Markov property of the state dynamics, the belief state b_t(·, Y_{1:t}) is the most informative representation of the current state X_t given the history of past observations Y_{1:t}. It represents sufficient statistics for designing an optimal policy in the class of observation-based policies.

∗Also affiliated with Columbia University.
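The generative model just described can be sketched as a simple simulation loop. This is an illustrative sketch under the paper's conventions (X_1 = F_µ(U_0), Y_t = G(X_t, V_t)); the toy signatures of F, G and the policy are our own assumptions, not the authors' code.

```python
def simulate(F, G, policy, n, rng):
    """Generate (X_t, Y_t, A_t) for t = 1..n from transition functions and
    i.i.d. random numbers, following Equation (2) and Y_t = G(X_t, V_t).
    rng is assumed to be a numpy Generator; by convention F(None, None, u)
    plays the role of the first transition F_mu(u)."""
    xs, ys, actions = [], [], []
    x = F(None, None, rng.random(2))   # X_1 = F_mu(U_0), U_0 ~ nu
    for t in range(n):
        y = G(x, rng.random())         # Y_t = G(X_t, V_t), V_t ~ nu
        xs.append(x)
        ys.append(y)
        a = policy(ys)                 # action from observation history Y_{1:t}
        actions.append(a)
        x = F(x, a, rng.random(2))     # X_{t+1} = F(X_t, A_t, U_t)
    return xs, ys, actions
```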
The temporal and causal dependencies of the dynamics of a generic POMDP using belief-based policies are summarized in Figure 1 (left): at time t, the state X_t is unknown; only Y_t is observed, which enables (at least in theory) updating b_t based on the previous belief b_{t−1}. The policy π takes as input the belief state b_t and returns an action A_t (the policy may be deterministic or stochastic). However, since the belief state is an infinite dimensional object, and thus cannot be represented in a computer, we first simplify the class of policies considered here to those defined over a finite dimensional space of belief-features f : M(X) → R^K which represents relevant statistics of the filtering distribution. We write b_t(f_k) for the value of the k-th feature (among K) (where we use the usual notation b(f) def= ∫_X f(x) b(dx) for any function f defined on X and measure b ∈ M(X)), and denote by b_t(f) the vector (of size K) with components b_t(f_k). Examples of features are: f(x) = x (mean value), f(x) = x′x (for the covariance matrix). Other more complex features (e.g. an entropy measure) could be used as well. Such a policy π : R^K → A selects an action A_t = π(b_t(f)), which in turn yields a new state X_{t+1}. Except for simple cases, such as finite-state finite-observation processes (where a Viterbi algorithm could be applied (Rabiner, 1989)) and the case of linear dynamics and Gaussian noise (where a Kalman filter could be used), there is no closed-form representation of the belief state. Thus b_t must be approximated in our general setting. A popular method for approximating the filtering distribution is known as Particle Filters (PF) (also called Interacting Particle Systems or Sequential Monte-Carlo). Such particle-based approaches have been used in many applications (see e.g.
(Doucet et al., 2001) and (Del Moral, 2004) for a Feynman-Kac framework), for example for parameter estimation in Hidden Markov Models and control (Andrieu et al., 2004), and for mobile robot localization (Fox et al., 2001). A PF approximates the belief state b_t ∈ M(X) by a set of particles (x_t^{1:N}) (points of X), which are updated sequentially at each new observation by a transition-selection procedure. In particular, the belief feature b_t(f) is approximated by (1/N) ∑_{i=1}^N f(x_t^i), and the policy is thus a function that takes as input the activation of the feature f at the position of the particles: A_t = π((1/N) ∑_{i=1}^N f(x_t^i)). For such methods, the general scheme for POMDPs using Particle Filter-based policies is described in Figure 1 (right). In this paper, we consider a class of policies π_θ parameterized by a (multi-dimensional) parameter θ, and we search for the value of θ that maximizes the resulting criterion J(π_θ), now written J(θ) for simplicity. We focus on a policy gradient approach: the POMDP is replaced by an optimization problem on the space of policy parameters, and a (stochastic) gradient ascent on J(θ) is considered. For that purpose (and this is the object of this work) we investigate the estimation of ∇J(θ) (where the gradient ∇ refers to the derivative w.r.t. θ), with an emphasis on Finite-Difference techniques. There are many works on such policy gradient approaches in the field of Reinforcement Learning, see e.g. (Baxter & Bartlett, 1999), but the policies considered are generally not based on the result of a PF. Here, we explicitly consider a class of policies that are based on a belief state constructed by a PF. Our motivations for investigating this case are based on two facts: (1) the belief state represents sufficient statistics for optimality, as mentioned above; (2) PFs are a very popular and efficient tool for constructing the belief state in continuous domains.
After recalling the general approach for evaluating the performance of a PF-based policy (Section 2), we describe (in Section 3.1) a naive Finite-Difference (FD) approach (defined by a step size h) for estimating ∇J(θ). We discuss the bias-variance tradeoff and explain the problem of variance explosion when h is small. This problem is a consequence of the discontinuity of the resampling operation w.r.t. the parameter θ. Our contribution is detailed in Section 3.2: we propose a modified FD estimate for ∇J(θ) which (along the random sample path) has bias O(h²) and variance O(1/N), thus overcoming the drawback of the previous naive method. An algorithm is described and illustrated in Section 4 on a simple problem where the optimal policy exhibits a tradeoff between greedy reward optimization and localization.

Figure 1: Left figure: Causal and temporal dependencies in a POMDP. Right figure: PF-based scheme for POMDPs where the belief feature b_t(f) is approximated by (1/N) ∑_{i=1}^N f(x_t^i).

2 Particle Filters (PF)

We first describe a generic PF for estimating the belief state based on past observations. In Subsection 2.1 we detail how to control a real-world POMDP and in Subsection 2.2 how to estimate the performance of a given policy in simulation. In both cases, we assume that the models of the dynamics (state, observation) are known. The basic PF, called the Bootstrap Filter (see (Doucet et al., 2001) for details), approximates the belief state b_n by an empirical distribution b_n^N def= ∑_{i=1}^N w_n^i δ_{x_n^i} (where δ denotes a Dirac distribution) made of N particles x_n^{1:N}.
It consists in iterating the two following steps: at time t, given observation y_t,

• Transition step (also called importance sampling or mutation): a successor particle population x̃_t^{1:N} is generated according to the state dynamics from the previous population x_{t−1}^{1:N}. The (importance sampling) weights w_t^{1:N} def= g(x̃_t^{1:N}, y_t) / ∑_{j=1}^N g(x̃_t^j, y_t) are evaluated.

• Selection step: resample (with replacement) N particles x_t^{1:N} from the set x̃_t^{1:N} according to the weights w_t^{1:N}. We write x_t^{1:N} def= x̃_t^{k_t^{1:N}}, where k_t^{1:N} are the selection indices.

Resampling is used to avoid the problem of degeneracy of the algorithm, i.e. that most of the weights decrease to zero. It consists in selecting new particle positions so as to preserve a consistency property (i.e. ∑_{i=1}^N w_t^i φ(x̃_t^i) = E[(1/N) ∑_{i=1}^N φ(x_t^i)]). The simplest version, introduced in (Gordon et al., 1993), chooses the selection indices k_t^{1:N} by independent sampling from the set 1:N according to a multinomial distribution with parameters w_t^{1:N}, i.e. P(k_t^i = j) = w_t^j, for all 1 ≤ i ≤ N. The idea is to replicate the particles in proportion to their weights. Many variants have been proposed in the literature, among which the stratified resampling method (Kitagawa, 1996), which is optimal in terms of variance, see e.g. (Cappé et al., 2005). Convergence issues of b_n^N(f) to b_n(f) (e.g. Laws of Large Numbers or Central Limit Theorems) are discussed in (Del Moral, 2004) or (Douc & Moulines, 2008). For our purpose we note that under weak conditions on the feature f, we have the consistency property: b^N(f) → b(f), almost surely.

2.1 Control of a real system by a PF-based policy

We describe in Algorithm 1 how one may use a PF-based policy π_θ for the control of a real-world system. Note that from our definition of F_µ, the particles are initialized with x̃_1^{1:N} i.i.d. ∼ µ.
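The transition-selection iteration above can be sketched as follows. This is an illustrative sketch, not the authors' code, assuming a user-supplied transition function F and observation density g, with multinomial resampling as in (Gordon et al., 1993); Gaussian process noise is our assumption for concreteness.

```python
import numpy as np

def bootstrap_pf_step(particles, action, F, g, y, rng):
    """One transition-selection step of the Bootstrap Filter.
    particles: array of shape (N, d); F: state transition; g: observation
    density evaluated at the new observation y."""
    N = len(particles)
    # Transition (mutation): propagate each particle through the dynamics.
    u = rng.standard_normal(particles.shape)   # process noise (assumed Gaussian)
    proposed = F(particles, action, u)
    # Importance weights from the observation likelihood, normalized.
    w = g(proposed, y)
    w = w / w.sum()
    # Selection: multinomial resampling replicates particles by weight.
    idx = rng.choice(N, N, p=w)
    return proposed[idx]
```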
2.2 Estimation of J(θ) in simulation

Now, for the purpose of policy optimization, one should be capable of evaluating the performance of a policy in simulation. J(θ), defined by (1), may be estimated in simulation provided that the dynamics of the state and observation are known.

Algorithm 1 Control of a real-world POMDP
for t = 1 to n do
  Observe: y_t,
  Particle transition step: set x̃_t^{1:N} = F(x_{t−1}^{1:N}, a_{t−1}, u_{t−1}^{1:N}) with u_{t−1}^{1:N} i.i.d. ∼ ν; set w_t^{1:N} = g(x̃_t^{1:N}, y_t) / ∑_{j=1}^N g(x̃_t^j, y_t),
  Particle resampling step: set x_t^{1:N} = x̃_t^{k_t^{1:N}}, where k_t^{1:N} are given by the selection step according to the weights w_t^{1:N},
  Select action: a_t = π_θ((1/N) ∑_{i=1}^N f(x_t^i)),
end for

Making explicit the dependency w.r.t. the random sample path, written ω (which accounts for the state and observation stochastic dynamics and the random numbers used in the PF-based policy), we write J(θ) = E_ω[J_ω(θ)], where J_ω(θ) def= ∑_{t=1}^n r(X_{t,ω}(θ)), making the dependency of the state on ω and θ explicit. Algorithm 2 describes how to evaluate a PF-based policy in simulation. The function returns an estimate, written J_ω^N(θ), of J_ω(θ). Using previously mentioned asymptotic convergence results for PFs, one has lim_{N→∞} J_ω^N(θ) = J_ω(θ), almost surely (a.s.). In order to approximate J(θ), one would perform several calls to the algorithm, receiving J_{ω_m}^N(θ) (for 1 ≤ m ≤ M), and calculate their empirical mean (1/M) ∑_{m=1}^M J_{ω_m}^N(θ), which tends to J(θ) a.s. when M, N → ∞.

Algorithm 2 Estimation of J_ω(θ) in simulation
for t = 1 to n do
  Define state: x_t = F(x_{t−1}, a_{t−1}, u_{t−1}) with u_{t−1} ∼ ν,
  Define observation: y_t = G(x_t, v_t) with v_t ∼ ν,
  Particle transition step: set x̃_t^{1:N} = F(x_{t−1}^{1:N}, a_{t−1}, u_{t−1}^{1:N}) with u_{t−1}^{1:N} i.i.d. ∼ ν; set w_t^{1:N} = g(x̃_t^{1:N}, y_t) / ∑_{j=1}^N g(x̃_t^j, y_t),
  Particle resampling step: set x_t^{1:N} = x̃_t^{k_t^{1:N}}, where k_t^{1:N} are given by the selection step according to the weights w_t^{1:N},
  Select action: a_t = π_θ((1/N) ∑_{i=1}^N f(x_t^i)),
end for
Return J_ω^N(θ) def= ∑_{t=1}^n r(x_t).
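The evaluation loop of Algorithm 2 can be sketched as follows; this is a minimal illustrative sketch, assuming user-supplied dynamics F and G, observation density g, reward r, feature f, and policy (all signatures are ours, not the authors'), a uniform initial law, and multinomial resampling.

```python
import numpy as np

def estimate_J(theta, F, G, g, r, policy, f, n, N, rng):
    """Sketch of Algorithm 2: simulate one sample path and return the
    PF-based return estimate J_w^N(theta). F, G: state/observation
    dynamics (here drawing their own noise from rng); g: observation
    density; r: reward; policy(theta, belief_feature) -> action."""
    x = rng.uniform(-1.0, 1.0, size=2)               # x_1 ~ mu (assumed uniform)
    particles = rng.uniform(-1.0, 1.0, size=(N, 2))  # particle cloud ~ mu
    a, total = None, 0.0
    for t in range(n):
        if t > 0:
            x = F(x, a, rng)                         # state transition
            particles = F(particles, a, rng)         # particle transition step
        y = G(x, rng)                                # noisy observation
        w = g(particles, y)
        w = w / w.sum()                              # importance weights
        particles = particles[rng.choice(N, N, p=w)] # multinomial resampling
        a = policy(theta, f(particles))              # PF-based action
        total += r(x)                                # accumulate reward
    return total
```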
3 A policy gradient approach

Now we want to optimize the value of the parameter in simulation. Then, once a “good” parameter θ∗ is found, we would use Algorithm 1 to control the real system using the corresponding PF-based policy π_{θ∗}. Gradient approaches have been studied in the field of continuous space Hidden Markov Models in (Fichoud et al., 2003; Cérou et al., 2001; Doucet & Tadic, 2003). The authors used a likelihood ratio approach to evaluate ∇J(θ). Such methods suffer from high variance, in particular for problems with small noise. In order to reduce the variance, it has been proposed in (Poyadjis et al., 2005) to use a marginal particle filter instead of a simple path-based particle filter. This approach is efficient in terms of variance reduction, but its computational complexity is O(N²). Here we investigate a pathwise (i.e. along the random sample path ω) sensitivity analysis of J_ω(θ) w.r.t. θ for the purpose of (stochastic) gradient optimization. We start with a naive Finite Difference (FD) approach and show the problem of variance explosion. Then we provide an alternative, called common-indices FD, which overcomes this problem. In the sequel, we make the assumption that all relevant functions (F, g, f, π) are continuously differentiable w.r.t. their respective variables. Note that although this is not explicitly mentioned, all such functions may depend on time.

3.1 Naive Finite-Difference (FD) method

Let us consider the derivative of J(θ) component-wise, writing ∂J(θ) for the derivative of J(θ) w.r.t. a one-dimensional parameter. If the parameter θ is multi-dimensional, the derivative is calculated in each direction. For h > 0 we define the centered finite-difference quotient I_h def= [J(θ+h) − J(θ−h)] / (2h). Since J(θ) is differentiable, lim_{h→0} I_h = ∂J(θ). Consequently, a method for approximating ∂J(θ) consists in estimating I_h for a sufficiently small h. We know that J(θ) can be numerically estimated by (1/M) ∑_{m=1}^M J_{ω_m}^N(θ).
Thus, it seems natural to estimate I_h by

I_h^{N,M} def= (1/2h) [ (1/M) ∑_{m=1}^M J_{ω_m}^N(θ+h) − (1/M) ∑_{m′=1}^M J_{ω_{m′}}^N(θ−h) ],

where we used independent random numbers to evaluate J(θ+h) and J(θ−h). From the consistency of the PF, we deduce that lim_{h→0} lim_{M,N→∞} I_h^{N,M} = ∂J(θ). This naive FD estimate exhibits the following bias-variance tradeoff (see (Coquelin et al., 2008) for the proof):

Proposition 1 (Bias-variance trade-off). Assume that J(θ) is three times continuously differentiable in a small neighborhood of θ. Then the asymptotic (as N → ∞) bias of the naive FD estimate I_h^{N,M} is of order O(h²) and its variance is O(N⁻¹M⁻¹h⁻²).

In order to reduce the bias, one should choose a small h, but then the variance blows up. Additional computational resources (a larger number of particles N) help control the variance. However, in practice, e.g. for stochastic optimization, this leads to an intractable amount of computational effort, since any consistent FD-based optimization algorithm (such as the Kiefer-Wolfowitz algorithm) will need to consider a sequence of steps h that decreases with the number of gradient iterations. But if the number of particles is bounded, the variance term will diverge, which may prevent the stochastic gradient algorithm from converging to a local optimum. In order to reduce the variance of the previous estimator when h is small, one may use common random numbers to estimate both J(θ+h) and J(θ−h) (i.e. ω_m = ω_{m′}). The variance then reduces to O(N⁻¹M⁻¹h⁻¹) (see e.g. (Glasserman, 2003)), which still explodes for small h. Now, under the additional assumption that, along almost all random sample paths ω, the function θ ↦ J_ω^N(θ) is a.s. continuous, the variance would reduce to O(N⁻¹M⁻¹) (see Section 7.1 of (Glasserman, 2003)). Unfortunately, this is not the case here, because of the discontinuity of the PF resampling operation w.r.t. θ.
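For concreteness, the naive centered FD estimator with common random numbers can be sketched as follows; J here stands for any stochastic simulator taking a parameter and a seed (the names are ours, not the paper's).

```python
import numpy as np

def naive_fd_gradient(J, theta, h, M, seed=0):
    """Centered finite-difference estimate of dJ/dtheta:
    (1/2h) * mean over m of [ J(theta+h; w_m) - J(theta-h; w_m) ],
    using common random numbers (the same seed w_m for both evaluations)."""
    diffs = []
    for m in range(M):
        s = seed + m   # common random number shared by the +h and -h runs
        diffs.append(J(theta + h, s) - J(theta - h, s))
    return np.mean(diffs) / (2.0 * h)
```

With a smooth deterministic J this recovers the exact derivative; with a PF-based J it suffers the variance explosion for small h discussed above.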
Indeed, for a fixed ω, the selection indices k_t^{1:N} (taking values in the finite set 1:N) are usually a non-smooth function of the weights w_t^{1:N}, which depend on θ. Therefore the naive FD method using a PF cannot be applied in general, because of the variance explosion of the estimate when h is small, even when using common random numbers.

3.2 Common-indices Finite-Difference method

Let us consider J_ω(θ) = ∑_{t=1}^n r(X_{t,ω}(θ)), making explicit the dependency of the state on θ and a random sample path ω. Under our assumptions, the gradient ∂J_ω(θ) is well defined. Now, let us fix ω. For clarity, we now omit the ω dependency when no confusion is possible. The function θ ↦ X_t(θ) (for any 1 ≤ t < n) is smooth because all transition functions are smooth, the policy is smooth, and the belief state b_t is smooth w.r.t. θ. Underlining the dependency of the belief feature b_{t,θ}(f) on θ, we write:

θ →(smooth) b_{t,θ}(f) →(smooth) X_t(θ) →(smooth) J_ω(θ).

As already mentioned, the problem with the naive FD method is that the PF estimate b_{t,θ}^N(f) = (1/N) ∑_{i=1}^N f(x_t^i(θ)) of b_{t,θ}(f) is not smooth w.r.t. θ, because it depends on the selection indices k_{1:t}^{1:N}(θ) which, taken as a function of θ (through the weights), are not continuous. We write:

θ →(non-smooth) b_{t,θ}^N(f) = (1/N) ∑_{i=1}^N f(x_t^i(θ)) →(smooth) J_ω^N(θ).

So a natural idea to recover continuity in an FD method would consist in using exactly the same selection indices for the quantities related to θ+h and θ−h. However, using the same indices means using the same weights during the selection procedure for both trajectories. But this would lead to a wrong estimator, because the weights strongly depend on θ through the observation function g. Our idea is thus to use the same selection indices, but to use a likelihood ratio in the belief feature estimation. More precisely, let us write k_t^{1:N}(θ) for the selection indices obtained for parameter θ, and consider a parameter θ′ in a small neighborhood of θ.
Then, a PF estimate for b_{t,θ′}(f) is

b_{t,θ′}^N(f) def= ∑_{i=1}^N [ l_t^i(θ, θ′) / ∑_{j=1}^N l_t^j(θ, θ′) ] f(x_t^i(θ′)), with l_t^i(θ, θ′) def= [ ∏_{s=1}^t g(x_s^i(θ′), y_s(θ′)) ] / [ ∏_{s=1}^t g(x_s^i(θ), y_s(θ)) ], (3)

the likelihood ratios computed along the particle paths, where the particles x_{1:t}^{1:N}(θ′) have been generated using the same selection indices k_{1:t}^{1:N}(θ) (and the same random sample path ω) as those used for θ. The next result states the consistency of this estimate and is our main contribution (see (Coquelin et al., 2008) for the proof).

Proposition 2. Under weak conditions on f (see e.g. (Del Moral & Miclo, 2000)), there exists a neighborhood of θ such that for any θ′ in this neighborhood, b_{t,θ′}^N(f) defined by (3) is a consistent estimator of b_{t,θ′}(f), i.e. lim_{N→∞} b_{t,θ′}^N(f) = b_{t,θ′}(f) almost surely.

Thus, for any perturbed value θ′ around θ, we may run a PF where, in the resampling step, we use the same selection indices k_{1:n}^{1:N}(θ) as those obtained for θ. The mapping θ′ ↦ b_{t,θ′}^N(f) is then smooth. We write:

θ′ →(smooth) b_{t,θ′}^N(f) defined by (3) →(smooth) J_ω^N(θ′).

From the previous proposition we deduce that J_ω^N(θ) is a consistent estimator of J_ω(θ). A possible implementation of the gradient estimation is described by Algorithm 3. The algorithm works by updating 3 families of state, observation, and particle populations, denoted by ’+’, ’−’, and ’o’ for the values of the parameter θ+h, θ−h, and θ respectively. For the performance measure defined by (1), the algorithm returns the common-indices FD estimator ∂_h J_ω^N def= (1/2h) ∑_{t=1}^n [r(x_t^+) − r(x_t^−)], where x_{1:n}^+ and x_{1:n}^− are upper and lower trajectories simulated under the random sample path ω. Note that although the selection indices are the same, the particle populations ’+’, ’−’, and ’o’ are different, but very close (when h is small). Hence the likelihood ratios l_t^{1:N} converge to 1 when h → 0, which avoids a source of variance when h is small.
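The reweighting in Equation (3) amounts to a normalized likelihood-ratio average of the particle feature values. A minimal sketch follows; working in log space for numerical stability is an implementation choice of ours, not from the paper.

```python
import numpy as np

def reweighted_belief_feature(f_vals, loglik_perturbed, loglik_nominal):
    """Estimate b_{t,theta'}(f) as in Equation (3): weight each particle's
    feature value f(x_t^i(theta')) by the normalized likelihood ratio
    l_t^i(theta, theta'), accumulated along the particle path up to time t
    and supplied here as per-particle log-likelihood sums."""
    log_ratio = loglik_perturbed - loglik_nominal   # log l_t^i(theta, theta')
    log_ratio = log_ratio - log_ratio.max()         # stabilize the exponentials
    l = np.exp(log_ratio)
    weights = l / l.sum()
    return np.dot(weights, f_vals)
```

When θ′ = θ the ratios are all 1 and the estimate reduces to the plain particle average (1/N) ∑ f(x_t^i), consistent with the text's remark that the ratios converge to 1 as h → 0.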
The resulting estimator ∂_h^M J_ω^N def= (1/M) ∑_{m=1}^M ∂_h J_{ω_m}^N of ∂J(θ) averages, over M sample paths ω_{1:M}, the return of Algorithm 3 called M times. This estimator overcomes the drawbacks of the naive FD estimate: its asymptotic bias is of order O(h²) (like any centered FD scheme), but its variance is of order O(N⁻¹M⁻¹) (the Central Limit Theorem applies to the belief feature estimator (3), and thus to ∂_h J_ω^N as well). Since the variance does not degenerate when h is small, one should choose h as small as possible to reduce the mean-squared estimation error. The complexity of Algorithm 3 is linear in the number of particles N. Note that in the current implementation we used 3 populations of particles per derivative. Of course, we could consider a non-centered FD scheme approximating the derivative by [J(θ+h) − J(θ)]/h, which is of first order but requires only 2 particle populations. If the parameter is multidimensional, the full gradient estimate could be obtained by using K+1 populations of particles. Of course, in gradient ascent methods, such FD gradient estimates may be advantageously combined with clever techniques such as simultaneous perturbation stochastic approximation (Spall, 2000) and conjugate or second-order gradient approaches. Note that when h → 0, our estimator converges to an Infinitesimal Perturbation Analysis (IPA) estimator (Glasserman, 1991). The same ideas as those presented above could be used to derive an IPA estimator. The advantage of IPA is that it would use one population of particles only (for the full gradient), which may be interesting when the number of parameters K is large. However, the main drawback is that this approach would require computing analytically the derivatives of all the functions w.r.t. their respective variables, which may be time consuming for the programmer.
4 Numerical Experiment Because of space constraints, our purpose here is simply to illustrate numerically the theoretical findings of previous FD methods (in terms of bias-variance contributions) rather than to provide a full example of POMDP policy optimization. We consider a very simple navigation task for a 2d robot. The robot is defined by its coordinates xt ∈R2. The observation is a noisy measurement 6 Algorithm 3 Common-indices Finite Difference estimate of ∂Jω Initialize likelihood ratios: Set l1:N,+ 0 = 1, l1:N,− 0 = 1, for t = 1 to n do State processes: Sample ut−1 ∼ν and Set xo t = F(xo t−1, ao t−1, ut−1), set x+ t = F(x+ t−1, a+ t−1, ut−1), set x− t = F(x− t−1, a− t−1, ut−1), Observation processes: Sample vt ∼ν and Set yo t = G(xo t, vt), set y+ t = G(x+ t , vt), set y− t = G(x− t , vt), Particle transition step: Draw u1:N t−1 iid ∼ν and Set ex1:N,o t = F(x1:N,o t−1 , ao t−1, u1:N t−1), Set ex1:N,+ t = F(x1:N,+ t−1 , a+ t−1, u1:N t−1), set ex1:N,− t = F(x1:N,− t−1 , a− t−1, u1:N t−1), Set w1:N t = g(ex1:N,o t ,yo t ) PN j=1 g(exj,o t ,yo t ), Set l1:N,+ t = g(ex1:N,+ t ,y+ t ) g(ex1:N,o t ,yo t ) l1:N,+ t−1 , set l1:N,− t = g(ex1:N,− t ,y− t ) g(ex1:N,o t ,yo t ) l1:N,− t−1 , Particle resampling step: Let k1:N t be the selection indices obtained from the weights w1:N t , Set x1:N,o t = exk1:N t ,o t , set x1:N,+ t = exk1:N t ,+ t , set x1:N,− t = exk1:N t ,− t , Set l1:N,+ t = lk1:N t ,+ t , set l1:N,− t = lk1:N t ,− t , Actions: Set ao t = πθ ¡ 1 N PN i=1 f(xi,o t ) ¢ , Set a+ t = πθ+h ¡ PN i=1 li,+ t PN j=1 lj,+ t f(xi,+ t ) ¢ , set a− t = πθ−h ¡ PN i=1 li,− t PN j=1 lj,− t f(xi,− t ) ¢ , end for Return: ∂hJN ω def = Pn t=1 r(x+ t )−r(x− t ) 2h . of the squared distance to the origin (the goal): yt def = ||xt||2 + vt, where vt iid ∼N(0, σ2 y) (σ2 y is the variance of the noise). 
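A toy one-dimensional rendition of Algorithm 3 may look as follows (a sketch under simplified, hypothetical dynamics and policy, not the 2-d experiment of the paper: x' = x + a + u, y = x + v, reward r(x) = x^2, and a linear policy a = -theta*m on the belief mean m; all names are ours):

```python
import math
import random

def gauss_lik(x, y, sig=0.2):
    """Observation likelihood g(x, y): unnormalized Gaussian kernel."""
    return math.exp(-(y - x) ** 2 / (2.0 * sig ** 2))

def common_indices_fd(theta, h=1e-5, N=200, n=10, seed=0):
    """Common-indices FD sketch in the spirit of Algorithm 3. Three coupled
    populations ('o', '+', '-') share all noise variables and the selection
    indices computed from the 'o' weights; the '+'/'-' beliefs are corrected
    by likelihood ratios. Returns sum_t (r(x_t^+) - r(x_t^-)) / (2h)."""
    rng = random.Random(seed)
    keys = ('o', '+', '-')
    thetas = {'o': theta, '+': theta + h, '-': theta - h}
    xs = {k: 1.0 for k in keys}                      # true states
    init = [rng.gauss(1.0, 0.5) for _ in range(N)]
    parts = {k: list(init) for k in keys}            # particle populations
    lr = {'+': [1.0] * N, '-': [1.0] * N}            # likelihood ratios
    acc = 0.0
    for _ in range(n):
        # actions from the belief means (ratio-weighted for '+' and '-')
        a = {'o': -thetas['o'] * sum(parts['o']) / N}
        for k in ('+', '-'):
            z = sum(lr[k])
            a[k] = -thetas[k] * sum(l * x for l, x in zip(lr[k], parts[k])) / z
        # shared state and observation noises
        u, v = rng.gauss(0.0, 0.05), rng.gauss(0.0, 0.2)
        ys = {}
        for k in keys:
            xs[k] += a[k] + u
            ys[k] = xs[k] + v
        # shared per-particle transition noises
        us = [rng.gauss(0.0, 0.05) for _ in range(N)]
        for k in keys:
            parts[k] = [x + a[k] + du for x, du in zip(parts[k], us)]
        w = [gauss_lik(x, ys['o']) for x in parts['o']]
        for k in ('+', '-'):
            lr[k] = [l * gauss_lik(x, ys[k]) / max(wi, 1e-300)
                     for l, x, wi in zip(lr[k], parts[k], w)]
        # resample every population with the selection indices of 'o'
        idx = rng.choices(range(N), weights=w, k=N)
        for k in keys:
            parts[k] = [parts[k][i] for i in idx]
        for k in ('+', '-'):
            lr[k] = [lr[k][i] for i in idx]
        acc += (xs['+'] ** 2 - xs['-'] ** 2) / (2.0 * h)  # r(x) = x^2
    return acc
```

Because the selection indices and noises are shared, the estimate stays stable as h shrinks, which is the point of the common-indices construction.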
At each time step, the agent may choose a direction $a_t$ (with $\|a_t\| = 1$), which moves the state by a step of size $d$ in that direction: $x_{t+1} = x_t + d\,a_t + u_t$, where $u_t \stackrel{i.i.d.}{\sim} \mathcal N(0, \sigma_x^2 I)$ is an additive noise. The initial state $x_1$ is drawn from $\nu$, the uniform distribution over the square $[-1, 1]^2$. We consider a class of policies that depend on a single belief feature: the mean of the belief state (i.e. $f(x) = x$). The PF-based policy thus uses the barycenter of the particle population, $m_t \stackrel{\mathrm{def}}{=} \frac{1}{N}\sum_{i=1}^N x^i_t$. Let us write $m^\perp$ for the $+90°$ rotation of a vector $m$. We consider the policies
$$\pi_\theta(m) = \frac{-(1-\theta)m + \theta m^\perp}{\|-(1-\theta)m + \theta m^\perp\|}$$
parameterized by $\theta \in [0, 1]$. The chosen action is thus $a_t = \pi_\theta(m_t)$. If the robot were well localized (i.e. $m_t$ close to $x_t$), the policy $\pi_{\theta=0}$ would move the robot towards the goal, whereas $\pi_{\theta=1}$ would move it in an orthogonal direction. The performance measure (to be minimized) is $J(\theta) = \mathbb E[\|x_n\|^2]$, where $n$ is a fixed time. We plot in Figure 2 the performance and gradient estimates obtained when running Algorithms 2 and 3, respectively, with the numerical values $N = 10^3$, $M = 10^2$, $h = 10^{-6}$, $n = 10$, $\sigma_x = 0.05$, $\sigma_y = 0.05$, $d = 0.1$. It is interesting to note that in this problem the performance is optimal for $\theta^* \simeq 0.3$ (which is slightly better than $\theta = 0$). $\theta = 0$ would correspond to the best feedback policy if the state were perfectly known. However, moving in a direction orthogonal to the goal helps improve localization. Here, the optimal policy exhibits a tradeoff between greedy optimization and localization.
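The policy pi_theta(m) of the experiment can be written down directly (a minimal sketch; function names are ours):

```python
import math

def rotate90(m):
    """+90 degree rotation of a 2-d vector m, i.e. m_perp."""
    return (-m[1], m[0])

def policy(theta, m):
    """pi_theta(m): unit vector interpolating between moving toward the
    goal (-m, for theta = 0) and moving orthogonally (m_perp, theta = 1)."""
    mx, my = m
    px, py = rotate90(m)
    vx = -(1.0 - theta) * mx + theta * px
    vy = -(1.0 - theta) * my + theta * py
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

For theta = 0 and belief mean (1, 0) the action points at the origin, (-1, 0); for theta = 1 it is the orthogonal unit vector (0, 1).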
                    h = 10^0           h = 10^-2          h = 10^-4          h = 10^-6
NFD  bias / var     0.57 / 6.05e-3     0.31 / 0.13        unreliable / 25.3  unreliable / 6980
CIFD bias / var     0.428 / 0.022      0.00192 / 0.019    0.00247 / 0.02     0.00162 / 0.0188
The table above shows the (empirically measured) bias and variance of the naive FD (NFD) method (using common random numbers) and the common-indices FD (CIFD) method, for the specific value $\theta = 0.5$ (with $N = 10^3$, $M = 500$). As predicted, the variance of the NFD approach makes this method inapplicable, whereas that of the CIFD remains reasonable.
Figure 2: Left: estimator $\frac{1}{M}\sum_{m=1}^M J^N_{\omega_m}(\theta)$ of $J(\theta)$ and confidence intervals $\pm\sqrt{\operatorname{Var}[J^N_\omega(\theta)]/M}$. Right: estimator $\frac{1}{M}\sum_{m=1}^M \partial_h J^N_{\omega_m}(\theta)$ of $\partial J(\theta)$ and confidence intervals $\pm\sqrt{\operatorname{Var}[\partial_h J^N_\omega(\theta)]/M}$.
References
Andrieu, C., Doucet, A., Singh, S., & Tadic, V. (2004). Particle methods for change detection, identification and control. Proceedings of the IEEE, 92, 423–438.
Baxter, J., & Bartlett, P. (1999). Direct gradient-based reinforcement learning. Journal of Artificial Intelligence Research.
Cappé, O., Douc, R., & Moulines, E. (2005). Comparison of resampling schemes for particle filtering. 4th International Symposium on Image and Signal Processing and Analysis.
Cérou, F., LeGland, F., & Newton, N. (2001). Stochastic particle methods for linear tangent filtering equations, 231–240. IOS Press, Amsterdam.
Coquelin, P., Deguest, R., & Munos, R. (2008). Sensitivity analysis in particle filters. Application to policy optimization in POMDPs (Technical Report). INRIA, RR-6710.
Del Moral, P. (2004). Feynman-Kac formulae, genealogical and interacting particle systems with applications. Springer.
Del Moral, P., & Miclo, L. (2000). Branching and interacting particle systems
approximations of feynman-kac formulae with applications to non-linear filtering. S´eminaire de probabilit´es de Strasbourg, 34, 1–145. Douc, R., & Moulines, E. (2008). Limit theorems for weighted samples with applications to sequential monte carlo methods. To appear in Annals of Statistics. Doucet, A., Freitas, N. D., & Gordon, N. (2001). Sequential monte carlo methods in practice. Springer. Doucet, A., & Tadic, V. (2003). Parameter estimation in general state-space models using particle methods. Ann. Inst. Stat. Math. Fichoud, J., LeGland, F., & Mevel, L. (2003). Particle-based methods for parameter estimation and tracking : numerical experiments (Technical Report 1604). IRISA. Fox, D., Thrun, S., Burgard, W., & Dellaert, F. (2001). Particle filters for mobile robot localization. Sequential Monte Carlo Methods in Practice. New York: Springer. Glasserman, P. (1991). Gradient estimation via perturbation analysis. Kluwer. Glasserman, P. (2003). Monte carlo methods in financial engineering. Springer. Gordon, N., Salmond, D., & Smith, A. F. M. (1993). Novel approach to nonlinear and non-gaussian bayesian state estimation. Proceedings IEE-F (pp. 107–113). Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101, 99–134. Kitagawa, G. (1996). Monte-Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Stat., 5, 1–25. Lovejoy, W. S. (1991). A survey of algorithmic methods for partially observable Markov decision processes. Annals of Operations Research, 28, 47–66. Poyadjis, G., Doucet, A., & Singh, S. (2005). Particle methods for optimal filter derivative: Application to parameter estimation. IEEE ICASSP. Rabiner, L. R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 257–286. Spall, J. C. (2000). 
Adaptive stochastic approximation by the simultaneous perturbation method. IEEE transaction on automatic control, 45, 1839–1853. 8
Algorithms for Infinitely Many-Armed Bandits Yizao Wang∗ Department of Statistics - University of Michigan 437 West Hall, 1085 South University, Ann Arbor, MI, 48109-1107, USA yizwang@umich.edu Jean-Yves Audibert Université Paris Est, Ecole des Ponts, ParisTech, Certis & Willow - ENS / INRIA, Paris, France audibert@certis.enpc.fr Rémi Munos INRIA Lille - Nord Europe, SequeL project, 40 avenue Halley, 59650 Villeneuve d’Ascq, France remi.munos@inria.fr Abstract We consider multi-armed bandit problems where the number of arms is larger than the possible number of experiments. We make a stochastic assumption on the mean-reward of a new selected arm which characterizes its probability of being a near-optimal arm. Our assumption is weaker than in previous works. We describe algorithms based on upper-confidence-bounds applied to a restricted set of randomly selected arms and provide upper-bounds on the resulting expected regret. We also derive a lower-bound which matches (up to a logarithmic factor) the upper-bound in some cases. 1 Introduction Multi-armed bandit problems describe typical situations where learning and optimization should be balanced in order to achieve good cumulative performances. Usual multi-armed bandit problems (see e.g. [8]) consider a finite number of possible actions (or arms) from which the learner may choose at each iteration. The number of arms is typically much smaller than the number of experiments allowed, so exploration of all possible options is usually performed and combined with exploitation of the apparently best ones. In this paper, we investigate the case when the number of arms is infinite (or larger than the available number of experiments), which makes the exploration of all the arms an impossible task to achieve: if no additional assumption is made, it may be arbitrarily hard to find a near-optimal arm. Here we consider a stochastic assumption on the mean-reward of any new selected arm. 
When a new arm k is pulled, its mean-reward µk is assumed to be an independent sample from a fixed distribution. Moreover, given the mean-reward µk for any arm k, the distribution of the reward is only required to be uniformly bounded and non-negative without any further assumption. Our assumptions essentially characterize the probability of pulling near-optimal arms. That is, given µ∗∈[0, 1] as the best possible mean-reward and β ≥0 a parameter of the mean-reward distribution, the probability that a new arm is ϵ-optimal is of order ϵβ for small ϵ, i.e. P(µk ≥µ∗−ϵ) = Θ(ϵβ) for ϵ →0. Note that we write f(ϵ) = Θ(g(ϵ)) for ϵ →0 when ∃c1, c2, ϵ0 > 0 such that ∀ϵ ≤ϵ0, c1g(ϵ) ≤f(ϵ) ≤c2g(ϵ). ∗The major part of this work was completed during the research internship at Certis and INRIA SequeL. 1 Like in multi-armed bandits, this setting exhibits a trade off between exploitation (selection of the arms that are believed to perform well) and exploration. The exploration takes two forms here: discovery (pulling a new arm that has never been tried before) and sampling (pulling an arm already discovered in order to gain information about its actual mean-reward). Numerous applications can be found e.g. in [5]. It includes labor markets (a worker has many opportunities for jobs), mining for valuable resources (such as gold or oil) when there are many areas available for exploration (the miner can move to another location or continue in the same location, depending on results), and path planning under uncertainty in which the path planner has to decide among a route that has proved to be efficient in the past (exploitation), or a known route that has not been explored many times (sampling), or a brand new route that has never been tried before (discovery). Let us write kt the arm selected by our algorithm at time t. We define the regret up to time n as Rn = nµ∗−Pn t=1 µkt. 
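To make the setting concrete, one convenient way to realize the assumption $P(\mu_k \ge \mu^* - \epsilon) = \Theta(\epsilon^\beta)$ and to compute the regret $R_n$ is the following sketch (the particular arm distribution is an illustrative assumption of ours; $\beta = 1$ recovers a uniform distribution of mean-rewards on $(0, \mu^*)$):

```python
import random

def draw_arm_mean(mu_star, beta, rng):
    """Draw a fresh arm's mean-reward so that P(mu > mu_star - eps)
    = (eps/mu_star)^beta for eps in [0, mu_star] -- one convenient
    distribution satisfying the Theta(eps^beta) assumption: the gap
    Delta = mu_star - mu is mu_star * U^(1/beta) with U uniform(0,1)."""
    return mu_star * (1.0 - rng.random() ** (1.0 / beta))

def regret(mu_star, pulled_means):
    """R_n = n*mu_star - sum_t mu_{k_t} over the arms actually pulled."""
    return len(pulled_means) * mu_star - sum(pulled_means)
```

For example, pulling a fresh arm at every one of n steps would incur per-step regret equal to the expected gap, illustrating why an algorithm must eventually stop discovering and start exploiting.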
From the tower rule, $\mathbb E R_n$ is the expected difference between the rewards we would have obtained by drawing an optimal arm (an arm with mean-reward equal to $\mu^*$) and the rewards we did obtain during time steps $1, \dots, n$. Our goal is to design an arm-pulling strategy so as to minimize this regret. Overview of our results: We write $v_n = \tilde O(u_n)$ when, for some $n_0, C > 0$, $v_n \le C u_n (\log(u_n))^2$ for all $n \ge n_0$. We assume that the rewards of the arms lie in $[0, 1]$. Our regret bounds depend on whether $\mu^* = 1$ or $\mu^* < 1$. For $\mu^* = 1$, our algorithms are such that $\mathbb E R_n = \tilde O(n^{\beta/(1+\beta)})$. For $\mu^* < 1$, we have $\mathbb E R_n = \tilde O(n^{\beta/(1+\beta)})$ if $\beta > 1$, and (only) $\mathbb E R_n = \tilde O(n^{1/2})$ if $\beta \le 1$. Moreover we derive the lower bound: for any $\beta > 0$ and $\mu^* \le 1$, any algorithm satisfies $\mathbb E R_n \ge C n^{\beta/(1+\beta)}$ for some $C > 0$. Finally, we propose an algorithm having the anytime property, based on an arm-increasing rule. Our algorithms essentially consist in pulling $K$ randomly chosen arms, where $K$ is of order $n^{\beta/2}$ if $\mu^* < 1$ and $\beta < 1$, and $n^{\beta/(1+\beta)}$ otherwise, and running a variant of the UCB (Upper Confidence Bound) algorithm ([3], [2]) on this set of $K$ arms which takes into account the empirical variance of the rewards. This last point is crucial to obtain the proposed rate for $\mu^* = 1$ and $\beta < 1$, i.e. in cases where there are many arms with small variance. Previous works on many-armed bandits: In [5], a specific setting of an infinitely many-armed bandit is considered, namely that the rewards are Bernoulli random variables with parameter $p$, where $p$ follows a uniform law over a given interval $[0, \mu^*]$. All mean-rewards are therefore in $[0, \mu^*]$. They proposed three algorithms. (1) The 1-failure strategy plays an arm as long as 1s are received; when a 0 is received, a new arm is played, and this rule is repeated forever. (2) The m-run strategy uses the 1-failure strategy until either m consecutive 1s are received (from the same arm) or m different arms have been played.
In the first case, we continue playing the current arm forever. In the second case, the arm that gave the most wins is played for the remaining rounds. Finally, (3) the m-learning strategy uses the 1-failure strategy during the first m rounds, and for the remaining rounds it plays the arm that gave the most 1s during those first m rounds. For $\mu^* = 1$, the authors of [5] have shown that the 1-failure strategy, the $\sqrt n$-run strategy, and the $\log(n)\sqrt n$-learning strategy have a regret $\mathbb E R_n \le 2\sqrt n$. They also provided a lower bound on the regret of any strategy: $\mathbb E R_n \ge \sqrt{2n}$. For $\mu^* < 1$, the corresponding optimal strategies are the $\sqrt{n\mu^*}$-run strategy and the $\sqrt{n\mu^*}\log(n\mu^*)$-learning strategy. All these algorithms require knowledge of the horizon n of the game. In many applications, it is important to design algorithms having the anytime property, that is, whose upper bounds on the expected regret $\mathbb E R_n$ have a similar order for all n. Under the same Bernoulli assumption on the reward distributions, such algorithms have been obtained in [9]. In comparison to their setting (the uniform distribution corresponds to $\beta = 1$), our upper and lower bounds are also of order $\sqrt n$ up to a logarithmic factor, and we do not assume that we know the distribution of the mean-reward exactly. However, it is worth noting that the algorithms proposed in [5, 9] depend heavily on the Bernoulli assumption on the rewards and are not easily transposable to general distributions. Note also that the Bernoulli assumption does not fit the real problems mentioned above, where the outcomes may take several possible values. Thus an important aspect of our work, compared to previous many-armed bandit settings, is that it allows general reward distributions for the arms, under a simple assumption on the mean-reward.
2 Main results
In our framework, each arm of a bandit is characterized by the distribution of its rewards (obtained by drawing that arm), and the essential parameter of this distribution is its expectation.
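The 1-failure strategy of [5] described above is simple enough to sketch directly on Bernoulli arms (a minimal sketch; names are ours):

```python
import random

def one_failure_strategy(n, mu_star, rng):
    """Berry et al.'s 1-failure strategy on Bernoulli arms whose parameter
    p is uniform on [0, mu_star]: keep playing the current arm as long as
    it returns 1s; on the first 0, discard it and draw a fresh arm.
    Returns the total reward collected over n plays."""
    total = 0
    p = rng.uniform(0.0, mu_star)  # parameter of the current arm
    for _ in range(n):
        reward = 1 if rng.random() < p else 0
        total += reward
        if reward == 0:
            p = rng.uniform(0.0, mu_star)  # 1-failure: switch on any loss
    return total
```

For mu_star = 1 the expected regret is at most 2*sqrt(n), so the total reward over n plays stays close to n.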
Another parameter of interest is the standard deviation. With low variance, poor arms will be easier to spot while good arms will have higher probability of not being disregarded at the beginning due to unlucky trials. To draw an arm is equivalent to draw a distribution ν of mean-rewards. Let µ = R wν(dw) and σ2 = R (w −µ)2ν(dw) denote the expectation and variance of ν. The quantities µ and σ are random variables. Our assumptions are the following: (A) Rewards are uniformly bounded: without loss of generality, we assume all rewards are in [0, 1]. (B) the expected reward of a randomly drawn arm satisfies: there exist µ∗∈(0, 1] and β > 0 s.t. P{µ > µ∗−ϵ} = Θ(ϵβ), for ϵ →0 (1) (C) there is a function V : [0, 1] →R such that P{σ2 ≤V (µ∗−µ)} = 1. The key assumption here is (B). It gives us (the order of) the number of arms that needs to be drawn before finding an arm that is ϵ-close to the optimum1 (i.e., an arm for which µ ≥µ∗−ϵ). Assumption (B) implies that there exists positive constants c1 and c2 such that for any ϵ ∈[0, µ∗], we have2 c1ϵβ ≤P{µ > µ∗−ϵ} ≤P{µ ≥µ∗−ϵ} ≤c2ϵβ. (2) For example, the uniform distribution on (0, µ∗) satisfies the Condition (1) with β = 1. Assumption (C) always holds for V (u) = µ∗(1 −µ∗+ u) (since Var W ≤EW(1 −EW) when W ∈[0, 1]). However it is convenient when the near-optimal arms have low variance (for instance, this happens when µ∗= 1). Let Xk,1, Xk,2, . . . denote the rewards obtained when pulling arm k. These are i.i.d. random variables with common expected value denoted µk. Let Xk,s ≜ 1 s Ps j=1 Xk,j and Vk,s ≜ 1 s Ps j=1(Xk,j −Xk,s)2 be the empirical mean and variance associated with the first s draws of arm k. Let Tk(t) denote the number of times arm k is chosen by the policy during the first t plays. We will use as a subroutine of our algorithms the following version of UCB (Upper Confidence Bound) algorithm as introduced in [2]. Let (Et)t≥0 be a nondecreasing sequence of nonnegative real numbers. 
It will be referred to as the exploration sequence since the larger it is, the more UCB explores. For any arm k and nonnegative integers s, t, introduce Bk,s,t ≜Xk,s + r 2Vk,sEt s + 3Et s (3) with the convention 1/0 = +∞. Define the UCB-V (for Variance estimate) policy: UCB-V policy for a set K of arms: At time t, play an arm in K maximizing Bk,Tk(t−1),t. From [2, Theorem 1], the main property of Bk,s,t is that with probability at least 1−5(log t)e−Et/2, for any s ∈[0, t] we have µk ≤Bk,s,t. So provided that Et is large, Bk,Tk(t−1),t is an observable quantity at time t which upper bounds µk with high probability. We consider nondecreasing sequence (Et) in order that these bounds hold with probability increasing with time. This ensures that the low probability event, that the algorithm might concentrate the draws on suboptimal arms, has a decreasing probability with time. 2.1 UCB revisited for the infinitely many-armed bandit When the number of arms of the bandit is greater than the total number of plays, it makes no sense to apply UCB-V algorithm (or other variants of UCB [3]) since its first step is to draw each arm once (to have Bk,Tk(t−1),t finite). A more meaningful and natural approach is to decide at the beginning 1Precise computations lead to a number which is of order ϵ−β up to possibly a logarithmic factor. 2Indeed, (1) implies that for some 0 < c′ 1 < c′ 2, there exists 0 < ϵ0 < µ∗such that for any ϵ ≤ϵ0, c′ 1ϵβ ≤P{µ > µ∗−ϵ} ≤P{µ ≥µ∗−ϵ} ≤c′ 2ϵβ. Then one may take c1 = c′ 1ϵβ 0 and c2 = max(ϵ−β 0 , c′ 2). 3 that only K arms will be investigated in the entire experiment. The K should be sufficiently small with respect to n (the total number of plays), as in this way we have fewer plays on bad arms and most of the plays will be on the best of K arms. The number K should not be too small either, since we want that the best of the K arms has an expected reward close to the best possible arm. 
It is shown in [2, Theorem 4] that in the multi-armed bandit, taking a too small exploration sequence (e.g. such as Et ≤1 2 log t) might lead to polynomial regret (instead of logarithmic for e.g. Et = 2 log t) in a simple 2-armed bandit problem. However, we will show that this is not the case in the infinitely many-armed bandit, where one may (and should) take much smaller exploration sequences (typically of order log log t). The reason for this phenomenon is that in this setting, there are typically many near-optimal arms so that the subroutine UCB-V may miss some good arms (by unlucky trials) without being hurt: there are many other near-optimal arms to discover! This illustrates a trade off between the two aspects of exploration: sample the current, not well-known, arms or discover new arms. We will start our analysis by considering the following UCB-V(∞) algorithm: UCB-V(∞) algorithm: Given parameters K and the exploration sequence (Et) • Randomly choose K arms, • Run the UCB-V policy on the set of the K selected arms. Theorem 1 If the exploration sequence satisfies 2 log(10 log t) ≤Et ≤log t, then for n ≥2 and K ≥2 the expected regret of the UCB-V(∞) algorithm satisfies: ERn ≤C n (log K)nK−1/β + K(log n)E h V (∆) ∆ + 1  ∧(n∆) io , (4) where ∆= µ∗−µ with µ the random variable corresponding to the expected reward of a sampled arm from the pool, and where C is a positive constant depending only on c1 and β (see (2)). Proof: The UCB-V(∞) algorithm has two steps: randomly choose K arms and run a UCB subroutine on the selected arms. The first part of the proof studies what happens during the UCB subroutine, that is, conditionally to the arms that have been randomly chosen during the first step of the algorithm. In particular we consider in the following that µ1, . . . , µK are fixed. From the equality (obtained using Wald’s theorem): ERn = PK k=1 E{Tk(n)}∆k (5) with ∆k = µ∗−µk, it suffices to bound ETk(n). 
The proof is inspired from the ones of Theorems 2 and 3 in [2]. The novelty of the following lemma is to include the product of probabilities in the last term of the right-hand-side. This enables us to incorporate the idea that if there are a lot of near-optimal arms, it is very unlikely that suboptimal arms are often drawn. Lemma 1 For any real number τ and any positive integer u, we have ETk(n) ≤u + Pn t=u+1 Pt s=u P Bk,s,t > τ  + Pn t=u+1 Q k′̸=k P(∃s′ ∈[0, t], Bk′,s′,t ≤τ  (6) where the expectations and probabilities are conditioned on the set of selected arms. Proof: We have Tk(n) −u ≤Pn t=u+1 Zk(u, t) where Zk(u, t) = 1It=k;Tk(t)>u. We have Zk(u, t) ≤ 1∀k′̸=k Bk,Tk(t−1),t≥Bk′,Tk′ (t−1),t;Tk(t−1)≥u ≤ 1∃s∈[u,t] Bk,s,t>τ + 1∀k′̸=k ∃s′∈[0,t] Bk′,s′,t≤τ where the last inequality holds since if the two terms in the last sum are equal to zero, then it implies that there exists k′ ̸= k such that for any s′ ∈[0, t] and any s ∈[u, t], Bk′,s′,t > τ ≥Bk,s,t. Taking the expectation of both sides, using a union bound and the independence between rewards obtained from different arms, we obtain Lemma 1. □ Now we use Inequality (6) with τ = µ∗+µk 2 = µk + ∆k 2 = µ∗−∆k 2 , and u the smallest integer larger than 32  σ2 k ∆2 k + 1 ∆k  log n. These choices are made to ensure that the probabilities in the r.h.s. 4 of (6) are small. Precisely, for any s ≥u and t ≤n, we have r 2[σ2 k + ∆k/4]Et s + 3Et s ≤ r [2σ2 k + ∆k/2] log n u + 3log n u ≤ r [2σ2 k+∆k/2]∆2 k 32[σ2 k+∆k] + 3∆2 k 32[σ2 k+∆k] = ∆k 4 r σ2 k+∆k/4 σ2 k+∆k + 3 8 ∆k σ2 k+∆k  ≤∆k 4 , where the last inequality holds since it is equivalent to (x −1)2 ≥0 for x = r σ2 k+∆k/4 σ2 k+∆k . Thus: P(Bk,s,t > τ) ≤P Xk,s + q 2Vk,sEt s + 3Et s > µk + ∆k/2  ≤P Xk,s + q 2[σ2 k+∆k/4]Et s + 3 Et s > µk + ∆k/2  + P Vk,s ≥σ2 k + ∆k/4  ≤P Xk,s −µk > ∆k/4  + P  P s j=1(Xk,j−µk)2 s −σ2 k ≥∆k/4  ≤2e−s∆2 k/(32σ2 k+8∆k/3), (7) where in the last step we used Bernstein’s inequality twice. 
Summing up we obtain t X s=u P(Bk,s,t > τ) ≤2 ∞ X s=u e−s∆2 k/(32σ2 k+8∆k/3) = 2 e−u∆2 k/(32σ2 k+8∆k/3) 1 −e−∆2 k/(32σ2 k+8∆k/3) ≤  80σ2 k ∆2 k + 7 ∆k  e−u∆2 k/(32σ2 k+8∆k/3) ≤  80σ2 k ∆2 k + 7 ∆k  n−1, (8) where we have used that 1 −e−x ≥4x/5 for 0 ≤x ≤3/8. Now let us bound the product of probabilities in (6). Since τ = µ∗−∆k/2, we have Y k′̸=k P(∃s ∈[0, t], Bk′,s,t ≤τ  ≤ Y k′:µk′>µ∗−∆k/2 P(∃s ∈[0, t], Bk′,s,t < µ′ k  . Now from [2, Theorem 1], with probability at least 1 −5(log t)e−Et/2, for any s ∈[0, t] we have µk ≤Bk,s,t. For Et ≥2 log(10 log t), this gives P(∃s ∈[0, t], Bk′,s,t < µ′ k  ≤1/2. Putting all the bounds of the different terms of (6) leads to ETk(n) ≤1 + 32  σ2 k ∆2 k + 1 ∆k  log n + 80σ2 k ∆2 k + 7 ∆k  + n2−N∆k , with N∆k the cardinal of  k′ ∈{1, . . . , K} : µk′ > a −∆k/2 . Since ∆k ≤µ∗≤1 and Tk(n) ≤n, the previous inequality can be simplified into ETk(n) ≤ nh 50  σ2 k ∆2 k + 1 ∆k  log n i ∧n o + n2−N∆k , (9) Here, for the sake of simplicity, we are not interested in having tight constants. From here on, we will take the expectations with respect to all sources of randomness, that is including the one coming from the first step of UCB-V(∞). The quantities ∆1, . . . , ∆K are i.i.d. random variables satisfying 0 ≤∆k ≤µ∗and P(∆k ≤ϵ) = Θ(ϵβ). The quantities σ1, . . . , σk are i.i.d. random variables satisfying almost surely σ2 k ≤V (∆k). From (5) and (9), we have ERn = KE  T1(n)∆1 ≤KE h 50  V (∆1) ∆1 + 1  log n i ∧(n∆1) + n∆12−N∆1  (10) Let p denote the probability that the expected reward µ of a randomly drawn arm satisfies µ > µ∗−δ/2 for a given δ. Conditioning on ∆1 = δ, the quantity N∆1 follows a binomial distribution with parameters K −1 and p, hence E(2−N∆1 |∆1 = δ) = (1 −p + p/2)K−1. By using (2), we get: E  ∆12−N∆1 = E  ∆1(1 −P(µ > µ∗−∆1/2)/2)K−1 ≤Eχ(∆1), with χ(u) = u(1 −c3uβ)K−1 and c3 = c1/2β. 
We have χ′(u) = (1 −c3uβ)K−2 1 −c3(1 + (K − 1)β)uβ so that χ(u) ≤χ(u0) with u0 = 1 [c3(1+(K−1)β)]1/β and χ(u0) = (1− 1 1+(K−1)β )K−1 [c3(1+(K−1)β)]1/β ≤ C′K−1/β for C′ a positive constant depending only c1 and β. For any u1 ∈[u0, µ∗], we have Eχ(∆1) ≤χ(u0)P(∆1 ≤u1) + χ(u1)P(∆1 > u1) ≤χ(u0)P(∆1 ≤u1) + χ(u1) . Let us take u1 = C′′ log K K 1/β for C′′ a positive constant depending on c1 and β sufficiently large to ensure u1 ≥u0 and χ(u1) ≤K−1−1/β. We obtain Eχ(∆1) ≤CK−1/β log K K for an appropriate constant C depending on c1 and β. Putting this into (10), we obtain the result of Theorem 1. □ 5 The r.h.s. of Inequality (4) contains two terms. The first term is the bias: when we randomly draw K arms, the expected reward of the best drawn arm is ˜O(K−1/β)-optimal. So the best algorithm, once the K arms are fixed, will yield a regret ˜O(nK−1/β). The second term is the estimation. It indicates the difference between the UCB subroutine’s performance and the best drawn arm. 2.2 Strategy for fixed play number Consider that we know in advance the total number of plays n and the value of β. In this case, one can use the UCB-V(∞) algorithm with parameter K of order of the minimizer of the r.h.s. of Inequality (4). This leads to the following UCB-F (for Fixed horizon) algorithm. UCB-F (fixed horizon): given total number of plays n, and parameters µ∗and β of (1) • Choose K arms with K of order ( n β 2 if β < 1, µ∗< 1 n β β+1 otherwise, i.e. if µ∗= 1 or β ≥1 • Run the UCB-V algorithm with the K chosen arms and an exploration sequence satisfying 2 log(10 log t) ≤Et ≤log t (11) Theorem 2 For any n ≥2, the expected regret of the UCB-F algorithm satisfies ERn ≤    C(log n)√n if β < 1 and µ∗< 1 C(log n)2√n if β = 1 and µ∗< 1 C(log n)n β 1+β otherwise, i.e. if µ∗= 1 or β > 1 (12) with C a constant depending only on c1, c2 and β (see (2)). Proof: The result comes from Theorem 1 by bounding the expectation E = E  V (∆) ∆ +1  ∧(n∆)  . 
First, as mentioned before, Assumption (C) is satisfied for V (∆) = µ∗(1 −µ∗+ ∆). So for µ∗= 1 and this choice of function V , we have E ≤2. For µ∗< 1, since ∆≤µ∗, we have E ≤EΨ(∆) with Ψ(t) = 2µ∗ t ∧(nt). The function Ψ is continuous and differentiable by parts. Using Fubini’s theorem and Inequality (2), we have EΨ(∆) = Ψ(µ∗) −E R µ∗ ∆Ψ′(t)dt = Ψ(µ∗) − R µ∗ 0 Ψ′(t)P(∆≤t)dt ≤ 2 + R 1√ 2/n 2 t2 c2tβdt ≤      2 + 2(1+β)/2c2 1−β n 1−β 2 if β < 1 2 + c2 log(n/2) if β = 1 2 + 2c2 β−1 if β > 1 . Putting these bounds in Theorem 1, we get ERn ≤          C n (log K)nK−1/β + (log n)Kn 1−β 2 o if β < 1 and µ∗< 1 C n (log K)nK−1/β + (log n)2K o if β = 1 and µ∗< 1 C n (log K)nK−1/β + (log n)K o otherwise: µ∗= 1 or β > 1 with C a constant only depending on c1, c2 and β. The number K of selected arms in UCB-F is taken of the order of the minimizer of these bounds up to a logarithmic factor. □ Theorem 2 makes no difference between a logarithmic exploration sequence and an iterated logarithmic exploration sequence. However in practice, it is clearly better to take an iterated logarithmic exploration sequence, for which the algorithm spends much less time on exploring all suboptimal arms. For sake of simplicity, we have fixed the constants in (11). It is easy to check that for Et = ζ logt and ζ ≥1, Inequality (12) still holds but with a constant C depending linearly in ζ. Theorem 2 shows that when µ∗= 1 or β ≥1, the bandit subroutine takes no time in spotting nearoptimal arms (the use of UCB-V algorithm using variance estimate is crucial for this), whereas for β < 1 and µ∗< 1, which means a lot of near-optimal arms with possibly high variances, the bandit subroutine has difficulties in achieving low regret. The next theorem shows that our regret upper bounds are optimal up to logarithmic terms except for the case β < 1 and µ∗< 1. We do not know whether the rate O(nβ/2 log n) for β < 1 and µ∗< 1 is improvable. This remains an open problem. 
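A minimal sketch of UCB-F, combining the choice of K with the UCB-V index (3), might look as follows (Bernoulli rewards, the tie-breaking, and all names are our illustrative assumptions, not the paper's implementation):

```python
import math
import random

def ucbv_index(rewards, E_t):
    """B_{k,s,t} = mean + sqrt(2*Var*E_t/s) + 3*E_t/s (convention 1/0 = +inf)."""
    s = len(rewards)
    if s == 0:
        return math.inf
    mean = sum(rewards) / s
    var = sum((x - mean) ** 2 for x in rewards) / s
    return mean + math.sqrt(2.0 * var * E_t / s) + 3.0 * E_t / s

def ucbf(n, beta, mu_star, arm_means, rng):
    """UCB-F sketch: pre-select K arms (K ~ n^(beta/2) if beta < 1 and
    mu_star < 1, else K ~ n^(beta/(beta+1))), then run UCB-V on them with
    the iterated-logarithm exploration sequence E_t = 2*log(10*log(t))
    (guarded at t = 1). Returns (total reward, K)."""
    if beta < 1 and mu_star < 1:
        K = max(2, round(n ** (beta / 2.0)))
    else:
        K = max(2, round(n ** (beta / (beta + 1.0))))
    chosen = rng.sample(arm_means, min(K, len(arm_means)))
    hist = [[] for _ in chosen]
    total = 0.0
    for t in range(1, n + 1):
        E_t = 2.0 * math.log(10.0 * math.log(max(t, 2)))
        k = max(range(len(chosen)), key=lambda i: ucbv_index(hist[i], E_t))
        reward = 1.0 if rng.random() < chosen[k] else 0.0  # Bernoulli arm
        hist[k].append(reward)
        total += reward
    return total, K
```

The empty-history convention 1/0 = +inf means each selected arm is pulled once before the variance-aware index takes over, matching the role of the index in equation (3).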
6 Theorem 3 For any β > 0 and µ∗≤1, any algorithm suffers a regret larger than cn β 1+β for some small enough constant c depending on c2 and β. Sketch of proof. If we want to have a regret smaller than cnβ/(1+β) we need that most draws are done on an arm having an individual regret smaller than ϵ = cn−1/(1+β). To find such an arm, we need to try a number of arms larger than C′ϵ−β = C′c−βnβ/(1+β) arms for some C′ > 0 depending on c2 and β. Since these arms are drawn at least once and since most of these arms give a constant regret, it leads to a regret larger than C′′c−βnβ/(1+β) with C′′ depending on c2 and β. For c small enough, this contradicts that the regret is smaller than cnβ/(1+β). So it is not possible to improve on the nβ/(1+β) rate. □ 2.3 Strategy for unknown play number To apply the UCB-F algorithm we need to know the total number of plays n and we choose the corresponding K arms before starting. When n is unknown ahead of time, we propose here an anytime algorithm with a simple and reasonable way of choosing K by adding a new arm from time to time into the set of sampled arms. Let Kn denote the number of arms played up to time n. We set K0 = 0. We define the UCB-AIR (for Arm-Increasing Rule): UCB-AIR (Arm-Increasing Rule): given parameters µ∗and β of (1), • At time n, try a new arm if Kn−1 < ( n β 2 if β < 1 and µ∗< 1 n β β+1 otherwise: µ∗= 1 or β ≥1 • Otherwise apply UCB-V on Kn−1 drawn arms with an exploration sequence satisfying 2 log(10 log t) ≤Et ≤log t This arm-increasing rule makes our algorithm applicable for the anytime problem. This is a more reasonable approach in practice than restarting-based algorithms like the ones using the doubling trick (see e.g. [4, Section 5.3]). Our second main result is to show that the UCB-AIR algorithm has the same properties as the UCB-F algorithm (proof omitted from this extended abstract). 
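The arm-increasing rule of UCB-AIR can be simulated directly (a sketch; for beta = 1 and mu_star = 1 the number of arms grows like sqrt(n)):

```python
def air_num_arms(n, beta, mu_star):
    """Simulate the UCB-AIR arm-increasing rule up to time n and return
    K_n: at each time t, a fresh arm is drawn whenever
    K_{t-1} < t^(beta/2)        (if beta < 1 and mu_star < 1)
    K_{t-1} < t^(beta/(beta+1)) (otherwise)."""
    exponent = beta / 2.0 if (beta < 1 and mu_star < 1) else beta / (beta + 1.0)
    K = 0
    for t in range(1, n + 1):
        if K < t ** exponent:
            K += 1  # discovery step: try a brand new arm at time t
    return K
```

Because the threshold grows with t, arms are added quickly at first and then more and more rarely, which is what makes the algorithm anytime without restarts.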
Theorem 4 For any horizon time n ≥2, the expected regret of the UCB-AIR algorithm satisfies ERn ≤  C(log n)2√n if β < 1 and µ∗< 1 C(log n)2n β 1+β otherwise, i.e. if µ∗= 1 or β ≥1 (13) with C a constant depending only on c1, c2 and β (see (2)). 3 Comparison with continuum-armed bandits and conclusion In continuum-armed bandits (see e.g. [1, 6, 4]), an infinity of arms is also considered. The arms lie in some Euclidean (or metric) space and their mean-reward is a deterministic and smooth (e.g. Lipschitz) function of the arms. This setting is different from ours since our assumption is stochastic and does not consider regularities of the mean-reward w.r.t. the arms. However, if we choose an arm-pulling strategy which consists in selecting randomly the arms, then our setting encompasses continuum-armed bandits. For example, consider the domain [0, 1]d and a mean-reward function µ assumed to be locally equivalent to a Hölder function (of order α ∈[0, +∞)) around any maximum x∗(the number of maxima is assumed to be finite), i.e. µ(x∗) −µ(x) = Θ(∥x∗−x∥α) when x →x∗. (14) Pulling randomly an arm X according to the Lebesgue measure on [0, 1]d, we have: P(µ(X) > µ∗−ϵ) = Θ(P(∥X −x∗∥α < ϵ)) = Θ(ϵd/α), for ϵ →0. Thus our assumption (1) holds with β = d/α, and our results say that if µ∗= 1, we have ERn = ˜O(nβ/(1+β)) = ˜O(nd/(α+d)). For d = 1, under the assumption that µ is α-Hölder (i.e. |µ(x)−µ(y)| ≤c ∥x −y∥α for 0 < α ≤1), [6] provides upper- and lower-bounds on the regret Rn = Θ(n(α+1)/(2α+1)). Our results gives 7 ERn = ˜O(n1/(α+1)) which is better for all values of α. The reason for this apparent contradiction is that the lower bound in [6] is obtained by the construction of a very irregular function, which actually does not satisfy our local assumption (14). Now, under assumptions (14) for any α > 0 (around a finite set of maxima), [4] provides the rate ERn = ˜O(√n). 
Our result gives the same rate when µ∗ < 1, but in the case µ∗ = 1 we obtain the improved rate ERn = Õ(n^{1/(α+1)}), which is better whenever α > 1 (because we are able to exploit the low variance of the good arms). Note that, like our algorithm, the algorithms in [4] and [6] make no explicit use (in the procedure) of the smoothness of the function; they simply use a 'uniform' discretization of the domain. On the other hand, the zooming algorithm of [7] adapts to the smoothness of µ (more arms are sampled in areas where µ is high). For any dimension d, they obtain ERn = Õ(n^{(d′+1)/(d′+2)}), where d′ ≤ d is their 'zooming dimension'. Under assumption (14), using the Euclidean distance as metric, we deduce d′ = ((α−1)/α) d, so their regret is ERn = Õ(n^{(d(α−1)+α)/(d(α−1)+2α)}). For locally quadratic functions (i.e. α = 2), their rate is Õ(n^{(d+2)/(d+4)}), whereas ours is Õ(n^{d/(2+d)}). Again, we have a smaller regret, although we do not use the smoothness of µ in our algorithm. Here the reason is that the zooming algorithm does not make full use of the fact that the function is locally quadratic (it only considers a Lipschitz property). However, in the case α < 1, our rates are worse than those of algorithms specifically designed for continuum-armed bandits. Hence, the comparison between the many-armed and continuum-armed settings is not easy because of the difference in nature of the basic assumptions. Our setting is an alternative to the continuum-armed bandit setting which does not require the existence of an underlying metric space in which the mean-reward function would be smooth. Our assumption (1) naturally deals with possibly very complicated functions whose maxima may be located in any part of the space. For continuum-armed bandit problems with relatively many near-optimal arms, our algorithm will also be competitive with the specifically designed continuum-armed bandit algorithms.
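The relation β = d/α used in the comparison above can be checked by Monte Carlo for a concrete choice, e.g. µ(x) = 1 − (x − 1/2)^2 on [0, 1] (d = 1, α = 2, so β = 1/2 and P(µ(X) > 1 − ϵ) = 2√ϵ); this function is an illustration, not one taken from the paper:

```python
import random

def near_optimal_fraction(eps, n_samples=400_000, seed=1):
    """Fraction of uniformly drawn arms X with mu(X) > mu* - eps,
    for mu(x) = 1 - (x - 0.5)**2, so mu* = 1, alpha = 2, d = 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.random()
        if 1.0 - (x - 0.5) ** 2 > 1.0 - eps:
            hits += 1
    return hits / n_samples

# Exact value is 2*sqrt(eps): halving sqrt(eps) halves the fraction,
# i.e. beta = d/alpha = 1/2 in assumption (1).
```

In such settings a sizeable fraction of randomly drawn arms is near-optimal.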
This result matches the intuition that in such cases a random selection strategy will perform well. To conclude, our contributions are: (i) Compared to previous results on many-armed bandits, our setting allows general mean-reward distributions for the arms, under a simple assumption on the probability of pulling near-optimal arms. (ii) We show that, for infinitely many-armed bandits, we need much less exploration of each arm than for finite-armed bandits (the log term may be replaced by log log). (iii) Our variant of the UCB algorithm, which makes use of the variance estimate, obtains better rates in cases when the variance of the near-optimal arms is small. (iv) We propose the UCB-AIR algorithm, which is anytime, taking advantage of an arm-increasing rule. (v) We provide a lower bound matching the upper bound (up to a logarithmic factor) in the case β ≥ 1 or µ∗ = 1.

References

[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926–1951, 1995.
[2] J.-Y. Audibert, R. Munos, and C. Szepesvári. Tuning bandit algorithms in stochastic environments. In M. Hutter, R. A. Servedio, and E. Takimoto, editors, ALT, volume 4754 of Lecture Notes in Computer Science, pages 150–165. Springer, 2007.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2/3):235–256, 2002.
[4] P. Auer, R. Ortner, and C. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. 20th COLT, San Diego, CA, USA, 2007.
[5] D. A. Berry, R. W. Chen, A. Zame, D. C. Heath, and L. A. Shepp. Bandit problems with infinitely many arms. The Annals of Statistics, 25(5):2103–2116, 1997.
[6] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In NIPS-2004, 2004.
[7] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandit problems in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[8] T. L. Lai and H. Robbins.
Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
[9] O. Teytaud, S. Gelly, and M. Sebag. Anytime many-armed bandit. Conférence francophone sur l'Apprentissage automatique (CAp), Grenoble, France, 2007.
2008
On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor

Christoph Kolodziejski1,2, Bernd Porr3, Minija Tamosiunaite1,2,4, Florentin Wörgötter1,2
1 Bernstein Center for Computational Neuroscience Göttingen
2 Georg-August University Göttingen, Department of Nonlinear Dynamics, Bunsenstr. 10, 37073 Göttingen, Germany
3 University of Glasgow, Department of Electronics & Electrical Engineering, Glasgow, GT12 8LT, Scotland
4 Vytautas Magnus University, Department of Informatics, Vileikos 8, 44404 Kaunas, Lithuania
kolo|minija|worgott@bccn-goettingen.de, b.porr@elec.gla.ac.uk

Abstract

In this theoretical contribution we provide mathematical proof that two of the most important classes of network learning - correlation-based differential Hebbian learning and reward-based temporal difference learning - are asymptotically equivalent when timing the learning with a local modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.

1 Introduction

The goal of this study is to prove that the most influential form of reinforcement learning (RL) [1], which relies on the temporal difference (TD) learning rule [2], is equivalent to correlation-based learning (Hebbian learning, CL), which is convergent over wide parameter ranges when a local third factor is used as a gating signal together with a differential Hebbian emulation of CL. Recently there have been several contributions towards solving the question of the equivalence of different rules [3, 4, 5, 6], which presented specific solutions to be discussed later (see section 4). Thus, there is more and more evidence emerging that Hebbian learning and reinforcement learning can be brought together under a more unifying framework.
Such an equivalence would have substantial influence on our understanding of network learning, as these two types of learning could be interchanged under these conditions. The idea of differential Hebbian learning was first used by Klopf [7] to describe classical conditioning, relating to the stimulus substitution model of Sutton [8]. One of its most important features is the implicit introduction of negative weight changes (LTD), which leads to intrinsic stabilization properties in networks. Earlier approaches had to introduce negative weight changes into the learning rule explicitly, e.g. by way of a threshold [9]. One drawback of reinforcement learning algorithms, like temporal difference learning, is their use of discrete time and discrete non-overlapping states. In real neural systems, time is continuous and the state space can only be represented by the activity of neurons, many of which will be active at the same time and for the same "space". This creates a rather continuous state-space representation in real systems. In order to allow for overlapping states or for generalizing over a wider range of input regions, RL algorithms are usually extended by value-function approximation methods [1]. However, while biologically more realistic [10], this often makes initially elegant RL algorithms quite opaque, and convergence can often no longer be guaranteed [11]. Here we are not concerned with function approximation; instead we address the question of how to transform an RL algorithm (TD-learning) to continuous time using differential Hebbian learning with a local third factor, while remaining fully compatible with neuronally plausible operations. Biophysical considerations about how such a third factor might be implemented in real neural tissue are of secondary importance for this study. At this stage we are concerned with a formal proof only.
1.1 Emulating RL by Temporal Difference Learning

Reinforcement learning maximizes the rewards r(s) an agent will receive in the future when following a policy π traveling along states s. The return R is defined as the sum of the future rewards, R(s_i) = Σ_k γ^k r(s_{i+k+1}), where future rewards are discounted by a factor 0 < γ ≤ 1. One central goal of RL is to determine the values V(s) for each state, given by the average expected return E_π{R} that can be obtained when following policy π. Many algorithms exist to determine the values, almost all of which rely on the temporal difference (TD) learning rule (Eq. 1) [2]. Every time the agent encounters a state s_i, it updates the value V(s_i) using the discounted value V(s_{i+1}) and the reward r(s_{i+1}) associated with the consecutive state s_{i+1}:

V(s_i) → (1 − α)V(s_i) + α(r(s_{i+1}) + γV(s_{i+1})) (1)

where α is the learning rate. This rule is called TD(λ = 0), TD(0) for short, as it only evaluates adjacent states. For values of λ ≠ 0, more of the recently visited states are used for the value-function update. TD(0) is by far the most influential RL learning rule, as it is the simplest way to assure optimality of learning [12, 1].

1.2 Differential Hebbian learning with a local third factor

In traditional Hebbian learning, the change of a weight ρ relies on the correlation between input u(t) and output v(t) of a neuron: ρ′(t) = ˜α · u(t) · v(t), where ˜α is the learning rate and the prime denotes the temporal derivative. If we consider the change of the post-synaptic signal and therefore replace v(t) with v′(t), we arrive at differential Hebbian learning. Then negative weight changes are also possible, and this yields properties similar to experimental neurophysiological observations (spike-timing dependent plasticity, [13]). In order to achieve the equivalence (see section 4 for a discussion) we additionally introduce a local third modulatory factor M_k(t) responsible for controlling the learning [14].
Here, local means that each input u_k(t) controls a separate third factor M_k(t), which in turn modulates only the weight change of the corresponding weight ρ_k(t). The local three-factor differential Hebbian learning rule is then:

ρ′_k(t) = ˜α · u_k(t) · v′(t) · M_k(t) (2)

where u_k(t) is the considered pre-synaptic signal and

v(t) = Σ_n ρ_n(t) u_n(t) (3)

the post-synaptic activity of a model neuron with weights ρ_n(t). We will assume in the following that our modulatory signal M_k(t) is either 1 or 0, thus represented by a step function.

2 Analytical derivation

We are going to analyze the change of the weight ρ_i(t) when considering two consecutive signals u_i(t) and u_{i+1}(t), with the index i representing a temporal (and not, e.g., a spatial) ordering. The local third factor M_i(t) opens a time window for its corresponding weight ρ_i(t) in which changes can occur. Although this time window could be located anywhere depending on the input u_i(t), it should be placed at the end of the state s_i(t), as it only makes sense if states correlate with their successors. The relation between state s(t) and input u(t) is determined by a convolution, u(t) = ∫_0^∞ s(z) h(t − z) dz, with a filter function h(t) that is identical for all states. As we are using only states that are either on or off during a visiting duration S, the input functions u(t) do not differ between states. Therefore we will write u_i(t) (with index i) when having a particular state in mind and u(t) (without index) when pointing to the functional development. Furthermore, we define the time period between the end of a state s_i(t) and the beginning of the next state s_{i+1}(t) as T (T < 0 in case of overlapping states). Concerning the modulatory third factor M_i(t), we define its length as L, and the time period between the beginning of M_i(t) and the end of the corresponding state s_i(t) as O. These four parameters (L, O, T, and S) are constant over states and are displayed in detail in fig. 1 B.
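The signs of the two contributions to Eq. 2 can be checked numerically by integrating the rule over the window opened by M_i. The sketch below uses a triangular signal shape (an illustrative stand-in for the filtered signals of the paper) with rise and decay both of length S, T = 0, and the third factor placed in the decay phase; the auto-correlation contribution comes out negative and the cross-correlation contribution positive, as derived in section 2.1, and for this particular shape the two are equal in magnitude:

```python
def weight_change(rho_i=1.0, rho_next=1.0, S=1.0, O=0.1, L=0.3, dt=1e-4):
    """Integrate the three-factor rule (Eq. 2) over the window opened by
    M_i, for a triangular signal u: linear rise on [0, S], linear decay
    on [S, 2S] (so P = S), with T = 0 and M_i open on [S+O, S+O+L]."""
    def u(t):
        if 0.0 <= t <= S:
            return t / S                 # rising phase of the signal
        if S < t <= 2.0 * S:
            return 2.0 - t / S           # falling (decay) phase
        return 0.0
    d_ac = d_cc = 0.0
    t = S + O + dt / 2                   # midpoint-rule integration
    while t < S + O + L:
        du_i = (u(t + dt / 2) - u(t - dt / 2)) / dt             # u_i'(t)
        du_next = (u(t - S + dt / 2) - u(t - S - dt / 2)) / dt  # u_{i+1}'(t)
        d_ac += rho_i * u(t) * du_i * dt        # auto-correlation part
        d_cc += rho_next * u(t) * du_next * dt  # cross-correlation part
        t += dt
    return d_ac, d_cc
```

With the defaults, the window lies in the decay of u_i and the rise of u_{i+1}, so d_ac < 0 < d_cc.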
2.1 Analysis of the differential equation

For the following analysis we need to substitute Eq. 3 into Eq. 2 and solve the resulting differential equation, which consists of a homogeneous and an inhomogeneous part:

ρ′_i(t) = ˜α · M_i(t) · u_i(t)[u_i(t) · ρ_i(t)]′ + ˜α · M_i(t) · u_i(t)[Σ_{j≠i} u_j(t) · ρ_j(t)]′ (4)

where the modulator M_i(t) defines the integration boundaries. The first summand leads to the homogeneous solution, which we define as the auto-correlation ρ^ac(t). The second summand(s), on the other hand, lead to the inhomogeneous solution, which we define as the cross-correlation ρ^cc(t). Together we have ρ(t) = ρ^ac(t) + ρ^cc(t). In general, the overall change of the weight ρ_i(t) after integrating over the visiting durations of s_i(t) and s_{i+1}(t) and using the modulatory signal M_i(t) is:

∆ρ_i =: ∆_i = ∆^ac_i + ∆^cc_i

Without restriction, we can limit further analysis of Eq. 4, in particular of the cross-correlation term, to the case j = i + 1, as the modulatory factor only affects the weight of the following state. Since weight changes are in general slow, we can assume a quasi-static process (ρ′_i/ρ_i ≪ u′_i/u_i, α → 0). As a consequence, the derivatives of ρ on the right-hand side of Eq. 4 can be neglected. The solution of the auto-correlation ρ^ac_i(t) is then in general:

ρ^ac_i(t) = ρ^ac_i(t_0) e^{˜α · M_i(t) · ½[u_i²(t) − u_i²(t_0)]} (5)

and the overall weight change, with the third factor being present between t = O + S and t = O + S + L (fig. 1 B), is therefore:

∆^ac_i = ρ_i (e^{˜α ½[u_i²(O+S+L) − u_i²(O+S)]} − 1) (6)

Using again the argument of a quasi-static process (˜α → 0), we can expand the exponential function to first order:

∆^ac_i := −˜α ρ_i ½[u_i²(O + S) − u_i²(O + S + L) + o(˜α)] (7)
        = −˜α ρ_i κ (8)

where we have defined κ in the following way:

κ(L, O, S) = ½[u²(O + S) − u²(O + S + L) + o(˜α)] (9)

which is independent of i, since we assume all state signals to be identical.
Next we investigate the cross-correlation ρ^cc(t), again under the assumption of a quasi-static process. This leads to:

ρ^cc_i(t) = ρ^cc_i(t_0) + ˜α ρ_{i+1} ∫_0^t M_i(z) · u_i(z) u′_{i+1}(z) dz (10)

which, assuming a time shift of S + T between the signals u_i and u_{i+1}, i.e. u_i(t − S − T) = u_{i+1}(t), yields an overall weight change of

∆^cc_i = ˜α ρ_{i+1} ∫_{O+S}^{O+S+L} u_i(z) u′_i(z − S − T) dz := ˜α ρ_{i+1} τ (11)

where the third factor is present between t = O + S and t = O + S + L (fig. 1 B). Additionally we define τ as follows:

τ(L, O, T, S) = ∫_{O−T}^{O+L−T} u(z + S + T) u′(z) dz (12)

which, too, is independent of i. Both τ and κ depend on the actually used signal shape u(t) and on the values of the parameters L, O, T and S.

2.2 Analysis of the network

After the analysis of the auto- and cross-correlation of Eq. 4, we are going to discuss the weight changes in a network context with a reward only at the terminal state (non-terminal reward states will be discussed in section 4). Without restriction, we can limit this discussion to the situation in Fig. 1 A, where we have one intermediate state transition (from s_i to s_{i+1}) and a final one (from s_{i+1} to s_R) which yields a reward. The weight associated with the reward state s_R is set to a constant value unequal to zero. Therefore three-factor differential Hebbian learning will influence the two synaptic connections ρ_i and ρ_{i+1} of states s_i and s_{i+1}, respectively, which directly project onto neuron v. Fig. 1 B shows a realistic situation of state transitions: leaving the old state s_{i−1}, entering the new state s_i, and so on. The signals as such could be considered as membrane voltages or firing rates of neurons.

Figure 1: The setup is shown in panel A and the signal structure in panel B.
(A) Three states, including the rewarded state, converge on the neuron, which learns according to Eq. 2. Each state s_i controls the occurrence of the modulatory factor M_i, which in turn will influence learning at synapse ρ_i. The states s will be active according to the direction arrow. (B) The lower part shows the states s_i, which have a duration of length S. We assume that the duration of the transition between two states is T. In the middle, the output v and the signals u are depicted. Here u is given by u(t) = ∫_0^S (e^{−a(t−z)} − e^{−b(t−z)}) dz. The third factor M_i is released for the duration L after a time delay of O and is shown in the upper part. For each state, the weight change, separated into auto-correlation ∆^ac and cross-correlation ∆^cc, and their dependence on the weights according to Eq. 7 and 11 are indicated.

We will start our considerations with the change of the weight ρ_i, which is only influenced by the visited state s_i itself and by the transition between s_i and s_{i+1}. The weight change ∆^ac_i caused by the auto-correlation (s_i with itself) is governed by the weight ρ_i of state s_i (see Eq. 8) and is negative, as the signal u_i decays at the end of the state (κ is positive, though, because we factorized out a minus sign from Eq. 6 to Eq. 7). The cross-correlation (∆^cc_i), however, is proportional to the weight ρ_{i+1} of the following state s_{i+1} (see Eq. 11) and is positive, because the positive derivative of the next state's signal u_{i+1} correlates with the signal u_i of state s_i. According to these considerations, the contributions to the ∆_{i+1}-values can be discussed in an identical way for the following sequence (s_{i+1}, s_R). In general, the weight after a single trial is the sum of the old weight ρ_i and the two ∆-values:

ρ_i → ρ_i + ∆^ac_i + ∆^cc_i (13)

Using Eq. 8 and Eq. 11 we can reformulate Eq.
13 into

ρ_i → ρ_i − ˜α · κ · ρ_i + ˜α · τ · ρ_{i+1} (14)

Substituting α = ˜α · κ and γ = τ/κ, we get

ρ_i → (1 − α) · ρ_i + α · γ · ρ_{i+1} (15)

At this point we can make the transition from weights ρ_i (differential Hebbian learning) to states V(s_i) (temporal difference learning). Additionally, we note that sequences only terminate at i + 1; thus this index will capture the reward state s_R and its value r(s_{i+1}), while this is not the case for all other indices (see section 4 for a detailed discussion of rewards at non-terminal states). Consequently this gives us an equation almost identical to Eq. 1:

V(s_i) → (1 − α)V(s_i) + α · γ[r(s_{i+1}) + V(s_{i+1})] (16)

where one small difference arises: in Eq. 16 the reward is scaled by γ. However, this has no influence, as numerical reward values are arbitrary. Thus, if learning follows this third-factor differential Hebbian rule, the weights will converge to the optimal estimated TD-values. This proves that, under some conditions for κ and τ (see below), TD(0) and the here proposed three-factor differential Hebbian learning are indeed asymptotically equivalent.

2.3 Analysis of κ and γ

Here we take a closer look at the signal shape and the parameters (L, O, T and S) which influence the values of κ (Eq. 9) and τ (Eq. 12), and therefore γ = τ/κ. For guaranteed convergence these values are constrained by two conditions, τ ≥ 0 and κ > 0 (κ = 0 is allowed in case τ = 0), which follow from Eq. 14. A non-positive value of κ would lead to divergent weights ρ, and a negative value of τ to oscillating weight pairs (ρ_i, ρ_{i+1}). However, even if fulfilled, these conditions will not always lead to meaningful weight developments. A τ-value of 0 leaves all weights at their initial values, and discount factors represented by γ-values exceeding 1 are usually not considered in reinforcement learning [1]. Thus it makes sense to introduce more rigorous conditions and demand that 0 < γ ≤ 1 and κ > 0.
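As a quick consistency check, Eq. 15 can be iterated directly on a linear chain whose last weight is the fixed reward weight; the recursion converges to ρ_i = γ^{N−i} ρ_R, i.e. exactly the discounted TD(0) values of Eq. 1 for a reward of 1 at the terminal transition (chain length and parameters below are arbitrary):

```python
def iterate_weights(n=6, alpha=0.1, gamma=0.8, sweeps=2000):
    """Iterate Eq. 15, rho_i -> (1-alpha)*rho_i + alpha*gamma*rho_{i+1},
    with the reward weight rho_n held fixed at 1 (terminal reward)."""
    rho = [0.0] * n + [1.0]             # rho[n] = rho_R = 1, kept constant
    for _ in range(sweeps):
        for i in range(n):
            rho[i] = (1 - alpha) * rho[i] + alpha * gamma * rho[i + 1]
    return rho[:n]

# Fixed point: rho_i = gamma**(n - i), the discounted value of state i.
```

The fixed point is independent of α; α only sets the convergence speed, mirroring the role of the learning rate in Eq. 1.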
Figure 2: Shown are γ-values as a function of the ratios O/P and T/P, for three values of L/P (1/3, 2/3, and 4/3). Here P is the length of the rising as well as the falling phase. The shape of the signal u is given by u(t) = ∫_0^S (e^{−a(t−z)} − e^{−b(t−z)}) dz with parameters a = 0.006 and b = 0.066. The individual panels are subdivided into a patterned area where the weights will diverge (κ = 0, see Eq. 7), a striped area where no overlap between the two signals and the third factor exists, and a white area consisting of γ-values which are beyond a meaningful range (γ > 1). The gray shading represents in detail the γ-values (0 < γ ≤ 1) for which convergence is fulfilled.

Furthermore, as these conditions depend on the signal shape, the following theoretical considerations need to be guided by biophysics. Hence, we will discuss neuronally plausible signals that can arise at a synapse. This constrains u to functions that possess only one maximum, dividing the signal into a rising and a falling phase. One quite general possibility for the shape of the signal u is the function used in Fig. 1, for which we investigate the area of convergence. As we have three parameters to be varied (we do not have to consider the parameter S if we take its value to be large compared to |T|, L, or O), Fig. 2 shows the γ-values in three different panels. In each panel we varied the parameters O and T from −2P to +2P, where P is the time the signal u needs to reach its maximum; each panel shows the γ-values for a particular value of L. Regarding κ, the condition formed by Eq. 9 on the shape of the signal u(t) is in general already fulfilled when using neuronally plausible signals and the third factor at the end of each state.
As the signals start to decay after the end of a state visit, u(O + S) is always larger than u(O + S + L), and therefore κ > 0. Only if the third factor is shifted (via the parameter O, see fig. 1 B for more details) to regions of the signal u where the decay has not yet started (O < −L) or has already ended (O > P) is the difference between u(O + S) and u(O + S + L) zero, which by Eq. 9 leads to κ = 0. This is indicated by the patterned area in fig. 2. A gray shading displays in detail the γ-values for which the condition is fulfilled, whereas white represents those areas for which γ > 1. The striped area indicates parameter configurations for which no overlap between two consecutive signals and the third factor exists (τ = 0). The different panels clearly show that the area of convergence changes only gradually, and that this area increases with increasing duration of the third factor. Altogether this shows that, for a general neuronally plausible signal shape u, the condition for asymptotic equivalence between temporal difference learning and differential Hebbian learning with a local third factor is fulfilled over a wide parameter range.

3 Simulation of a small network

In this section we show that we can reproduce the behavior of TD-learning in a small linear network with two terminal states. This is done with a network of neurons designed according to our algorithm with a local third factor. The obtained weights of the differential Hebbian learning neuron represent the corresponding TD-values (see fig. 3 A). It is known that in a linear TD-learning system with two terminal states (one rewarded, the other not) and a γ-value close to 1, the values at the end of learning will represent the probability of reaching the reward state from the corresponding state (compare [1]). This is shown, including the weight development, in panel (B).
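This limiting behavior can be reproduced with plain TD(0) (Eq. 1) on a symmetric random walk, a standard textbook setup [1], rather than the paper's differential Hebbian network; with γ = 1 the learned values approach the probability of reaching the rewarded terminal, V(s_i) = (i+1)/(N+1). Chain length, learning rate and episode count here are illustrative:

```python
import random

def random_walk_values(n_states=5, alpha=0.05, episodes=20_000, seed=0):
    """TD(0) on a symmetric random walk with two terminal states;
    reward 1 on reaching the right terminal, 0 on the left.
    For gamma = 1 the true values are V(s_i) = i/(n_states+1)."""
    rng = random.Random(seed)
    V = [0.0] * (n_states + 2)          # V[0], V[n_states+1] are terminals
    for _ in range(episodes):
        s = (n_states + 1) // 2         # each episode starts in the middle
        while 1 <= s <= n_states:
            s2 = s + (1 if rng.random() < 0.5 else -1)
            r = 1.0 if s2 == n_states + 1 else 0.0
            V[s] = (1 - alpha) * V[s] + alpha * (r + V[s2])   # gamma = 1
            s = s2
    return V[1:n_states + 1]
```

With a small constant learning rate the estimates fluctuate around the escape probabilities, matching the uniformly spaced weight levels of fig. 3 B.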
Figure 3: The linear state arrangement and the network architecture are shown in panel A. The corresponding weights after a typical experiment are depicted in panel B. The lines represent the means of the last 2000 weight values of each state and are distributed uniformly (compare [1]). The signal shape is given by u(t) = ∫_0^S (e^{−a(t−z)} − e^{−b(t−z)}) dz with parameters a = 0.006 and b = 0.066. Furthermore, O = P/20, L = P, T = 0 (which yields γ ≃ 1), N = 9, and ˜α = 0.01.

4 Discussion

The TD-rule has become the most influential algorithm in reinforcement learning because of its tremendous simplicity and proven convergence to the optimal value function [1]. It has been successfully transferred to control problems, too, in the form of Q-learning or SARSA [15, 16], which use the same algorithmic structure while maintaining similar advantageous mathematical properties [15]. In this study we have shown that TD(0)-learning and differential Hebbian learning modulated by a local third factor are equivalent under certain conditions. This proof relies only on commonly applicable, fairly general assumptions, thus rendering a generic result that does not constrain the design of larger networks. However, the way in which the timing of the third factor is implemented in networks will be an important issue when constructing such networks. Several earlier results have pointed to the possibility of an equivalence between RL and CL. Izhikevich [3] solved the distal reward problem using a spiking neural network, yet with fixed exponential functions [17] to emulate differential Hebbian characteristics. His approach is related to neurophysiological findings on spike-timing dependent plasticity (STDP, [13]).
Each synapse learned the correlation between conditioned stimuli and unconditioned stimuli (e.g. a reward) through STDP and a third signal. Furthermore, Roberts [4] showed that asymmetrical STDP and temporal difference learning are related. In our differential Hebbian learning model, in contrast to the work described above, STDP emerges automatically because of the use of the derivative of the postsynaptic potential (Eq. 2). Rao and Sejnowski [18] showed that using the temporal difference will directly lead to STDP, but they could not provide a rigorous proof of the equivalence. Recently, it has been shown that the online policy-gradient RL algorithm (OLPOMDP, [19]) can be emulated by spike-timing dependent plasticity [5], however in a complex way, using a global reward signal. On the other hand, the observations reported here provide a rather simple, equivalent correlation-based implementation of TD and support the importance of three-factor learning for providing a link between conventional Hebbian approaches and reinforcement learning. In most physiological experiments [20, 21, 22] the reward is given at the end of the stimulus sequence. Our assumption that the reward state is a terminating state, occurring only at the end of the learning sequence, thus conforms to this paradigm. However, for TD in general we cannot assume that the reward is only provided at the end. Differential Hebbian learning will then lead to a slightly different solution compared to TD-learning. This solution has already been discussed in another context [23]. Specifically, the difference in our case is the final result for the state value after convergence for states that provide a reward: we get V(s_i) → γV(s_{i+1}) + r(s_{i+1}) − r(s_i), compared to TD-learning: V(s_i) → γV(s_{i+1}) + r(s_{i+1}). It would be interesting to assess with physiological and/or behavioral experiments which of the two equations more closely represents experimental reality.
Our results rely in a fundamental way on the third factor M_i, and the analysis performed in this study indicates that the third factor is necessary for the emulation of TD-learning by a differential Hebb rule. Explaining why requires a closer look at the temporal difference learning rule. The TD-rule requires a leakage term −α·V(s); if this term did not exist, the values would diverge. It has been shown [24] that in differential Hebbian learning without a third factor, the auto-correlation part, which is the source of the needed leakage (see Eq. 13 and Eq. 7), does not exist. This shows that only through a well-timed third factor is the ratio between the cross-correlation and auto-correlation terms correctly adjusted. This ratio is in the end responsible for the γ-value we obtain when using differential Hebbian learning to emulate TD-learning.

References

[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[2] R. S. Sutton. Learning to predict by the method of temporal differences. Mach. Learn., 3:9–44, 1988.
[3] E. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cereb. Cortex, 17:2443–2452, 2007.
[4] P. D. Roberts, R. A. Santiago, and G. Lafferriere. An implementation of reinforcement learning based on spike-timing dependent plasticity. Biol. Cybern., in press.
[5] R. V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Comput., 19:1468–1502, 2007.
[6] W. Potjans, A. Morrison, and M. Diesmann. A spiking neural network model of an actor-critic learning agent. Neural Comput., 21:301–339, 2009.
[7] A. H. Klopf. A neuronal model of classical conditioning. Psychobiol., 16(2):85–123, 1988.
[8] R. Sutton and A. Barto. Towards a modern theory of adaptive networks: Expectation and prediction. Psychol. Review, 88:135–170, 1981.
[9] E. Oja.
A simplified neuron model as a principal component analyzer. J. Math. Biol., 15(3):267–273, 1982.
[10] M. Tamosiunaite, J. Ainge, T. Kulvicius, B. Porr, P. Dudchenko, and F. Wörgötter. Pathfinding in real and simulated rats: On the usefulness of forgetting and frustration for navigation learning. J. Comp. Neurosci., 25(3):562–582, 2008.
[11] M. Wiering. Convergence and divergence in standard averaging reinforcement learning. In J. Boulicaut, F. Esposito, F. Giannotti, and D. Pedreschi, editors, Proceedings of the 15th European Conference on Machine Learning (ECML'04), pages 477–488, 2004.
[12] P. Dayan and T. Sejnowski. TD(λ) converges with probability 1. Mach. Learn., 14(3):295–301, 1994.
[13] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997.
[14] B. Porr and F. Wörgötter. Learning with "relevance": Using a third factor to stabilise Hebbian learning. Neural Comput., 19:2694–2719, 2007.
[15] C. Watkins and P. Dayan. Technical note: Q-learning. Mach. Learn., 8:279–292, 1992.
[16] S. P. Singh, T. Jaakkola, M. L. Littman, and C. Szepesvári. Convergence results for single-step on-policy reinforcement-learning algorithms. Mach. Learn., 38(3):287–308, 2000.
[17] W. Gerstner, R. Kempter, L. van Hemmen, and H. Wagner. A neuronal learning rule for sub-millisecond temporal coding. Nature, 383:76–78, 1996.
[18] R. Rao and T. Sejnowski. Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Comput., 13:2221–2237, 2001.
[19] J. Baxter, P. L. Bartlett, and L. Weaver. Experiments with infinite-horizon, policy-gradient estimation. J. Artif. Intell. Res., 15:351–381, 2001.
[20] W. Schultz, P. Apicella, E. Scarnati, and T. Ljungberg. Neuronal activity in monkey ventral striatum related to the expectation of reward. J. Neurosci., 12(12):4595–4610, 1992.
[21] P. R. Montague, P. Dayan, and T. J. Sejnowski.
A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci., 16(5):1936–1947, 1996.
[22] G. Morris, A. Nevet, D. Arkadir, E. Vaadia, and H. Bergman. Midbrain dopamine neurons encode decisions for future action. Nat. Neurosci., 9(8):1057–1063, 2006.
[23] P. Dayan. Matters temporal. Trends Cogn. Sci., 6(3):105–106, 2002.
[24] C. Kolodziejski, B. Porr, and F. Wörgötter. Mathematical properties of neuronal TD-rules and differential Hebbian learning: A comparison. Biol. Cybern., 98(3):259–272, 2008.
Gates Tom Minka Microsoft Research Ltd. Cambridge, UK John Winn Microsoft Research Ltd. Cambridge, UK Abstract Gates are a new notation for representing mixture models and context-sensitive independence in factor graphs. Factor graphs provide a natural representation for message-passing algorithms, such as expectation propagation. However, message passing in mixture models is not well captured by factor graphs unless the entire mixture is represented by one factor, because the message equations have a containment structure. Gates capture this containment structure graphically, allowing both the independences and the message-passing equations for a model to be readily visualized. Different variational approximations for mixture models can be understood as different ways of drawing the gates in a model. We present general equations for expectation propagation and variational message passing in the presence of gates. 1 Introduction Graphical models, such as Bayesian networks and factor graphs [1], are widely used to represent and visualise fixed dependency relationships between random variables. Graphical models are also commonly used as data structures for inference algorithms since they allow independencies between variables to be exploited, leading to significant efficiency gains. However, there is no widely used notation for representing context-specific dependencies, that is, dependencies which are present or absent conditioned on the state of another variable in the graph [2]. Such a notation would be necessary not only to represent and communicate context-specific dependencies, but also to be able to exploit context-specific independence to achieve efficient and accurate inference. A number of notations have been proposed for representing context-specific dependencies, including: case factor diagrams [3], contingent Bayesian networks [4] and labeled graphs [5]. 
None of these has been widely adopted, raising the question: what properties would a notation need to achieve widespread use? We believe it would need to be:

• simple to understand and use,
• flexible enough to represent context-specific independencies in real world problems,
• usable as a data structure to allow existing inference algorithms to exploit context-specific independencies for efficiency and accuracy gains,
• usable in conjunction with existing representations, such as factor graphs.

This paper introduces the gate, a graphical notation for representing context-specific dependencies that we believe achieves these desiderata. Section 2 describes what a gate is and shows how it can be used to represent context-specific independencies in a number of example models. Section 3 motivates the use of gates for inference and section 4 expands on this by showing how gates can be used within three standard inference algorithms: Expectation Propagation (EP), Variational Message Passing (VMP) and Gibbs sampling. Section 5 shows how the placement of gates can trade off cost versus accuracy of inference. Section 6 discusses the use of gates to implement inference algorithms.

Figure 1: Gate examples. (a) The dashed rectangle indicates a gate containing a Gaussian factor, with selector variable c. (b) Two gates with different key values used to construct a mixture of two Gaussians. (c) When multiple gates share a selector variable, they can be drawn touching with the selector variable connected to only one of the gates. (d) A mixture of N Gaussians constructed using both a gate and a plate. For clarity, factors corresponding to variable priors have been omitted.

2 The Gate

A gate encloses part of a factor graph and switches it on or off depending on the state of a latent selector variable.
The gate is on when the selector variable has a particular value, called the key, and off for all other values. A gate allows context-specific independencies to be made explicit in the graphical model: the dependencies represented by any factors inside the gate are present only in the context of the selector variable having the key value. Mathematically, a gate represents raising the contained factors to the power zero if the gate is off, or one if it is on:

$$\left(\prod_i f_i(\mathbf{x})\right)^{\delta(c=\mathrm{key})}$$

where c is the selector variable. In diagrams, a gate is denoted by a dashed box labelled with the value of key, with the selector variable connected to the box boundary. The label may be omitted if c is boolean and key is true. Whilst the examples in this paper refer to factor graphs, gate notation can also be used in both directed Bayesian networks and undirected graphs. A simple example of a gate is shown in figure 1a. This example represents the term $\mathcal{N}(x;\, m, p^{-1})^{\delta(c=\mathrm{true})}$, so that when c is true the gate is on and x has a Gaussian distribution with mean m and precision p. Otherwise, the gate is off and x is uniformly distributed (since it is connected to nothing). By using several gates with different key values, multiple components of a mixture can be represented. Figure 1b shows how a mixture of two Gaussians can be represented using two gates with different key values, true and false. If c is true, x will have distribution $\mathcal{N}(m_1, p_1^{-1})$, otherwise x will have distribution $\mathcal{N}(m_2, p_2^{-1})$. When multiple gates have the same selector variable but different key values, they can be drawn as in figure 1c, with the gate rectangles touching and the selector variable connected to only one of the gates. Notice that in this example, an integer selector variable is used and the key values are the integers 1, 2, 3. For large homogeneous mixtures, gates can be used in conjunction with plates [6].
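The gate semantics can be made concrete with a short sketch (the function names and the choice of a two-component Gaussian mixture are our own illustration, not code from the paper): a gated factor contributes its value when its gate is on, and contributes 1, i.e. no constraint, when it is off.

```python
import math

def gaussian_pdf(x, mean, precision):
    """Density of N(x; mean, precision^-1)."""
    return math.sqrt(precision / (2 * math.pi)) * math.exp(
        -0.5 * precision * (x - mean) ** 2)

def gated_factor(factor_value, gate_on):
    """A gate raises its contained factor to the power delta(c = key):
    the factor contributes its value when on, and 1 (no constraint) when off."""
    return factor_value if gate_on else 1.0

def mixture_joint(x, c, m1, m2, p1, p2, prior_c=0.5):
    """Joint p(x, c) for the two-gate mixture of figure 1b:
    p(c) * N(x; m1, p1^-1)^delta(c=true) * N(x; m2, p2^-1)^delta(c=false)."""
    pc = prior_c if c else 1.0 - prior_c
    return (pc
            * gated_factor(gaussian_pdf(x, m1, p1), c)
            * gated_factor(gaussian_pdf(x, m2, p2), not c))
```

Summing the joint over both values of c recovers the usual two-component mixture density, which is a quick sanity check on the gate semantics.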
For example, figure 1d shows how a mixture of N Gaussians can be represented by placing the gate, Gaussian factor and mean/precision variables inside a plate, so that they are replicated N times. Gates may be nested inside each other, implying a conjunction of their conditions. To avoid ambiguities, gates cannot partially overlap, nor can a gate contain its own selector variable.

Figure 2: Examples of models which use gates. (a) A line process where neighboring pixel intensities are independent if an edge exists between them. (b) Testing for dependence between a genetic variant gn and an observed quantitative trait xn. The selector variable c encodes whether the linear dependency represented by the structure inside the gate is present or absent.

Gates can also contain variables, as well as factors. Such variables have the behaviour that, when the gate is off, they revert to having a default value of false or zero, depending on the variable type. Mathematically, a variable inside a gate represents a Dirac delta when the gate is off: $\delta(x)^{1-\delta(c=\mathrm{key})}$, where δ(x) is one only when x has its default value. Figure 2b shows an example where variables are contained in gates – this example is described in the following section.

2.1 Examples of models with gates

Figure 2a shows a line process from [7]. The use of gates makes clear the assumption that two neighboring image pixels xi and xj have a dependency between their intensity values, unless there is an edge eij between them. An opaque three-way factor would hide this context-specific independence. Gates can also be used to test for independence. In this case the selector variable is connected only to the gate, as shown in the example of figure 2b.
This is a model used in functional genomics [8] where the aim is to detect associations between a genetic variant gn and some quantitative trait xn (such as height, weight, intelligence etc.) given data from a set of N individuals. The binary selector variable c switches on or off a linear model of the genetic variant's contribution yn to the trait xn, across all individuals. When the gate is off, yn reverts to the default value of 0 and so the trait is explained only by a Gaussian-distributed background model zn. Inferring the posterior distribution of c allows associations between the genetic variation and the trait to be detected.

3 How gates arise from message-passing on mixture models

Factor graph notation arises naturally when describing message passing algorithms, such as the sum-product algorithm. Similarly, the gate notation arises naturally when considering the behavior of message passing algorithms on mixture models. As a motivating example, consider the mixture model of figure 1b when the precisions p1 and p2 are constant. Using 1 and 2 as keys instead of true and false, the joint distribution is:

$$p(x, c, m_1, m_2) = p(c)\,p(m_1)\,p(m_2)\,f(x|m_1)^{\delta(c-1)}\,f(x|m_2)^{\delta(c-2)}$$

where f is the Gaussian distribution. If we apply a mean-field approximation to this model, we obtain the following fixed-point system:

$$q(c=k) \propto p(c=k)\exp\left(\sum_x q(x) \sum_{m_k} q(m_k)\log f(x|m_k)\right) \quad (1)$$

$$q(m_k) \propto p(m_k)\exp\left(\sum_x q(x)\log f(x|m_k)\right)^{q(c=k)} \quad (2)$$

$$q(x) \propto \prod_k \exp\left(\sum_{m_k} q(m_k)\log f(x|m_k)\right)^{q(c=k)} \quad (3)$$

These updates can be interpreted as message-passing combined with "blurring" (raising to a power between 0 and 1). For example, the update for q(mk) can be interpreted as (message from prior) × (blurred message from f). The update for q(x) can be interpreted as (blurred message from m1) × (blurred message from m2). Blurring occurs whenever a message is sent from a factor having a random exponent to a factor without that exponent. Thus the exponent acts like a container, affecting all messages that pass out of it.
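As a rough numerical check of updates (1)-(3), the sketch below iterates the coordinate-ascent system on a discretised grid for the latent x. We hold the component means fixed (a point-mass q(m_k)), so that only updates (1) and (3) remain; the grid, names, and this simplification are ours, not the paper's.

```python
import math

def log_gauss(x, mean, prec):
    # log N(x; mean, prec^-1)
    return 0.5 * math.log(prec / (2 * math.pi)) - 0.5 * prec * (x - mean) ** 2

def normalize(ws):
    s = sum(ws)
    return [w / s for w in ws]

def mean_field_gated_mixture(xs, means, prec, prior=(0.5, 0.5), iters=50):
    """Mean-field fixed point for the gated two-Gaussian mixture, with the
    means m_k held fixed for brevity; xs is a discretisation grid for x."""
    K = len(means)
    qc = list(prior)
    qx = normalize([1.0] * len(xs))  # uniform initial q(x) on the grid
    for _ in range(iters):
        # update (3): q(x) ∝ Π_k f(x|m_k)^{q(c=k)} -- blurred gate messages
        log_qx = [sum(qc[k] * log_gauss(x, means[k], prec) for k in range(K))
                  for x in xs]
        mx = max(log_qx)
        qx = normalize([math.exp(l - mx) for l in log_qx])
        # update (1): q(c=k) ∝ p(c=k) exp(Σ_x q(x) log f(x|m_k))
        log_qc = [math.log(prior[k]) + sum(q * log_gauss(x, means[k], prec)
                                           for q, x in zip(qx, xs))
                  for k in range(K)]
        mc = max(log_qc)
        qc = normalize([math.exp(l - mc) for l in log_qc])
    return qc, qx
```

With well-separated components, the blurred updates quickly drive q(c) toward the component that actually explains the region of x-space covered by q(x).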
Hence, we use a graphical notation where a gate is a container, holding all the factors switched by the gate. Graphically, the blurring operation then happens whenever a message leaves a gate. Messages passed into a gate and within a gate are unchanged. This graphical property holds true for other algorithms as well. For example, EP on this model will blur the message from f to mk and from f to x, where "blurring" means a linear combination with the 1 function followed by KL-projection.

3.1 Why gates are not equivalent to 'pick' factors

It is possible to rewrite this model so that the f factors do not have exponents, and therefore would not be in gates. However, this will necessarily change the approximation. This is because the blurring effect caused by exponents operates in one direction only, while the blurring effect caused by intermediate factors is always bidirectional. For example, suppose we try to write the model using a factor

$$\mathrm{pick}(x|c, h_1, h_2) = \delta(x - h_1)^{\delta(c-1)}\,\delta(x - h_2)^{\delta(c-2)}.$$

We can introduce latent variables (h1, h2) so that the model becomes p(x, c, m1, m2, h1, h2) = p(c)p(m1)p(m2)f(h1|m1)f(h2|m2)pick(x|c, h1, h2). The pick factor will correctly blur the downward messages from (m1, m2) to x. However, the pick factor will also blur the message upward from x before it reaches the factor f, which is incorrect. Another approach is to pick from (m1, m2) before reaching the factor f, so that the model becomes p(x, c, m1, m2, m) = p(c)p(m1)p(m2)f(x|m)pick(m|c, m1, m2). In this case, the message from x to f is not blurred, and the upward messages to (m1, m2) are blurred, which is correct. However, the downward messages from (m1, m2) to f are blurred before reaching f, which is incorrect.

3.2 Variables inside gates

Now consider an example where it is natural to consider a variable to be inside a gate. The model is:

$$p(x, c, m_1, m_2, y) = p(c)\,p(m_1)\,p(m_2) \prod_k \left(f_1(x|y)\,f_2(y|m_k)\right)^{\delta(c-k)}.$$
If we use a structured variational approximation where y is conditioned on c, then the fixed-point equations are [9]:

$$q(c=k) \propto p(c=k)\exp\left(\sum_x q(x)\sum_y q(y|c=k)\log f_1(x|y)\right)\exp\left(\sum_y q(y|c=k)\sum_{m_k} q(m_k)\log f_2(y|m_k)\right)\exp\left(-\sum_y q(y|c=k)\log q(y|c=k)\right) \quad (4)$$

$$q(y|c=k) \propto \exp\left(\sum_x q(x)\log f_1(x|y)\right)\exp\left(\sum_{m_k} q(m_k)\log f_2(y|m_k)\right) \quad (5)$$

$$q(m_k) \propto p(m_k)\exp\left(\sum_y q(y|c=k)\log f_2(y|m_k)\right)^{q(c=k)} \quad (6)$$

$$q(x) \propto \prod_k \exp\left(\sum_y q(y|c=k)\log f_1(x|y)\right)^{q(c=k)} \quad (7)$$

Notice that only the messages to x and mk are blurred; the messages to and from y are not blurred. Thus we can think of y as sitting inside the gate. The message from the gate to c can be interpreted as the evidence for the submodel containing f1, f2, and y.

4 Inference with gates

In the previous section, we explained why the gate notation arises when performing message passing in some example mixture models. In this section, we describe how gate notation can be generally incorporated into Variational Message Passing [10], Expectation Propagation [11] and Gibbs Sampling [7] to allow each of these algorithms to support context-specific independence. For reference, Table 1 shows the messages needed to apply standard EP or VMP using a fully factorized approximation $q(\mathbf{x}) = \prod_i q(x_i)$. Notice that VMP uses different messages to and from deterministic factors, that is, factors which have the form $f_a(x_i, \mathbf{x}_{a\setminus i}) = \delta(x_i - h(\mathbf{x}_{a\setminus i}))$ where xi is the derived child variable. Different VMP messages are also used to and from such deterministic derived variables. For both algorithms the marginal distributions are obtained as $q(x_i) = \prod_a m_{a\to i}(x_i)$, except for derived child variables in VMP where $q(x_i) = m_{\mathrm{par}\to i}(x_i)$. The (approximate) model evidence is obtained by a product of contributions, one from each variable and each factor. Table 1 shows these contributions for each algorithm, with the exception that deterministic factors and their derived variables contribute 1 under VMP.
When performing inference on models with gates, it is useful to employ a normalised form of gate model. In this form, variables inside a gate have no links to factors outside the gate, and a variable outside a gate links to at most one factor inside the gate. Both of these requirements can be achieved by splitting a variable into a copy inside and a copy outside the gate, connected by an equality factor inside the gate. A factor inside a gate should not connect to the selector of the gate; it should be given the key value instead. In addition, gates should be balanced by ensuring that if a variable links to a factor in a gate with selector variable c, the variable also links to factors in gates keyed on all other values of the selector variable c.

Table 1: Messages and evidence computations for EP and VMP. The top part shows messages between a variable xi and a factor fa:

EP (any factor):
  variable to factor: $m_{i\to a}(x_i) = \prod_{b\neq a} m_{b\to i}(x_i)$
  factor to variable: $m_{a\to i}(x_i) = \mathrm{proj}\left[\sum_{\mathbf{x}_a\setminus x_i}\left(\prod_{j\in a} m_{j\to a}(x_j)\right) f_a(\mathbf{x}_a)\right] \big/ \, m_{i\to a}(x_i)$

VMP (stochastic factor):
  variable to factor: $m_{i\to a}(x_i) = \prod_{a\ni i} m_{a\to i}(x_i)$
  factor to variable: $m_{a\to i}(x_i) = \exp\left(\sum_{\mathbf{x}_a\setminus x_i}\left(\prod_{j\neq i} m_{j\to a}(x_j)\right)\log f_a(\mathbf{x}_a)\right)$

VMP (deterministic factor, message to parent):
  variable to factor: $m_{i\to a}(x_i) = \prod_{b\neq a} m_{b\to i}(x_i)$
  factor to variable: $m_{a\to i}(x_i) = \exp\left(\sum_{\mathbf{x}_a\setminus(i,\mathrm{ch})}\left(\prod_{k\neq(i,\mathrm{ch})} m_{k\to a}(x_k)\right)\log \hat f_a(\mathbf{x}_a)\right)$ where $\hat f_a(\mathbf{x}_a) = \sum_{x_{\mathrm{ch}}} m_{\mathrm{ch}\to a}(x_{\mathrm{ch}})\, f_a(\mathbf{x}_a)$

VMP (deterministic factor, message to child):
  variable to factor: $m_{i\to a}(x_i) = m_{\mathrm{par}\to i}(x_i)$
  factor to variable: $m_{a\to i}(x_i) = \mathrm{proj}\left[\sum_{\mathbf{x}_a\setminus x_i}\left(\prod_{j\neq i} m_{j\to a}(x_j)\right) f_a(\mathbf{x}_a)\right]$

The bottom part shows the evidence contributions for variables and factors:

EP: $s_i = \sum_{x_i}\prod_a m_{a\to i}(x_i)$, $\quad s_a = \dfrac{\sum_{\mathbf{x}_a}\left(\prod_{j\in a} m_{j\to a}(x_j)\right) f_a(\mathbf{x}_a)}{\sum_{\mathbf{x}_a}\prod_{j\in a} m_{j\to a}(x_j)\, m_{a\to j}(x_j)}$

VMP: $s_i = \exp\left(-\sum_{x_i} q(x_i)\log q(x_i)\right)$, $\quad s_a = \exp\left(\sum_{\mathbf{x}_a}\left(\prod_{j\in a} m_{j\to a}(x_j)\right)\log f_a(\mathbf{x}_a)\right)$

The notation $j\in a$ refers to all neighbors of the factor, $j\neq i$ is all neighbors except i, par is the parent factor of a derived variable, and ch is the child variable of a deterministic factor. The proj[p] operator returns an exponential-family distribution whose sufficient statistics match p.
This can be achieved by connecting the variable to uniform factors in gates for any missing values of c. After balancing, each gate is part of a gate block – a set of gates activated by different values of the same condition variable. See [12] for details.

4.1 Variational Message Passing with gates

VMP can be augmented to run on a gate model in normalised form, by changing only the messages out of the gate and by introducing messages from the gate to the selector variable. Messages sent between nodes inside the gate and messages into the gate are unchanged from standard VMP. The variational distributions for variables inside gates are implicitly conditioned on the gate selector, as at the end of section 3. In the following, an individual gate is denoted g, its selector variable c and its key kg. See [12] for the derivations. The messages out of a gate are modified as follows:

• The message from a factor fa inside a gate g with selector c to a variable outside g is the usual VMP message, raised to the power $m_{c\to g}(c=k_g)$, except in the following case.

• Where a variable xi is the child of a number of deterministic factors inside a gate block G with selector variable c, the variable is treated as derived and the message is a moment-matched average of the individual VMP messages. Then the message to xi is

$$m_{G\to i}(x_i) = \mathrm{proj}\left[\sum_{g\in G} m_{c\to g}(c=k_g)\, m_{g\to i}(x_i)\right] \quad (8)$$

where $m_{g\to i}(x_i)$ is the usual VMP message from the unique parent factor in g and proj is a moment-matching projection onto the exponential family.

The message from a gate g to its selector variable c is a product of evidence messages from the contained nodes:

$$m_{g\to c}(c=k_g) = \prod_{a\in g} s_a \prod_{i\in g} s_i, \qquad m_{g\to c}(c\neq k_g) = 1 \quad (9)$$

where sa and si are the VMP evidence messages from a factor and variable, respectively (Table 1). The set of contained factors includes any contained gates, which are treated as single factors by the containing gate.
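For Gaussian messages, the moment-matching projection in (8) reduces to matching the mean and variance of the weighted mixture of incoming messages. A minimal sketch (the function name is ours; weights play the role of the selector messages $m_{c\to g}(c=k_g)$):

```python
def proj_gaussian_mixture(weights, means, variances):
    """Moment-matching proj[.] of a weighted Gaussian mixture onto a single
    Gaussian: returns the mean and variance of the mixture, cf. equation (8)."""
    z = sum(weights)
    ws = [w / z for w in weights]
    mean = sum(w * m for w, m in zip(ws, means))
    # E[x^2] of the mixture minus the squared mean gives the matched variance
    second = sum(w * (v + m * m) for w, m, v in zip(ws, means, variances))
    return mean, second - mean * mean
```

Note that the matched variance includes the spread between component means, not just the average component variance, which is exactly the extra uncertainty the "blurred" message carries.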
Deterministic variables and factors send evidence messages of 1, except where a deterministic factor fa parents a variable xi outside g. Instead of sending sa = 1, the factor sends:

$$s_a = \exp\left(\sum_{x_i} m_{a\to i}(x_i)\log m_{i\to a}(x_i)\right) \quad (10)$$

The child variable xi outside the gate also has a different evidence message:

$$s_i = \exp\left(-\sum_{x_i} m_{G\to i}(x_i)\log m_{i\to a}(x_i)\right) \quad (11)$$

where $m_{G\to i}$ is the message from the parents (8) and $m_{i\to a}$ is the message from xi to any parent. To allow for nested gates, we must also define an evidence message for a gate:

$$s_g = \left(\prod_{a\in g} s_a \prod_{i\in g} s_i\right)^{q(c=k_g)} \quad (12)$$

4.2 Expectation Propagation with gates

As with VMP, EP can support gate models in normalised form by making small modifications to the message-passing rules. Once again, messages between nodes inside a gate are unchanged. Recall that, following gate balancing, all gates are part of gate blocks. In the following, an individual gate is denoted g, its selector variable c and its key kg. See [12] for the derivations. The messages into a gate are as follows:

• The message from a selector variable to each gate in a gate block G is the same. It is the product of all messages into the variable excluding messages from gates in G.

• The message from a variable to each neighboring factor inside a gate block G is the same. It is the product of all messages into the variable excluding messages from any factor in G.

Let nbrs(g) be the set of variables outside of g connected to some factor in g. Each gate computes an intermediate evidence-like quantity sg defined as:

$$s_g = \prod_{a\in g} s_a \prod_{i\in g} s_i \prod_{i\in \mathrm{nbrs}(g)} s_{ig}, \qquad \text{where } s_{ig} = \sum_{x_i} m_{i\to g}(x_i)\, m_{g\to i}(x_i) \quad (13)$$

where $m_{g\to i}$ is the usual EP message to xi from its (unique) neighboring factor in g. The third term is used to cancel the denominators of sa (see definition in Table 1).
Given this quantity, the messages out of a gate may now be specified:

• The combined message from all factors in a gate block G with selector variable c to a variable xi is the weighted average of the messages sent by each factor:

$$m_{G\to i}(x_i) = \frac{\mathrm{proj}\left[\sum_{g\in G} m_{c\to g}(c=k_g)\, s_g\, s_{ig}^{-1}\, m_{g\to i}(x_i)\, m_{i\to g}(x_i)\right]}{m_{i\to g}(x_i)} \quad (14)$$

(Note $m_{i\to g}(x_i)$ is the same for each gate g.)

• The message from a gate block G to its selector variable c is:

$$m_{G\to c}(c=k_g) = \frac{s_g}{\sum_{g\in G} s_g} \quad (15)$$

Finally, the evidence contribution of a gate block with selector c is:

$$s_c = \frac{\sum_{g\in G} s_g}{\prod_{i\in \mathrm{nbrs}(g)} \sum_{x_i} m_{i\to g}(x_i)\, m_{G\to i}(x_i)} \quad (16)$$

4.3 Gibbs sampling with gates

Gibbs sampling can easily extend to gates which contain only factors. Gates containing variables require a facility for computing the evidence of a submodel, which Gibbs sampling does not provide. Note also that Gibbs sampling does not support deterministic factors. Thus the graph should only be normalised up to these constraints. The algorithm starts by setting the variables to initial values and sending these values to their neighboring factors. Then for each variable xi in turn:

1. Query each neighboring factor for a conditional distribution for xi. If the factor is in a gate that is currently off, replace with a uniform distribution. For a gate g with selector xi, the conditional distribution is proportional to s for the key value and 1 otherwise, where s is the product of all factors in g.

2. Multiply the distributions from neighboring factors together to get the variable's conditional distribution. Sample a new value for the variable from its conditional distribution.

5 Enlarging gates to increase approximation accuracy

Gates induce a structured approximation as in [9], so by moving nodes inside or outside of gates, one can trade off inference accuracy versus cost. Because one gate of a gate block is always on, any node (variable or factor) outside a gate block G can be equivalently placed inside each gate of G.
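The two Gibbs steps above can be sketched for the gated two-Gaussian mixture of figure 1b (our own toy instantiation with fixed component means, not code from the paper): a Gaussian factor in an off gate acts as a uniform distribution, and each gate contributes the product of its factors to the selector's conditional.

```python
import math
import random

def gaussian_pdf(x, mean, prec):
    """Density of N(x; mean, prec^-1)."""
    return math.sqrt(prec / (2 * math.pi)) * math.exp(
        -0.5 * prec * (x - mean) ** 2)

def gibbs_gated_mixture(means, prec, prior, n_samples, seed=0):
    """Gibbs sampling over the selector c and latent x of a gated mixture."""
    rng = random.Random(seed)
    c, x = 0, 0.0
    samples = []
    for _ in range(n_samples):
        # Steps 1-2 for c: gate k contributes its Gaussian factor at the
        # current x for its key value (and 1 otherwise); fold in the prior.
        weights = [prior[k] * gaussian_pdf(x, means[k], prec)
                   for k in range(len(means))]
        u, acc = rng.random() * sum(weights), 0.0
        for k, w in enumerate(weights):
            acc += w
            if u <= acc:
                c = k
                break
        # Steps 1-2 for x: the factor in the off gate is uniform, so the
        # conditional of x is just the active component N(means[c], prec^-1).
        x = rng.gauss(means[c], 1.0 / math.sqrt(prec))
        samples.append((c, x))
    return samples
```

As the text notes, this handles gates containing only factors; gates containing variables would need submodel evidence, which plain Gibbs sampling cannot supply.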
This increases accuracy since a separate set of messages will be maintained for each case, but it may increase the cost. For example, Archambeau and Verleysen [14] suggested a structured approximation for Student-t mixture models, instead of the factorised approximation of [13]. Their modification can be viewed as a gate enlargement (figure 3). By enlarging the gate block to include unm, the blurring between the multiplication factor and unm is removed, increasing accuracy. This comes at no additional cost since unm is only used by one gate and therefore only one message is needed per n and m.

Figure 3: Student-t mixture model using gates. (a) Model from [13]. (b) Structured approximation suggested by [14], which can be interpreted as enlarging the gate.

6 Discussion and conclusions

Gates have proven very useful to us when implementing a library for inference in graphical models. By using gates, the library allows mixtures of arbitrary sub-models, such as mixtures of factor analysers. Gates are also used for computing the evidence for a model, by placing the entire model in a gate with binary selector variable b. The log evidence is then the log-odds of b, that is, log P(b = true) − log P(b = false). Similarly, gates are used for model comparison by placing each model in a different gate of a gate block. The marginal over the selector gives the posterior distribution over models. Graphical models not only provide a visual way to represent a probabilistic model, but they can also be used as a data structure for performing inference on that model. We have shown that gates are similarly effective both as a graphical modelling notation and as a construct within an inference algorithm.

References

[1] B. Frey, F. Kschischang, H. Loeliger, and N. Wiberg. Factor graphs and algorithms. In Proc.
of the 35th Allerton Conference on Communication, Control and Computing, 1997.
[2] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proc. of the 12th Conference on Uncertainty in Artificial Intelligence, pages 115–123, 1996.
[3] D. McAllester, M. Collins, and F. Pereira. Case-factor diagrams for structured probabilistic modeling. In Uncertainty in Artificial Intelligence, 2004.
[4] B. Milch, B. Marthi, D. Sontag, S. Russell, D. L. Ong, and A. Kolobov. Approximate inference for infinite contingent Bayesian networks. In Proc. of the 6th Workshop on Artificial Intelligence and Statistics, 2005.
[5] E. Mjolsness. Labeled graph notations for graphical models: Extended report. Technical Report TR# 0403, UCI ICS, March 2004.
[6] W. L. Buntine. Operations for learning with graphical models. JAIR, 2:159–225, 1994.
[7] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Anal. Machine Intell., 6:721–741, 1984.
[8] E. S. Lander and D. Botstein. Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps. Genetics, 121(1):185–199, 1989.
[9] W. A. J. J. Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In UAI, pages 626–633, 2000.
[10] J. Winn and C. M. Bishop. Variational Message Passing. JMLR, 6:661–694, 2005.
[11] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362–369, 2001.
[12] T. Minka and J. Winn. Gates: A graphical notation for mixture models. Technical report, Microsoft Research Ltd, 2008.
[13] M. Svensén and C. M. Bishop. Robust Bayesian mixture modelling. Neurocomputing, 64:235–252, 2005.
[14] C. Archambeau and M. Verleysen. Robust Bayesian clustering. Neural Networks, 20:129–138, 2007.
From Online to Batch Learning with Cutoff-Averaging

Anonymous Author(s)

Abstract

We present cutoff averaging, a technique for converting any conservative online learning algorithm into a batch learning algorithm. Most online-to-batch conversion techniques work well with certain types of online learning algorithms and not with others, whereas cutoff averaging explicitly tries to adapt to the characteristics of the online algorithm being converted. An attractive property of our technique is that it preserves the efficiency of the original online algorithm, making it appropriate for large-scale learning problems. We provide a statistical analysis of our technique and back our theoretical claims with experimental results.

1 Introduction

Batch learning (also called statistical learning) and online learning are two different supervised machine-learning frameworks. In both frameworks, a learning problem is primarily defined by an instance space X and a label set Y, and the goal is to assign labels from Y to instances in X. In batch learning, we assume that there exists a probability distribution over the product space X × Y, and that we have access to a training set drawn i.i.d. from this distribution. A batch learning algorithm uses the training set to generate an output hypothesis, which is a function that maps instances in X to labels in Y. We expect a batch learning algorithm to generalize, in the sense that its output hypothesis should accurately predict the labels of previously unseen examples, which are sampled from the distribution. On the other hand, in the online learning framework, we typically make no statistical assumptions regarding the origin of the data. An online learning algorithm receives a sequence of examples and processes these examples one-by-one. On each online-learning round, the algorithm receives an instance and predicts its label using an internal hypothesis, which it keeps in memory.
Then, the algorithm receives the correct label corresponding to the instance, and uses the new instance-label pair to update and improve its internal hypothesis. There is no notion of statistical generalization, as the algorithm is only expected to accurately predict the labels of examples it receives as input. The sequence of internal hypotheses constructed by the online algorithm from round to round plays a central role in this paper, and we refer to this sequence as the online hypothesis sequence. Online learning algorithms tend to be computationally efficient and easy to implement. However, many real-world problems fit more naturally in the batch learning framework. As a result, we are sometimes tempted to use online learning algorithms as if they were batch learning algorithms. A common way to do this is to present training examples one-by-one to the online algorithm, and use the last hypothesis constructed by the algorithm as the output hypothesis. We call this technique the last-hypothesis online-to-batch conversion technique. The appeal of this technique is that it maintains the computational efficiency of the original online algorithm. However, this heuristic technique generally comes with no theoretical guarantees, and the online algorithm's inherent disregard for out-of-sample performance makes it a risky practice.

In addition to the last-hypothesis heuristic, various principled techniques for converting online algorithms into batch algorithms have been proposed. Each of these techniques essentially wraps the online learning algorithm with an additional layer of instructions that endow it with the ability to generalize. One approach is to use the online algorithm to create the online hypothesis sequence, and then to choose a single good hypothesis from this sequence.
For instance, the longest survivor technique [8] (originally called the pocket algorithm) chooses the hypothesis that survives the longest number of consecutive online rounds before it is replaced. The validation technique [12] uses a validation set to evaluate each online hypothesis and chooses the hypothesis with the best empirical performance. Improved versions of the validation technique are given in [2, 3], where the wasteful need for a separate validation set is resolved. All of these techniques follow the single hypothesis approach. We note in passing that a disadvantage of the various validation techniques [12, 2, 3] is that their running time scales quadratically with the number of examples. We typically turn to online algorithms for their efficiency, and often a quadratic running time can be problematic. Another common online-to-batch conversion approach, which we call the ensemble approach, uses the online algorithm to construct the online hypothesis sequence, and combines the hypotheses in the sequence by taking a majority [7] or by averaging [2, Sec. 2.A]. When using linear hypotheses, averaging can be done on-the-fly, while the online algorithm is constructing the online hypothesis sequence. This preserves the computational efficiency of the online algorithm. Taking the majority or the average over a rich set of hypotheses promotes robustness and stability. Moreover, since we do not truly know the quality of each online hypothesis, building an ensemble allows us to hedge our bets, rather than committing to a single online hypothesis. Sometimes the ensemble approach outperforms the single hypothesis approach, while other times we see the opposite behavior (see Sec. 4 and [9]). 
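The on-the-fly averaging of linear hypotheses mentioned above can be sketched as follows (a minimal averaged-Perceptron sketch of our own; the mistake condition on a zero margin is one common convention): a running sum of the weight vectors replaces storing the hypothesis sequence.

```python
def averaged_perceptron_weights(examples, dim):
    """Ensemble (averaging) conversion for the Perceptron with linear
    hypotheses: keep a running sum of the online weight vectors while the
    algorithm runs, so no hypothesis sequence is ever stored."""
    w = [0.0] * dim          # current internal hypothesis
    w_sum = [0.0] * dim      # running sum of h_1, ..., h_m
    for x, y in examples:
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin <= 0:  # prediction mistake: Perceptron update
            w = [wi + y * xi for wi, xi in zip(w, x)]
        w_sum = [si + wi for si, wi in zip(w_sum, w)]
    return [si / len(examples) for si in w_sum]
```

The memory used is two weight vectors regardless of the sequence length, which is the efficiency property the text attributes to on-the-fly averaging.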
Ideally, we would like a conversion technique that enjoys the best of both worlds: when a single good online hypothesis can be clearly identified, it should be chosen as the output hypothesis, but when a good hypothesis cannot be identified, we should play it safe and construct an ensemble. A first step in this direction was taken in [10, 5], where the conversion technique selectively chooses which subset of online hypotheses to include in the ensemble. For example, the suffix averaging conversion [5] sets the output hypothesis to be the average over a suffix of the online hypothesis sequence, where the suffix length is determined by minimizing a theoretical upper-bound on the generalization ability of the resulting hypothesis. One extreme of this approach is to include the entire online hypothesis sequence in the ensemble. The other extreme reduces to the last-hypothesis heuristic. By choosing the suffix that gives the best theoretical guarantee, suffix averaging automatically balances the trade-off between these two extremes. Regrettably, this technique suffers from a computational efficiency problem. Specifically, the suffix averaging technique only chooses the suffix length after the entire hypothesis sequence has been constructed. Therefore, it must store the entire sequence in memory before it constructs the output hypothesis, and its memory footprint grows linearly with training set size. This is in sharp contrast to the last-hypothesis heuristic, which uses no memory aside from the memory used by the online algorithm itself. When the training set is massive, storing the entire online hypothesis sequence in memory is impossible. In this paper, we present and analyze a new conversion technique called cutoff averaging. Like suffix averaging, it attempts to enjoy the best of the single hypothesis approach and of the ensemble approach.
One extreme of our technique reduces to the simple averaging conversion technique, while the other extreme reduces to the longest-survivor conversion technique. Like suffix averaging, we search for the sweet-spot between these two extremes by explicitly minimizing a tight theoretical generalization bound. The advantage of our technique is that much of it can be performed on-the-fly, as the online algorithm processes the data. The memory required by cutoff averaging scales with the square root of the number of training examples in the worst case, and is far less in the typical case. This paper is organized as follows. In Sec. 2 we formally present the background for our approach. In Sec. 3 we present the cutoff averaging technique and provide a statistical generalization analysis for it. Finally, we demonstrate the merits of our approach with a set of experiments in Sec. 4.

2 Preliminaries

Recall that X is an instance domain and that Y is a set of labels, and let H be a hypothesis class, where each h ∈ H is a mapping from X to Y. For example, we may be faced with a confidence-rated binary classification problem, where H is the class of linear separators. In this case, X is a subset of the Euclidean space $\mathbb{R}^n$, Y is the real line, and each hypothesis in H is a linear function parametrized by a weight vector $\mathbf{w} \in \mathbb{R}^n$ and defined as $h(\mathbf{x}) = \langle \mathbf{w}, \mathbf{x}\rangle$. We interpret sign(h(x)) as the actual binary label predicted by h, and |h(x)| as the degree of confidence in this prediction. The quality of the predictions made by h is measured using a loss function ℓ. We use ℓ(h; (x, y)) to denote the penalty incurred for predicting the label h(x) when the correct label is actually y. Returning to the example of linear separators, a common choice of loss function is the zero-one loss, which is simply the indicator function of prediction mistakes. Another popular loss function is the hinge loss, defined as

$$\ell(h; (\mathbf{x}, y)) = \begin{cases} 1 - y\langle \mathbf{w}, \mathbf{x}\rangle & \text{if } y\langle \mathbf{w}, \mathbf{x}\rangle \leq 1 \\ 0 & \text{otherwise.} \end{cases}$$
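The two losses above are straightforward to state in code (a small sketch of our own; treating a zero margin as a mistake is a convention, not something the paper specifies):

```python
def predict(w, x):
    """Confidence-rated prediction of the linear hypothesis h(x) = <w, x>."""
    return sum(wi * xi for wi, xi in zip(w, x))

def zero_one_loss(w, x, y):
    """Indicator of a prediction mistake; a zero margin counts as a mistake."""
    return 0.0 if y * predict(w, x) > 0 else 1.0

def hinge_loss(w, x, y):
    """Hinge loss: 1 - y<w, x> when y<w, x> <= 1, and 0 otherwise."""
    margin = y * predict(w, x)
    return 1.0 - margin if margin <= 1 else 0.0
```

Note that the hinge loss upper-bounds the zero-one loss, since any mistake has margin at most 0 and therefore hinge loss at least 1.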
As noted above, in batch learning we assume the existence of a probability distribution $D$ over the product space $X \times Y$. The input of a batch learning algorithm is a training set, sampled from $D^m$. The risk of a hypothesis $h$, denoted by $\ell(h; D)$, is defined as the expected loss incurred by $h$ over examples sampled from $D$. Formally,
$$\ell(h; D) = \mathbb{E}_{(X,Y) \sim D}\left[\ell(h; (X, Y))\right].$$
We can talk about the zero-one-risk or the hinge-loss-risk, depending on which loss function we choose to work with. The goal of a batch learning algorithm for the hypothesis class $H$ and for the loss function $\ell$ is to find a hypothesis $h^\star \in H$ whose risk is as close as possible to $\inf_{h \in H} \ell(h; D)$. In online learning, the labeled examples take the form of a sequence $S = ((x_i, y_i))_{i=1}^m$. We typically refrain from making any assumptions on the process that generates $S$; it could very well be a stochastic process, but it doesn't have to be. The online algorithm observes the examples in the sequence one by one, and incrementally constructs the sequence of online hypotheses $(h_i)_{i=0}^m$, where each $h_i \in H$. The first hypothesis, $h_0$, is a default hypothesis, which is defined in advance. Before round $t$ begins, the algorithm has already constructed the prefix $(h_i)_{i=0}^{t-1}$. At the beginning of round $t$, the algorithm observes $x_t$ and makes the prediction $h_{t-1}(x_t)$. Then, the correct label $y_t$ is revealed and the algorithm suffers a loss of $\ell(h_{t-1}; (x_t, y_t))$. Finally, the algorithm uses the new example $(x_t, y_t)$ to construct the next hypothesis $h_t$. The update rule used to construct $h_t$ is the main component of the online learning algorithm. In this paper, we make the simplifying assumption that the update rule is deterministic, and we note that our derivation can be extended to randomized update rules. Since $S$ is not necessarily generated by any distribution $D$, we cannot define the risk of an online hypothesis. Instead, the performance of an online algorithm is measured using the game-theoretic notion of regret.
The regret of an online algorithm is defined as
$$\frac{1}{m}\sum_{i=1}^{m} \ell(h_{i-1}; (x_i, y_i)) \;-\; \min_{\hat{h} \in H} \frac{1}{m}\sum_{i=1}^{m} \ell(\hat{h}; (x_i, y_i)). \quad (1)$$
In words, regret measures how much better the algorithm could have done by using the best fixed hypothesis in $H$ on all $m$ rounds. The goal of an online learning algorithm is to minimize regret. To make things more concrete, we focus on two online learning algorithms for binary classification. The first is the classic Perceptron algorithm [13] and the second is a finite-horizon margin-based variant of the Perceptron, which closely resembles algorithms given in [11, 4]. The term finite-horizon indicates that the algorithm knows the total length of the sequence of examples before observing any data. The term margin-based indicates that the algorithm is concerned with minimizing the hinge loss, unlike the classic Perceptron, which deals directly with the zero-one loss. Pseudocode for both algorithms is given in Fig. 1. We chose these two particular algorithms because they exhibit two extreme behaviors when converted into batch learning algorithms. Specifically, if we were to present the classic Perceptron with an example sequence $S$ drawn i.i.d. from a distribution $D$, we would typically see large fluctuations in the zero-one-risk of the various online hypotheses (see Sec. 4). Due to these fluctuations, the ensemble approach suits the classic Perceptron very well, and typically outperforms any single hypothesis approach.

PERCEPTRON
  input $S = ((x_i, y_i))_{i=1}^m$
  set $w_0 = (0, \ldots, 0)$
  for $i = 1, \ldots, m$
    receive $x_i$, predict $\mathrm{sign}\langle w_{i-1}, x_i \rangle$
    receive $y_i \in \{-1, +1\}$
    if $\mathrm{sign}(\langle w_{i-1}, x_i \rangle) \ne y_i$ then $w_i \leftarrow w_{i-1} + y_i x_i$

FINITE-HORIZON MARGIN-BASED PERCEPTRON
  input $S = ((x_i, y_i))_{i=1}^m$ s.t. $\|x_i\|_2 \le R$
  set $w_0 = (0, \ldots, 0)$
  for $i = 1, \ldots, m$
    receive $x_i$, predict $\mathrm{sign}\langle w_{i-1}, x_i \rangle$
    receive $y_i \in \{-1, +1\}$
    if $\ell(w_{i-1}; (x_i, y_i)) > 0$ then $w'_{i-1} \leftarrow w_{i-1} + \frac{y_i x_i}{\sqrt{m}\,R}$ and $w_i \leftarrow \frac{w'_{i-1}}{\|w'_{i-1}\|_2}$

Figure 1: Two versions of the Perceptron algorithm.
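A direct Python transcription of the two listings in Fig. 1 might look as follows; the function names and the convention of returning the full online hypothesis sequence are our own, not part of the paper.

```python
import numpy as np

def perceptron(S):
    """Classic Perceptron (Fig. 1, left): update only on sign mistakes.
    Returns the full online hypothesis sequence, starting from h_0 = 0."""
    w = np.zeros(len(S[0][0]))
    hypotheses = [w.copy()]
    for x, y in S:
        if np.sign(np.dot(w, x)) != y:      # mistake: w_i <- w_{i-1} + y_i x_i
            w = w + y * x
        hypotheses.append(w.copy())
    return hypotheses

def margin_perceptron(S, R):
    """Finite-horizon margin-based Perceptron (Fig. 1, right):
    update whenever the hinge loss is positive, then project onto the unit ball."""
    m = len(S)
    w = np.zeros(len(S[0][0]))
    hypotheses = [w.copy()]
    for x, y in S:
        if 1.0 - y * np.dot(w, x) > 0:      # positive hinge loss
            w_prime = w + (y * x) / (np.sqrt(m) * R)
            w = w_prime / np.linalg.norm(w_prime)
        hypotheses.append(w.copy())
    return hypotheses
```

Since `np.sign(0) = 0` never equals a label in `{-1, +1}`, the zero hypothesis always triggers an update on the first round, matching the conservative-update behavior the paper assumes.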
On the other hand, if we were to repeat this experiment with the margin-based Perceptron, using hinge-loss-risk, we would typically see a monotonic decrease in risk from round to round. A possible explanation for this is the similarity between the margin-based Perceptron and some incremental SVM solvers [14]. The last hypothesis constructed by the margin-based Perceptron is typically better than any average. This difference between the classic Perceptron and its margin-based variant was previously observed in [9]. Ideally, we would like a conversion technique that performs well in both cases. From a theoretical standpoint, the purpose of an online-to-batch conversion technique is to turn an online learning algorithm with a regret bound into a batch learning algorithm with a risk bound. We state a regret bound for the margin-based Perceptron, so that we can demonstrate this idea in the next section.

Theorem 1. Let $S = ((x_i, y_i))_{i=1}^m$ be a sequence of examples such that $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, +1\}$, and let $\ell$ denote the hinge loss. Let $H$ be the set of linear separators defined by weight vectors in the unit $L_2$ ball. Let $(h_i)_{i=0}^m$ be the online hypothesis sequence generated by the margin-based Perceptron (see Fig. 1) when it processes $S$. Then, for any $\hat{h} \in H$,
$$\frac{1}{m}\sum_{i=1}^m \ell(h_{i-1}; (x_i, y_i)) \;-\; \frac{1}{m}\sum_{i=1}^m \ell(\hat{h}; (x_i, y_i)) \;\le\; \frac{R}{\sqrt{m}}.$$

The proof of Thm. 1 is not much different from other regret bounds for Perceptron-like algorithms; for completeness we give the proof in [1].

3 Cutoff Averaging

We now present the cutoff averaging conversion technique. This technique can be applied to any conservative online learning algorithm that uses a convex hypothesis class $H$. A conservative algorithm is one that modifies its online hypotheses only on rounds where a positive loss is suffered. On rounds where no loss is suffered, the algorithm keeps its current hypothesis, and we say that the hypothesis survived the round.
The survival time of each distinct online hypothesis is the number of consecutive rounds it survives before the algorithm suffers a loss and replaces it with a new hypothesis. Like the conversion techniques mentioned in Sec. 1, we start by applying the online learning algorithm to an i.i.d. training set, and obtaining the online hypothesis sequence $(h_i)_{i=0}^{m-1}$. Let $k$ be an arbitrary non-negative integer, which we call the cutoff parameter. Ultimately, our technique will set $k$ automatically, but for the time being, assume $k$ is a predefined constant. Let $\Theta \subseteq (h_i)_{i=0}^{m-1}$ be the set of distinct hypotheses whose survival time is greater than $k$. The cutoff averaging technique defines the output hypothesis $h^\star$ as a weighted average over the hypotheses in $\Theta$, where the weight of a hypothesis with survival time $s$ is proportional to $s - k$. Intuitively, each hypothesis must qualify for the ensemble by suffering no loss for $k$ consecutive rounds. The cutoff parameter $k$ sets the bar for acceptance into the ensemble. Once a hypothesis is included in the ensemble, its weight is determined by the number of additional rounds it perseveres after qualifying.

We present a statistical analysis of the cutoff averaging technique. We use capital-letter notation throughout our analysis to emphasize that our input is stochastic and that we are essentially analyzing random variables. First, we represent the sequence of examples as a sequence of random variables $((X_i, Y_i))_{i=1}^m$. Once this sequence is presented to the online algorithm, we obtain the online hypothesis sequence $(H_i)_{i=0}^m$, which is a sequence of random functions. Note that each random function $H_i$ is deterministically defined by the random variables $((X_j, Y_j))_{j=1}^i$. Therefore, the risk of $H_i$ is also a deterministic function of $((X_j, Y_j))_{j=1}^i$. Since $(X_{i+1}, Y_{i+1})$ is sampled from $D$ independently of $((X_j, Y_j))_{j=1}^i$, we observe that
$$\ell(H_i; D) = \mathbb{E}\left[\ell\big(H_i; (X_{i+1}, Y_{i+1})\big) \,\Big|\, ((X_j, Y_j))_{j=1}^i\right]. \quad (2)$$
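The survival times and the $s - k$ weighting scheme described above can be sketched as follows; we represent the per-round hypothesis sequence as a list of hashable values (e.g., tuples), so that equal consecutive entries mark rounds survived. The helper names are our own.

```python
def survival_times(hypotheses):
    """Map the index at which each distinct online hypothesis first appears
    to its survival time (the length of its run of consecutive rounds)."""
    times = {}
    start = 0
    for i in range(1, len(hypotheses)):
        if hypotheses[i] != hypotheses[i - 1]:   # a loss was suffered: new hypothesis
            times[start] = i - start
            start = i
    times[start] = len(hypotheses) - start       # run of the final hypothesis
    return times

def cutoff_weights(hypotheses, k):
    """Unnormalized cutoff-averaging weights: a hypothesis with survival time
    s > k gets weight s - k; all other hypotheses are excluded."""
    return {i: s - k for i, s in survival_times(hypotheses).items() if s > k}
```

With `k = 0` every hypothesis qualifies and the weights reduce to plain survival times, i.e., the standard averaging conversion; increasing `k` thins the ensemble toward the longest survivor.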
In words, the risk of the random function $H_i$ equals the conditional expectation of the online loss suffered on round $i + 1$, conditioned on the random examples $1$ through $i$. This simple observation relates statistical risk with online loss, and is the key to converting regret bounds into risk bounds. Define the sequence of binary random variables $(B_i)_{i=0}^{m-1}$ as follows:
$$B_i = \begin{cases} 1 & \text{if } i = 0, \text{ or if } i \ge k \text{ and } H_{i-k} = H_{i-k+1} = \cdots = H_i \\ 0 & \text{otherwise.} \end{cases} \quad (3)$$
Now define the output hypothesis
$$H^\star_k = \left(\sum_{i=0}^{m-1} B_i\right)^{-1} \sum_{i=0}^{m-1} B_i H_i. \quad (4)$$
Note that we automatically include the default hypothesis $H_0$ in the definition of $H^\star_k$. This technical detail makes our analysis more elegant, and is otherwise irrelevant. Also note that setting $k = 0$ results in $B_i = 1$ for all $i$, and would reduce our conversion technique to the standard averaging conversion technique. At the other extreme, as $k$ increases, our technique approaches the longest-survivor conversion technique. The following theorem bounds the risk of $H^\star_k$ using the online loss suffered on rounds where $B_i = 1$. The theorem holds only when the loss function $\ell$ is convex in its first argument and bounded in $[0, C]$. Note that this is indeed the case for the margin-based Perceptron and the hinge loss function. Since the margin-based Perceptron enforces $\|w_i\| \le 1$, and assuming that $\|x_i\| \le R$, it follows from the Cauchy-Schwarz inequality that $\ell \in [0, R + 1]$. If the loss function is not convex, the theorem does not hold, but note that we can still bound the average risk of the hypotheses in the ensemble.

Theorem 2. Let $k$ be a non-negative constant and let $\ell$ be a convex loss function such that $\ell(h; (x, y)) \in [0, C]$. An online algorithm is given $m \ge 4$ independent samples from $D$ and constructs the online hypothesis sequence $(H_i)_{i=0}^m$. Define $B_i$ and $H^\star_k$ as above, let $L_i = B_{i-1}\,\ell\big(H_{i-1}; (X_i, Y_i)\big)$ for all $i$, and let $\bar{L} = (\sum B_i)^{-1} \sum L_i$. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, it holds that
$$\ell(H^\star_k; D) \;<\; \bar{L} + \sqrt{\frac{2C \ln(\frac{m}{\delta})\,\bar{L}}{\sum B_i}} + \frac{7C \ln(\frac{m}{\delta})}{\sum B_i}.$$
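For linear hypotheses represented as weight vectors, Eqs. (3) and (4) can be computed directly; the sketch below is ours, with hypotheses given as numpy arrays. As the text notes, `k = 0` makes every $B_i = 1$ and reduces $H^\star_0$ to the plain average of the online hypotheses.

```python
import numpy as np

def b_indicators(hypotheses, k):
    """B_i from Eq. (3): B_0 = 1, and for i >= max(k, 1), B_i = 1 iff
    H_{i-k} = H_{i-k+1} = ... = H_i (vectors compared elementwise)."""
    m = len(hypotheses)
    B = [0] * m
    B[0] = 1
    for i in range(max(k, 1), m):
        if all(np.array_equal(hypotheses[j], hypotheses[i]) for j in range(i - k, i)):
            B[i] = 1
    return B

def cutoff_average(hypotheses, k):
    """H*_k from Eq. (4): the B_i-weighted average of the online hypotheses."""
    B = b_indicators(hypotheses, k)
    total = sum(B)
    return sum(b * np.asarray(h) for b, h in zip(B, hypotheses)) / total
```

This indicator formulation is equivalent to the $s - k$ weighting of the previous paragraph: a hypothesis with survival time $s > k$ contributes $s - k$ rounds on which $B_i = 1$.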
To prove the theorem, we require the following tail bound, which is a corollary of Freedman's tail bound for martingales [6], similar to [3, Proposition 2].

Lemma 1. Let $(L_i)_{i=1}^m$ be a sequence of real-valued random variables and let $(Z_i)_{i=1}^m$ be a sequence of arbitrary random variables such that $L_i = \mathbb{E}[L_i \mid (Z_j)_{j=1}^i]$ and $L_i \in [0, C]$ for all $i$. Define $U_i = \mathbb{E}[L_i \mid (Z_j)_{j=1}^{i-1}]$ for all $i$, and define $\bar{L}_t = \sum_{i=1}^t L_i$ and $\bar{U}_t = \sum_{i=1}^t U_i$ for all $t$. For any $m \ge 4$ and for any $\delta \in (0, 1)$, with probability at least $1 - \delta$, it holds that
$$\forall t \in \{1, \ldots, m\} \quad \bar{U}_t < \bar{L}_t + \sqrt{2C \ln(\tfrac{m}{\delta})\,\bar{L}_t} + 7C \ln(\tfrac{m}{\delta}).$$

Due to space constraints, the proof of Lemma 1 is given in [1]. It can also be reverse-engineered from [3, Proposition 2]. Equipped with Lemma 1, we now prove Thm. 2.

Proof of Thm. 2. Define $U_i = \mathbb{E}[L_i \mid ((X_j, Y_j))_{j=1}^{i-1}]$ for all $i \in \{1, \ldots, m\}$, and define $\bar{U} = \sum_{i=1}^m U_i$. Using Lemma 1 with $\bar{L}_m = \sum_{i=1}^m L_i$, we have that, with probability at least $1 - \delta$,
$$\bar{U} < \bar{L}_m + \sqrt{2C \ln(\tfrac{m}{\delta})\,\bar{L}_m} + 7C \ln(\tfrac{m}{\delta}).$$
Now notice that, by definition, $U_i = \mathbb{E}\big[B_{i-1}\,\ell(H_{i-1}; (X_i, Y_i)) \,\big|\, ((X_j, Y_j))_{j=1}^{i-1}\big]$. Since $B_{i-1}$ is deterministically defined by $((X_j, Y_j))_{j=1}^{i-1}$, it can be taken outside of the conditional expectation above. Using the observation made in Eq. (2), we have $U_i = B_{i-1}\,\ell(H_{i-1}; D)$. Overall, we have shown that
$$\sum_{i=1}^m B_{i-1}\,\ell(H_{i-1}; D) \;<\; \bar{L}_m + \sqrt{2C \ln(\tfrac{m}{\delta})\,\bar{L}_m} + 7C \ln(\tfrac{m}{\delta}).$$
Using Jensen's inequality, the left-hand side above is at least $\big(\sum_{i=1}^m B_{i-1}\big)\,\ell(H^\star_k; D)$; dividing both sides by $\sum_{i=1}^m B_{i-1} = \sum_{i=0}^{m-1} B_i$ yields the stated bound.

We can now complete the definition of the cutoff averaging technique. Note that by replacing $\delta$ with $\delta/m$ in Thm. 2 and by using the union bound, we can ensure that Thm. 2 holds uniformly for all $k \in \{0, \ldots, m - 1\}$ with probability at least $1 - \delta$. The cutoff averaging technique sets the output hypothesis $H^\star$ to be the hypothesis in $\{H^\star_0, \ldots, H^\star_{m-1}\}$ for which Thm. 2 gives the smallest bound. In other words, $k$ is chosen automatically so as to balance the trade-off between the benefits of averaging and those of good empirical performance.
If a small number of online hypotheses stand out with significantly long survival times, then our technique will favor a large $k$ and a sparse ensemble. On the other hand, if most of the online hypotheses have medium or short survival times, then our technique will favor small values of $k$ and a dense ensemble. Even if $\ell$ is not convex, minimizing the bound in Thm. 2 implicitly minimizes the average risk of the ensemble hypotheses. If the online algorithm being converted has a regret bound, then the data-dependent risk bound given by Thm. 2 can be turned into a data-independent risk bound. A detailed derivation of such a bound exceeds the scope of this paper, and we just sketch the proof in the case of the margin-based Perceptron. It trivially holds that the risk of $H^\star$ is upper-bounded by the bound given in Thm. 2 for $k = 0$. When Thm. 2 is applied with $k = 0$, $\bar{L}$ simply becomes the average loss suffered by the online algorithm over the entire training set and $\sum B_i = m$. We can now use Thm. 1 to bound $\bar{L}$ by the average loss of any $\hat{h} \in H$ on the sequence $((X_i, Y_i))_{i=1}^m$. Particularly, we can choose $\hat{h}$ to be the hypothesis with the smallest risk in $H$, namely, $\hat{h} = \arg\min_{h \in H} \ell(h; D)$. The final step is to bound the difference between $\frac{1}{m}\sum_i \ell(\hat{h}; (X_i, Y_i))$ and $\ell(\hat{h}; D)$, which can be done using any tail bound for sums of independent bounded random variables, such as Hoeffding's bound or Bernstein's bound. The result is that, with high probability, $\ell(H^\star; D) \le \min_{h \in H} \ell(h; D) + O(m^{-1/2})$. Similar derivations appear in [2, 3]. As mentioned in the introduction, our approach is similar to the suffix averaging conversion technique of [5], which also interpolates between an ensemble approach and a single hypothesis approach. However, the suffix conversion requires $\Omega(m)$ space, which is problematic when $m$ is large. In contrast, cutoff averaging requires only $O(\sqrt{m})$ space.
Our technique cannot choose the optimal value of $k$ before the entire dataset has been processed, but nevertheless, it does not need to store the entire hypothesis sequence. Instead, it can group the online hypotheses based on their survival times, and store only the average hypothesis and the total loss in each group. By the time the entire dataset is processed, most of the work has already been done, and calculating the optimal $k$ and the output hypothesis is straightforward. Using simple combinatorics, the maximal number of distinct survival times in a sequence of $m$ hypotheses is $O(\sqrt{m})$. Finally, note that Lemma 1 is a Kolmogorov-type bound, namely, it holds uniformly for every prefix of the sequence of random variables. Therefore, Thm. 2 actually holds simultaneously for every prefix of the training set. Since our conversion is mostly calculated on the fly, in parallel with the online rounds, we can easily construct intermediate output hypotheses, before the online algorithm has a chance to process the entire dataset. Thanks to the Kolmogorov-type bound, the risk bounds for all of these hypotheses hold simultaneously. We can monitor how the risk bound changes as the number of examples increases, and perhaps even use the bound to define an early stopping criterion for the training algorithm. Specifically, we could stop processing examples when the risk bound becomes lower than a predefined threshold.
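The grouping idea above can be sketched as follows; the class name is ours, and for brevity we track only a count and a vector sum per survival-time group (omitting the per-group loss totals that would be needed to evaluate the bound of Thm. 2 and pick $k$). Since the survival times are positive integers summing to $m$, the number of distinct values, and hence of stored vectors, is $O(\sqrt{m})$.

```python
import numpy as np
from collections import defaultdict

class CutoffAccumulator:
    """Incremental bookkeeping for cutoff averaging: hypotheses are grouped
    by survival time s, keeping only a count and a vector sum per group."""

    def __init__(self):
        self.count = defaultdict(int)   # n_s: number of hypotheses with survival time s
        self.vecsum = {}                # sum of the hypothesis vectors in each group

    def add(self, h, s):
        """Record a finished hypothesis h (a numpy vector) with survival time s."""
        self.count[s] += 1
        self.vecsum[s] = self.vecsum.get(s, np.zeros_like(h)) + h

    def output(self, k):
        """H*_k: each hypothesis with survival time s > k gets weight s - k."""
        num = sum((s - k) * v for s, v in self.vecsum.items() if s > k)
        den = sum((s - k) * n for s, n in self.count.items() if s > k)
        return num / den
```

Once the data stream ends, `output(k)` can be evaluated for every candidate `k` against the stored group statistics, without ever revisiting the full hypothesis sequence.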
Figure 2: Test error (zero-one loss) of last-hypothesis and cutoff averaging, each applied to the standard Perceptron, on ten binary classification problems from RCV1. The x-axis represents training set size, and is given in log scale. Each plot represents the average over 10 random train-test splits.

4 Experiments and Conclusions

We conducted experiments using Reuters Corpus Vol. 1 (RCV1), a collection of over 800K news articles collected from the Reuters news wire. An average article in the corpus contains 240 words, and the entire corpus contains over half a million distinct tokens (not including numbers and dates). Each article in the corpus is associated with one or more high-level categories, which are: Corporate/Industrial (CCAT), Economics (ECAT), Government/Social (GCAT), Markets (MCAT), and Other (OTHER). About 20% of the articles in the corpus are associated with more than one high-level category. After discarding this 20%, we are left with over 600K documents, each with a single high-level label. Each pair of high-level labels defines the binary classification problem of distinguishing between articles of the two categories, for a total of ten different problems. Each problem has different characteristics, due to the different number of articles and the varying degree of homogeneity in each category. Each article was mapped to a feature vector using a logarithmic bag-of-words representation. Namely, the length of each vector equals the number of distinct tokens in the corpus, and each coordinate in the vector represents one of these tokens. If a token appears $s$ times in a given article, the respective coordinate in the feature vector equals $\log_2(1 + s)$. We applied the cutoff averaging technique to the classic Perceptron and to the margin-based Perceptron. We repeated each of our experiments ten times, each time taking a new random split of the data into a training set (80%) and a test set (20%), and randomly ordering the training set.
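The logarithmic bag-of-words mapping described above is straightforward to sketch; `vocabulary` is an illustrative ordered list of the corpus tokens, not a structure defined in the paper.

```python
import math
from collections import Counter

def log_bow(tokens, vocabulary):
    """Logarithmic bag-of-words feature vector: the coordinate for token t
    equals log2(1 + s), where s is the number of times t appears."""
    counts = Counter(tokens)
    return [math.log2(1 + counts[t]) for t in vocabulary]
```

Tokens that do not appear in an article get `log2(1) = 0`, so in practice these vectors are sparse and would be stored in a sparse format rather than as dense lists.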
We trained each algorithm on each dataset in an incremental manner; namely, we started by training the algorithm using a short prefix of the training sequence, and gradually increased the training set size. We paused training at regular intervals, computed the output hypothesis so far, and calculated its test loss. This gives us an idea of what would happen on smaller training sets. Fig. 2 shows the test zero-one loss attained when our technique is applied to the classic Perceptron algorithm. It also shows the test zero-one loss of the last-hypothesis conversion technique. Clearly, the test loss of the last hypothesis is very unstable, even after averaging over 10 repetitions. In some cases, adding training data actually deteriorates the performance of the last hypothesis. If we decide to use the last-hypothesis technique, our training set size could happen to be such that we end up with a bad output hypothesis. On the other hand, the cutoff averaging hypothesis is accurate, stable and consistent. The performance of the simple averaging conversion technique is not plotted in Fig. 2, but we note that it was only slightly worse than the performance of cutoff averaging. When using the classic Perceptron, any form of averaging is beneficial, and our technique successfully identifies this. Fig. 3 shows the test hinge loss of cutoff averaging, last-hypothesis, and simple averaging, when applied to the margin-based Perceptron. In this case, the last hypothesis performs remarkably well
Figure 3: Test hinge loss of last-hypothesis, averaging, and cutoff averaging, each applied to the finite-horizon margin-based Perceptron, on ten binary classification problems from RCV1. The x-axis represents training set size and each plot represents the average over 10 random train-test splits.

Meanwhile, the simple averaging conversion technique is significantly inferior for all training set sizes. Within 1000 online rounds (0.1% of the data), the cutoff averaging technique catches up to the last hypothesis and performs comparably well from then on. Our technique's poor performance on the first 0.1% of the data is expected, since the tail bounds we rely on are meaningless with so few examples. Once the tail bounds become tight enough, our technique essentially identifies that there is no benefit in constructing a diverse ensemble, and assigns all of the weight to a short suffix of the online hypothesis sequence. We conclude that there are cases where the single-hypothesis approach is called for and there are cases where an ensemble approach should be used. If we are fortunate enough to know which case applies, we can simply choose the right approach. However, if we are after a generic solution that performs well in both cases, we need a conversion technique that automatically balances the trade-off between these two extremes. Suffix averaging [5] and cutoff averaging are two such techniques, with cutoff averaging having a significant computational advantage.

References
[1] Anonymous. Technical appendix submitted with this manuscript, 2008.
[2] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of online learning algorithms. IEEE Transactions on Information Theory, 50(9):2050-2057, September 2004.
[3] N. Cesa-Bianchi and C. Gentile. Improved risk bounds for online algorithms. NIPS 19, 2006.
[4] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a budget.
SIAM Journal on Computing, 37:1342-1372, 2008.
[5] O. Dekel and Y. Singer. Data-driven online to batch conversions. NIPS 18, 2006.
[6] D. A. Freedman. On tail probabilities for martingales. Annals of Probability, 3(1):100-118, 1975.
[7] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[8] S. I. Gallant. Optimal linear discriminants. Proc. of ICPR 8, pages 849-852. IEEE, 1986.
[9] R. Khardon and G. Wachman. Noise tolerant variants of the perceptron algorithm. Journal of Machine Learning Research, 8:227-248, 2007.
[10] Y. Li. Selective voting for perceptron-like learning. Proc. of ICML 17, pages 559-566, 2000.
[11] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. Proc. of ICML 19, pages 379-386, 2002.
[12] N. Littlestone. From online to batch learning. Proc. of COLT 2, pages 269-284, 1989.
[13] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958.
[14] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. Proc. of ICML 21, 2004.
Multi-Level Active Prediction of Useful Image Annotations for Recognition

Sudheendra Vijayanarasimhan and Kristen Grauman
Department of Computer Sciences, University of Texas at Austin
{svnaras,grauman}@cs.utexas.edu

Abstract

We introduce a framework for actively learning visual categories from a mixture of weakly and strongly labeled image examples. We propose to allow the category learner to strategically choose what annotations it receives, based on both the expected reduction in uncertainty as well as the relative costs of obtaining each annotation. We construct a multiple-instance discriminative classifier based on the initial training data. Then all remaining unlabeled and weakly labeled examples are surveyed to actively determine which annotation ought to be requested next. After each request, the current classifier is incrementally updated. Unlike previous work, our approach accounts for the fact that the optimal use of manual annotation may call for a combination of labels at multiple levels of granularity (e.g., a full segmentation on some images and a present/absent flag on others). As a result, it is possible to learn more accurate category models with a lower total expenditure of manual annotation effort.

1 Introduction

Visual category recognition is a vital thread in computer vision research. The recognition problem remains challenging because of the wide variation in appearance a single class typically exhibits, as well as differences in viewpoint, illumination, and clutter. Methods are usually most reliable when good training sets are available, i.e., when labeled image examples are provided for each class, and where those training examples are adequately representative of the distribution to be encountered at test time. The extent of an image labeling can range from a flag telling whether the object of interest is present or absent, to a full segmentation specifying the object boundary.
In practice, accuracy often improves with larger quantities of training examples and/or more elaborate annotations. Unfortunately, substantial human effort is required to gather such training sets, making it unclear how the traditional protocol for visual category learning can truly scale. Recent work has begun to explore ways to mitigate the burden of supervision [1-8]. While the results are encouraging, existing techniques fail to address two key insights about low-supervision recognition: 1) the division of labor between the machine learner and the human labelers ought to respect any cues regarding which annotations would be easy (or hard) for either party to provide, and 2) to use a fixed amount of manual effort most effectively may call for a combination of annotations at multiple levels (e.g., a full segmentation on some images and a present/absent flag on others). Humans ought to be responsible for answering the hardest questions, while pattern recognition techniques ought to absorb and propagate that information and answer the easier ones. Meanwhile, the learning algorithm must be able to accommodate the multiple levels of granularity that may occur in provided image annotations, and to compute which item at which of those levels appears to be most fruitful to have labeled next (see Figure 1).

Fig. 1. Useful image annotations can occur at multiple levels of granularity. Left: For example, a learner may only know whether the image contains a particular object or not (top row; dotted boxes denote the object is present), or it may also have segmented foregrounds (middle row), or it may have detailed outlines of object parts (bottom row). Right: In another scenario, groups of images for a given class are collected with keyword-based Web search.
The learner may only be given the noisy groups and told that each includes at least one instance of the specified class (top), or, for some groups, the individual example images may be labeled as positive or negative (bottom). We propose an active learning paradigm that directs manual annotation effort to the most informative examples and levels. To address this challenge, we propose a method that actively targets the learner’s requests for supervision so as to maximize the expected benefit to the category models. Our method constructs an initial classifier from limited labeled data, and then considers all remaining unlabeled and weakly labeled examples to determine what annotation seems most informative to obtain. Since the varying levels of annotation demand varying degrees of manual effort, our active selection process weighs the value of the information gain against the cost of actually obtaining any given annotation. After each request, the current classifier is incrementally updated, and the process repeats. Our approach accounts for the fact that image annotations can exist at multiple levels of granularity: both the classifier and active selection objectives are formulated to accommodate dual-layer labels. To achieve this duality for the classifier, we express the problem in the multiple instance learning (MIL) setting [9], where training examples are specified as bags of the finer granularity instances, and positive bags may contain an arbitrary number of negatives. To achieve the duality for the active selection, we design a decision-theoretic criterion that balances the variable costs associated with each type of annotation with the expected gain in information. Essentially this allows the learner to automatically predict when the extra effort of a more precise annotation is warranted. The main contribution of this work is a unified framework to actively learn categories from a mixture of weakly and strongly labeled examples. 
We are the first to identify and address the problem of active visual category learning with multi-level annotations. In our experiments we demonstrate two applications of the framework for visual learning (as highlighted in Figure 1). Not only does our active strategy learn more quickly than a random selection baseline, but for a fixed amount of manual resources, it yields more accurate models than conventional single-layer active selection strategies.

2 Related Work

The recognition community is well-aware of the expense of requiring well-annotated image datasets. Recent methods have shown the possibility of learning visual patterns from unlabeled [3, 2] image collections, while other techniques aim to share or re-use knowledge across categories [10, 4]. Several authors have successfully leveraged the free but noisy images on the Web [5, 6, 11]. Using weakly labeled images to learn categories was proposed in [1], and several researchers have shown that MIL can accommodate the weak or noisy supervision often available for image data [11-14]. Working in the other direction, some research seeks to facilitate the manual labor of image annotation, tempting users with games or nice datasets [7, 8]. However, when faced with a distribution of unlabeled images, almost all existing methods for visual category learning are essentially passive, selecting points at random to label. Active learning strategies introduced in the machine learning literature generally select points so as to minimize the model entropy or reduce classification error (e.g., [15, 16]). Decision-theoretic measures for traditional (single-instance) learning have been explored in [17, 18], where they were applied to classify synthetic data and voicemail. Our active selection procedure is in part inspired by this work, as it also seeks to balance the cost and utility trade-off.
Recent work has considered active learning with Gaussian Process classifiers [19], and relevance feedback for video annotations [20]. In contrast, we show how to form active multiple-instance learners, where constraints or labels must be sought at multiple levels of granularity. Further, we introduce the notion of predicting when to "invest" the labor of more expensive image annotations so as to ultimately yield bigger benefits to the classifier. Unlike any previous work, our method continually guides the annotation process to the appropriate level of supervision. While an active criterion for instance-level queries is suggested in [21] and applied within an MI learner, it cannot actively select positive bags or unlabeled bags, and does not consider the cost of obtaining the labels requested. In contrast, we formulate a general selection function that handles the full MIL paradigm and adapts according to the label costs. Experiments show this functionality to be critical for efficient learning from few images.

3 Approach

The goal of this work is to learn to recognize an object or category with minimal human intervention. The key idea is to actively determine which annotations a user should be asked to provide, and in what order. We consider image collections consisting of a variety of supervisory information: some images are labeled as containing the category of interest (or not), some have both a class label and a foreground segmentation, while others have no annotations at all. We derive an active learning criterion function that predicts how informative further annotation on any particular unlabeled image or region would be, while accounting for the variable expense associated with different annotation types. As long as the information expected from further annotations outweighs the cost of obtaining them, our algorithm will request the next valuable label, re-train the classifier, and repeat.
In the following we outline the MIL paradigm and discuss its applicability for two important image classification scenarios. Then, we describe our decision-theoretic approach to actively request useful annotations. Finally, we discuss how to attribute costs and risks for multi-level annotations. 3.1 Multiple-Instance Visual Category Learning Traditional binary supervised classification assumes the learner is provided a collection of labeled data patterns, and must learn a function to predict labels on new instances. However, the fact that image annotations can exist at multiple levels of granularity demands a learning algorithm that can encode any known labels at the levels they occur, and so MIL [9] is more applicable. In MIL, the learner is instead provided with sets (bags) of patterns rather than individual patterns, and is only told that at least one member of any positive bag is truly positive, while every member of any negative bag is guaranteed to be negative. The goal of MIL is to induce the function that will accurately label individual instances such as the ones within the training bags. MIL is well-suited for the following two image classification scenarios: • Training images are labeled as to whether they contain the category of interest, but they also contain other objects and background clutter. Every image is represented by a bag of regions, each of which is characterized by its color, texture, shape, etc. [12, 13]. For positive bags, at least one of the regions contains the object of interest. The goal is to predict when new image regions contain the object—that is, to learn to label regions as foreground or background. • The keyword associated with a category is used to download groups of images from multiple search engines in multiple languages. Each downloaded group is a bag, and the images within it are instances [11]. For each positive bag, at least one image actually contains the object of interest, while many others may be irrelevant. 
The goal is to predict the presence or absence of the category in new images. In both cases, an instance-level decision is desirable, but bag-level labels are easier to obtain. While it has been established that MIL is valuable in such cases, previous methods do not consider how to determine what labels would be most beneficial to obtain. We integrate our active selection method with the SVM-based MIL approach given in [22], which uses a Normalized Set Kernel (NSK) to describe bags based on the average representation of instances within them. Following [23], we use the NSK mapping for positive bags only; all instances in a negative bag are treated individually as negative. We chose this classifier since it performs well in practice [24] and allows incremental updates [25]; further, by virtue of being a kernel-based algorithm, it gives us flexibility in our choices of features and kernels. However, alternative MIL techniques that provide probabilistic outputs could easily be swapped in (e.g. [26, 24, 23]). 3.2 Multi-Level Active Selection of Image Annotations Given the current MIL classifier, our objective is to select what annotation should be requested next. Whereas active selection criteria for traditional supervised classifiers need only identify the best instance to label next, in the MIL domain we have a more complex choice. There are three possible types of request: the system can ask for a label on an instance, a label on an unlabeled bag, or for a joint labeling of all instances within a positive bag. So, we must design a selection criterion that simultaneously determines which type of annotation to request, and for which example to request it. Adding to the challenge, the selection process must also account for the variable costs associated with each level of annotation (e.g., it will take the annotator less time to detect whether the class of interest is present or not, while a full segmentation will be more expensive).
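As a concrete illustration of the NSK-based training setup described in Section 3.1, the following sketch (our own illustrative Python, not the authors' implementation; the function and variable names are assumptions) maps each positive bag to the normalized average of its instance representations and keeps the instances of negative bags as individual negative examples:

```python
import math

def nsk_features(positive_bags, negative_bags):
    """Build a training set as described above: each positive bag becomes one
    normalized-average feature vector (the NSK mapping), while every instance
    of a negative bag is used as an individual negative example.
    Illustrative sketch; names and layout are our own."""
    X, y = [], []
    for bag in positive_bags:                  # bag: list of instance vectors
        d = len(bag[0])
        mean = [sum(inst[k] for inst in bag) / len(bag) for k in range(d)]
        norm = math.sqrt(sum(v * v for v in mean))
        X.append([v / norm for v in mean])     # normalized set representation
        y.append(+1)
    for bag in negative_bags:
        for inst in bag:                       # negatives are individually labeled
            X.append(list(inst))
            y.append(-1)
    return X, y
```

The resulting (X, y) pairs could then be fed to a standard kernel SVM whose outputs are mapped to posterior probabilities, as the text describes.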
We extend the value of information (VOI) strategy proposed in [18] to enable active MIL selection, and derive a generalized value function that can accept both instances and bags. This allows us to predict the information gain in a joint labeling of multiple instances at once, and thereby actively choose when it is worthwhile to expend more or less manual effort in the training process. Our method continually re-evaluates the expected significance of knowing more about any unlabeled or partially labeled example, as quantified by the predicted reduction in misclassification risk plus the cost of obtaining the label. We consider a collection of unlabeled data X_U, and labeled data X_L composed of a set of positive bags X_p and a set of negative instances \tilde{X}_n. Recall that positively labeled bags contain instances whose labels are unknown, since they contain an unknown mix of positive and negative instances. Let r_p denote the user-specified risk associated with misclassifying a positive example as negative, and r_n denote the risk of misclassifying a negative. The risk associated with the labeled data is:

\mathrm{Risk}(X_L) = \sum_{X_i \in X_p} r_p (1 - p(X_i)) + \sum_{x_i \in \tilde{X}_n} r_n \, p(x_i),    (1)

where x_i denotes an instance and X_i denotes a bag. Here p(x) denotes the probability that a given input is classified as positive: p(x) = \Pr(\mathrm{sgn}(w\phi(x) + b) = +1 \mid x) for the SVM hyperplane parameters w and b. We compute these values using the mapping suggested in [27], which essentially fits a sigmoid to map the SVM outputs to posterior probabilities. Note that here a positive bag X_i is first transformed according to the NSK before computing its probability. The corresponding risk for unlabeled data is:

\mathrm{Risk}(X_U) = \sum_{x_i \in X_U} \left[ r_p (1 - p(x_i)) \Pr(y_i = +1 \mid x_i) + r_n \, p(x_i) (1 - \Pr(y_i = +1 \mid x_i)) \right],    (2)

where y_i is the true label for unlabeled example x_i. The value of \Pr(y = +1 \mid x) is not directly computable for unlabeled data; following [18], we approximate it as \Pr(y = +1 \mid x) \approx p(x).
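Eqs. (1) and (2) translate directly into code. Below is a minimal sketch (our own function names; the probabilities p(·) are assumed to come from the sigmoid-calibrated SVM described above). For the unlabeled risk we use the form obtained by substituting Pr(y = +1 | x) ≈ p(x) into Eq. (2), which reduces each term to (r_p + r_n)(1 − p(x)) p(x):

```python
def risk_labeled(pos_bag_probs, neg_inst_probs, r_p=1.0, r_n=1.0):
    """Eq. (1): misclassification risk on labeled data. pos_bag_probs holds
    p(X_i) for each positive bag (after the NSK mapping); neg_inst_probs
    holds p(x_i) for each labeled negative instance."""
    return (sum(r_p * (1.0 - p) for p in pos_bag_probs)
            + sum(r_n * p for p in neg_inst_probs))

def risk_unlabeled(unlabeled_probs, r_p=1.0, r_n=1.0):
    """Eq. (2) with Pr(y=+1|x) approximated by p(x), so each term becomes
    (r_p + r_n) * (1 - p(x)) * p(x)."""
    return sum((r_p + r_n) * (1.0 - p) * p for p in unlabeled_probs)
```

Note that the unlabeled risk is maximal for p(x) = 0.5, i.e., for the examples the current classifier is least certain about.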
This simplifies the risk for the unlabeled data to \mathrm{Risk}(X_U) = \sum_{x_i \in X_U} (r_p + r_n)(1 - p(x_i)) p(x_i), where again we transform unlabeled bags according to the NSK before computing the posterior. The total cost T(X_L, X_U) associated with the data is the total misclassification risk, plus the cost of obtaining all labeled data thus far:

T(X_L, X_U) = \mathrm{Risk}(X_L) + \mathrm{Risk}(X_U) + \sum_{X_i \in X_p} C(X_i) + \sum_{x_i \in \tilde{X}_n} C(x_i),    (3)

where the function C(·) returns the cost of obtaining an annotation for its input, and will be defined in more detail below. To measure the expected utility of obtaining any particular new annotation, we want to predict the change in total cost that would result from its addition to X_L. Thus, the value of obtaining an annotation for input z is:

\mathrm{VOI}(z) = T(X_L, X_U) - T(X_L \cup z^{(t)}, X_U \setminus z)    (4)
              = \mathrm{Risk}(X_L) + \mathrm{Risk}(X_U) - \left( \mathrm{Risk}(X_L \cup z^{(t)}) + \mathrm{Risk}(X_U \setminus z) \right) - C(z),

where z^{(t)} denotes that the input z has been merged into the labeled set with its true label t, and X_U \setminus z denotes that it has been removed from the set of unlabeled data. If the VOI is high for a given input, then the total cost would be decreased by adding its annotation; similarly, low values indicate minor gains, and negative values indicate an annotation that costs more to obtain than it is worth. Thus at each iteration, the active learner surveys all remaining unlabeled and weakly labeled examples, computes their VOI, and requests the label for the example with the maximal value. However, there are two important remaining technical issues. First, for this to be useful we must be able to estimate the empirical risk for inputs before their labels are known. Secondly, for active selection to proceed at multiple levels, the VOI must act as an overloaded function: we need to be able to evaluate the VOI when z is an unlabeled instance, an unlabeled bag, or a weakly labeled example, i.e., a positive bag containing an unknown number of negative instances.
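Once the component risks and the annotation cost are available, Eq. (4) itself is a one-liner; a sketch with our own argument names (the expected post-annotation risks are estimated by expectation, since the true label t is unknown before annotation):

```python
def value_of_information(risk_L, risk_U, exp_risk_L_with_z,
                         exp_risk_U_without_z, cost_z):
    """Eq. (4): VOI(z) = Risk(X_L) + Risk(X_U)
                         - (E[Risk(X_L + z)] + E[Risk(X_U - z)]) - C(z).
    Positive values mean the annotation is expected to reduce the total cost;
    negative values mean the annotation costs more than it is worth."""
    return (risk_L + risk_U
            - (exp_risk_L_with_z + exp_risk_U_without_z)
            - cost_z)
```

At each iteration the active learner evaluates this quantity for every candidate annotation and requests the one with the maximal value.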
To estimate the total risk induced by incorporating a newly annotated example z into X_L before actually obtaining its true label t, we estimate the updated risk term with its expected value:

\mathrm{Risk}(X_L \cup z^{(t)}) + \mathrm{Risk}(X_U \setminus z) \approx \mathrm{E}\left[ \mathrm{Risk}(X_L \cup z^{(t)}) + \mathrm{Risk}(X_U \setminus z) \right] = \mathrm{E},

where E is shorthand for the expected value expression preceding it. If z is an unlabeled instance, then computing the expectation is straightforward:

\mathrm{E} = \sum_{l \in \mathcal{L}} \left( \mathrm{Risk}(X_L \cup z^{(l)}) + \mathrm{Risk}(X_U \setminus z) \right) \Pr(\mathrm{sgn}(w\phi(z) + b) = l \mid z),    (5)

where \mathcal{L} = \{+1, -1\} is the set of all possible label assignments for z. The value \Pr(\mathrm{sgn}(w\phi(z) + b) = l \mid z) is obtained by evaluating the current classifier on z and mapping the output to the associated posterior, and risk is computed based on the (temporarily) modified classifier with z^{(l)} inserted into the labeled set. Similarly, if z is an unlabeled bag, the label assignment can only be positive or negative, and we compute the probability of either label via the NSK mapping. If z is a positive bag containing M = |z| instances, however, there are 2^M possible labelings: \mathcal{L} = \{+1, -1\}^M. For even moderately sized bags, this makes a direct computation of the expectation impractical. Instead, we use Gibbs sampling to draw samples of the label assignment from the joint distribution over the M instances' descriptors. Let z = \{z_1, \ldots, z_M\} be the positive bag's instances, and let z^{(a)} = \{ z_1^{(a_1)}, \ldots, z_M^{(a_M)} \} denote the label assignment we wish to sample, with a_j \in \{+1, -1\}. To sample from the conditional distribution of one instance's label given the rest (the basic procedure required by Gibbs sampling), we re-train the MIL classifier with the given labels added, and then draw the remaining label according to a_j \sim \Pr(\mathrm{sgn}(w\phi(z_j) + b) = +1 \mid z_j), where z_j denotes the one instance currently under consideration. For positive bag z, the expected total risk is then the average risk computed over all S generated samples:

\mathrm{E} = \frac{1}{S} \sum_{k=1}^{S} \left( \mathrm{Risk}\left( \{X_L \setminus z\} \cup \{ z_1^{(a_1)_k}, \ldots, z_M^{(a_M)_k} \} \right) + \mathrm{Risk}(X_U \setminus \{z_1, z_2, \ldots, z_M\}) \right),    (6)

where k indexes the S samples. To compute the risk on X_L for each fixed sample we simply remove the weakly labeled positive bag z, and insert its instances as labeled positives and negatives, as dictated by the sample's label assignment. Computing the VOI values for all unlabeled data, especially for the positive bags, requires repeatedly solving the classifier objective function with slightly different inputs; to make this manageable we employ incremental SVM updates [25]. To complete our active selection function, we must define the cost function C(z), which maps an input to the amount of effort required to annotate it. This function is problem-dependent. In the visual categorization scenarios we have set forth, we define the cost function in terms of the type of annotation required for the input z; we charge equal cost to label an instance or an unlabeled bag, and proportionally greater cost to label all instances in a positive bag, as determined empirically with labeling experiments with human users. This reflects that outlining an object contour is more expensive than naming an object, or sorting through an entire page of Web search returns is more work than labeling just one. We can now actively select which examples and what type of annotation to request, so as to maximize the expected benefit to the category model relative to the manual effort expended. After each annotation is added and the classifier is revised accordingly, the VOI is evaluated on the remaining unlabeled and weakly labeled data in order to choose the next annotation. This process repeats either until the available amount of manual resources is exhausted, or, alternatively, until the maximum VOI is negative, indicating further annotations are not worth the effort. 4 Results In this section we demonstrate our approach to actively learn visual categories.
We test with two distinct publicly available datasets that illustrate the two learning scenarios above: (1) the SIVAL dataset of 25 objects in cluttered backgrounds, and (2) a Google dataset ([5]) of seven categories downloaded from the Web. In both, the classification task is to say whether each unseen image contains the object of interest or not. We provide comparisons with single-level active learning (with both the method of [21], and where the same VOI function is used but is restricted to actively label only instances), as well as passive learning. For the passive baseline, we consider random selections from amongst both single-level and multi-level annotations, in order to verify that our approach does not simply benefit from having access to more informative possible labels. To determine how much more labeling a positive bag costs relative to labeling an instance, we performed user studies for both of the scenarios evaluated. For the first scenario, users were shown oversegmented images and had to click on all the segments belonging to the object of interest. In the second, users were shown a page of downloaded Web images and had to click on only those images containing the object of interest. For both datasets, their baseline task was to provide a present/absent flag on the images. For segmentation, obtaining labels on all positive segments took users on average four times as much time as setting a flag. For the Web images, it took 6.3 times as long to identify all positives within bags of 25 noisy images. Thus we set the cost of labeling a positive bag to 4 and 6.3 for the SIVAL and Google data, respectively. These values agree with the average sparsity of the two datasets: the Google set contains about 30% true positive images while the SIVAL set contains 10% positive segments per image. The users who took part in the experiment were untrained but still produced consistent results.
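The empirically calibrated costs above can be encoded in a small lookup; a sketch (illustrative encoding, not the authors' code; the annotation-type strings are our own names) using the values measured in the user studies:

```python
def annotation_cost(annotation_type, dataset='SIVAL'):
    """Cost model from the user studies described above: labeling a single
    instance or flagging an unlabeled bag costs 1 unit, while fully labeling
    a positive bag costs 4 units (SIVAL segmentations) or 6.3 units (Google
    bags of 25 noisy images). Type names are our own convention."""
    if annotation_type in ('instance', 'unlabeled_bag'):
        return 1.0
    if annotation_type == 'positive_bag':
        return 4.0 if dataset == 'SIVAL' else 6.3
    raise ValueError('unknown annotation type: %s' % annotation_type)
```

This is the C(·) plugged into the VOI computation, so a full segmentation must promise roughly four times the risk reduction of a single flag before it is requested.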
4.1 Actively Learning Visual Objects and their Foreground Regions from Cluttered Images The SIVAL dataset [21] contains 1500 images, each labeled with one of 25 class labels. The cluttered images contain objects in a variety of positions, orientations, locations, and lighting conditions. The images have been oversegmented into about 30 regions (instances) each, each of which is represented by a 30-d feature describing its color and texture. Thus each image is a bag containing both positive and negative instances (segments). Labels on the training data specify whether the object of interest is present or not, but the segments themselves are unlabeled (though the dataset does provide ground truth segment labels for evaluation purposes). The initial training set comprises 10 positive and 10 negative images per class, selected at random. Our active learning method must choose its queries from among 10 positive bags (complete segmentations), 300 unlabeled instances (individual segments), and about 150 unlabeled bags (present/absent flag on the image). We use a quadratic kernel with a coefficient of 10^-6, and average results over five random training partitions. Figure 2(a) shows representative (best and worst) learning curves for our method and the three baselines, all of which use the same MIL classifier (NSK-SVM). Note that the curves are plotted against the cumulative cost of obtaining labels (as opposed to the number of queried instances), since our algorithm may choose a sequence of queries with non-uniform cost. All methods are given a fixed amount of manual effort (40 cost units) and are allowed to make a sequence of choices until that cost is used up. Recall that a cost of 40 could correspond, for example, to obtaining labels on 40/1 = 40 instances or 40/4 = 10 positive bags, or some mixture thereof. Figure 2(b) summarizes the learning curves for all categories, in terms of the average improvement at a fixed point midway through the active learning phase.
All four methods steadily improve upon the initial classifier, but at different rates with respect to the cost. (All methods fail to do better than chance on the 'dirty glove' class, which we attribute to the lack of distinctive texture or color on that object.) In general, a steeper learning curve indicates that a method is learning most effectively from the supplied labels. Our multi-level approach shows the most significant gains at a lower cost, meaning that it is best suited for building accurate classifiers with minimal manual effort on this dataset. As we would expect, single-level active selections are better than random, but still fall short of our multi-level approach. This is because single-level active selection can only make a sequence of greedy choices while our approach can jointly select bags of instances to query. Interestingly, multi- and single-level random selections perform quite similarly on this dataset (see boxplots in (b)), which indicates that having more informative labels alone does not directly lead to better classifiers unless the right instances are queried. The table in Figure 3 compares our results to those reported in [21], in which the authors train an initial classifier with multiple-instance logistic regression, and then use the MI Uncertainty (MIU) to actively choose instances to label. Following [21], we report the average gains in the AUROC over all categories at fixed points on the learning curve, averaging results over 20 trials and with the same initial training set of 20 positive and negative images. Since the accuracy of the base classifiers used by the two methods varies, it is difficult to directly compare the gains in the AUROC. The NSK-SVM we use consistently outperforms the logistic regression approach using only the initial training set; even before active learning our average accuracy is 68.84, compared to 52.21 in [21].

Footnote 1: http://www.cs.wustl.edu/accio/
Footnote 2: See [28] for further implementation details, image examples, and learning curves on all classes.

Fig. 2. Results on the SIVAL dataset. (a) Sample learning curves per class ('ajaxorange', 'apple', 'dirtyworkgloves'), each averaged over five trials. First two are best examples, last is worst. (b) Summary of the average improvement over all categories after half of the annotation cost is used. For the same amount of annotation cost, our multi-level approach learns more quickly than both traditional single-level active selection as well as both forms of random selection.

Fig. 3. Left: Comparison with [21] on the SIVAL data, as measured by the average improvement in the AUROC over the initial model for increasing labeling cost values. Right: The cumulative number of labels acquired for each type with increasing number of queries. Our method tends to request complete segmentations or image labels early on, followed by queries on unlabeled segments later on.

         |         Our Approach         | MI Logistic Regression [21]
  Cost   | Random   Multi-level  Gain % | Random   MIU      Gain %
  10     | +0.0051  +0.0241      372    | +0.023   +0.050   117
  20     | +0.0130  +0.0360      176    | +0.033   +0.070   112
  50     | +0.0274  +0.0495      81     | +0.057   +0.087   52

(Gain % is the active method's improvement relative to random selection.)
Therefore, to aid in comparison, we also report the percentage gain relative to random selection, for both classifiers. The results show that our approach yields much stronger relative improvements, again illustrating the value of allowing active choices at multiple levels. For both methods, the percent gains decrease with increasing cost; this makes sense, since eventually (for enough manual effort) a passive learner can begin to catch up to an active learner. 4.2 Actively Learning Visual Categories from Web Images Next we evaluate the scenario where each positive bag is a collection of images, among which only a portion are actually positive instances for the class of interest. Bags are formed from the Google-downloaded images provided in [5]. This set contains on average 600 examples for each of the seven categories. Naturally, the number of true positives for each class is sparse: on average 30% contain a "good" view of the class of interest, 20% are of "ok" quality (occlusions, noise, cartoons, etc.), and 50% are "junk". Previous methods have shown how to learn from noisy Web images, with results rivaling state-of-the-art supervised techniques [11, 5, 6]. We show how to boost accuracy with these types of learners while leveraging minimal manual annotation effort. To re-use the publicly available dataset from [5], we randomly group Google images into bags of size 25 to simulate multiple searches as in [11], yielding about 30 bags per category. We randomly select 10 positive and 10 negative bags (from all other categories) to serve as the initial training data for each class. The rest of the positive bags of a class are used to construct the test sets. All results are averaged over five random partitions. We represent each image as a bag of "visual words", and compare examples with a linear kernel. Our method makes active queries among 10 positive bags (complete labels) and about 250 unlabeled instances (images).
There are no unlabeled bags in this scenario, since every downloaded batch is associated with a keyword.

Fig. 4. Results on the Google dataset, in the same format as Figure 2 (learning curves shown for 'cars rear', 'guitar', and 'motorbike'). Our multi-level active approach outperforms both random selection strategies and traditional single-level active selection.

Figure 4 shows the learning curves and a summary of our active learner's performance. Our multi-level approach again shows more significant gains at a lower cost relative to all baselines, improving accuracy with as few as ten labeled instances. On this dataset, random selection with multi-level annotations actually outperforms random selection on single-level annotations (see the boxplots). We attribute this to the distribution of bags/instances: on average more positive bags were randomly chosen, and each addition led to a larger increase in the AUROC. 5 Conclusions and Future Work Our approach addresses a new problem: how to actively choose not only which instance to label, but also what type of image annotation to acquire in a cost-effective way. Our method is general enough to accept other types of annotations or classifiers, as long as the cost and risk functions can be appropriately defined.
Comparisons with passive learning methods and single-level active learning show that our multi-level method is better-suited for building classifiers with minimal human intervention. In future work, we will consider look-ahead scenarios with more far-sighted choices. We are also pursuing ways to alleviate the VOI computation cost, which as implemented involves processing all unlabeled data prior to making a decision. Finally, we hope to incorporate our approach within an existing system with many real users, like Labelme [8].

References
[1] Weber, M., Welling, M., Perona, P.: Unsupervised Learning of Models for Recognition. In: ECCV. (2000)
[2] Sivic, J., Russell, B., Efros, A., Zisserman, A., Freeman, W.: Discovering Object Categories in Image Collections. In: ICCV. (2005)
[3] Quelhas, P., Monay, F., Odobez, J.M., Gatica-Perez, D., Tuytelaars, T., Van Gool, L.: Modeling Scenes with Local Descriptors and Latent Aspects. In: ICCV. (2005)
[4] Bart, E., Ullman, S.: Cross-Generalization: Learning Novel Classes from a Single Example by Feature Replacement. In: CVPR. (2005)
[5] Fergus, R., Fei-Fei, L., Perona, P., Zisserman, A.: Learning Object Categories from Google's Image Search. In: ICCV. (2005)
[6] Li, L., Wang, G., Fei-Fei, L.: Optimol: Automatic Online Picture Collection via Incremental Model Learning. In: CVPR. (2007)
[7] von Ahn, L., Dabbish, L.: Labeling Images with a Computer Game. In: CHI. (2004)
[8] Russell, B., Torralba, A., Murphy, K., Freeman, W.: Labelme: a Database and Web-Based Tool for Image Annotation. Technical Report, MIT (2005)
[9] Dietterich, T., Lathrop, R., Lozano-Perez, T.: Solving the Multiple Instance Problem with Axis-Parallel Rectangles. Artificial Intelligence 89 (1997) 31-71
[10] Murphy, K., Torralba, A., Freeman, W.: Using the Forest to See the Trees: a Graphical Model Relating Features, Objects and Scenes. In: NIPS. (2003)
[11] Vijayanarasimhan, S., Grauman, K.: Keywords to Visual Categories: Multiple-Instance Learning for Weakly Supervised Object Categorization. In: CVPR. (2008)
[12] Maron, O., Ratan, A.: Multiple-Instance Learning for Natural Scene Classification. In: ICML. (1998)
[13] Yang, C., Lozano-Perez, T.: Image Database Retrieval with Multiple-Instance Learning Techniques. In: ICDE. (2000)
[14] Viola, P., Platt, J., Zhang, C.: Multiple Instance Boosting for Object Detection. In: NIPS. (2005)
[15] Freund, Y., Seung, H., Shamir, E., Tishby, N.: Selective Sampling Using the Query by Committee Algorithm. Machine Learning 28 (1997)
[16] Tong, S., Koller, D.: Support Vector Machine Active Learning with Applications to Text Classification. In: ICML. (2000)
[17] Lindenbaum, M., Markovitch, S., Rusakov, D.: Selective Sampling for Nearest Neighbor Classifiers. Machine Learning 54 (2004)
[18] Kapoor, A., Horvitz, E., Basu, S.: Selective Supervision: Guiding Supervised Learning with Decision-Theoretic Active Learning. In: IJCAI. (2007)
[19] Kapoor, A., Grauman, K., Urtasun, R., Darrell, T.: Active Learning with Gaussian Processes for Object Categorization. In: ICCV. (2007)
[20] Yan, R., Yang, J., Hauptmann, A.: Automatically Labeling Video Data using Multi-Class Active Learning. In: ICCV. (2003)
[21] Settles, B., Craven, M., Ray, S.: Multiple-Instance Active Learning. In: NIPS. (2008)
[22] Gartner, T., Flach, P., Kowalczyk, A., Smola, A.: Multi-Instance Kernels. In: ICML. (2002)
[23] Bunescu, R., Mooney, R.: Multiple Instance Learning for Sparse Positive Bags. In: ICML. (2007)
[24] Ray, S., Craven, M.: Supervised v. Multiple Instance Learning: An Empirical Comparison. In: ICML. (2005)
[25] Cauwenberghs, G., Poggio, T.: Incremental and Decremental Support Vector Machine Learning. In: NIPS. (2000)
[26] Andrews, S., Tsochantaridis, I., Hofmann, T.: Support Vector Machines for Multiple-Instance Learning. In: NIPS. (2002)
[27] Platt, J.: Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. In: Advances in Large Margin Classifiers. MIT Press (1999)
[28] Vijayanarasimhan, S., Grauman, K.: Multi-level Active Prediction of Useful Image Annotations for Recognition. Technical Report UT-AI-TR-08-2, University of Texas at Austin (2008)
MAS: a multiplicative approximation scheme for probabilistic inference Ydo Wexler Microsoft Research Redmond, WA 98052 ydow@microsoft.com Christopher Meek Microsoft Research Redmond, WA 98052 meek@microsoft.com Abstract We propose a multiplicative approximation scheme (MAS) for inference problems in graphical models, which can be applied to various inference algorithms. The method uses ϵ-decompositions, which decompose functions used throughout the inference procedure into functions over smaller sets of variables with a known error ϵ. MAS translates these local approximations into bounds on the accuracy of the results. We show how to optimize ϵ-decompositions and provide a fast closed-form solution for an L2 approximation. Applying MAS to the Variable Elimination inference algorithm, we introduce an algorithm we call DynaDecomp, which is extremely fast in practice and provides guaranteed error bounds on the result. The superior accuracy and efficiency of DynaDecomp are demonstrated. 1 Introduction Probabilistic graphical models have gained popularity in recent decades due to their intuitive representation and because they enable the user to query about the value distribution of variables of interest [19]. Although very appealing, these models suffer from the problem that performing inference in the model (e.g., computing marginal probabilities or the likelihood) is NP-hard [6]. As a result, a variety of approximate inference methods have been developed. Among these methods are loopy message propagation algorithms [24], variational methods [16, 12], mini-buckets [10], edge deletion [8], and a variety of Monte Carlo sampling techniques [13, 19, 21, 4, 25]. Approximation algorithms that have useful error bounds and speedup while maintaining high accuracy include the work of Dechter and colleagues [2, 3, 10, 17], which provides both upper and lower bounds on probabilities, upper bounds suggested by Wainwright et al. [23], and variational lower bounds [16].
In this paper we present an approximation scheme called the Multiplicative Approximation Scheme (MAS), which provides error bounds for the computation of the likelihood of evidence, marginal probabilities, and the Most Probable Explanation (MPE) in discrete directed and undirected graphical models. The approximation is based on a local operation called an ϵ-decomposition, which decomposes functions used in the inference procedure into functions over smaller subsets of variables, with a guarantee on the error introduced. The main difference from existing approximations is the ability to translate the error introduced in the local decompositions performed during execution of the algorithm into bounds on the accuracy of the entire inference procedure. We note that this approximation can also be applied to the more general class of multiplicative models introduced in [27]. We explore optimization of ϵ-decompositions and provide a fast optimal closed-form solution for the L2 norm. We also show that for the Kullback-Leibler divergence the optimization problem can be solved using variational algorithms on local factors. MAS can be applied to various inference algorithms. As an example we show how to apply MAS to the Variable Elimination (VE) algorithm [9, 20], and present an algorithm called DynaDecomp, which dynamically decomposes functions in the VE algorithm. In the results section we compare the performance of DynaDecomp with that of Mini-buckets [10], GMF [28] and variational methods [26] for various types of models. We find that our method achieves orders of magnitude better accuracy on all datasets. 2 Multiplicative Approximation Scheme (MAS) We propose an approximation scheme, called the Multiplicative Approximation Scheme (MAS), for inference problems in graphical models. The basic operations of the scheme are local approximations called ϵ-decompositions that decouple the dependency of variables.
Every such local decomposition has an associated error that our scheme combines into an error bound on the result. Consider a graphical model for n variables X = \{X_1, \ldots, X_n\} that encodes a probability distribution P(X) = \prod_j \psi_j(d_j), where the sets D_j \subseteq X are determined by the model. Throughout the paper we denote variables and sets of variables with capital letters and denote a value assigned to them with lowercase letters. We denote the observed variables in the model by E = X \setminus H, where E = e. To simplify the proofs we assume \psi_j(d_j) > 1. When this is not the case, as in BNs, every function \psi_j can be multiplied by a constant z_j such that the assumption holds, and the result is obtained after dividing by \prod_j z_j. Thus, here we assume positivity but discuss how this can be relaxed below. In addition to approximating the functions \psi by which the original model is defined, we also may wish to approximate other functions, such as intermediate functions created in the course of an inference algorithm. We can write the result of marginalizing out a set of hidden variables as a product of functions f_i. The log of the probability distribution the model encodes after such marginalization can then be written as

\log P(A, E) = \log \prod_i f_i(U_i) = \sum_i \phi_i(U_i)    (1)

where A \subseteq H. When A = H we can choose sets U_i = D_i and functions f_i(U_i) = \psi_i(D_i).

Definition 1 (ϵ-decomposition) Given a set of variables W, and a function \phi(W) that assigns real values to every instantiation W = w, a set of m functions \tilde{\phi}_l(W_l), l = 1, \ldots, m, where W_l \subseteq W, is an ϵ-decomposition if \bigcup_l W_l = W and

\frac{1}{1+\epsilon} \le \frac{\sum_l \tilde{\phi}_l(w_l)}{\phi(w)} \le 1 + \epsilon    (2)

for some \epsilon \ge 0, where w_l is the projection of w on W_l.

Note that an ϵ-decomposition is not well defined for functions \phi that equal zero or are infinite for some instantiations. These functions can still be ϵ-decomposed for certain choices of subsets W_l by defining 0/0 = 1 and \infty/\infty = 1. We direct the interested reader to the paper of Geiger et al.
[12] for a discussion on choosing such subsets. We also note that when approximating models in which some assignments have zero probability, the theoretical error bounds can be arbitrarily bad; yet, in practice the approximation can sometimes yield good results. The following theorems show that using ϵ-decompositions the log-likelihood log P(e), the log of marginal probabilities, the log of the Most Probable Explanation (MPE) and the log of the Maximum Aposteriori Probability (MAP) can all be approximated within a multiplicative factor.

Lemma 1 Let A ⊆ H, and let P(A, E) factor according to Eq. 1. Then the log of the joint probability P(a, e) can be approximated within a multiplicative factor of 1 + ϵ_max using a set of ϵ_i-decompositions, where ϵ_max = max_i{ϵ_i}.

Proof:

log P̃(a, e) ≡ log ∏_{i,l} e^{φ̃_{il}(u_{il})} = Σ_{i,l} φ̃_{il}(u_{il}) ≤ Σ_i (1 + ϵ_i) φ_i(u_i) ≤ (1 + ϵ_max) log P(a, e)

log P̃(a, e) ≡ log ∏_{i,l} e^{φ̃_{il}(u_{il})} = Σ_{i,l} φ̃_{il}(u_{il}) ≥ Σ_i [1/(1 + ϵ_i)] φ_i(u_i) ≥ [1/(1 + ϵ_max)] log P(a, e)

Theorem 1 For a set A′ ⊆ A the expression log Σ_{a′} P(a, e) can be approximated within a multiplicative factor of 1 + ϵ_max using a set of ϵ_i-decompositions.

Proof: Recall that Σ_j (c_j)^r ≤ (Σ_j c_j)^r for any set of numbers c_j ≥ 0 and r ≥ 1. Therefore, using Lemma 1, summing out any set of variables A′ ⊆ A does not increase the error:

log Σ_{a′} P̃(a, e) ≤ log Σ_{a′} (∏_i e^{φ_i(u_i)})^{1+ϵ_max} ≤ log (Σ_{a′} ∏_i e^{φ_i(u_i)})^{1+ϵ_max} = (1+ϵ_max) log Σ_{a′} P(a, e)

Similarly, for the bound in the other direction we use the fact that Σ_j (c_j)^r ≥ (Σ_j c_j)^r for any set of numbers c_j ≥ 0 and 0 < r ≤ 1. Note that whenever E = ∅, Theorem 1 states that the log of all marginal probabilities can be approximated within a multiplicative factor of 1 + ϵ_max. In addition, for any E ⊆ X, by setting A′ = A the log-likelihood log P(e) can be approximated within the same factor. A similar analysis can also be applied, with minor modifications, to the computation of related problems like the MPE and MAP.
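Both the ϵ of a concrete decomposition (Definition 1) and the bound of Theorem 1 are easy to check numerically on a toy factor. The sketch below uses a crude, unoptimized decomposition of our own choosing (the array encoding and names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# One positive factor phi(X1, X2, X3) with phi > 0 (i.e., psi = e^phi > 1),
# crudely decomposed into phi1(X1, X2) + phi2(X3) by averaging.
phi = rng.uniform(0.5, 2.0, size=(2, 2, 2))
phi1 = 0.5 * phi.mean(axis=2, keepdims=True)        # depends on X1, X2
phi2 = 0.5 * phi.mean(axis=(0, 1), keepdims=True)   # depends on X3
approx = phi1 + phi2                                 # broadcasts over X1,X2,X3

# The eps of this particular decomposition, measured pointwise per Eq. 2.
ratio = approx / phi
eps = max(ratio.max(), (1.0 / ratio).max()) - 1.0

# Theorem 1: summing out variables does not increase the error.
exact = np.log(np.exp(phi).sum())     # log of the sum over all instantiations
tilde = np.log(np.exp(approx).sum())
assert exact / (1 + eps) <= tilde <= (1 + eps) * exact
```

The final assertion is exactly the multiplicative guarantee of Theorem 1, with A′ taken to be all three variables.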
We adopt the simplification of the problems suggested in [10], reducing the problem of the Most Probable Explanation (MPE) to computing P(h*, e) = max_h P(h, e), and the problem of the Maximum Aposteriori Probability (MAP) to computing P(a*, e) = max_a Σ_{H\A=h⁻} P(h, e) for a set A ⊆ H. Denote by ⊕ an operator that is either a sum or a max operator. Then, similar to Eq. 1, for a set H′ ⊆ H we can write

log ⊕_{h′} P(h, e) = log ∏_i f_i(U_i) = Σ_i φ_i(U_i)    (3)

Theorem 2 Given a set A ⊆ H, the log of the MAP probability log max_a Σ_{H\A=h⁻} P(h, e) can be approximated within a multiplicative factor of 1 + ϵ_max using a set of ϵ_i-decompositions.

Proof: The proof follows that of Theorem 1, with the addition of the fact that max_j (c_j)^r = (max_j c_j)^r for any set of real numbers c_j ≥ 0 and r ≥ 0.

An immediate conclusion from Theorem 2 is that the MPE probability can also be approximated with the same error bounds, by choosing A = H.

2.1 Compounded Approximation

The results on using ϵ-decompositions assume that we decompose functions f_i as in Eqs. 1 and 3. Here we consider decompositions of any function created during the inference procedure, and in particular compounded decompositions of functions that were already decomposed. Suppose that a function φ̃(W), which already incurs an error ϵ_1 compared to a function φ(W), can be decomposed with an error ϵ_2. Then, according to Eq. 2, this results in a set of functions φ̂_l(W_l) such that the error of Σ_l φ̂_l(W_l) is (1 + ϵ_1)·(1 + ϵ_2) with respect to φ(W). To understand what error is guaranteed for an entire inference procedure, consider a directed graph where the nodes represent functions of the inference procedure and each node v has an associated error r_v. The nodes representing the initial potential functions ψ_i of the model have no parents in the graph and are associated with zero error (r_v = 1).
Every multiplication operation is denoted by edges directed from the nodes S, representing the multiplied functions, to a node t representing the resulting function, whose error is r_t = max_{s∈S} r_s. An ϵ-decomposition, on the other hand, has a single source node s with an associated error r_s, representing the decomposed function, and several target nodes T, with an error r_t = (1 + ϵ)·r_s for every t ∈ T. The guaranteed error for the entire inference procedure is then the error associated with the sink function in the graph. In Figure 1 we illustrate such a graph for an inference procedure that starts with four functions (f_a, f_b, f_c and f_d) and decomposes three functions, f_a, f_g and f_j, with errors ϵ_1, ϵ_2 and ϵ_3 respectively. In this example we assume that ϵ_1 > ϵ_2 and that 1 + ϵ_1 < (1 + ϵ_2)(1 + ϵ_3).

2.2 ϵ-decomposition Optimization

ϵ-decompositions can be utilized in inference algorithms to reduce the computational cost by parsimoniously approximating factors that occur during the course of computation. As we discuss in Section 3, both the selection of the form of the ϵ-decomposition (i.e., the sets W_i) and the choice of which factors to approximate impact the overall accuracy and runtime of the algorithm. Here we consider the problem of optimizing the approximating functions φ̃_i given a selected factorization W_i. Given a function f(W) = e^{φ(W)} and the sets W_i, the goal is to optimize the functions φ̃_i(W_i) in order to minimize the error ϵ_f introduced in the decomposition. The objective function is therefore

min_{(φ̃_1,...,φ̃_m)} max_{w∈W} { [Σ_i φ̃_i(w_i)] / φ(w) , φ(w) / [Σ_i φ̃_i(w_i)] }    (4)

This problem can be formalized as a convex problem using the following notation. Let t = max_{w∈W} { [Σ_i φ̃_i(w_i)] / φ(w) , φ(w) / [Σ_i φ̃_i(w_i)] } and S_w = φ(w) / [Σ_i φ̃_i(w_i)]. Now we can reformulate the problem as

min_{(φ̃_1,...,φ̃_m)} t   s.t.  ∀(W = w)  S_w ≤ t  and  S_w⁻¹ ≤ t    (5)

This type of problem can be solved with geometric programming techniques, and in particular using interior-point methods [18].
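The compounded-error bookkeeping of Section 2.1 above is mechanical and can be sketched in a few lines (class and method names are ours; the small graph below is illustrative, not the one in Figure 1):

```python
class ErrorTracker:
    """Tracks the guaranteed multiplicative factor r of every function
    produced during inference. r = 1 means exact; multiplication takes
    the max over its inputs' factors, and an eps-decomposition multiplies
    the source's factor by (1 + eps). The bound for the whole run is the
    factor attached to the sink function."""

    def __init__(self, initial_functions):
        self.r = {name: 1.0 for name in initial_functions}  # zero error

    def multiply(self, sources, target):
        self.r[target] = max(self.r[s] for s in sources)

    def decompose(self, source, targets, eps):
        for t in targets:
            self.r[t] = (1.0 + eps) * self.r[source]

    def guaranteed_error(self, sink):
        return self.r[sink] - 1.0

# Decompose fa with eps = 0.1, multiply one piece into fb, then decompose
# the result again with eps = 0.05: the two errors compound multiplicatively.
tracker = ErrorTracker(["fa", "fb"])
tracker.decompose("fa", ["f1", "f2"], 0.1)
tracker.multiply(["f1", "fb"], "fg")
tracker.decompose("fg", ["fg1", "fg2"], 0.05)
print(tracker.guaranteed_error("fg1"))   # approx. 0.155 = (1.1)(1.05) - 1
```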
Unfortunately, in the general case solving this problem requires O(m³|W|³) time, and hence can be too expensive for functions over a large domain. On the other hand, functions defined over a small domain often cannot be decomposed without introducing a large error. Thus, when trying to limit the error introduced, a significant amount of time is needed for such optimization. To reduce the computational cost of the optimization we resort to minimizing similar measures, in the hope that they will lead to a small error ϵ_f. Note that by deviating from Eq. 4 when choosing the functions φ̃_i we may increase the worst-case penalty error, but not necessarily the actual error achieved by the approximation. In addition, even when using different measures for the optimization we can still compute ϵ_f exactly.

2.2.1 Minimizing the L2 Norm

An alternative minimization measure, the L2 norm, is closely related to that in Eq. 4 and given as:

min_{(φ̃_1,...,φ̃_m)} √( Σ_{w∈W} [ (Σ_i φ̃_i(w_i)) − φ(w) ]² )    (6)

We give a closed-form analytic solution for this minimization problem when the sets W_i are disjoint. First, we can remove the square root from the optimization formula due to the monotonicity of the square root for positive values. Hence we are left with the task of minimizing:

Figure 1: A schematic description of an inference procedure along with the associated error. The procedure starts with four functions (f_a, f_b, f_c and f_d) and decomposes three functions, f_a, f_g and f_j, with errors ϵ_1, ϵ_2 and ϵ_3 respectively. In this example we assume that ϵ_1 > ϵ_2, which results in an error r_k = 1 + ϵ_1, and that 1 + ϵ_1 < (1 + ϵ_2)(1 + ϵ_3), which results in the errors r_m = r_o = (1 + ϵ_2)(1 + ϵ_3).

Figure 2: An irreducible minor graph of a 4 × 4 Ising model that can be obtained via VE without creating functions of more than 3 variables.
Applying MAS, only one function over three variables needs to be decomposed into two functions over overlapping sets of variables in order to complete inference using only functions over three or fewer variables.

min_{(φ̃_1,...,φ̃_m)} Σ_{w∈W} [ (Σ_i φ̃_i(w_i)) − φ(w) ]²    (7)

We use the notation w ≈ w_k to denote an instantiation W = w that is consistent with the instantiation W_k = w_k. To find the optimal value of φ̃_i(w_i) we differentiate Eq. 7 with respect to each φ̃_k(w_k) and set the result to zero. Choosing the constraint Σ_w φ̃_i(w_i) = [Σ_w φ(w)] / m in the resulting underconstrained set of linear equations, we get

φ̃_k(w_k) = [Σ_{w≈w_k} φ(w)] / ∏_{i≠k} |W_i|  −  Σ_{i≠k} [Σ_w φ(w)] / [m ∏_j |W_j|]

As the last term is independent of the index i we finally obtain

φ̃_k(w_k) = [Σ_{w≈w_k} φ(w)] / ∏_{i≠k} |W_i|  −  (m − 1)·[Σ_w φ(w)] / (m|W|)    (8)

The second term of Eq. 8 is computed once per decomposition operation. Denoting |W| = N, this term can be computed in O(N) time. Computing the first term of Eq. 8 also takes O(N) time, but it needs to be computed for every resulting function φ̃_k, hence taking an overall time of O(Nm).

2.2.2 Minimizing the KL Divergence

The Kullback-Leibler (KL) divergence is another common alternative measure used for optimization:

min_{(φ̃_1,...,φ̃_m)} Σ_{w∈W} [Σ_i φ̃_i(w_i)] · log( [Σ_i φ̃_i(w_i)] / φ(w) )    (9)

Although no closed-form solution is known for this minimization problem, iterative algorithms have been devised for variational approximation, which start with arbitrary functions φ̃_i(W_i) and converge to a local minimum [16, 12]. Despite the drawbacks of unbounded convergence time and the lack of a guarantee of convergence to the global optimum, these methods have proven quite successful. In our context this approach has the benefit of allowing overlapping sets W_i.

3 Applying MAS to Inference Algorithms

Our multiplicative approximation scheme offers a way to reduce the computational cost of inference by decoupling variables via ϵ-decompositions.
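Looking back at Section 2.2.1, the closed-form solution of Eq. 8 is straightforward to implement for a function stored as a dense array (a sketch under the disjoint-partition assumption; the function name and representation are ours). When φ happens to be exactly additive across the groups, the recovered parts sum back to φ:

```python
import numpy as np

def l2_decompose(phi, groups):
    """L2-optimal disjoint decomposition per Eq. 8.

    phi    : ndarray over W, one axis per variable.
    groups : a partition of phi's axes, e.g. [(0, 1), (2,)].
    Returns one ndarray per group (phi_k over that group's axes)."""
    m, N = len(groups), phi.size
    shift = (m - 1) * phi.sum() / (m * N)          # second term of Eq. 8
    parts = []
    for axes in groups:
        other = tuple(ax for ax in range(phi.ndim) if ax not in axes)
        size_other = int(np.prod([phi.shape[ax] for ax in other]))
        # first term: sum over all w consistent with w_k, normalized
        parts.append(phi.sum(axis=other) / size_other - shift)
    return parts

# An exactly decomposable phi(w1, w2) = g(w1) + h(w2) is recovered exactly.
g, h = np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.5])
phi = g[:, None] + h[None, :]
p1, p2 = l2_decompose(phi, [(0,), (1,)])
assert np.allclose(p1[:, None] + p2[None, :], phi)
```

The final assertion verifies exact recovery; the chosen constraint also means each part carries an equal 1/m share of the total mass of φ.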
The fact that many existing inference algorithms compute and utilize multiplicative factors during the course of computation means that the scheme can be applied widely. The approach does require a mechanism to select functions to decompose; however, the flexibility of the scheme allows a variety of alternative mechanisms. One simple cost-focused strategy is to decompose a function whenever its size exceeds some threshold. An alternative quality-focused strategy is to choose an ϵ and search for ϵ-decompositions W_i. Below we consider the application of our approximation scheme to variable elimination with yet another selection strategy. We note that heuristics for choosing approximate factorizations exist for the selection of disjoint sets [28] and for overlapping sets [5], and could be utilized. The ideal application of our scheme is likely to depend both on the specific inference algorithm and on the application of interest.

3.1 Dynamic Decompositions

One family of decomposition strategies of particular interest is that of dynamic decompositions, performed during the inference procedure. In this dynamic framework, MAS can be incorporated into known exact inference algorithms for graphical models, provided that local functions can be bounded according to Eq. 2. A dynamic decomposition strategy applies ϵ-decompositions both to the functions by which the original model is defined and to intermediate functions created in the course of the inference algorithm, according to Eq. 1 or Eq. 3, based on the current state of the algorithm and the accuracy introduced by the possible decompositions. Unlike other approximation methods, such as the variational approach [16] or the edge deletion approach [8], dynamic decompositions have the capability of decoupling two variables in some contexts while maintaining their dependence in others.
If we wish to restrict ourselves to functions over three or fewer variables when performing inference on a 4 × 4 Ising model, the model in Figure 2 is an inevitable minor, and from this point of the elimination, approximation is mandatory. In the variational framework, an edge in the graph would be removed, disconnecting the direct dependence between two or more variables (e.g., removing the edge A-C would result in breaking the set ABC into the sets AB and BC, and breaking the set ACD into AD and CD). The same is true for the edge deletion method, with the difference lying in the new potentials associated with the new sets. Dynamic decompositions allow for a more refined decoupling, where the dependence is removed only in some of the functions. In our example, breaking the set ABC into AB and BC while keeping the set ACD intact is possible, and is also sufficient for reducing the complexity of inference to functions of no more than three variables (the elimination order would be A, B, F, H, C, E, D, G). Moreover, if decomposing the set ABC can be done with an error ϵ_ABC, as defined in Eq. 2, then we are guaranteed not to exceed this error for the entire approximate inference procedure. An extreme example would be the functions for the sets ABC and ACD as they appear in the tables of Figure 2: it is possible to decompose the function over the set ABC into two functions over the sets AB and BC with an arbitrarily small error, while the same is not possible for the function over the set ACD. Hence, in this example the result of our method will be nearly equal to the solution of exact inference on the model, and the theoretical error bounds will be arbitrarily small, while other approaches, such as the variational method, can yield arbitrarily bad approximations. We now discuss how to incorporate MAS into the Variable Elimination (VE) algorithm for computing the likelihood of a graphical model [9, 20].
In this algorithm, variables V ∈ H are summed out iteratively after multiplying all existing functions that include V, yielding intermediate functions f(W ⊆ X) where V ∉ W. MAS can be incorporated into the VE algorithm by identifying ϵ-decompositions for some of the intermediate functions f. This results in the elimination of f from the pool of functions, adding instead the functions f̃_i(W_i) = e^{φ̃_i(W_i)}. Note that the sets W_i are not necessarily disjoint and can have common variables. Using ϵ-decompositions reduces the computational complexity, as some variables are decoupled at specific points during execution of the algorithm. Throughout the algorithm, the maximal error ϵ_max introduced by the decompositions can be easily computed by associating functions with errors, as explained in Section 2.1. In our experiments we restrict attention to non-compounded decompositions. Our algorithm decomposes a function only if it is over a given size M, and only if the decomposition introduces no more than η error. The approximating functions in this algorithm are strictly disjoint, of size no more than √M, with the variables assigned randomly to the functions. We call this algorithm DynaDecomp (DD) and provide pseudo-code in Algorithm 1. There we use the notation ⊗(T) to denote multiplication of the functions f ∈ T, and ⊘(f) to denote decomposition of the function f. The outcome of ⊘(f) is a pair (ϵ, F̃) where the functions f̃_i ∈ F̃ are over disjoint sets of variables. We note that MAS can also be used on top of other common algorithms for exact inference in probabilistic models, thus gaining benefits similar to those algorithms. For example, applying MAS to the junction tree algorithm [14], a decomposition can decouple variables in messages sent from one node of the junction tree to another, and approximate all marginal distributions of single variables in the model in a single run, with similar guarantees on the error. This extension is analogous to the way the mini-clusters algorithm [17] extends the mini-bucket algorithm [10].

Algorithm 1: DynaDecomp
Input: A model for n variables X = {X_1, . . . , X_n} and functions ψ_i(D_i ⊆ X) that encode P(X) = ∏_i ψ_i(D_i); a set E = X \ H of observed variables and their assignment E = e; an elimination order R over the variables in H; scalars M and η.
Output: The log-likelihood log P(e); an error ϵ.
Initialize: ϵ = 0; F ← {ψ_i(D_i)}; I(ψ_i) = true
for i = 1 to n do
    k ← R[i]
    T ← {f : f contains X_k, f ∈ F};  F ← F \ T
    f′ ← Σ_{x_k} ⊗(T);  I(f′) = ∧_{f∈T} I(f)
    if |f′| ≥ M and I(f′) = true then
        (ϵ_{f′}, F̃) ← ⊘(f′)
        if ϵ_{f′} ≤ η then
            ∀f̃ ∈ F̃: I(f̃) = false;  F ← F ∪ F̃;  ϵ = max{ϵ, ϵ_{f′}}
        else F ← F ∪ {f′}
    else F ← F ∪ {f′}
multiply all constant functions in F and put the result in p
return log p, ϵ

Table 1: Accuracy and speedup for grid-like models. Upper panel: attractive Ising models; middle panel: repulsive Ising models; lower panel: Bayesian network grids with random probabilities.

Model   | Num Values | Accuracy | Bounds | Speedup | DD time (secs)
10 × 10 | 5  | 2.4e-4 | 0.0096 | 49.2   | 0.04
10 × 10 | 2  | 2.1e-4 | 0.0094 | 2.5    | 0.01
15 × 15 | 5  | 1.2e-4 | 0.0099 | 223.3  | 0.21
15 × 15 | 2  | 2.2e-4 | 0.0096 | 8.3    | 0.04
20 × 20 | 2  | 1.2e-4 | 0.0095 | 12.9   | 0.08
25 × 25 | 2  | 2.6e-5 | 0.0092 | 20.9   | 0.10
30 × 30 | 2  | 5.7e-4 | 0.0097 | 236.7  | 0.11
10 × 10 | 5  | 3.2e-4 | 0.0099 | 38.2   | 0.04
10 × 10 | 2  | 3.5e-4 | 0.0098 | 2.3    | 0.01
15 × 15 | 5  | 3.2e-3 | 0.0099 | 568.4  | 0.12
15 × 15 | 2  | 8.6e-4 | 0.0094 | 7.2    | 0.05
20 × 20 | 2  | 4.5e-4 | 0.0091 | 14.3   | 0.10
25 × 25 | 2  | 3.1e-5 | 0.0094 | 22.8   | 0.11
30 × 30 | 2  | 8.1e-5 | 0.0099 | 218.7  | 0.10
10 × 10 | 2  | 3.0e-3 | 0.0098 | 1.1    | 0.01
12 × 12 | 2  | 8.1e-3 | 0.0096 | 11.3   | 0.02
15 × 15 | 2  | 1.7e-3 | 0.0098 | 201.4  | 0.05
18 × 18 | 2  | 3.0e-4 | 0.0090 | 1782.8 | 0.15
20 × 20 | 2  | 1.8e-3 | 0.0097 | 7112.9 | 1.30
10 × 10 | 5  | 2.8e-5 | 0.0095 | 49.3   | 0.03
12 × 12 | 5  | 5.5e-4 | 0.0096 | 458.6  | 0.05
7 × 7   | 10 | 1.8e-4 | 0.0093 | 7.8    | 0.03
8 × 8   | 10 | 1.4e-4 | 0.0098 | 8.4    | 0.15
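For reference, the exact loop that DynaDecomp wraps — plain variable elimination for the likelihood — fits in a few lines (a sketch using einsum; the factor representation and helper name are ours, and the decomposition step ⊘ is omitted):

```python
import numpy as np
import string

def ve_loglik(factors, elim_order):
    """Exact log P(e) by variable elimination.

    factors    : list of (vars, table) pairs; vars is a tuple of variable
                 names, table an ndarray with one axis per variable.
                 Evidence is assumed already absorbed into the tables.
    elim_order : all hidden variables, in elimination order R."""
    pool = list(factors)
    for v in elim_order:
        bucket = [f for f in pool if v in f[0]]      # T in Algorithm 1
        pool = [f for f in pool if v not in f[0]]
        seen = dict.fromkeys(x for vs, _ in bucket for x in vs)
        letters = {x: string.ascii_letters[i] for i, x in enumerate(seen)}
        out_vars = tuple(x for x in seen if x != v)
        # multiply the bucket and sum out v in one einsum call
        spec = ",".join("".join(letters[x] for x in vs) for vs, _ in bucket)
        spec += "->" + "".join(letters[x] for x in out_vars)
        pool.append((out_vars, np.einsum(spec, *[t for _, t in bucket])))
    z = 1.0
    for vs, t in pool:                                # only constants remain
        z *= float(t)
    return float(np.log(z))

# P(A) P(B|A) with evidence B = 1 absorbed by slicing the B axis.
pA = np.array([0.6, 0.4])
pBgA = np.array([[0.7, 0.3], [0.2, 0.8]])
factors = [(("A",), pA), (("A", "B"), pBgA[:, 1:2])]
print(np.exp(ve_loglik(factors, ["A", "B"])))   # approx. 0.5 = P(B=1)
```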
4 Results

We demonstrate the power of MAS by reporting the accuracy and theoretical bounds of our DynaDecomp algorithm for a variety of models. Our empirical study focuses on approximating the likelihood of evidence, except when comparing to the results of Xing et al. [28] on grid models. The quality of approximation is measured in terms of accuracy and speedup. The accuracy is reported as max{ (log L)/(log L̃), (log L̃)/(log L) } − 1, where L is the likelihood and L̃ is the approximate likelihood achieved by DynaDecomp. We also report the theoretical accuracy, which is the maximum error introduced by decomposition operations. The speedup is reported as the ratio of the run-times for obtaining the approximated and exact solutions, in addition to the absolute time of approximation. In all experiments a random partition was used to decompose the functions, and the L2-norm optimization introduced in Section 2.2.1 was applied to minimize the error. The parameter M was set to 10,000 and the guaranteed accuracy η was set to 1%; however, as is evident from the results, the algorithm usually achieves better accuracy. We compared the performance of DynaDecomp with the any-time Mini-buckets (MB) algorithm [10]. The parameters i and m, which are the maximal number of variables and functions in a mini-bucket, were initially set to 3 and 1 respectively. The parameter ϵ was set to zero, so as not to constrain the possible accuracy. Generally, we allowed MB to run for the same time it took DynaDecomp to approximate the model, but not less than one iteration (with the initial parameters). We used two types of grid-like models. The first is an Ising model with random attractive or repulsive pair-wise potentials, as was used in [28]. When computing the likelihood in these models we randomly assigned values to 10% of the variables in the model. The other kind of grids were Bayesian networks where every variable X_ij at position (i, j) in the grid has the variables X_{i−1,j} and X_{i,j−1} as parents in the model.
In addition, every variable X_ij has a corresponding observed variable Y_ij connected to it. Probabilities in these models were uniformly distributed between zero and one. Inference on these models, often used in computer vision [11], is usually harder than on Ising models, due to reduced factorization. We used models where the variables had either two, five or ten values. The results are shown in Table 1. In addition, we applied DynaDecomp to two 100 × 100 Ising grid models with binary variables. Inference in these models is intractable; we estimate the time for exact computation using VE on current hardware to be 3·10^15 seconds, longer than the time since the disappearance of the dinosaurs. Setting η to 2%, DynaDecomp computed the approximated likelihood in 7.09 seconds for the attractive model and 8.14 seconds for the repulsive one. Comparing our results with those obtained by the MB algorithm with an equivalent amount of computation, we find that on average the accuracy of MB across all models in Table 1 is 0.198, while the average accuracy of DynaDecomp is 9.8e−4, more than 200 times better than that of MB. In addition, the theoretical guarantees are more than 30% for MB and 0.96% for DynaDecomp, a 30-fold improvement. As a side note, the MB algorithm performed significantly better on attractive Ising models than on repulsive ones. To compare our results with those reported in [28] we computed all the marginal probabilities (without evidence) and calculated the L1-based measure Σ_{i,j} Σ_{x_ij} |P(x_ij) − P̃(x_ij)|. Running on the Ising models, DynaDecomp obtained an average of 1.86e−5, compared to 0.003 for generalized belief propagation (GBP) and 0.366 for generalized mean field (GMF). Although the run times are not directly comparable due to differences in hardware, DynaDecomp's average run-time was less than 0.1 seconds, while the run-times of GBP and GMF were previously reported [28] to be 140 and 1.6 seconds respectively, on 8 × 8 grids.
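The two report measures used above are one-liners; a sketch for reproducibility (function names are ours):

```python
import numpy as np

def accuracy_measure(log_L, log_L_tilde):
    """max{log L / log L~, log L~ / log L} - 1, the accuracy reported
    in Table 1 (0 means the approximation is exact)."""
    r = log_L / log_L_tilde
    return max(r, 1.0 / r) - 1.0

def l1_marginal_measure(P, P_tilde):
    """sum over all marginal entries of |P - P~|, the L1-based measure
    used for the comparison against GBP and GMF [28]."""
    return float(np.abs(np.asarray(P) - np.asarray(P_tilde)).sum())

# An exact approximation scores 0 on the accuracy measure.
print(accuracy_measure(-10.0, -10.0))   # 0.0
```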
We also applied our method to probabilistic phylogenetic models. Inference on these large models, which can contain tens of thousands of variables, is used for model selection purposes. Previous works [15, 26] have obtained upper and lower bounds on the likelihood of evidence in the models suggested in [22] using variational methods, reporting an error of 1%. Using the same data as in [26], we achieved less than 0.01% error on average within a few seconds, which improves over previous results by two orders of magnitude both in terms of accuracy and speedup. In addition, we applied DynaDecomp to 24 models from the UAI'06 evaluation of probabilistic inference repository [1] with η = 1%. Only models that did not have zeros and that our exact inference algorithm could solve in less than an hour were used. The average accuracy of DynaDecomp on these models was 0.0038, with an average speedup of 368.8 and an average run-time of 0.79 seconds. We also applied our algorithm to two models from the CPCS benchmark (cpcs360b and cpcs422b). DynaDecomp obtained an average accuracy of 0.008 versus 0.056 obtained by MB. We note that the results obtained by MB are consistent with those reported in [10] for the MPE problem.

References
[1] Evaluation of probabilistic inference systems: http://tinyurl.com/3k9l4b, 2006.
[2] Bidyuk and Dechter. An anytime scheme for bounding posterior beliefs. AAAI 2006.
[3] Bidyuk and Dechter. Improving bound propagation. ECAI, 342–346, 2006.
[4] Cheng and Druzdzel. AIS-BN: An adaptive importance sampling algorithm for evidential reasoning in large Bayesian networks. JAIR 13:155–188, 2000.
[5] Choi and Darwiche. A variational approach for approximating Bayesian networks by edge deletion. UAI 2006.
[6] Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. AI 42(2-3):393–405, 1990.
[7] Dagum and Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. AI 60(1):141–153, 1993.
[8] Darwiche, Chan, and Choi. On Bayesian network approximation by edge deletion. UAI 2005.
[9] Dechter. Bucket elimination: A unifying framework for reasoning. AI 113(1-2):41–85, 1999.
[10] Dechter and Rish. Mini-buckets: A general scheme for bounded inference. J. ACM 50:107–153, 2003.
[11] Freeman, Pasztor, and Carmichael. Learning low-level vision. IJCV 40:25–47, 2000.
[12] Geiger, Meek, and Wexler. A variational inference procedure allowing internal structure for overlapping clusters and deterministic constraints. JAIR 27:1–23, 2006.
[13] Henrion. Propagating uncertainty in Bayesian networks by probabilistic logic sampling. UAI 1988.
[14] Jensen, Lauritzen, and Olesen. Bayesian updating in causal probabilistic networks by local computations. Comp. Stat. Quarterly 4:269–282, 1990.
[15] Jojic, Jojic, Meek, Geiger, Siepel, Haussler, and Heckerman. Efficient approximations for learning phylogenetic HMM models from data. ISMB 2004.
[16] Jordan, Ghahramani, Jaakkola, and Saul. An introduction to variational methods for graphical models. Machine Learning 37(2):183–233, 1999.
[17] Mateescu, Dechter, and Kask. Partition-based anytime approximation for belief updating. 2001.
[18] Boyd and Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[19] Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[20] Shachter, D'Ambrosio, and Del Favero. Symbolic probabilistic inference in belief networks. AAAI 1990.
[21] Shachter and Peot. Simulation approaches to general probabilistic inference on belief networks. UAI 1989.
[22] Siepel and Haussler. Combining phylogenetic and HMMs in biosequence analysis. RECOMB 2003.
[23] Wainwright, Jaakkola, and Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Info. Theory 51(7):2313–2335, 2005.
[24] Weiss. Belief propagation and revision in networks with loops. Technical Report AIM-1616, 1997.
[25] Wexler and Geiger. Importance sampling via variational optimization.
UAI 2007.
[26] Wexler and Geiger. Variational upper bounds for probabilistic phylogenetic models. RECOMB 2007.
[27] Wexler and Meek. Inference for multiplicative models. UAI 2008.
[28] Xing, Jordan, and Russell. Graph partition strategies for generalized mean field inference. UAI 2004.
Learning Hybrid Models for Image Annotation with Partially Labeled Data

Xuming He, Department of Statistics, UCLA, hexm@stat.ucla.edu
Richard S. Zemel, Department of Computer Science, University of Toronto, zemel@cs.toronto.edu

Abstract

Extensive labeled data for image annotation systems, which learn to assign class labels to image regions, is difficult to obtain. We explore a hybrid model framework for utilizing partially labeled data that integrates a generative topic model for image appearance with discriminative label prediction. We propose three alternative formulations for imposing a spatial smoothness prior on the image labels. Tests of the new models and some baseline approaches on three real image datasets demonstrate the effectiveness of incorporating the latent structure.

1 Introduction

Image annotation, or image labeling, in which the task is to label each pixel or region of an image with a class label, is becoming an increasingly popular problem in the machine learning and machine vision communities [7, 14]. State-of-the-art methods formulate image annotation as a structured prediction problem, and utilize methods such as Conditional Random Fields [8, 4], which output multiple values for each input item. These methods typically rely on fully labeled data for optimizing model parameters. It is widely acknowledged that consistently-labeled images are tedious and expensive to obtain, which limits the applicability of discriminative approaches. However, a large number of partially-labeled images, with a subset of regions labeled in an image, or only captions for images, are available (e.g., [12]). Learning labeling models with such data would help improve segmentation performance and relax the constraint of discriminative labeling methods. A wide range of learning methods have been developed for using partially-labeled image data.
One approach adopts a discriminative formulation, and treats the unlabeled regions as missing data [16]. Others take a semi-supervised learning approach by viewing unlabeled image regions as unlabeled data. One class of these methods generalizes traditional semi-supervised learning to structured prediction tasks [1, 10]. However, the common assumption about the smoothness of the label distribution with respect to the input data may not be valid in image labeling, due to the large intra-class variation of object appearance. Other semi-supervised methods adopt a hybrid approach, combining a generative model of the input data with a discriminative model for image labeling, in which the unlabeled data are used to regularize the learning of the discriminative model [6, 9]. Only relatively simple probabilistic models are considered in these approaches, without capturing the contextual information in images. The approach described in this paper extends the hybrid modeling strategy by incorporating a more flexible generative model for image data. In particular, we introduce a set of latent variables that capture image feature patterns in a hidden feature space, which are used to facilitate the labeling task. First, we extend the Latent Dirichlet Allocation (LDA) model [3] to include not only input features but also label information, capturing co-occurrences within and between image feature patterns and object classes in the data set. Unlike other topic models in image modeling [11, 18], our model integrates a generative model of image appearance with a discriminative model of region labels. Second, the original LDA structure does not impose any spatial smoothness constraint on label prediction, yet incorporating such a spatial prior is important for scene segmentation. Previous approaches have introduced lateral connections between latent topic variables [17, 15].
However, this complicates model learning, and, as a latent representation of the image data, the topic variables can in general be non-smooth over the image plane. In this paper, we model the spatial dependency of labels with two different structures: one introduces directed connections between each label variable and its neighboring topic variables, and the other incorporates lateral connections between label variables. We will investigate whether these structures effectively capture the spatial prior and lead to accurate label predictions. The remainder of this paper is organized as follows. The next section presents the base model and two different extensions that handle label spatial dependencies. Sections 3 and 4 define inference and learning procedures for these models. Section 5 describes experimental results, and in the final section we discuss the model's limitations and future directions.

2 Model description

The structured prediction problem in image labeling can be formulated as follows. Let an image x be represented as a set of subregions {x_i}_{i=1}^{N_x}. The aim is to assign each x_i a label l_i from a categorical set L. For instance, the subregions x_i can be image patches or pixels, and L consists of object classes. Denote the set of labels for x as l = {l_i}_{i=1}^{N_x}. A key issue in structured prediction concerns how to capture the interactions between labels in l given the input image.

Model I. We first introduce our base model for capturing individual patterns in image appearance and label space. Assume each subregion x_i is represented by two features (a_i, t_i), in which a_i describes its appearance (including color, texture, etc.) in some appearance feature space A, and t_i is its position on the image plane T. Our method focuses on the joint distribution of labels and subregion appearances given positions, by modeling co-occurring patterns in the joint space L × A. We achieve this by extending the latent Dirichlet allocation model to include both label and appearance.
More specifically, we assume each observation pair (a_i, l_i) in image x is generated from a mixture of K hidden 'topic' components shared across the whole dataset, given the position information t_i. Following the LDA notation, the mixture proportion is denoted θ, which is image-specific and shares a common Dirichlet prior parameterized by α. Also, z_i is an indicator variable specifying from which hidden topic component the pair (a_i, l_i) is generated. In addition, we use a to denote the appearance feature vector of an image, z for the indicator vector and t for the position vector. Our model defines a joint distribution of the label variables l and appearance feature variables a given the positions t as follows:

P_b(l, a | t, α) = ∫_θ [ ∏_i Σ_{z_i} P(l_i | a_i, t_i, z_i) P(a_i | z_i) P(z_i | θ) ] P(θ | α) dθ    (1)

where P(θ | α) is the Dirichlet distribution. We specify the appearance model P(a_i | z_i) to be position invariant, but the label predictor P(l_i | a_i, t_i, z_i) depends on the position information. These two components are formulated as follows, and the graphical representation of the model is shown in the left panel of Figure 1.

(a) Label prediction module P(l_i | a_i, t_i, z_i). The label predictor P(l_i | a_i, t_i, z_i) is modeled by a probabilistic classifier that takes (a_i, t_i, z_i) as its input and produces a properly normalized distribution over l_i. Note that we represent z_i in its '0-1' vector form when it is used as the classifier input. So if the dimension of A is M, then the input dimension of the classifier is M + K + 2. We use an MLP with one hidden layer in our experiments, although other strong classifiers are also feasible.

(b) Image appearance module P(a_i | z_i). We follow the convention of topic models and model the topic-conditional distributions of the image appearance using a multinomial distribution with parameters β_{z_i}. As the appearance features typically take on real values, we first apply k-means clustering to the image features {a_i} to build a visual vocabulary V.
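The generative side of Model I can be sampled directly: θ ~ Dir(α), z_i ~ θ, a_i ~ β_{z_i}; labels would additionally pass (a_i, t_i, z_i) through the classifier of module (a). A minimal sketch, with appearance features drawn as indices into a discretized vocabulary to match the visual-word representation (toy sizes of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, N = 4, 20, 50                  # topics, visual-word vocabulary, subregions
alpha = np.full(K, 0.5)              # shared Dirichlet prior over proportions
beta = rng.dirichlet(np.ones(V), size=K)    # beta[k, v] = P(a = v | z = k)

theta = rng.dirichlet(alpha)                # image-specific mixing proportions
z = rng.choice(K, size=N, p=theta)          # topic indicator per subregion
a = np.array([rng.choice(V, p=beta[k]) for k in z])   # visual word per subregion
```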
Thus a feature a_i in the appearance space A can be represented by a visual word v, and we have P(a_i = v | z_i = k) = β_{k,v}.

Figure 1: Left: A graphical representation of the base topic prediction model (Model I). Middle: Model II. Right: Model III. Circular nodes are random variables, and shaded nodes are observed. N is the number of image features in each image, and D denotes all the training data.

While the topic prediction model in Equation 1 is able to capture regularly co-occurring patterns in the joint space of label and appearance, it ignores spatial priors on the label prediction. However, spatial priors, such as spatial smoothness, are crucial to labeling tasks, as neighboring labels are usually strongly correlated. To incorporate spatial information, we extend our base model in two different ways.

Model II. We introduce a dependency between each label variable and its neighboring topic variables. In this model, each label value is predicted based on summary information about the topics within a neighborhood. More specifically, we change the label prediction model into the following form:

$$P(l_i \mid a_i, t_i, z_{N(i)}) = P\Big(l_i \,\Big|\, a_i, t_i, \sum_{j \in N(i)} w_j z_j\Big), \qquad (2)$$

where N(i) is a predefined neighborhood for site i, and w_j is the weight for the topic variable z_j. We set w_j ∝ exp(−|t_i − t_j|/σ²), normalized so that Σ_{j∈N(i)} w_j = 1. The graphical representation is shown in the middle panel of Figure 1. This model variant can be viewed as an extension of supervised LDA [2]; here, however, rather than a single label applying to each input example, there are multiple labels, one for each element of x.

Model III. We add lateral connections between label variables to build a conditional random field of labels.
The joint label distribution given the input image is defined as

$$P(l \mid a, t, \alpha) = \frac{1}{Z} \exp\Big\{ \sum_i \sum_{j \in N(i)} f(l_i, l_j) + \gamma \sum_i \log P_b(l_i \mid a, t, \alpha) \Big\}, \qquad (3)$$

where Z is the partition function. The pairwise potential is f(l_i, l_j) = Σ_{a,b} u_{ab} δ_{l_i,a} δ_{l_j,b}, and the unary potential is defined as the log output of the base topic prediction model, weighted by γ. Here δ is the Kronecker delta function. Note that P_b(l_i | a, t, α) = Σ_{z_i} P(l_i | a_i, t_i, z_i) P(z_i | a, t). This model is shown in the right panel of Figure 1.

Note that the base model (Model I) obtains spatially smooth labels simply through the topics capturing location-dependent co-occurring appearance/label patterns, which tend to be nearby in image space. Model II explicitly predicts a region's label from the topics in its local neighborhood, so that neighboring labels share similar contexts defined by latent topics. In both of these models, the interaction between labels takes effect through the hidden input representation. The third model uses a conventional form of spatial dependency by directly incorporating local smoothing in the label field. While this structure may impose a stronger spatial prior than the other two, it also requires more complicated learning methods.

3 Inference and Label Prediction

Given a new image x = {a, t} and our topic models, we predict its labeling based on the Maximum Posterior Marginals (MPM) criterion:

$$l_i^* = \arg\max_{l_i} P(l_i \mid a, t). \qquad (4)$$

We consider the label inference procedure for the three models separately as follows.

Models I & II: The marginal label distribution P(l_i | a, t) can be computed as:

$$P(l_i \mid a, t) = \sum_{z_{N(i)}} P\Big(l_i \,\Big|\, a_i, t_i, \sum_{j \in N(i)} w_j z_j\Big) P(z_{N(i)} \mid a, t). \qquad (5)$$

The summation here is difficult when N(i) is large. However, it can be approximated as follows. Denote v_i = Σ_{j∈N(i)} w_j z_j and v_{i,q} = Σ_{j∈N(i)} w_j q(z_j), where q(z_j) = {P(z_j | a, t)} is the vector form of the posterior distribution. Both v_i and v_{i,q} lie in [0, 1]^K. The marginal label distribution can be written as P(l_i | a, t) = ⟨P(l_i | a_i, t_i, v_i)⟩_{P(z_{N(i)} | a, t)}.
We take the first-order approximation of P(l_i | a_i, t_i, v_i) around v_{i,q} using a Taylor expansion:

$$P(l_i \mid a_i, t_i, v_i) \approx P(l_i \mid a_i, t_i, v_{i,q}) + (v_i - v_{i,q})^T \cdot \nabla_{v_i} P(l_i \mid a_i, t_i, v_i)\big|_{v_{i,q}}. \qquad (6)$$

Taking the expectation of both sides of Equation 6 w.r.t. P(z_{N(i)} | a, t) (notice that ⟨v_i⟩_{P(z_{N(i)} | a, t)} = v_{i,q}, so the first-order term vanishes), we have the approximation P(l_i | a, t) ≈ P(l_i | a_i, t_i, Σ_{j∈N(i)} w_j q(z_j)).

Model III: We first compute the unary potential of the CRF model from the base topic prediction model, i.e., P_b(l_i | a, t) = Σ_{z_i} P(l_i | a_i, t_i, z_i) P(z_i | a, t). Then the label marginals in Equation 4 are computed by applying loopy belief propagation to the conditional random field.

In both situations, we need the conditional distribution of the hidden topic variables z given the observed data components in order to compute the label prediction. We take a Gibbs sampling approach, integrating out the Dirichlet variable θ. From Equation 1, we can derive the posterior of each topic variable z_i given the other variables, as required by Gibbs sampling:

$$P(z_i = k \mid z_{-i}, a_i) \propto P(a_i \mid z_i) \Big( \alpha_k + \sum_{m \in S \setminus i} \delta_{z_m, k} \Big), \qquad (7)$$

where z_{-i} denotes all the topic variables in z except z_i, and S is the set of all sites. Given the samples of the topic variables, we estimate their posterior marginal distribution P(z_i | a, t) by simply computing normalized histograms.

4 Learning with partially labeled data

Here we consider estimating the parameters of both extended models from a partially labeled image set D = {x^n, l^n}. For an image x^n, its label is l^n = (l^n_o, l^n_h), in which l^n_o denotes the observed labels and l^n_h the missing ones. We also use o to denote the set of labeled regions. As the three models are built from different components, we treat them separately.

Models I & II. We use the Maximum Likelihood criterion to estimate the model parameters. Let Θ be the parameter set of the model:

$$\Theta^* = \arg\max_\Theta \sum_n \log P(l^n_o, a^n \mid t^n; \Theta). \qquad (8)$$

We maximize the log data likelihood by Monte Carlo EM.
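The collapsed Gibbs update of Equation 7 can be sketched as follows; it is a toy sketch with illustrative sizes and a hand-made appearance model, not the learned parameters from the paper.

```python
import random

random.seed(1)

K, V = 3, 4                     # illustrative numbers of topics and visual words
alpha = [0.5] * K
# toy appearance model: beta[k][v] = P(a = v | z = k)
beta = [[0.7, 0.1, 0.1, 0.1],
        [0.1, 0.7, 0.1, 0.1],
        [0.1, 0.1, 0.4, 0.4]]

def gibbs_sweep(z, a):
    # one full scan of Equation 7:
    #   P(z_i = k | z_-i, a_i) proportional to
    #   P(a_i | z_i = k) * (alpha_k + count of topic k among z_-i)
    counts = [0] * K
    for zm in z:
        counts[zm] += 1
    for i in range(len(z)):
        counts[z[i]] -= 1       # remove site i from the counts (the z_-i part)
        p = [beta[k][a[i]] * (alpha[k] + counts[k]) for k in range(K)]
        u, acc = random.random() * sum(p), 0.0
        for k in range(K):
            acc += p[k]
            if u <= acc:
                z[i] = k
                break
        counts[z[i]] += 1
    return z

a = [0, 0, 1, 2, 3, 3]                       # observed visual words at 6 sites
z = [random.randrange(K) for _ in a]         # random topic initialization
for _ in range(100):                         # burn-in sweeps
    z = gibbs_sweep(z, a)
```

Averaging the retained samples of each z_i into a normalized histogram then estimates the posterior marginal P(z_i | a, t), as described above.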
The lower bound of the likelihood can be written as

$$Q = \sum_n \Big\langle \sum_{i \in o} \log P(l^n_i \mid a^n_i, t^n_i, z^n_{N(i)}) + \sum_i \log P(a^n_i \mid z^n_i) + \log P(z) \Big\rangle_{P(z^n \mid l^n_o, a^n)}. \qquad (9)$$

In the E step, the posterior distributions of the topic variables are estimated by a Gibbs sampling procedure similar to Equation 7. It uses the following conditional probability:

$$P(z_i = k \mid z_{-i}, a_i, l, t) \propto \prod_{j \in N(i) \cap o} P(l_j \mid a_j, t_j, z_{N(j)}) \; P(a_i \mid z_i) \Big( \alpha_k + \sum_{m \in S \setminus i} \delta_{z_m, k} \Big). \qquad (10)$$

Note that any label variable that is missing is marginalized out. In the M step, we update the model parameters by maximizing the lower bound Q. Denoting the posterior distribution of z as q(·), the updating equation for the parameters of the appearance module P(a|z) can be derived from the stationary point of Q:

$$\beta^*_{k,v} \propto \sum_{n,i} q(z^n_i = k) \, \delta(a^n_i, v). \qquad (11)$$

The classifier in the label prediction module is learned by maximizing the following log likelihood:

$$L_c = \sum_{n,\, i \in o} \Big\langle \log P\Big(l^n_i \,\Big|\, a^n_i, t^n_i, \sum_{j \in N(i)} w_j z_j\Big) \Big\rangle_{q(z_{N(i)})} \approx \sum_{n,\, i \in o} \log P\Big(l^n_i \,\Big|\, a^n_i, t^n_i, \sum_{j \in N(i)} w_j q(z_j)\Big), \qquad (12)$$

where the approximation takes the same form as in Equation 6. We use a gradient ascent algorithm to update the classifier parameters. Note that we need to run only a few iterations at each M step, which reduces training time.

Model III. We estimate the parameters of Model III in two stages. (1) The parameters of the base topic prediction model are learned using the same procedure as for Models I & II. More specifically, we set N(i) = {i} and estimate the parameters of the appearance module and label classifier based on Maximum Likelihood. (2) Given the base topic prediction model, we compute the marginal label probability P_b(l_i | a, t) and plug it into the unary potential function of the CRF model (see Equation 3). We then estimate the parameters of the CRF by maximizing the conditional pseudo-likelihood:

$$L_p = \sum_n \sum_{i \in o} \Big( \log \exp\Big\{ \sum_{j \in N(i)} \sum_{a,b} u_{ab}\, \delta_{l^n_i, a}\, \delta_{l^n_j, b} + \gamma \log P_b(l^n_i \mid a^n, t^n) \Big\} - \log Z^n_i \Big),$$
(13)

where $Z^n_i = \sum_{l_i} \exp\{\sum_{j \in N(i)} \sum_{a,b} u_{ab}\, \delta_{l_i, a}\, \delta_{l^n_j, b} + \gamma \log P_b(l_i \mid a^n, t^n)\}$ is the normalizing constant. As this cost function is convex, we use a simple gradient ascent method to optimize the conditional pseudo-likelihood.

5 Experimental evaluation

Data sets and representation. Our experiments are based on three image datasets. The first is a subset of the Microsoft Research Cambridge (MSRC) image database [14], as in [16]. This subset includes 240 images and 9 label classes. The second set is the full MSRC image dataset, comprising 591 images and 21 object classes. The third set is a labeled subset of the Corel database, as in [5] (referred to therein as Corel-B). It includes 305 manually labeled images with 11 classes, focusing on animals and natural scenes.

We use the normalized cut segmentation algorithm [13] to build a super-pixel representation of the images, with the segmentation algorithm tuned to generate approximately 1000 segments per image on average. We extract a set of basic image features, including color, edge and texture information, from each pixel site. For the color information, we transform the RGB values into CIE Lab* color space. The edge and texture features are extracted by a set of filter banks, including a difference-of-Gaussian filter at 3 different scales, and quadrature pairs of oriented even- and odd-symmetric filters at 4 orientations and 3 scales. The color descriptor of a super-pixel is the average color over the pixels in that super-pixel. For the edge and texture descriptors, we first discretize the edge/texture feature space by k-means and use each cluster as a bin. Then we compute the normalized histograms of the features within a super-pixel as the edge/texture descriptor. In the experiments reported here, we used 20 bins for edge information and 50 bins for texture information. We also augment each feature by a SIFT descriptor extracted from a 30 × 30 image patch centered at the super-pixel.
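The edge/texture descriptor computation just described (quantize each pixel's feature into a k-means bin, then take a normalized histogram over the super-pixel) can be sketched as:

```python
def superpixel_histogram(bin_ids, n_bins):
    """Normalized histogram of quantized per-pixel features for one super-pixel.

    bin_ids: the k-means bin index of the edge/texture feature at each pixel
    belonging to the super-pixel.
    """
    h = [0.0] * n_bins
    for b in bin_ids:
        h[b] += 1.0
    total = sum(h)
    return [c / total for c in h]

# e.g. a tiny super-pixel whose 4 pixels fall in bins 0, 1, 1, 3 of a 4-bin space
desc = superpixel_histogram([0, 1, 1, 3], 4)   # -> [0.25, 0.5, 0.0, 0.25]
```

In the paper's setting, n_bins would be 20 for the edge descriptor and 50 for the texture descriptor.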
The image position of a super-pixel is the average position of its pixels. To compute the vocabulary of visual words in the topic model, we apply k-means to group the super-pixel descriptors into clusters. The cluster centers are used as visual words, and each descriptor is encoded by its word index.

Comparison methods. We compare our approach directly with two baseline systems: a super-pixel-wise classifier and a basic CRF model. We also report the experimental results from [16], although they adopt a different data representation in their experiments (patches rather than super-pixels). The super-pixel-wise classifier is an MLP with one hidden layer, which predicts labels for each super-pixel independently. The MLP has 30 hidden units, a number chosen based on validation performance. In the basic CRF, the conditional distribution of the labels of an image is defined as:

$$P(l \mid a, t) \propto \exp\Big\{ \sum_{i,j} \sum_{u,v} \sigma_{u,v}\, \delta_{l_i, u}\, \delta_{l_j, v} + \gamma \sum_i h(l_i \mid a_i, t_i) \Big\}, \qquad (14)$$

where h(·) is the log output of the super-pixel classifier. We train the CRF model by maximizing its conditional pseudo-likelihood, and label the image based on the marginal distribution of each label variable, computed by the loopy belief propagation algorithm.

Performance on MSRC-9. Following the setting in [16], we randomly split the dataset into training and testing sets of equal size, and use 10% of the training data as our validation set.

Table 1: A comparison of classification accuracy of the 3 variants of our model with other methods. The average classification accuracy is at the pixel level.
            building  grass  tree  cow   sky   plane  face  car   bike  Total
  S Class   61.2      93.2   71.3  57.0  92.9  37.5   69.0  56.0  54.1  74.2
  CRF       69.8      94.4   82.1  73.3  94.2  62.0   80.5  80.1  78.6  83.5
  Model I   64.8      93.0   76.6  72.0  93.5  65.1   74.4  61.3  77.7  79.7
  Model II  79.2      94.1   81.4  80.2  93.5  72.4   86.3  69.5  86.2  85.5
  Model III 78.1      92.5   85.4  86.7  94.6  77.9   83.5  74.7  88.3  86.7
  [16]      73.6      91.1   82.1  73.6  95.7  78.3   89.5  84.5  81.4  84.9

In this experiment, we set the vocabulary size to 500, the number of hidden topics to 50, and each symmetric Dirichlet parameter α_k = 0.5, based on validation performance. For Model II, we define the neighborhood of each site i as the subset of sites that fall into a circular region centered at i with radius 2σ, where σ is the fall-off rate of the weights. We set σ to 10 pixels, which is roughly 1/20 of the image size. The classifiers for label prediction have 15 hidden units. The appearance model for topics and the classifier are initialized randomly. In the learning procedure, the E step uses 500 samples to estimate the posterior distribution of topics. In the M step, we take 3 steps of gradient ascent learning of the classifiers per iteration.

The performance of our models is first evaluated on the dataset with all labels available. We compare the performance of the three model variants to the super-pixel classifier (S Class) and the CRF model. Table 1 shows the average classification accuracy rates of our model and the baselines, for each class and in total, over 10 different random partitions of the dataset. We can see that Model I, which uses latent feature representations as additional inputs, achieves much better performance than the S Class. Models II and III improve the accuracy further by incorporating the label spatial priors. We notice that the lateral connections between label variables are more effective than integrating information from neighboring latent topic variables. This is also demonstrated by the good performance of the simple CRF.
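With the settings above (σ = 10 pixels, neighborhood radius 2σ), the Model II weights w_j ∝ exp(−|t_i − t_j|/σ²) can be sketched as follows; including site i in its own neighborhood is our assumption, as the paper does not state it explicitly.

```python
import math

def neighborhood_weights(i, positions, sigma=10.0):
    # N(i): all sites within a disc of radius 2*sigma around site i
    ti = positions[i]
    nbrs, w = [], []
    for j, tj in enumerate(positions):
        d = math.hypot(tj[0] - ti[0], tj[1] - ti[1])
        if d <= 2.0 * sigma:
            nbrs.append(j)
            w.append(math.exp(-d / sigma ** 2))  # w_j ~ exp(-|t_i - t_j| / sigma^2)
    s = sum(w)
    return nbrs, [wi / s for wi in w]            # normalized so they sum to 1

# toy super-pixel positions: the last site is outside the 2*sigma = 20 px radius
positions = [(0.0, 0.0), (5.0, 0.0), (12.0, 9.0), (50.0, 50.0)]
nbrs, w = neighborhood_weights(0, positions)
```

The resulting weights decay with distance and sum to one, so Σ_j w_j z_j in Equation 2 is a convex combination of the neighboring topic indicators.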
Learning with different amounts of label data. In order to test the robustness of the latent feature representation, we evaluate our models using data with different amounts of labeling information. We apply an image dilation operator to the image regions labeled as 'void', and control the proportion of labeled data by varying the diameter of the dilation operator (see [16] for similar processing). Specifically, we use diameter values of 5, 10, 15, 20, 25, 30 and 35 to change the proportion of labeled pixels to 62.9%, 52.1%, 44.1%, 36.4%, 30.5%, 24.9% and 20.3%, respectively. The original proportion is 71.9%. We report the average accuracies of 5 runs of training and testing with random equal partitions of the dataset in Figure 2. The figure shows that the performance of all three models degrades with fewer labeled data, but the degradation is relatively gradual. When the proportion of labeled data decreases from 72% to 20%, the total loss in accuracy is less than 10%. This suggests that incorporating latent features makes our models more robust against missing labels than the previous work (cf. [16]). We also note that Model III is more robust than the other two variants, which may derive from its stronger smoothing.

Table 2: A comparison of classification accuracy of our three model variants with other methods on the full MSRC dataset and the Corel-B dataset.

            S Class  Model I  Model II  Model III  [14]  [5]
  MSRC      60.0     65.9     72.3      74.0       72.2  -
  Corel-B   68.2     69.2     73.4      75.5       -     75.3

Performance on other sets. We further evaluate our models on two larger datasets to see whether they scale up. The first is the full version of the MSRC dataset, for which we use the same training/testing partition as in [14]. The model setting is the same as for MSRC-9, except that we use an MLP with 20 hidden units for label prediction. The second is the Corel-B dataset, which is randomly divided into 175 training images and 130 testing images.
We use the same setting of the models as in the experiments on the full MSRC set. Table 2 summarizes the classification accuracies of our models as well as some previous methods. On the full MSRC set, the two extended versions of our model achieve performance similar to [14], and we can see that the latent topic representation provides useful cues. Also, our models reach the accuracy reported in [5] on the Corel-B dataset, while we use a simpler label random field and a smaller training set. It is interesting to note that the topics and spatial smoothness play a lesser role in the labeling performance on Corel-B.

Figure 2: Left: Classification accuracy with gradually decreasing proportion of labeled pixels (from 71.9% down to 20.3%), for S Class and Models I-III. Right top: Examples of an image and its super-pixelization. Right bottom: Examples of the original labeling and the labeling after dilation (labeled proportion 36.4%).

Figure 3 shows some examples of labeling results from both datasets. We can see that our models handle extended regions better than fine object structures, due to the tendency toward (over)smoothing caused by super-pixelization and the two spatial dependency structures.

6 Discussion

In this paper, we presented a hybrid framework for image labeling which combines a generative topic model with discriminative label prediction models. The generative model extends latent Dirichlet allocation to capture joint patterns in the label and appearance space of images. This latent representation of an image then provides an additional input to the label predictor. We also incorporated spatial dependencies into the model structure in two different ways, both imposing a prior of spatial smoothness for labeling on the image plane.
The results of applying our methods to three different image datasets suggest that this integrated approach may extend to a variety of image databases with only partial labeling available. The labeling system consistently outperforms alternative approaches, such as a standard classifier and a standard CRF. Its performance also matches that of state-of-the-art approaches, and is robust against different amounts of missing labels.

Several avenues exist for future work. First, we would like to understand when the simple first-order approximation used in inference for Model II holds, and when it breaks down, e.g., when the local curvature of the classifier with respect to its input is large. In addition, it is important to address model selection issues, such as choosing the number of topics. We currently rely on a validation set, but more principled approaches are possible. A final issue concerns the reliance on visual words formed by clustering features in a complicated appearance space. Using a stronger appearance model may help us understand the role of different visual cues, as well as construct a more powerful generative model.

References

[1] Yasemin Altun, David McAllester, and Mikhail Belkin. Maximum margin semi-supervised learning for structured variables. In NIPS 18, 2006.
[2] David Blei and Jon McAuliffe. Supervised topic models. In NIPS 20, 2008.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, 2003.
[4] Xuming He, Richard Zemel, and Miguel Carreira-Perpinan. Multiscale conditional random fields for image labelling. In CVPR, 2004.
[5] Xuming He, Richard S. Zemel, and Debajyoti Ray. Learning and incorporating top-down cues in image segmentation. In ECCV, 2006.
[6] Michael Kelm, Chris Pal, and Andrew McCallum. Combining generative and discriminative methods for pixel classification with multi-conditional learning. In ICPR, 2006.
Figure 3: Some labeling results for the Corel-B (bottom panel) and MSRC-9 (top panel) datasets, based on the best performance of our models; each example shows the original image, our model's labeling, and the ground truth. The 'void' region is annotated in black.

[7] Sanjiv Kumar and Martial Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In ICCV, 2003.
[8] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289, 2001.
[9] Julia A. Lasserre, Christopher M. Bishop, and Thomas P. Minka. Principled hybrids of generative and discriminative models. In CVPR, 2006.
[10] Chi-Hoon Lee, Shaojun Wang, Feng Jiao, Dale Schuurmans, and Russell Greiner. Learning to model spatial dependency: Semi-supervised discriminative random fields. In NIPS 19, 2007.
[11] Nicolas Loeff, Himanshu Arora, Alexander Sorokin, and David Forsyth. Efficient unsupervised learning for localization and detection in object categories. In NIPS, 2006.
[12] B. Russell, A. Torralba, K. Murphy, and W. Freeman. LabelMe: A database and web-based tool for image annotation. Technical report, MIT AI Lab Memo AIM-2005-025, 2005.
[13] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 2000.
[14] Jamie Shotton, John M. Winn, Carsten Rother, and Antonio Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In ECCV, 2006.
[15] Jakob Verbeek and Bill Triggs. Region classification with Markov field aspect models. In CVPR, 2007.
[16] Jakob Verbeek and Bill Triggs. Scene segmentation with CRFs learned from partially labeled images. In NIPS 20, 2008.
[17] Gang Wang, Ye Zhang, and Li Fei-Fei.
Using dependent regions for object categorization in a generative framework. In CVPR, 2006.
[18] Xiaogang Wang and Eric Grimson. Spatial latent Dirichlet allocation. In NIPS, 2008.
Efficient Sampling for Gaussian Process Inference using Control Variables

Michalis K. Titsias, Neil D. Lawrence and Magnus Rattray
School of Computer Science, University of Manchester, Manchester M13 9PL, UK

Abstract

Sampling functions in Gaussian process (GP) models is challenging because of the highly correlated posterior distribution. We describe an efficient Markov chain Monte Carlo algorithm for sampling from the posterior process of the GP model. This algorithm uses control variables, which are auxiliary function values that provide a low dimensional representation of the function. At each iteration, the algorithm proposes new values for the control variables and generates the function from the conditional GP prior. The control variable input locations are found by minimizing an objective function. We demonstrate the algorithm on regression and classification problems, and we use it to estimate the parameters of a differential equation model of gene regulation.

1 Introduction

Gaussian processes (GPs) are used for Bayesian non-parametric estimation of unobserved or latent functions. In regression problems with Gaussian likelihoods, inference in GP models is analytically tractable, while for classification deterministic approximate inference algorithms are widely used [16, 4, 5, 11]. However, in recent applications of GP models in systems biology [1] that require the estimation of ordinary differential equation models [2, 13, 8], the development of deterministic approximations is difficult since the likelihood can be highly complex. Other applications of Gaussian processes where inference is intractable arise in spatio-temporal models and geostatistics, and deterministic approximations have also been developed there [14]. In this paper, we consider Markov chain Monte Carlo (MCMC) algorithms for inference in GP models.
An advantage of MCMC over deterministic approximate inference is that it provides an arbitrarily precise approximation to the posterior distribution in the limit of long runs. Another advantage is that the sampling scheme often does not depend on details of the likelihood function, and is therefore very generally applicable. In order to benefit from the advantages of MCMC it is necessary to develop an efficient sampling strategy. This has proved particularly difficult in many GP applications, because the posterior distribution describes a highly correlated high-dimensional variable. Thus simple MCMC sampling schemes, such as Gibbs sampling, can be very inefficient. In this contribution we describe an efficient MCMC algorithm for sampling from the posterior process of a GP model which constructs the proposal distributions by utilizing the GP prior. This algorithm uses control variables, which are auxiliary function values. At each iteration, the algorithm proposes new values for the control variables and samples the function by drawing from the conditional GP prior. The control variables are highly informative points that provide a low dimensional representation of the function. The control input locations are found by minimizing an objective function: the expected least squares error of reconstructing the function values from the control variables, where the expectation is taken over the GP prior. We demonstrate the proposed MCMC algorithm on regression and classification problems and compare it with two Gibbs sampling schemes. We also apply the algorithm to inference in a systems biology model where a set of genes is regulated by a transcription factor protein [8]. This provides an example of a problem with a non-linear and non-factorized likelihood function.

2 Sampling algorithms for Gaussian process models

In a GP model we assume a set of inputs (x1, . . . , xN) and a set of function values f = (f1, . . .
, fN) evaluated at those inputs. A Gaussian process places a prior on f which is an N-dimensional Gaussian distribution, so that p(f) = N(f | µ, K). The mean µ is typically zero, and the covariance matrix K is defined by the kernel function k(x_n, x_m), which depends on parameters θ. GPs are widely used for supervised learning [11], in which case we have a set of observed pairs (y_i, x_i), i = 1, . . . , N, and we assume a likelihood model p(y|f) that depends on parameters α. For regression or classification problems, the latent function values are evaluated at the observed inputs and the likelihood factorizes according to p(y|f) = Π_{i=1}^N p(y_i|f_i). However, for other types of applications, such as modelling latent functions in ordinary differential equations, the above factorization is not applicable. Assuming that we have obtained suitable values for the model parameters (θ, α), inference over f is done by applying Bayes' rule:

$$p(f \mid y) \propto p(y \mid f)\, p(f). \qquad (1)$$

For regression, where the likelihood is Gaussian, the above posterior is a Gaussian distribution that can be obtained using simple algebra. When the likelihood p(y|f) is non-Gaussian, computations become intractable and we need to carry out approximate inference.

The MCMC algorithm we consider is the general Metropolis-Hastings (MH) algorithm [12]. Suppose we wish to sample from the posterior in eq. (1). The MH algorithm forms a Markov chain: we initialize f^(0) and consider a proposal distribution Q(f^(t+1) | f^(t)) that allows us to draw a new state given the current state. The new state is accepted with probability min(1, A), where

$$A = \frac{p(y \mid f^{(t+1)})\, p(f^{(t+1)})}{p(y \mid f^{(t)})\, p(f^{(t)})} \cdot \frac{Q(f^{(t)} \mid f^{(t+1)})}{Q(f^{(t+1)} \mid f^{(t)})}. \qquad (2)$$

To apply this generic algorithm, we need to choose the proposal distribution Q. For GP models, finding a good proposal distribution is challenging since f is high dimensional and the posterior distribution can be highly correlated.
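A minimal sketch of the generic MH update in Equation 2, demonstrated on a toy 1-d Gaussian target with a symmetric random-walk proposal (so the Q-ratio cancels); the target and proposal here are our illustrative choices, not a GP posterior.

```python
import math
import random

random.seed(2)

def mh_step(f, log_target, propose, log_q=lambda a, b: 0.0):
    """One Metropolis-Hastings update (eq. 2); log_q(a, b) = log Q(a | b)."""
    f_new = propose(f)
    log_A = (log_target(f_new) - log_target(f)
             + log_q(f, f_new) - log_q(f_new, f))
    # accept with probability min(1, A), working in log space for stability
    if math.log(random.random()) < min(0.0, log_A):
        return f_new
    return f

# toy target: a 1-d standard normal (log density up to a constant)
log_target = lambda f: -0.5 * f * f
# symmetric random-walk proposal, so the Q-ratio in eq. 2 cancels
propose = lambda f: f + random.gauss(0.0, 1.0)

f, samples = 0.0, []
for _ in range(5000):
    f = mh_step(f, log_target, propose)
    samples.append(f)
mean = sum(samples) / len(samples)
```

For GP models the difficulty is entirely in choosing `propose`: in high dimensions, naive choices lead to either near-zero acceptance or tiny moves, which is the problem the control-variable construction below addresses.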
To motivate the algorithm presented in section 2.1, we discuss two extreme options for specifying the proposal distribution Q. One simple choice is to set Q equal to the GP prior p(f). This gives an independent MH algorithm [12]. However, sampling from the GP prior is very inefficient, as it is unlikely to produce a sample that fits the data, so the Markov chain can get stuck in the same state for thousands of iterations. On the other hand, sampling from the prior is appealing because any generated sample satisfies the smoothness requirement imposed by the covariance function; functions drawn from the posterior GP process should satisfy the same smoothness requirement. The other extreme choice for the proposal, considered in [10], is to apply Gibbs sampling, where we iteratively draw samples from each posterior conditional density p(f_i | f_{-i}, y), with f_{-i} = f \ f_i. However, Gibbs sampling can be extremely slow for densely discretized functions, as in the regression problem of Figure 1, where the posterior GP process is highly correlated. To clarify this, note that the variance of the posterior conditional p(f_i | f_{-i}, y) is smaller than or equal to the variance of the conditional GP prior p(f_i | f_{-i}), and p(f_i | f_{-i}) may already have a tiny variance caused by the conditioning on all remaining latent function values. For the one-dimensional example in Figure 1, Gibbs sampling is practically not applicable. We study this issue further in section 4.

A similar algorithm to Gibbs sampling can be expressed by using the sequence of conditional densities p(f_i | f_{-i}) as proposal distributions for the MH algorithm¹. We call this the Gibbs-like algorithm. It can exhibit a high acceptance rate, but it remains inefficient for sampling highly correlated functions.
A simple generalization of the Gibbs-like algorithm that is more appropriate for sampling smooth functions is to divide the domain of the function into regions and sample the entire function within each region, conditioning on the remaining function regions. Local region sampling iteratively draws each block of function values f_k from the conditional GP prior p(f_k^{(t+1)} | f_{-k}^{(t)}), where f_{-k} = f \ f_k. However, this scheme is still inefficient for sampling highly correlated functions, since the variance of the proposal distribution can be very small close to the boundaries between neighbouring function regions. The description of this algorithm is given in the supplementary material. In the next section we describe an algorithm using control variables that can efficiently sample highly correlated functions.

¹Thus we replace the proposal distribution p(f_i | f_{-i}, y) with the prior conditional p(f_i | f_{-i}).

2.1 Sampling using control variables

Let f_c be a set of M auxiliary function values that are evaluated at inputs X_c and drawn from the GP prior. We call f_c the control variables; their meaning is analogous to the auxiliary inducing variables used in sparse GP models [15]. To compute the posterior p(f|y) based on the control variables we use the expression

$$p(f \mid y) = \int_{f_c} p(f \mid f_c, y)\, p(f_c \mid y)\, df_c. \qquad (3)$$

Assuming that f_c is highly informative about f, so that p(f | f_c, y) ≃ p(f | f_c), we can approximately sample from p(f | y) in a two-stage manner: first sample the control variables from p(f_c | y), then generate f from the conditional prior p(f | f_c). This scheme allows us to introduce a MH algorithm where we need to specify only a proposal distribution q(f_c^{(t+1)} | f_c^{(t)}) that will mimic sampling from p(f_c | y), and always sample f from the conditional prior p(f | f_c). The whole proposal distribution takes the form

$$Q(f^{(t+1)}, f_c^{(t+1)} \mid f^{(t)}, f_c^{(t)}) = p(f^{(t+1)} \mid f_c^{(t+1)})\; q(f_c^{(t+1)} \mid f_c^{(t)}).$$
(4)

Each proposed sample is accepted with probability min(1, A), where A is given by

$$A = \frac{p(y \mid f^{(t+1)})\, p(f_c^{(t+1)})}{p(y \mid f^{(t)})\, p(f_c^{(t)})} \cdot \frac{q(f_c^{(t)} \mid f_c^{(t+1)})}{q(f_c^{(t+1)} \mid f_c^{(t)})}. \qquad (5)$$

The usefulness of this sampling scheme stems from the fact that the control variables can form a low-dimensional representation of the function. Assuming that these variables are much fewer than the points in f, the sampling is mainly carried out in the low dimensional space. In section 2.2 we describe how to select the number M of control variables and the inputs X_c so that f_c becomes highly informative about f. In the remainder of this section we discuss how we set the proposal distribution q(f_c^{(t+1)} | f_c^{(t)}).

A suitable choice for q is a Gaussian distribution with a diagonal or full covariance matrix. The covariance matrix can be adapted during the burn-in phase of MCMC in order to increase the acceptance rate. Although this scheme is general, it has practical limitations. Firstly, tuning a full covariance matrix is time consuming, and in our case this adaptation must be carried out simultaneously with searching for an appropriate set of control variables. Also, since the terms involving p(f_c) do not cancel out in the acceptance probability in eq. (5), using a diagonal covariance for the q distribution carries the risk of proposing control variables that do not satisfy the GP prior smoothness requirement. To avoid these problems, we define q by utilizing the GP prior. According to eq. (3), a suitable choice for q must mimic sampling from the posterior p(f_c | y). Given that the control points are far apart from each other, Gibbs sampling in the control variable space can be efficient. However, iteratively sampling f_{c_i} from the conditional posterior p(f_{c_i} | f_{c_{-i}}, y) ∝ p(y | f_c) p(f_{c_i} | f_{c_{-i}}), where f_{c_{-i}} = f_c \ f_{c_i}, is intractable for non-Gaussian likelihoods².
An attractive alternative is to use a Gibbs-like algorithm in which each f_ci is drawn from the conditional GP prior p(f_ci^(t+1) | f_{c-i}^(t)) and accepted using the MH step. More specifically, the proposal distribution draws a new f_ci^(t+1) for a certain control variable i from p(f_ci^(t+1) | f_{c-i}^(t)) and generates the function f^(t+1) from p(f^(t+1) | f_ci^(t+1), f_{c-i}^(t)). The sample (f_ci^(t+1), f^(t+1)) is accepted using the MH step. This scheme of sampling the control variables one-at-a-time and resampling f is iterated between different control variables. A complete iteration of the algorithm consists of a full scan over all control variables. The acceptance probability A in eq. (5) becomes the likelihood ratio, and the prior smoothness requirement is always satisfied. The iteration between different control variables is illustrated in Figure 1.

²This is because we need to integrate out f in order to compute p(y | f_c).

Figure 1: Visualization of iterating between control variables. The red solid line is the current f^(t), the blue line is the proposed f^(t+1), the red circles are the current control variables f_c^(t), while the diamond (in magenta) is the proposed control variable f_ci^(t+1). The blue solid vertical line represents the distribution p(f_ci^(t+1) | f_{c-i}^(t)) (with two-standard-error bars), and the shaded area shows the effective proposal p(f^(t+1) | f_{c-i}^(t)).

Although the control variables are sampled one-at-a-time, f can still be drawn with a considerable variance. To clarify this, note that when the control variable f_ci changes, the effective proposal distribution for f is

p(f^(t+1) | f_{c-i}^(t)) = ∫ p(f^(t+1) | f_ci^(t+1), f_{c-i}^(t)) p(f_ci^(t+1) | f_{c-i}^(t)) df_ci^(t+1),

which is the conditional GP prior given all the control points apart from the current point f_ci.
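The full scan over control variables can be sketched for the tractable case of a Gaussian (regression) likelihood, where the acceptance probability reduces to a likelihood ratio exactly as stated above. The kernel, data, noise level, and all settings below are illustrative assumptions; this is a toy sketch, not the paper's implementation:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * ell ** 2))

def control_mh(X, y, Xc, n_iter=100, noise_var=0.1, seed=0):
    """One-at-a-time control-variable MH for a GP with Gaussian likelihood.

    Per control variable i: propose f_ci from the conditional GP prior given
    the other controls, redraw f from p(f | f_c), accept via likelihood ratio.
    """
    rng = np.random.default_rng(seed)
    M, jit = len(Xc), 1e-8
    Kcc = rbf(Xc, Xc) + jit * np.eye(M)
    Kfc = rbf(X, Xc)
    A = np.linalg.solve(Kcc, Kfc.T).T                # K_fc @ K_cc^{-1}
    cov_f = rbf(X, X) - A @ Kfc.T + jit * np.eye(len(X))
    Lf = np.linalg.cholesky(cov_f)

    def draw_f(fc):                                   # sample from p(f | f_c)
        return A @ fc + Lf @ rng.standard_normal(len(X))

    def loglik(f):
        return -0.5 * np.sum((y - f) ** 2) / noise_var

    fc = np.linalg.cholesky(Kcc) @ rng.standard_normal(M)
    f = draw_f(fc)
    samples, accepted = [], 0
    for _ in range(n_iter):
        for i in range(M):                            # full scan over controls
            rest = np.delete(np.arange(M), i)
            k_ir = rbf(Xc[[i]], Xc[rest])
            w = np.linalg.solve(Kcc[np.ix_(rest, rest)], k_ir.T).T
            m_i = (w @ fc[rest]).item()               # conditional prior mean
            v_i = (1.0 + jit - w @ k_ir.T).item()     # conditional prior var
            fc_new = fc.copy()
            fc_new[i] = m_i + np.sqrt(max(v_i, 0.0)) * rng.standard_normal()
            f_new = draw_f(fc_new)
            if np.log(rng.uniform()) < loglik(f_new) - loglik(f):
                fc, f = fc_new, f_new
                accepted += 1
        samples.append(f.copy())
    return np.array(samples), accepted / (n_iter * M)

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
samples, acc_rate = control_mh(X, y, Xc=np.linspace(0.0, 1.0, 5))
```

Because both the control proposal and the redraw of f come from GP prior conditionals, the prior terms cancel in eq. (5) and only the likelihood ratio remains, matching the text.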
This conditional prior can have considerable variance close to f_ci and in all regions that are not close to the remaining control variables. As illustrated in Figure 1, the iteration over different control variables allows f to be drawn with a considerable variance everywhere in the input space.

2.2 Selection of the control variables

To apply the previous algorithm we need to select the number M of control points and the associated inputs X_c. X_c must be chosen so that knowledge of f_c can determine f with small error. The prediction of f given f_c is K_{f,c} K_{c,c}^{-1} f_c, which is the mean of the conditional prior p(f | f_c). A suitable way to search over X_c is to minimize the reconstruction error ||f − K_{f,c} K_{c,c}^{-1} f_c||² averaged over all possible values of (f, f_c):

G(X_c) = ∫ ||f − K_{f,c} K_{c,c}^{-1} f_c||² p(f | f_c) p(f_c) df df_c = Tr(K_{f,f} − K_{f,c} K_{c,c}^{-1} K_{f,c}^T).

The quantity inside the trace is the covariance of p(f | f_c), and thus G(X_c) is the total variance of this distribution. We can minimize G(X_c) w.r.t. X_c using continuous optimization, similarly to the approach in [15]. Note that when G(X_c) becomes zero, p(f | f_c) becomes a delta function. To find the number M of control points, we minimize G(X_c) by incrementally adding control variables until the total variance of p(f | f_c) becomes smaller than a certain percentage of the total variance of the prior p(f); 5% was the threshold used in all our experiments. Then we start the simulation and observe the acceptance rate of the Markov chain. Standard heuristics [12] suggest that desirable acceptance rates of MH algorithms are around 1/4, so we require a full iteration of the algorithm (a complete scan over the control variables) to have an acceptance rate larger than 1/4. When the chain has a low acceptance rate for the current set of control inputs X_c, the variance of p(f | f_c) is still too high, and we need to add more control points in order to further reduce G(X_c).
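The trace form of G(X_c) is cheap to evaluate. The sketch below greedily adds control inputs from a discrete candidate grid until the 5% threshold is met; the paper minimizes G(X_c) by continuous optimization instead, so the greedy discrete search (and the kernel settings) are simplifying assumptions:

```python
import numpy as np

def rbf(A, B, ell=0.2):
    return np.exp(-(A[:, None] - B[None, :]) ** 2 / (2 * ell ** 2))

def total_variance(X, Xc):
    """G(X_c) = Tr(K_ff - K_fc K_cc^{-1} K_fc^T), total variance of p(f|f_c)."""
    Kfc = rbf(X, Xc)
    Kcc = rbf(Xc, Xc) + 1e-8 * np.eye(len(Xc))
    return np.trace(rbf(X, X)) - np.trace(Kfc @ np.linalg.solve(Kcc, Kfc.T))

def select_controls(X, candidates, frac=0.05):
    """Greedily add control inputs until G(X_c) < frac * Tr(K_ff)."""
    prior_var = np.trace(rbf(X, X))
    Xc, remaining, G = [], list(candidates), prior_var
    while G > frac * prior_var and remaining:
        # pick the candidate whose addition reduces G(X_c) the most
        best = min(remaining, key=lambda c: total_variance(X, np.array(Xc + [c])))
        Xc.append(best)
        remaining.remove(best)
        G = total_variance(X, np.array(Xc))
    return np.array(Xc), G

X = np.linspace(0.0, 1.0, 50)
Xc, G = select_controls(X, np.linspace(0.0, 1.0, 21))
```

As the text notes, with uniformly placed inputs and a stationary kernel the selected controls end up roughly evenly spread over the input range.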
The process of observing the acceptance rate and adding control variables is continued until we reach the desirable acceptance rate. When the training inputs X are placed uniformly in the space and the kernel function is stationary, the minimization of G places X_c in a regular grid. In general, the minimization of G places the control inputs close to the clusters of the input data, in a way that takes the kernel function into account. This suggests that G can also be used for learning inducing variables in sparse GP models in an unsupervised fashion, where the observed outputs y are not involved.

3 Applications

We consider two applications where exact inference is intractable due to a non-linear likelihood function: classification, and parameter estimation in a differential equation model of gene regulation.

Classification: Deterministic inference methods for GP classification are described in [16, 4, 7]. Among these approaches, the Expectation-Propagation (EP) algorithm [9] is found to be the most efficient [6]. Our MCMC implementation confirms these findings, since sampling using control variables gave classification accuracy similar to EP.

Transcriptional regulation: We consider a small biological sub-system where a set of target genes is regulated by one transcription factor (TF) protein. Ordinary differential equations (ODEs) can provide a useful framework for modelling the dynamics in these biological networks [1, 2, 13, 8]. The concentration of the TF and the gene-specific kinetic parameters are typically unknown and need to be estimated from a set of observed gene expression levels. We use a GP prior to model the unobserved TF activity, as proposed in [8], and apply full Bayesian inference based on the MCMC algorithm presented previously. Barenco et al. [2] introduce a linear ODE model for gene activation from the TF. This approach was extended in [13, 8] to account for non-linear models.
The general form of the ODE model for transcription regulation with a single TF is

dy_j(t)/dt = B_j + S_j g(f(t)) − D_j y_j(t), (6)

where the changing level of a gene j's expression, y_j(t), is given by a combination of the basal transcription rate, B_j, the sensitivity, S_j, to its governing TF's activity, f(t), and the decay rate of the mRNA, D_j. The differential equation can be solved for y_j(t), giving

y_j(t) = B_j/D_j + A_j e^{−D_j t} + S_j e^{−D_j t} ∫_0^t g(f(u)) e^{D_j u} du, (7)

where the A_j term arises from the initial condition. Due to the non-linearity of the function g that transforms the TF, the integral in the above expression cannot be obtained analytically. However, numerical integration can be used to approximate it accurately, using a dense grid (u_i)_{i=1}^P of points in the time axis and evaluating the function at the grid points f_p = f(u_p). In this case the integral in the above equation can be written as Σ_{p=1}^{P_t} w_p g(f_p) e^{D_j u_p}, where the weights w_p arise from the numerical integration method used and, for example, can be given by the composite Simpson rule. The TF concentration f(t) in the above system of ODEs is a latent function that needs to be estimated. Additionally, the kinetic parameters of each gene, α_j = (B_j, D_j, S_j, A_j), are unknown and also need to be estimated. To infer these quantities we use mRNA measurements (obtained from microarray experiments) of N target genes at T different time steps. Let y_jt denote the observed gene expression level of gene j at time t, and let y = {y_jt} collect together all these observations. Assuming Gaussian noise for the observed gene expressions, the likelihood of our data has the form

p(y | f, {α_j}_{j=1}^N) = Π_{j=1}^N Π_{t=1}^T p(y_jt | f_{1≤p≤P_t}, α_j), (8)

where each probability density in the above product is a Gaussian with mean given by eq. (7), and f_{1≤p≤P_t} denotes the TF values up to time t. Notice that this likelihood is non-Gaussian due to the non-linearity of g.
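The numerical evaluation of eq. (7) can be sketched as below, using a cumulative trapezoid rule in place of the composite Simpson rule mentioned above (a simplification); the parameter values are illustrative, and the result is checked against the closed form obtained when g(f(u)) is constant:

```python
import numpy as np

def gene_expression(t, f_vals, B, S, D, A, g=lambda x: x):
    """Evaluate y_j(t) = B/D + A e^{-Dt} + S e^{-Dt} * int_0^t g(f(u)) e^{Du} du
    on a dense time grid, with a cumulative trapezoid rule for the integral."""
    integrand = g(f_vals) * np.exp(D * t)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    cum = np.concatenate([[0.0], np.cumsum(steps)])
    return B / D + A * np.exp(-D * t) + S * np.exp(-D * t) * cum

# Sanity check against the closed form for a constant input g(f(u)) = 1:
# y(t) = B/D + A e^{-Dt} + (S/D)(1 - e^{-Dt}).
t = np.linspace(0.0, 10.0, 2001)
y_num = gene_expression(t, np.ones_like(t), B=1.0, S=2.0, D=0.5, A=0.3)
y_exact = 1.0 / 0.5 + 0.3 * np.exp(-0.5 * t) + (2.0 / 0.5) * (1 - np.exp(-0.5 * t))
```

The same routine with a non-linear g (e.g. a saturating response) gives the mean of each Gaussian factor in eq. (8).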
Further, this likelihood does not have a factorized form, as in the regression and classification cases, since an observed gene expression depends on the protein concentration activity at all previous time points. Also note that the discretization of the TF in P time points corresponds to a very dense grid, while the gene expression measurements are sparse, i.e. P ≫ T. To apply full Bayesian inference in the above model, we need to define prior distributions over all unknown quantities. The protein concentration f is a positive quantity, thus a suitable prior is a GP prior on log f. The kinetic parameters of each gene are all positive scalars and are given vague gamma priors. Sampling the GP function is done exactly as described in section 2; we only have to plug the likelihood from eq. (8) into the MH step. Sampling of the kinetic parameters is carried out using Gaussian proposal distributions with diagonal covariance matrices that sample the positive kinetic parameters in the log space.

Figure 2: (a) shows the evolution of the KL divergence (against the number of MCMC iterations) between the true posterior and the empirically estimated posteriors for a 5-dimensional regression dataset. (b) shows the mean values with one-standard-error bars of the KL divergence (against the input dimension) between the true posterior and the empirically estimated posteriors. (c) plots the number of control variables together with the average correlation coefficient of the GP prior.
4 Experiments

In the first experiment we compare Gibbs sampling (Gibbs), sampling using local regions (region) (see the supplementary file) and sampling using control variables (control) on standard regression problems of varied input dimension. The performance of the algorithms can be accurately assessed by computing the KL divergences between the exact Gaussian posterior p(f|y) and the Gaussians obtained by MCMC. We fix the number of training points to N = 200 and vary the input dimension d from 1 to 10. The training inputs X were chosen randomly inside the unit hypercube [0, 1]^d. Thus, we can study the behavior of the algorithms w.r.t. the amount of correlation in the posterior GP process, which depends on how densely the function is sampled: the larger the dimension, the more sparsely the function is sampled. The outputs Y were chosen by randomly producing a GP function using the squared-exponential kernel σ_f² exp(−||x_m − x_n||²/(2ℓ²)), where (σ_f², ℓ²) = (1, 100), and then adding noise with variance σ² = 0.09. The burn-in period was 10⁴ iterations.³ For a certain dimension d, the algorithms were initialized to the same state, obtained by randomly drawing from the GP prior. The parameters (σ_f², ℓ², σ²) were fixed to the values that generated the data. The experimental setup was repeated 10 times so as to obtain confidence intervals. We used thinned samples (keeping one sample every 10 iterations) to calculate the means and covariances of the 200-dimensional posterior Gaussians. Figure 2(a) shows the KL divergence against the number of MCMC iterations for the 5-dimensional input dataset. It seems that for 200 training points and 5 dimensions, the function values are still highly correlated, and thus Gibbs takes much longer for the KL divergence to drop to zero. Figure 2(b) shows the KL divergence against the input dimension after fixing the number of iterations to 3 × 10⁴.
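The KL divergence used as the evaluation metric has a closed form between Gaussians; a minimal sketch of the standard formula (not code from the paper):

```python
import numpy as np

def gaussian_kl(m0, S0, m1, S1):
    """KL(N(m0, S0) || N(m1, S1)) for multivariate Gaussians, in nats."""
    d = len(m0)
    trace_term = np.trace(np.linalg.solve(S1, S0))
    diff = m1 - m0
    mahal = diff @ np.linalg.solve(S1, diff)
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (trace_term + mahal - d + logdet1 - logdet0)
```

Here the empirical mean and covariance estimated from the (thinned) MCMC samples would be plugged in as one argument and the exact Gaussian posterior as the other.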
Clearly Gibbs is very inefficient in low dimensions because of the highly correlated posterior. As the dimension increases and the functions become sparsely sampled, Gibbs improves and eventually the KL divergence approaches zero. The region algorithm works better than Gibbs, but in low dimensions it also suffers from the problem of high correlation. For the control algorithm we observe that the KL divergence is very close to zero for all dimensions. Figure 2(c) shows the increase in the number of control variables used as the input dimension increases. The same plot shows the decrease of the average correlation coefficient of the GP prior as the input dimension increases. This is very intuitive, since one should expect the number of control variables to increase as the function values become more independent.

Next we consider two GP classification problems for which exact inference is intractable. We used the Wisconsin Breast Cancer (WBC) and the Pima Indians Diabetes (PID) binary classification datasets. The first consists of 683 examples (9 input dimensions) and the second of 768 examples (8 dimensions). 20% of the examples were used for testing in each case. The MCMC samplers were run for 5 × 10⁴ iterations (thinned to one sample every five iterations) after a burn-in of 10⁴ iterations. The hyperparameters were fixed to those obtained by EP. Figures 3(a) and (b) show the log-likelihood for MCMC samples on the WBC dataset, for the Gibbs and control algorithms respectively. It can be observed that mixing is far superior for the control algorithm, and it has also converged to a much higher likelihood. In Figure 3(c) we compare the test error and the average negative log likelihood on the test data obtained by the two MCMC algorithms with the results from EP. The proposed control algorithm shows classification performance similar to EP, while the Gibbs algorithm performs significantly worse on both datasets.

³For Gibbs we used 2 × 10⁴ iterations, since the region and control algorithms require additional iterations during the adaption phase.

Figure 3: Results for GP classification. Log-likelihood values are shown for MCMC samples obtained from (a) Gibbs and (b) control applied to the WBC dataset. In (c) we show the test errors (grey bars) and the average negative log likelihoods (black bars) on the WBC (left) and PID (right) datasets and compare with EP.

Figure 4: First row: The left plot shows the inferred TF concentration for p53; the small plot on the top-right shows the ground-truth protein concentration obtained by a Western blot experiment [2]. The middle plot shows the predicted expression of a gene obtained by the estimated ODE model; red crosses correspond to the actual gene expression measurements. The right-hand plot shows the estimated decay rates for all 5 target genes used to train the model. Grey bars display the parameters found by MCMC, and black bars the parameters found in [2] using a linear ODE model. Second row: The left plot shows the inferred TF for LexA. Predicted expressions of two target genes are shown in the remaining two plots. Error bars in all plots correspond to 95% credibility intervals.

In the final two experiments we apply the control algorithm to infer the protein concentration of TFs that activate or repress a set of target genes.
The latent function in these problems is always one-dimensional and densely discretized, and thus the control algorithm is the only one that can converge to the GP posterior process in a reasonable time. We first consider the TF p53, a tumour repressor activated during DNA damage. Seven samples of the expression levels of five target genes in three replicas are collected as the raw time-course data. The non-linear activation of the protein follows the Michaelis-Menten-inspired response [1] that allows saturation effects to be taken into account, so that g(f(t)) = f(t)/(γ_j + f(t)) in eq. (6), where the Michaelis constant for the jth gene is given by γ_j. Note that since f(t) is positive, the GP prior is placed on log f(t). To apply MCMC we discretize f using a grid of P = 121 points. During sampling, 7 control variables were needed to obtain the desirable acceptance rate. Running time was 4 hours for 5 × 10⁵ sampling iterations plus 5 × 10⁴ burn-in iterations. The first row of Figure 4 summarizes the estimated quantities obtained from the MCMC simulation. Next we consider the TF LexA in E. coli, which acts as a repressor. In the repression case there is an analogous Michaelis-Menten model [1] where the non-linear function g takes the form g(f(t)) = 1/(γ_j + f(t)). Again the GP prior is placed on the log of the TF activity. We applied our method to the same microarray data considered in [13], where mRNA measurements of 14 target genes are collected over T = 6 time points. The GP function f was discretized using 121 points. The result for the inferred TF profile, along with predictions of two target genes, is shown in the second row of Figure 4. Our inferred TF profile and reconstructed target gene profiles are similar to those obtained in [13]. However, for certain genes, our model provides a better fit to the gene profile.
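The two Michaelis-Menten response functions and the log-GP positivity transform can be sketched as follows (all values are illustrative):

```python
import numpy as np

# Michaelis-Menten responses; gamma is the Michaelis constant of a gene.
def g_activation(f, gamma):
    """Saturating activation: g(f) = f / (gamma + f), bounded in (0, 1)."""
    return f / (gamma + f)

def g_repression(f, gamma):
    """Repression: g(f) = 1 / (gamma + f), decreasing in the TF activity f."""
    return 1.0 / (gamma + f)

# Positivity of the TF is enforced by placing the GP prior on log f,
# so f = exp(latent GP values) is always positive.
latent = np.array([-1.0, 0.0, 2.0])   # illustrative latent GP draw
f = np.exp(latent)
act = g_activation(f, gamma=1.0)
rep = g_repression(f, gamma=1.0)
```

The activation response increases and saturates below 1 as f grows, while the repression response decreases, matching the roles of p53 (activator) and LexA (repressor).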
5 Discussion

Gaussian processes allow for inference over latent functions using a Bayesian estimation framework. In this paper, we presented an MCMC algorithm that uses control variables. We showed that this sampling scheme can efficiently deal with highly correlated posterior GP processes. MCMC allows for full Bayesian inference in the transcription factor networks application. An important direction for future research will be scaling the models used to much larger systems of ODEs with multiple interacting transcription factors. In such large systems, where MCMC can become slow, a combination of our method with the fast sampling scheme in [3] could be used to speed up the inference.

Acknowledgments

This work is funded by EPSRC Grant No. EP/F005687/1 "Gaussian Processes for Systems Identification with Applications in Systems Biology".

References

[1] U. Alon. An Introduction to Systems Biology: Design Principles of Biological Circuits. Chapman and Hall/CRC, 2006.
[2] M. Barenco, D. Tomescu, D. Brewer, J. Callard, R. Stark, and M. Hubank. Ranked prediction of p53 targets using hidden variable dynamic modeling. Genome Biology, 7(3), 2006.
[3] B. Calderhead, M. Girolami, and N. D. Lawrence. Accelerating Bayesian inference over nonlinear differential equations with Gaussian processes. In Neural Information Processing Systems, 22, 2008.
[4] L. Csato and M. Opper. Sparse online Gaussian processes. Neural Computation, 14:641–668, 2002.
[5] M. N. Gibbs and D. J. C. MacKay. Variational Gaussian process classifiers. IEEE Transactions on Neural Networks, 11(6):1458–1464, 2000.
[6] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification. Journal of Machine Learning Research, 6:1679–1704, 2005.
[7] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the informative vector machine. In Advances in Neural Information Processing Systems, 13. MIT Press, 2002.
[8] N. D. Lawrence, G. Sanguinetti, and M.
Rattray. Modelling transcriptional regulation using Gaussian processes. In Advances in Neural Information Processing Systems, 19. MIT Press, 2007.
[9] T. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362–369, 2001.
[10] R. M. Neal. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Technical report, Dept. of Statistics, University of Toronto, 1997.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[12] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, 2nd edition, 2004.
[13] S. Rogers, R. Khanin, and M. Girolami. Bayesian model-based inference of transcription factor activity. BMC Bioinformatics, 8(2), 2006.
[14] H. Rue, S. Martino, and N. Chopin. Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations. NTNU Statistics Preprint, 2007.
[15] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, 18. MIT Press, 2006.
[16] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
An Online Algorithm for Maximizing Submodular Functions

Matthew Streeter, Google, Inc., Pittsburgh, PA 15213, mstreeter@google.com
Daniel Golovin, Carnegie Mellon University, Pittsburgh, PA 15213, dgolovin@cs.cmu.edu

Abstract

We present an algorithm for solving a broad class of online resource allocation problems. Our online algorithm can be applied in environments where abstract jobs arrive one at a time, and one can complete the jobs by investing time in a number of abstract activities, according to some schedule. We assume that the fraction of jobs completed by a schedule is a monotone, submodular function of a set of pairs (v, τ), where τ is the time invested in activity v. Under this assumption, our online algorithm performs near-optimally according to two natural metrics: (i) the fraction of jobs completed within time T, for some fixed deadline T > 0, and (ii) the average time required to complete each job. We evaluate our algorithm experimentally by using it to learn, online, a schedule for allocating CPU time among solvers entered in the 2007 SAT solver competition.

1 Introduction

This paper presents an algorithm for solving the following class of online resource allocation problems. We are given as input a finite set V of activities. A pair (v, τ) ∈ V × R_{>0} is called an action, and represents spending time τ performing activity v. A schedule is a sequence of actions. We use S to denote the set of all schedules. A job is a function f : S → [0, 1], where for any schedule S ∈ S, f(S) represents the proportion of some task that is accomplished by performing the sequence of actions S. We require that a job f have the following properties (here ⊕ is the concatenation operator):

1. (monotonicity) for any schedules S1, S2 ∈ S, we have f(S1) ≤ f(S1 ⊕ S2) and f(S2) ≤ f(S1 ⊕ S2);
2. (submodularity) for any schedules S1, S2 ∈ S and any action a ∈ V × R_{>0}, f_a(S1 ⊕ S2) ≤ f_a(S1), where we define f_a(S) ≡ f(S ⊕ ⟨a⟩) − f(S).

We will evaluate schedules in terms of two objectives.
The first objective, which we call benefit-maximization, is to maximize f(S) subject to the constraint ℓ(S) ≤ T, for some fixed T > 0, where ℓ(S) equals the sum of the durations of the actions in S. For example, if S = ⟨(v1, 3), (v2, 3)⟩, then ℓ(S) = 6. The second objective is to minimize the cost of a schedule, which we define as

c(f, S) = ∫_{t=0}^∞ (1 − f(S⟨t⟩)) dt,

where S⟨t⟩ is the schedule that results from truncating schedule S at time t. For example, if S = ⟨(v1, 3), (v2, 3)⟩, then S⟨5⟩ = ⟨(v1, 3), (v2, 2)⟩.¹ One way to interpret this objective is to imagine that f(S) is the probability that some desired event occurs as a result of performing the actions in S. For any non-negative random variable X, we have E[X] = ∫_{t=0}^∞ P[X > t] dt. Thus c(f, S) is the expected time we must wait before the desired event occurs if we execute actions according to the schedule S. The following example illustrates these definitions.

¹More formally, if S = ⟨a1, a2, . . .⟩, where a_i = (v_i, τ_i), then S⟨t⟩ = ⟨a1, a2, . . . , a_{k−1}, a_k, (v_{k+1}, τ′)⟩, where k is the largest integer such that Σ_{i=1}^k τ_i < t and τ′ = t − Σ_{i=1}^k τ_i.

Example 1. Let each activity v represent a randomized algorithm for solving some decision problem, and let the action (v, τ) represent running the algorithm (with a fresh random seed) for time τ. Fix some particular instance of the decision problem, and for any schedule S, let f(S) be the probability that one (or more) of the runs in the sequence S yields a solution to that instance. So f(S⟨T⟩) is (by definition) the probability that performing the runs in schedule S yields a solution to the problem instance in time ≤ T, while c(f, S) is the expected time that elapses before a solution is obtained. It is clear that f(S) is monotone, because adding runs to the sequence S can only increase the probability that one of the runs is successful. The fact that f is submodular can be seen as follows.
For any schedule S and action a, f_a(S) equals the probability that action a succeeds after every action in S has failed, which can also be written as (1 − f(S)) · f(⟨a⟩). This, together with the monotonicity of f, implies that for any schedules S1, S2 and any action a, we have f_a(S1 ⊕ S2) = (1 − f(S1 ⊕ S2)) · f(⟨a⟩) ≤ (1 − f(S1)) · f(⟨a⟩) = f_a(S1).

In the online setting, an arbitrary sequence ⟨f^(1), f^(2), . . . , f^(n)⟩ of jobs arrive one at a time, and we must finish each job (via some schedule) before moving on to the next job. When selecting a schedule S^(i) to use to finish job f^(i), we have knowledge of the previous jobs f^(1), f^(2), . . . , f^(i−1), but we have no knowledge of f^(i) itself or of any subsequent jobs. In this setting we aim to minimize regret, which measures the difference between the average cost (or average benefit) of the schedules produced by our online algorithm and that of the best single schedule (in hindsight) for the given sequence of jobs.

1.1 Problems that fit into this framework

A number of previously-studied problems can be cast as the task of computing a schedule S that minimizes c(f, S), where f is of the form

f(S) = (1/n) Σ_{i=1}^n [1 − Π_{(v,τ)∈S} (1 − p_i(v, τ))].

This expression can be interpreted as follows: the job f consists of n subtasks, and p_i(v, τ) is the probability that investing time τ in activity v completes the ith subtask. Thus, f(S) is the expected fraction of subtasks that are finished after performing the sequence of actions S. Assuming p_i(v, τ) is a non-decreasing function of τ for all i and v, it can be shown that any function f of this form is monotone and submodular. PIPELINED SET COVER [11, 15] can be defined as the special case in which for each activity v there is an associated time τ_v, and p_i(v, τ) = 1 if τ ≥ τ_v and p_i(v, τ) = 0 otherwise. MIN-SUM SET COVER [7] is the special case in which, additionally, τ_v = 1 or τ_v = ∞ for all v ∈ V.
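The schedule operations above (truncation S⟨t⟩, the length ℓ, the marginal f_a, and the truncated cost) can be made concrete with a small sketch. The job below follows Example 1 under the illustrative assumption that a run of activity v for time τ succeeds with probability 1 − e^{−r_v τ}; the per-activity rates are hypothetical:

```python
import math

def truncate(S, t):
    """S<t>: the schedule S truncated at time t."""
    out, elapsed = [], 0.0
    for v, tau in S:
        if elapsed + tau <= t:
            out.append((v, tau))
            elapsed += tau
        else:
            if t > elapsed:
                out.append((v, t - elapsed))
            break
    return out

def length(S):
    """l(S): the sum of the durations of the actions in S."""
    return sum(tau for _, tau in S)

RATES = {"v1": 1.0, "v2": 0.5}      # illustrative per-activity success rates

def f(S):
    """P(at least one run in S succeeds), as in Example 1."""
    fail = 1.0
    for v, tau in S:
        fail *= math.exp(-RATES[v] * tau)
    return 1.0 - fail

def marginal(f, S, a):
    """f_a(S) = f(S + [a]) - f(S)."""
    return f(S + [a]) - f(S)

def cost_T(f, S, T, steps=10000):
    """Truncated cost c_T(f,S) = int_0^T (1 - f(S<t>)) dt (midpoint rule)."""
    h = T / steps
    return sum((1.0 - f(truncate(S, (k + 0.5) * h))) * h for k in range(steps))
```

With this f, the truncation example from the text checks out (truncating ⟨(v1,3),(v2,3)⟩ at 5 gives ⟨(v1,3),(v2,2)⟩), and the submodularity inequality f_a(S1 ⊕ S2) ≤ f_a(S1) can be verified numerically on any pair of schedules.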
The problem of constructing efficient sequences of trials [5] corresponds to the case in which we are given a matrix q, and p_i(v, τ) = q_{v,i} if τ ≥ 1 and p_i(v, τ) = 0 otherwise. The problem of maximizing f(S⟨T⟩) is a slight generalization of the problem of maximizing a monotone submodular set function subject to a knapsack constraint [14, 20] (which in turn generalizes BUDGETED MAXIMUM COVERAGE [12], which generalizes MAX k-COVERAGE [16]). The only difference between the two problems is that, in the latter problem, f(S) may only depend on the set of actions in the sequence S, and not on the order in which the actions appear.

1.2 Applications

We now discuss three applications, the first of which is the focus of our experiments in §5.

1. Online algorithm portfolio design. An algorithm portfolio [9] is a schedule for interleaving the execution of multiple (randomized) algorithms and periodically restarting them with a fresh random seed. Previous work has shown that combining multiple heuristics for NP-hard problems into a portfolio can dramatically reduce average-case running time [8, 9, 19]. In particular, algorithms based on chronological backtracking often exhibit heavy-tailed run length distributions, and periodically restarting them with a fresh random seed can reduce the mean running time by orders of magnitude [8]. As illustrated in Example 1, our algorithms can be used to learn an effective algorithm portfolio online, in the course of solving a sequence of problem instances.

2. Database query processing. In database query processing, one must extract all the records in a database that satisfy every predicate in a list of one or more predicates (the conjunction of predicates comprises the query). To process the query, each record is evaluated against the predicates one at a time until the record either fails to satisfy some predicate (in which case it does not match the query) or all predicates have been examined.
The order in which the predicates are examined affects the time required to process the query. Munagala et al. [15] introduced and studied a problem called PIPELINED SET COVER (discussed in §1.1), which entails finding an evaluation order for the predicates that minimizes the average time required to process a record. Our work addresses the online version of this problem, which arises naturally in practice.

3. Sensor placement. Sensor placement is the task of assigning locations to a set of sensors so as to maximize the value of the information obtained (e.g., to maximize the number of intrusions that are detected by the sensors). Many sensor placement problems can be optimally solved by maximizing a monotone submodular set function subject to a knapsack constraint [13], a special case of our benefit-maximization problem (see §1.1). Our online algorithms could be used to select sensor placements when the same set of sensors is repeatedly deployed in an unknown or adversarial environment.

1.3 Summary of results

We first consider the offline variant of our problem. As an immediate consequence of existing results [6, 7], we find that, for any ϵ > 0, (i) achieving an approximation ratio of 4 − ϵ for the cost-minimization problem is NP-hard, and (ii) achieving an approximation ratio of 1 − 1/e + ϵ for the benefit-maximization problem is NP-hard. We then present a greedy approximation algorithm that simultaneously achieves the optimal approximation ratios (of 4 and 1 − 1/e) for these two problems, building on and generalizing previous work on special cases of these two problems [7, 20]. In the online setting we provide an online algorithm whose worst-case performance guarantees approach those of the offline greedy approximation algorithm asymptotically (as the number of jobs approaches infinity). We then show how to modify our online algorithm for use in several different "bandit" feedback settings. Finally, we prove information-theoretic lower bounds on regret.
We conclude with an experimental evaluation.

2 Related Work

As discussed in §1.1, the offline cost-minimization problem considered here generalizes MIN-SUM SET COVER [7], PIPELINED SET COVER [11, 15], and the problem of constructing efficient sequences of trials [5]. Several of these problems have been considered in the online setting. Munagala et al. [15] gave an online algorithm for PIPELINED SET COVER that is asymptotically O(log |V|)-competitive. Babu et al. [3] and Kaplan et al. [11] gave online algorithms for PIPELINED SET COVER that are asymptotically 4-competitive, but only in the special case where the jobs are drawn independently at random from a fixed probability distribution (whereas our online algorithm is asymptotically 4-competitive on an arbitrary sequence of jobs). Our offline benefit-maximization problem generalizes the problem of maximizing a monotone submodular set function subject to a knapsack constraint. Previous work gave offline greedy approximation algorithms for this problem [14, 20], which generalized earlier algorithms for BUDGETED MAXIMUM COVERAGE [12] and MAX k-COVERAGE [16]. To our knowledge, none of these problems has previously been studied in an online setting. Note that our problem is quite different from online set covering problems (e.g., [1]) that require one to construct a single collection of sets that covers each element in a sequence of elements that arrive online. In this paper we convert a specific greedy approximation algorithm into an online algorithm. Recently, Kakade et al. [10] gave a generic procedure for converting an α-approximation algorithm into an online algorithm that is asymptotically α-competitive. Their algorithm applies to linear optimization problems, but not to the non-linear problems we consider here. Independently of us, Radlinski et al. [17] developed a no-regret algorithm for the online version of MAX k-COVERAGE, and applied it to online ranking.
As it turns out, their algorithm is a special case of the algorithm OGunit that we present in §4.1. 3 Offline Greedy Algorithm In the offline setting, we are given as input a job f : S → [0, 1]. Our goal is to compute a schedule S that achieves one of two objectives, either minimizing the cost c(f, S) or maximizing f(S) subject to the constraint ℓ(S) ≤ T.² As already mentioned, this offline problem generalizes MIN-SUM SET COVER under the former objective and generalizes MAX k-COVERAGE under the latter objective, which implies the following computational complexity result [6, 7]. Theorem 1. For any ϵ > 0, achieving a 4 − ϵ (resp. 1 − 1/e + ϵ) approximation ratio for the cost-minimization (resp. benefit-maximization) problem is NP-hard. We now consider an arbitrary schedule G, whose jth action is g_j = (v_j, τ_j). Let s_j = f_{g_j}(G_j)/τ_j, where G_j = ⟨g_1, g_2, . . . , g_{j−1}⟩, and let ϵ_j = max_{(v,τ)∈V×R>0} {f_{(v,τ)}(G_j)/τ} − s_j. We will prove bounds on the performance of G in terms of the ϵ_j values. Note that we can ensure ϵ_j = 0 ∀j by greedily choosing g_j = argmax_{(v,τ)∈V×R>0} {f_{(v,τ)}(G_j)/τ} (i.e., greedily appending actions to the schedule so as to maximize the resulting increase in f per unit time). A key property is stated in the following lemma, which follows from the submodularity assumption (for the proof, see [18]). Lemma 1. For any schedule S, any positive integer j, and any t > 0, f(S⟨t⟩) ≤ f(G_j) + t · (s_j + ϵ_j). Using Lemma 1, together with a geometric proof technique developed in [7], we now show that the greedy algorithm achieves the optimal approximation ratio for the cost-minimization problem. Theorem 2. Let S* = argmin_{S∈S} c(f, S). If ϵ_j = 0 ∀j, then c(f, G) ≤ 4 · c(f, S*). More generally, let L be a positive integer, and let T = Σ_{j=1}^L τ_j. For any schedule S, define c_T(f, S) ≡ ∫_0^T (1 − f(S⟨t⟩)) dt. Then c_T(f, G) ≤ 4 · c(f, S*) + Σ_{j=1}^L E_j ϵ_j, where E_j = Σ_{l<j} τ_l. Proof. We consider the special case ϵ_j = 0 ∀j; for the full proof see [18].
Let R_j = 1 − f(G_j); let x_j = R_j/(2s_j); let y_j = R_j/2; and let h(x) = 1 − f(S*⟨x⟩). By Lemma 1, h(x_j) ≥ R_j − R_j/2 = y_j. The monotonicity of f implies that h(x) is non-increasing and also that the sequence ⟨y_1, y_2, . . .⟩ is non-increasing. These facts imply that ∫_0^∞ h(x) dx ≥ Σ_{j≥1} x_j (y_j − y_{j+1}) (see Figure 1). The left-hand side equals c(f, S*), and, using the fact that s_j = (R_j − R_{j+1})/τ_j, the right-hand side simplifies to (1/4) Σ_{j≥1} R_j τ_j ≥ (1/4) c(f, G), proving c(f, G) ≤ 4 · c(f, S*). [Figure 1: An illustration of the inequality ∫_0^∞ h(x) dx ≥ Σ_{j≥1} x_j (y_j − y_{j+1}).] The greedy algorithm also achieves the optimal approximation ratio for the benefit-maximization problem, as can be shown using arguments similar to the ones in [14, 20]; see [18] for details. Theorem 3. Let L be a positive integer, and let T = Σ_{j=1}^L τ_j. Then f(G⟨T⟩) > (1 − 1/e) · max_{S∈S} {f(S⟨T⟩)} − Σ_{j=1}^L τ_j ϵ_j. [Footnote 2] Given a set of jobs {f^(1), f^(2), . . . , f^(n)}, we can optimize the average schedule cost (or benefit) simply by applying our offline algorithm to the job f = (1/n) Σ_{i=1}^n f^(i) (since any convex combination of jobs is a job). 4 Online Greedy Algorithm In the online setting we are fed, one at a time, a sequence ⟨f^(1), f^(2), . . . , f^(n)⟩ of jobs. Prior to receiving job f^(i), we must specify a schedule S^(i). We then receive complete access to the function f^(i). We measure performance using two different notions of regret. For the cost-minimization objective, we define R_cost = (1/n) Σ_{i=1}^n c_T(S^(i), f^(i)) − 4 · min_{S∈S} {(1/n) Σ_{i=1}^n c(S, f^(i))}, for some fixed T > 0. Here, for any schedule S and job f, we define c_T(S, f) = ∫_0^T (1 − f(S⟨t⟩)) dt to be the value of c(S, f) when the integral is truncated at time T. Some form of truncation is necessary because c(S^(i), f^(i)) could be infinite, and without bounding it we could not prove any finite bound on regret (our regret bounds will be stated as a function of T).
For the benefit-maximization objective, we define R_benefit = (1 − 1/e) · max_{S∈S} {(1/n) Σ_{i=1}^n f^(i)(S⟨T⟩)} − (1/n) Σ_{i=1}^n f^(i)(S^(i)). Here we require that for each i, E[ℓ(S^(i))] = T, where the expectation is over the online algorithm’s random bits. That is, we allow the online algorithm to treat T as a budget in expectation, rather than a hard budget. Our goal is to bound the worst-case expected values of R_cost and R_benefit. For simplicity, we consider the oblivious adversary model, in which the sequence of jobs is fixed in advance and does not change in response to the decisions made by our online algorithm. We confine our attention to schedules that consist of actions that come from some finite set A, and assume that the actions in A have integer durations (i.e., A ⊆ V × Z>0). 4.1 Unit-cost actions In the special case in which each action takes unit time (i.e., A ⊆ V × {1}), our online algorithm OGunit is very simple. OGunit runs T action-selection algorithms, E_1, E_2, . . . , E_T, where T is the number of time steps for which our schedule is defined. The intent is that each action-selection algorithm is a no-regret algorithm such as randomized weighted majority (WMR) [4], which selects actions so as to maximize payoffs associated with the actions. Just before job f^(i) arrives, each action-selection algorithm E_t selects an action a^i_t. The schedule used by OGunit on job f^(i) is S^(i) = ⟨a^i_1, a^i_2, . . . , a^i_T⟩. The payoff that E_t associates with action a is f^(i)_a(S^(i)⟨t−1⟩). Theorem 4. Algorithm OGunit has E[R_benefit] = O(√((T/n) ln |A|)) and E[R_cost] = O(T √((T/n) ln |A|)) in the worst case, when WMR [4] is the subroutine action-selection algorithm. Proof. We will view OGunit as producing an approximate version of the offline greedy schedule for the job f = (1/n) Σ_{i=1}^n f^(i).
First, view the sequence of actions selected by E_t as a single meta-action ã_t, and extend the domain of each f^(i) to include the meta-actions by defining f^(i)(S ⊕ ⟨ã_t⟩) = f^(i)(S ⊕ ⟨a^i_t⟩) for all S ∈ S (note that each f^(i) remains monotone and submodular). Thus, the online algorithm produces a single schedule S̃ = ⟨ã_1, ã_2, . . . , ã_T⟩ for all i. Let r_t be the regret experienced by action-selection algorithm E_t. By construction, r_t = max_{a∈A} {f_a(S̃⟨t−1⟩)} − f_{ã_t}(S̃⟨t−1⟩). Thus OGunit behaves exactly like the greedy schedule G for the function f, with ϵ_t = r_t. Thus, Theorem 3 implies that R_benefit ≤ Σ_{t=1}^T r_t ≡ R. Similarly, Theorem 2 implies that R_cost ≤ TR. To complete the analysis, it remains to bound E[R]. WMR has worst-case expected regret O((1/n) √(G_max ln |A|)), where G_max is the maximum sum of payoffs for any single action.³ Because each payoff is at most 1 and there are n rounds, G_max ≤ n, so a trivial bound is E[R] = O(T √((1/n) ln |A|)). In fact, the worst case is when G_max = Θ(n/T) for all T action-selection algorithms, leading to an improved bound of E[R] = O(√((T/n) ln |A|)) (for details see [18]), which completes the proof. [Footnote 3] This bound requires G_max to be known in advance; however, the same guarantee can be achieved by guessing a value of G_max and doubling the guess whenever it is proven wrong. 4.2 From unit-cost actions to arbitrary actions In this section we generalize the online greedy algorithm presented in the previous section to accommodate actions with arbitrary durations. Like OGunit, our generalized algorithm OG makes use of a series of action-selection algorithms E_1, E_2, . . . , E_L (for L to be determined). On each round i, OG constructs a schedule S^(i) as follows: for t = 1, 2, . . . , L, it uses E_t to choose an action a^i_t = (v, τ) ∈ A, and appends this action to S^(i) with probability 1/τ. Let S^(i)_t denote the schedule that results from the first t steps of this process (so S^(i)_t contains between 0 and t actions).
The payoff that E_t associates with an action a = (v, τ) equals (1/τ) f_a(S^(i)_{t−1}) (i.e., the increase in f per unit time that would have resulted from appending a to the schedule-under-construction). As in the previous section, we view each action-selection algorithm E_t as selecting a single meta-action ã_t. We extend the domain of each f^(i) to include the meta-actions by defining f^(i)(S ⊕ ⟨ã_t⟩) = f^(i)(S ⊕ ⟨a^i_t⟩) if a^i_t was appended to S^(i), and f^(i)(S ⊕ ⟨ã_t⟩) = f^(i)(S) otherwise. Thus, the online algorithm produces a single schedule S̃ = ⟨ã_1, ã_2, . . . , ã_L⟩ for all i. Note that each f^(i) remains monotone and submodular. For the purposes of analysis, we will imagine that each meta-action ã_t always takes unit time (whereas in fact, ã_t takes unit time per job in expectation). We show later that this assumption does not invalidate any of our arguments. Let f = (1/n) Σ_{i=1}^n f^(i), and let S̃_t = ⟨ã_1, ã_2, . . . , ã_t⟩. Thus S̃ can be viewed as a version of the greedy schedule from §3, with ϵ_t = max_{(v,τ)∈A} {(1/τ) f_{(v,τ)}(S̃_{t−1})} − f_{ã_t}(S̃_{t−1}), where we are using the assumption that ã_t takes unit time. Let r_t be the regret experienced by E_t. Although r_t ≠ ϵ_t in general, the two quantities are equal in expectation (proof omitted). Lemma 2. E[ϵ_t] = E[r_t]. We now prove a bound on E[R_benefit]. Because each f^(i) is monotone and submodular, f is monotone and submodular as well, so the greedy schedule’s approximation guarantees apply to f. In particular, by Theorem 3, we have R_benefit ≤ Σ_{t=1}^T ϵ_t. Thus by Lemma 2, E[R_benefit] ≤ E[R], where R = Σ_{t=1}^T r_t. To bound E[R_benefit], it remains to justify the assumption that each meta-action ã_t always takes unit time. First, note that the value of the objective function f(S̃) is independent of how long each meta-action ã_t takes. Thus, the only potential danger is that in making this assumption we have overlooked a constraint violation of the form E[ℓ(S^(i))] ≠ T.
But by construction, E[ℓ(S^(i))] = L for each i, regardless of what actions are chosen by each action-selection algorithm. Thus if we set L = T there is no constraint violation. Combining the bound on E[R] stated in the proof of Theorem 4 with the fact that E[R_benefit] ≤ E[R] yields the following theorem. Theorem 5. Algorithm OG, run with input L = T, has E[R_benefit] ≤ E[R]. If WMR [4] is used as the subroutine action-selection algorithm, then E[R] = O(√((T/n) ln |A|)). The argument bounding E[R_cost] is similar, although somewhat more involved (for details, see [18]). One additional complication is that ℓ(S^(i)) is now a random variable, whereas in the definition of R_cost the cost of a schedule is always calculated up to time T. This can be addressed by making the probability that ℓ(S^(i)) < T sufficiently small, which can be done by setting L ≫ T and applying concentration of measure inequalities. However, E[R] grows as a function of L, so we do not want to make L too large. The (approximately) best bound is obtained by setting L = T ln n. Theorem 6. Algorithm OG, run with input L = T ln n, has E[R_cost] = O(T ln n · E[R] + T√n). In particular, E[R_cost] = O((ln n)^{3/2} T √((T/n) ln |A|)) if WMR [4] is used as the subroutine action-selection algorithm. 4.3 Dealing with limited feedback Thus far we have assumed that, after specifying a schedule S^(i), the online algorithm receives complete access to the job f^(i). We now consider three more limited feedback settings that may arise in practice. In the priced feedback model, to receive access to f^(i) we must pay a price C, which is added to our regret. In the partially transparent feedback model, we only observe f^(i)(S^(i)⟨t⟩) for each t > 0. In the opaque feedback model, we only observe f^(i)(S^(i)).
The priced and partially transparent feedback models arise naturally in the case where action (v, τ) represents running a deterministic algorithm v for τ time units, and f(S) = 1 if some action in S yields a solution to some particular problem instance, and f(S) = 0 otherwise. If we execute a schedule S and halt as soon as some action yields a solution, we obtain exactly the information that is revealed in the partially transparent model. Alternatively, running each algorithm v until it returns a solution would completely reveal the function f^(i), but incurs a computational cost, as reflected in the priced feedback model. Algorithm OG can be adapted to work in each of these three feedback settings; see [18] for the specific bounds. In all cases, the high-level idea is to replace the unknown quantities used by OG with (unbiased) estimates of those quantities. This technique has been used in a number of online algorithms (e.g., see [2]). 4.4 Lower bounds on regret We now state lower bounds on regret; for the proofs see the full paper [18]. Our proofs have the same high-level structure as that of the lower bound given in [4], in that we define a distribution over jobs that allows any online algorithm’s expected performance to be easily bounded, and then prove a bound on the expected performance of the best schedule in hindsight. The upper bounds in Theorem 4 match the lower bounds in Theorem 7 up to logarithmic factors, although the latter apply to standard regret as opposed to R_benefit and R_cost (which include factors of 1 − 1/e and 4). Theorem 7. Let X = √((T/n) ln(|V|T)). Then any online algorithm has worst-case expected regret Ω(X) (resp. Ω(TX)) for the online benefit-maximization (resp. cost-minimization) problem.
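Before turning to experiments, the offline greedy rule of §3 (repeatedly append the action maximizing the marginal increase in f per unit time) can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the job interface (a function mapping tuples of actions to [0, 1]) and the name `greedy_schedule` are ours, and for simplicity this version skips actions that would exceed the budget rather than truncating the final action.

```python
def greedy_schedule(actions, f, budget):
    """Offline greedy rule of Sec. 3: repeatedly append the action (v, tau)
    maximizing the marginal increase in f per unit time, f_a(G) / tau,
    until the time budget is exhausted or no action helps."""
    G = []
    spent = 0.0
    while spent < budget:
        best, best_rate = None, 0.0
        for v, tau in actions:
            if spent + tau > budget:
                continue  # keep the schedule within the budget (simplification)
            gain = f(tuple(G) + ((v, tau),)) - f(tuple(G))
            if gain / tau > best_rate:
                best, best_rate = (v, tau), gain / tau
        if best is None:  # no remaining action strictly increases f
            break
        G.append(best)
        spent += best[1]
    return G
```

On a small coverage-style job (f = fraction of a universe covered, which is monotone and submodular), the rule first picks the action covering the most elements per unit time, matching the greedy behavior analyzed above.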
5 Experimental Evaluation on SAT 2007 Competition Data The annual SAT solver competition (www.satcompetition.org) is designed to encourage the development of efficient Boolean satisfiability solvers, which are used as subroutines in state-of-the-art model checkers, theorem provers, and planners. The competition consists of running each submitted solver on a number of benchmark instances, with a per-instance time limit. Solvers are ranked according to the instances they solve within each of three instance categories: industrial, random, and hand-crafted. We evaluated the online algorithm OG by using it to combine solvers from the 2007 SAT solver competition. To do so, we used data available on the competition web site to construct a matrix X, where X_{i,j} is the time that the jth solver required on the ith benchmark instance. We used this data to determine whether or not a given schedule would solve an instance within the time limit T (schedule S solves instance i if and only if, for some j, S⟨T⟩ contains an action (h_j, τ) with τ ≥ X_{i,j}). As illustrated in Example 1, the task of maximizing the number of instances solved within the time limit, in an online setting in which a sequence of instances must be solved one at a time, is an instance of our online problem (under the benefit-maximization objective). Within each instance category, we compared OG to the offline greedy schedule, to the individual solver that solved the most instances within the time limit, and to a schedule that ran each solver in parallel at equal strength. For these experiments, we ran OG in the full-information feedback model, after finding that the number of benchmark instances was too small for OG to be effective in the limited feedback models. Table 1 summarizes the results. In each category, the offline greedy schedule and the online greedy algorithm outperform all solvers entered in the competition as well as the naïve parallel schedule.
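The schedule-evaluation rule just described (a schedule solves instance i iff it runs some solver j for at least X_{i,j} time units within the limit) can be sketched as follows. The helper names and the tiny runtime matrix in the test are illustrative, not competition data.

```python
def solves(schedule, runtimes, time_limit):
    """True iff the schedule, truncated at time_limit, contains a run
    (j, tau) long enough for solver j: tau >= runtimes[j].
    Assumed representation: a list of (solver_index, run_time) pairs."""
    elapsed = 0.0
    for j, tau in schedule:
        if elapsed >= time_limit:
            break
        run = min(tau, time_limit - elapsed)  # truncate the final run at T
        if run >= runtimes[j]:
            return True
        elapsed += run
    return False

def num_solved(schedule, X, time_limit):
    """Number of instances solved; X[i][j] is solver j's time on instance i."""
    return sum(solves(schedule, row, time_limit) for row in X)
```

With this interface, the benefit-maximization objective for a sequence of instances is just the running count of `solves(...)` results, one instance per round.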
Table 1: Number of benchmark instances solved within the time limit.

Category      Offline greedy   Online greedy   Parallel schedule   Top solver
Industrial    147              149             132                 139
Random        350              347             302                 257
Hand-crafted  114              107             95                  98

References
[1] Noga Alon, Baruch Awerbuch, and Yossi Azar. The online set cover problem. In Proceedings of the 35th STOC, pages 100–105, 2003.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[3] Shivnath Babu, Rajeev Motwani, Kamesh Munagala, Itaru Nishizawa, and Jennifer Widom. Adaptive ordering of pipelined stream filters. In Proc. Intl. Conf. on Management of Data, pages 407–418, 2004.
[4] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David Helmbold, Robert Schapire, and Manfred Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[5] Edith Cohen, Amos Fiat, and Haim Kaplan. Efficient sequences of trials. In Proceedings of the 14th SODA, pages 737–746, 2003.
[6] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998.
[7] Uriel Feige, László Lovász, and Prasad Tetali. Approximating min sum set cover. Algorithmica, 40(4):219–234, 2004.
[8] Carla P. Gomes and Bart Selman. Algorithm portfolios. Artificial Intelligence, 126:43–62, 2001.
[9] Bernardo A. Huberman, Rajan M. Lukose, and Tad Hogg. An economics approach to hard computational problems. Science, 275:51–54, 1997.
[10] Sham Kakade, Adam Kalai, and Katrina Ligett. Playing games with approximation algorithms. In Proceedings of the 39th STOC, pages 546–555, 2007.
[11] Haim Kaplan, Eyal Kushilevitz, and Yishay Mansour. Learning with attribute costs. In Proceedings of the 37th STOC, pages 356–365, 2005.
[12] Samir Khuller, Anna Moss, and Joseph (Seffi) Naor. The budgeted maximum coverage problem. Information Processing Letters, 70(1):39–45, 1999.
[13] Andreas Krause and Carlos Guestrin.
Near-optimal nonmyopic value of information in graphical models. In Proceedings of the 21st UAI, pages 324–331, 2005.
[14] Andreas Krause and Carlos Guestrin. A note on the budgeted maximization of submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University, 2005.
[15] Kamesh Munagala, Shivnath Babu, Rajeev Motwani, Jennifer Widom, and Thomas Eiter. The pipelined set cover problem. In Proc. Intl. Conf. on Database Theory, pages 83–98, 2005.
[16] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[17] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In Proceedings of the 25th ICML, pages 784–791, 2008.
[18] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. Technical Report CMU-CS-07-171, Carnegie Mellon University, 2007.
[19] Matthew Streeter, Daniel Golovin, and Stephen F. Smith. Combining multiple heuristics online. In Proceedings of the 22nd AAAI, pages 1197–1203, 2007.
[20] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32:41–43, 2004.
On the Complexity of Linear Prediction: Risk Bounds, Margin Bounds, and Regularization Sham M. Kakade TTI Chicago Chicago, IL 60637 sham@tti-c.org Karthik Sridharan TTI Chicago Chicago, IL 60637 karthik@tti-c.org Ambuj Tewari TTI Chicago Chicago, IL 60637 tewari@tti-c.org Abstract This work characterizes the generalization ability of algorithms whose predictions are linear in the input vector. To this end, we provide sharp bounds for Rademacher and Gaussian complexities of (constrained) linear classes, which directly lead to a number of generalization bounds. This derivation provides simplified proofs of a number of corollaries including: risk bounds for linear prediction (including settings where the weight vectors are constrained by either L2 or L1 constraints), margin bounds (including both L2 and L1 margins, along with more general notions based on relative entropy), a proof of the PAC-Bayes theorem, and upper bounds on L2 covering numbers (with Lp norm constraints and relative entropy constraints). In addition to providing a unified analysis, the results herein provide some of the sharpest risk and margin bounds. Interestingly, our results show that the uniform convergence rates of empirical risk minimization algorithms tightly match the regret bounds of online learning algorithms for linear prediction, up to a constant factor of 2. 1 Introduction Linear prediction is the cornerstone of an extensive number of machine learning algorithms, including SVM’s, logistic and linear regression, the lasso, boosting, etc. A paramount question is to understand the generalization ability of these algorithms in terms of the attendant complexity restrictions imposed by the algorithm. For example, for the sparse methods (e.g. regularizing based on L1 norm of the weight vector) we seek generalization bounds in terms of the sparsity level. For margin based methods (e.g. SVMs or boosting), we seek generalization bounds in terms of either the L2 or L1 margins. 
The focus of this paper is to provide a more unified analysis for methods which use linear prediction. Given a training set {(x_i, y_i)}_{i=1}^n, the paradigm is to compute a weight vector ŵ which minimizes the F-regularized ℓ-risk. More specifically, ŵ = argmin_w (1/n) Σ_{i=1}^n ℓ(⟨w, x_i⟩, y_i) + λF(w) (1) where ℓ is the loss function, F is the regularizer, and ⟨w, x⟩ is the inner product between vectors x and w. In a formulation closely related to the dual problem, we have: ŵ = argmin_{w : F(w) ≤ c} (1/n) Σ_{i=1}^n ℓ(⟨w, x_i⟩, y_i) (2) where, instead of regularizing, a hard restriction over the parameter space is imposed (by the constant c). This work provides generalization bounds for an extensive family of regularization functions F. Rademacher complexities (a measure of the complexity of a function class) provide a direct route to obtaining such generalization bounds, and this is the route we take. Such bounds are analogous to VC dimension bounds, but they are typically much sharper and allow for distribution-dependent bounds. There are a number of methods in the literature to use Rademacher complexities to obtain either generalization bounds or margin bounds. Bartlett and Mendelson [2002] provide a generalization bound for Lipschitz loss functions. For binary prediction, the results in Koltchinskii and Panchenko [2002] provide means to obtain margin bounds through Rademacher complexities. In this work, we provide sharp bounds for Rademacher and Gaussian complexities of linear classes, with respect to a strongly convex complexity function F (as in Equation 1).
These bounds provide simplified proofs of a number of corollaries: generalization bounds for the regularization algorithm in Equation 2 (including settings where the weight vectors are constrained by either L2 or L1 constraints), margin bounds (including L2 and L1 margins, and, more generally, Lp margins), a proof of the PAC-Bayes theorem, and L2 covering numbers (with Lp norm constraints and relative entropy constraints). Our bounds are often tighter than previous results, and our proofs all follow this more unified methodology. Our proof techniques — reminiscent of those techniques for deriving regret bounds for online learning algorithms — are rooted in convex duality (following Meir and Zhang [2003]) and use a more general notion of strong convexity (as in Shalev-Shwartz and Singer [2006]). Interestingly, the risk bounds we provide closely match the regret bounds for online learning algorithms (up to a constant factor of 2), thus showing that the uniform convergence rates of empirical risk minimization algorithms tightly match the regret bounds of online learning algorithms (for linear prediction). The Discussion provides this more detailed comparison. 1.1 Related Work A staggering number of results have focused on this problem in varied special cases. Perhaps the most extensively studied are margin bounds for the 0-1 loss. For L2-margins (relevant for SVMs, perceptron-based algorithms, etc.), the sharpest bounds are those provided by Bartlett and Mendelson [2002] (using Rademacher complexities) and Langford and Shawe-Taylor [2003], McAllester [2003] (using the PAC-Bayes theorem). For L1-margins (relevant for boosting, winnow, etc.), bounds are provided by Schapire et al. [1998] (using a self-contained analysis) and Langford et al. [2001] (using PAC-Bayes, with a different analysis). Another active line of work is on sparse methods — particularly methods which impose sparsity via L1 regularization (in lieu of the non-convex L0 norm).
For L1 regularization, Ng [2004] provides generalization bounds for this case, which follow from the covering number bounds of Zhang [2002]. However, these bounds are only stated as polynomial in the relevant quantities (dependencies are not provided). Previous to this work, the most unified framework for providing generalization bounds for linear prediction stems from the covering number bounds in Zhang [2002]. Using these covering number bounds, Zhang [2002] derives margin bounds in a variety of cases. However, providing sharp generalization bounds for problems with L1 regularization (or L1 constraints in the dual) requires more delicate arguments. As mentioned, Ng [2004] provides bounds for this case, but the techniques used by Ng [2004] would result in rather loose dependencies (the dependence on the sample size n would be n^{−1/4} rather than n^{−1/2}). We discuss this later in Section 4. 2 Preliminaries Our input space, X, is a subset of a vector space, and our output space is Y. Our samples (X, Y) ∈ X × Y are distributed according to some unknown distribution P. The inner product between vectors x and w is denoted by ⟨w, x⟩, where w ∈ S (here, S is a subset of the dual space to our input vector space). A norm of a vector x is denoted by ∥x∥, and the dual norm is defined as ∥w∥_∗ = sup{⟨w, x⟩ : ∥x∥ ≤ 1}. We further assume that for all x ∈ X, ∥x∥ ≤ X. Let ℓ : R × Y → R+ be our loss function of interest. Throughout we shall consider linear predictors of the form ⟨w, x⟩. The expected loss of w is denoted by L(w) = E[ℓ(⟨w, x⟩, y)]. As usual, we are provided with a sequence of i.i.d. samples {(x_i, y_i)}_{i=1}^n, and our goal is to minimize our expected loss. We denote the empirical loss as L̂(w) = (1/n) Σ_{i=1}^n ℓ(⟨w, x_i⟩, y_i). The restriction we make on our complexity function F is that it is a strongly convex function. In particular, we assume it is strongly convex with respect to our dual norm: a function F : S → R is said to be σ-strongly convex w.r.t.
to ∥·∥_∗ iff ∀u, v ∈ S, ∀α ∈ [0, 1], we have F(αu + (1 − α)v) ≤ αF(u) + (1 − α)F(v) − (σ/2) α(1 − α) ∥u − v∥²_∗. See Shalev-Shwartz and Singer [2006] for more discussion on this generalized definition of strong convexity. Recall the definitions of the Rademacher and Gaussian complexity of a function class F: R_n(F) = E[ sup_{f∈F} (1/n) Σ_{i=1}^n f(x_i) ǫ_i ] and G_n(F) = E[ sup_{f∈F} (1/n) Σ_{i=1}^n f(x_i) ǫ_i ], where, in the former, the ǫ_i independently take values in {−1, +1} with equal probability, and, in the latter, the ǫ_i are independent, standard normal random variables. In both expectations, (x_1, . . . , x_n) are i.i.d. As mentioned in the Introduction, there are a number of methods in the literature to use Rademacher complexities to obtain either generalization bounds or margin bounds. Two results are particularly useful to us. First, Bartlett and Mendelson [2002] provide the following generalization bound for Lipschitz loss functions. Here, L(f) = E[ℓ(f(x), y)] is the expected loss of f : X → R, and L̂(f) = (1/n) Σ_{i=1}^n ℓ(f(x_i), y_i) is the empirical loss. Theorem 1. (Bartlett and Mendelson [2002]) Assume the loss ℓ is Lipschitz (with respect to its first argument) with Lipschitz constant L_ℓ, and that ℓ is bounded by c. For any δ > 0, with probability at least 1 − δ, simultaneously for all f ∈ F we have L(f) ≤ L̂(f) + 2 L_ℓ R_n(F) + c √(log(1/δ)/(2n)), where R_n(F) is the Rademacher complexity of the function class F, and n is the sample size. The second result, for binary prediction, from Koltchinskii and Panchenko [2002] provides a margin bound in terms of the Rademacher complexity. The following is a variant of Theorem 2 in Koltchinskii and Panchenko [2002]: Theorem 2. (Koltchinskii and Panchenko [2002]) The zero-one loss function is given by ℓ(f(x), y) = 1[y f(x) ≤ 0], where y ∈ {+1, −1}. Denote the fraction of the data having γ-margin mistakes by K_γ(f) := |{i : y_i f(x_i) < γ}| / n. Assume that ∀f ∈ F we have sup_{x∈X} |f(x)| ≤ C.
Then, with probability at least 1 − δ over the sample, for all margins γ > 0 and all f ∈ F we have L(f) ≤ K_γ(f) + 4 R_n(F)/γ + √(log(log₂(4C/γ))/n) + √(log(1/δ)/(2n)). (We provide a proof in the appendix.) The above results show that if we provide sharp bounds on the Rademacher complexities then we obtain sharp generalization bounds. Typically, we desire upper bounds on the Rademacher complexity that decrease with n. 3 Complexities of Linear Function Classes Given a subset W ⊆ S, define the associated class of linear functions F_W as F_W := {x ↦ ⟨w, x⟩ : w ∈ W}. Our main theorem bounds the complexity of F_W for certain sets W. Theorem 3. (Complexity Bounds) Let S be a closed convex set and let F : S → R be σ-strongly convex w.r.t. ∥·∥_∗ s.t. inf_{w∈S} F(w) = 0. Further, let X = {x : ∥x∥ ≤ X}. Define W = {w ∈ S : F(w) ≤ W²_∗}. Then, we have R_n(F_W) ≤ X W_∗ √(2/(σn)) and G_n(F_W) ≤ X W_∗ √(2/(σn)). The restriction inf_{w∈S} F(w) = 0 is not a significant one, since adding a constant to F still keeps it strongly convex. Interestingly, the complexity bounds above precisely match the regret bounds for online learning algorithms (for linear prediction), a point which we return to in the Discussion. We first provide a few examples, before proving this result. 3.1 Examples (1) Lp/Lq norms. Let S = R^d. Take ∥·∥, ∥·∥_∗ to be the Lp, Lq norms for p ∈ [2, ∞), 1/p + 1/q = 1, where ∥x∥_p := (Σ_{j=1}^d |x_j|^p)^{1/p}. Choose F(w) = ∥w∥²_q and note that it is 2(q − 1)-strongly convex on R^d w.r.t. itself. Set X, W as in Theorem 3. Then, we have R_n(F_W) ≤ X W_∗ √((p − 1)/n). (3) (2) L∞/L1 norms. Let S = {w ∈ R^d : ∥w∥_1 = W_1, w_j ≥ 0} be the W_1-scaled probability simplex. Take ∥·∥, ∥·∥_∗ to be the L∞, L1 norms, ∥x∥_∞ = max_{1≤j≤d} |x_j|. Fix a probability distribution µ > 0 and let F(w) = entro_µ(w) := Σ_j (w_j/W_1) log(w_j/(W_1 µ_j)). For any µ, entro_µ(w) is 1/W²_1-strongly convex on S w.r.t. ∥·∥_1. Set X as in Theorem 3 and let W(E) = {w ∈ S : entro_µ(w) ≤ E}. Then, we have R_n(F_{W(E)}) ≤ X W_1 √(2E/n).
(4) Note that if we take µ to be the uniform distribution, then for any w ∈ S we have the trivial upper bound entro_µ(w) ≤ log d. Hence, if we let W := W(log d) with uniform µ, then W is the entire scaled probability simplex, and R_n(F_W) ≤ X W_1 √((2 log d)/n). (5) The restriction w_j ≥ 0 can be removed in the definition of S by the standard trick of doubling the dimension of x to include negated copies of each coordinate. So, if we have S = {w ∈ R^d : ∥w∥_1 ≤ W_1} and we set X as above and W = S, then we get R_n(F_W) ≤ X W_1 √((2 log(2d))/n). In this way, even though the L1 norm is not strongly convex (so our previous theorem does not directly apply to it), the class of functions imposed by this L1 norm restriction is equivalent to that imposed by the above entropy restriction. Hence, we are able to analyze the generalization properties of the optimization problem in Equation 2. (3) Smooth norms. A norm is (2, D)-smooth on S if for any x, y ∈ S, (d²/dt²) ∥x + ty∥² ≤ 2D² ∥y∥². Let ∥·∥ be a (2, D)-smooth norm and ∥·∥_∗ be its dual. Lemma 11 in the appendix proves that ∥·∥_∗ is 2/D²-strongly convex w.r.t. itself. Set X, W as in Theorem 3. Then, we have R_n(F_W) ≤ X W_∗ D/√n. (6) (4) Bregman divergences. For a strongly convex F, define the Bregman divergence ∆_F(w∥v) := F(w) − F(v) − ⟨∇F(v), w − v⟩. It is interesting to note that Theorem 3 is still valid if we choose W_∗ = {w ∈ S : ∆_F(w∥v) ≤ W²_∗} for some fixed v ∈ S. This is because the Bregman divergence ∆_F(·∥v) inherits the strong convexity of F. Except for (5), none of the above bounds depend explicitly on the dimension of the underlying space, and hence they can be easily extended to infinite-dimensional spaces under appropriate assumptions. 3.2 The Proof First, some background on convex duality is in order. The Fenchel conjugate of F : S → R is defined as F*(θ) := sup_{w∈S} {⟨w, θ⟩ − F(w)}. A simple consequence of this definition is the Fenchel-Young inequality: ∀θ, ∀w ∈ S, ⟨w, θ⟩ ≤ F(w) + F*(θ).
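As a quick numerical sanity check of the Fenchel-Young inequality, consider F(w) = (1/2)∥w∥²₂, which is its own conjugate; the gap F(w) + F*(θ) − ⟨w, θ⟩ then equals (1/2)∥w − θ∥² ≥ 0, vanishing exactly when θ = ∇F(w) = w. A minimal sketch (function names are ours, not from the paper):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def F(w):
    """F(w) = (1/2)||w||_2^2, whose Fenchel conjugate is F itself."""
    return 0.5 * dot(w, w)

def fenchel_young_gap(w, theta):
    """F(w) + F*(theta) - <w, theta>: nonnegative by Fenchel-Young,
    and here equal to (1/2)||w - theta||_2^2."""
    return F(w) + F(theta) - dot(w, theta)
```

Equality in Fenchel-Young holds when θ is a subgradient of F at w, which is the mechanism exploited in the proof below when λθ is matched against the optimal w.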
If F is σ-strongly convex, then F* is differentiable and, ∀θ, η, F*(θ + η) ≤ F*(θ) + ⟨∇F*(θ), η⟩ + (1/(2σ)) ∥η∥²_∗. (7) See the Appendix in Shalev-Shwartz [2007] for a proof. Using this inequality we can control the expectation of F* applied to a sum of independent random variables. Lemma 4. Let S be a closed convex set and let F : S → R be σ-strongly convex w.r.t. ∥·∥_∗. Let the Z_i be mean-zero independent random vectors such that E[∥Z_i∥²_∗] ≤ V². Define S_i := Σ_{j≤i} Z_j. Then F*(S_i) − iV²/(2σ) is a supermartingale. Furthermore, if inf_{w∈S} F(w) = 0, then E[F*(S_n)] ≤ nV²/(2σ). Proof. Note that inf_{w∈S} F(w) = 0 implies F*(0) = 0. Inequality (7) gives F*(S_i) = F*(S_{i−1} + Z_i) ≤ F*(S_{i−1}) + ⟨∇F*(S_{i−1}), Z_i⟩ + (1/(2σ)) ∥Z_i∥²_∗. Taking the conditional expectation w.r.t. Z_1, . . . , Z_{i−1} and noting that E_{i−1}[Z_i] = 0 and E_{i−1}[∥Z_i∥²_∗] ≤ V², we get E_{i−1}[F*(S_i)] ≤ F*(S_{i−1}) + 0 + V²/(2σ), where E_{i−1}[·] abbreviates E[· | Z_1, . . . , Z_{i−1}]. This establishes the supermartingale property; together with F*(S_0) = F*(0) = 0, it implies E[F*(S_n)] ≤ nV²/(2σ). Like Meir and Zhang [2003] (see Section 5 therein), we begin by using conjugate duality to bound the Rademacher complexity. To finish the proof, we exploit the strong convexity of F by applying the above lemma. Proof. Fix x_1, . . . , x_n such that ∥x_i∥ ≤ X. Let θ = (1/n) Σ_i ǫ_i x_i, where the ǫ_i are i.i.d. Rademacher or Gaussian random variables (our proof only requires that E[ǫ_i] = 0 and E[ǫ²_i] = 1). Choose arbitrary λ > 0. By Fenchel’s inequality, we have ⟨w, λθ⟩ ≤ F(w) + F*(λθ), which implies ⟨w, θ⟩ ≤ F(w)/λ + F*(λθ)/λ. Since F(w) ≤ W²_∗ for all w ∈ W, we have sup_{w∈W} ⟨w, θ⟩ ≤ W²_∗/λ + F*(λθ)/λ. Taking the expectation (w.r.t. the ǫ_i), we get E[sup_{w∈W} ⟨w, θ⟩] ≤ W²_∗/λ + (1/λ) E[F*(λθ)]. Now set Z_i = λǫ_i x_i/n (so that S_n = λθ) and note that the conditions of Lemma 4 are satisfied with V² = λ²X²/n², and hence E[F*(λθ)] ≤ λ²X²/(2σn). Plugging this in above, we have E[sup_{w∈W} ⟨w, θ⟩] ≤ W²_∗/λ + λX²/(2σn). Setting λ = √(2σnW²_∗/X²) gives E[sup_{w∈W} ⟨w, θ⟩] ≤ X W_∗ √(2/(σn)), which completes the proof.
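The bound just proved can be checked numerically in the L2/L2 case (p = q = 2, σ = 2 for F(w) = ∥w∥²₂), where the supremum over the Euclidean ball is available in closed form: sup_{∥w∥₂≤W} ⟨w, θ⟩ = W∥θ∥₂. A Monte Carlo sketch with synthetic data (our own construction, not from the paper):

```python
import math
import random

def empirical_rademacher_l2(xs, W, trials=500, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of
    F_W = {x -> <w, x> : ||w||_2 <= W}.  For the Euclidean ball the inner
    supremum is attained in closed form:
        sup_{||w||_2 <= W} <w, (1/n) sum_i eps_i x_i>
            = W * || (1/n) sum_i eps_i x_i ||_2 .
    """
    rng = random.Random(seed)
    n, d = len(xs), len(xs[0])
    total = 0.0
    for _ in range(trials):
        theta = [0.0] * d
        for x in xs:
            eps = rng.choice((-1.0, 1.0))
            for k in range(d):
                theta[k] += eps * x[k] / n
        total += W * math.sqrt(sum(t * t for t in theta))
    return total / trials
```

With, e.g., 200 points drawn uniformly from [−1, 1]^5 and W = 3, the estimate stays below the Theorem 3 bound X W √(2/(σn)) = X W/√n, typically with room to spare, since X is the worst-case norm while the expectation averages over all x_i.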
4 Corollaries
4.1 Risk Bounds
We now provide generalization error bounds for any Lipschitz loss function ℓ with Lipschitz constant Lℓ. Based on the Rademacher generalization bound provided in the Introduction (see Theorem 1) and the bounds on Rademacher complexity proved in the previous section, we obtain the following corollaries.
Corollary 5. Each of the following statements holds with probability at least 1 − δ over the sample:
• Let W be as in the Lp/Lq norms example. For all w ∈ W,
L(w) ≤ L̂(w) + 2LℓXW∗ √((p − 1)/n) + LℓXW∗ √(log(1/δ)/2n)
• Let W be as in the L∞/L1 norms example. For all w ∈ W,
L(w) ≤ L̂(w) + 2LℓXW1 √(2 log(d)/n) + LℓXW1 √(log(1/δ)/2n)
Ng [2004] provides bounds for methods which use L1 regularization. These bounds are only stated as polynomial bounds, and the methods used (covering number techniques from Pollard [1984] and covering number bounds from Zhang [2002]) would provide rather loose bounds (the n dependence would be n^(−1/4)). In fact, even a more careful analysis via Dudley's entropy integral using the covering numbers from Zhang [2002] would result in a worse bound (with additional log n factors). The above argument is sharp and rather direct.
4.2 Margin Bounds
In this section we restrict ourselves to binary classification, where Y = {+1, −1}. Our prediction is given by sign(⟨w, x⟩). The zero-one loss function is given by ℓ(⟨w, x⟩, y) = 1[y⟨w, x⟩ ≤ 0]. Denote the fraction of the data having γ-margin mistakes by Kγ(f) := |{i : yi f(xi) < γ}| / n. We now demonstrate how to get improved margin bounds using the upper bounds for the Rademacher complexity derived in Section 3. Based on the Rademacher margin bound provided in the Introduction (see Theorem 2), we get the following corollary, which directly implies the margin bounds we are aiming for. The bound for the p = 2 case has been used to explain the performance of SVMs.
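The margin-mistake fraction Kγ is simple to compute; a small sketch (the data and weight vector are invented for illustration):

```python
import numpy as np

def margin_mistakes(w, X, y, gamma):
    """Fraction of examples whose margin y * <w, x> falls below gamma."""
    return float(np.mean(y * (X @ w) < gamma))

# Three points whose margins under w are 2.0, 0.5 and 1.0 respectively.
X = np.array([[2.0, 0.0], [0.5, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0])
w = np.array([1.0, 0.0])

print(margin_mistakes(w, X, y, 1.0))   # 1/3: only the margin-0.5 point
print(margin_mistakes(w, X, y, 0.0))   # 0.0: no zero-one mistakes
```

As γ shrinks toward zero, Kγ approaches the empirical zero-one error, which is the trade-off the corollaries below quantify.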
Our bound essentially matches the best known bound [Bartlett and Mendelson, 2002], which was an improvement over previous bounds [Bartlett and Shawe-Taylor, 1999] proved using fat-shattering dimension estimates. For the L∞/L1 case, our bound improves the best known bound [Schapire et al., 1998] by removing a factor of √(log n).
Corollary 6. (Lp Margins) Each of the following statements holds with probability at least 1 − δ over the sample:
• Let W be as in the Lp/Lq norms example. For all γ > 0, w ∈ W,
L(w) ≤ Kγ(w) + (4XW∗/γ) √((p − 1)/n) + √(log(log₂(4XW∗/γ))/n) + √(log(1/δ)/2n)
• Let W be as in the L∞/L1 norms example. For all γ > 0, w ∈ W,
L(w) ≤ Kγ(w) + (4XW1/γ) √(2 log(d)/n) + √(log(log₂(4XW1/γ))/n) + √(log(1/δ)/2n)
The following result improves the best known results of the same kind, [Langford et al., 2001, Theorem 5] and [Zhang, 2002, Theorem 7], by removing a factor of √(log n). These results were themselves an improvement over previous results obtained using fat-shattering dimension estimates.
Corollary 7. (Entropy Based Margins) Let X be such that for all x ∈ X, ∥x∥∞ ≤ X. Consider the class W = {w ∈ R^d : ∥w∥1 ≤ W1}. Fix an arbitrary prior µ. With probability at least 1 − δ over the sample, for all margins γ > 0 and all weight vectors w ∈ W,
L(w) ≤ Kγ(w) + (8.5 XW1/γ) √((entroµ(w) + 2.5)/n) + √(log(log₂(4XW1/γ))/n) + √(log(1/δ)/2n)
where entroµ(w) := Σ_i (|wi|/∥w∥1) log(|wi|/(µi ∥w∥1)).
Proof. The proof is provided in the appendix.
4.3 PAC-Bayes Theorem
We now show that (a form of) the PAC-Bayesian theorem [McAllester, 1999] is a consequence of Theorem 3. In the PAC-Bayesian setting, we have a (possibly infinite) set of hypotheses C. We choose some prior distribution µ over this hypothesis set and, after observing the training data, we choose an arbitrary posterior ν. The loss we are interested in is ℓν(x, y) = E_{c∼ν} ℓ(c, x, y), i.e., the expectation of the loss when hypotheses c ∈ C are drawn i.i.d. from the distribution ν.
Note that in this section we are considering a more general form of the loss. The key observation is that we can view ℓν(x, y) as the inner product ⟨dν(·), ℓ(·, x, y)⟩ between the measure dν(·) and the loss ℓ(·, x, y). This leads to the following straightforward corollary.
Corollary 8. (PAC-Bayes) For a fixed prior µ over the hypothesis set C, and any loss bounded by 1, with probability at least 1 − δ over the sample, simultaneously for all choices of posterior ν over C, we have
Lν ≤ L̂ν + 4.5 √(max{KL(ν∥µ), 2}/n) + √(log(1/δ)/2n)   (8)
Proof. The proof is provided in the appendix.
Interestingly, this result is an improvement over the original statement, in which the last term was √(log(n/δ)/n). Our bound removes this extra log n factor, so, in the regime where we fix ν and examine large n, this bound is sharper. We note that our goal was not to prove the PAC-Bayes theorem, and we have made little attempt to optimize the constants.
4.4 Covering Number Bounds
It is worth noting that, using Sudakov's minoration results, we can obtain upper bounds on the L2 (and hence also L1) covering numbers using the Gaussian complexities. The following is a direct corollary of the Sudakov minoration theorem for Gaussian complexities (Theorem 3.18, page 80 of Ledoux and Talagrand [1991]).
Corollary 9. Let FW be the function class from Theorem 3. There exists a universal constant K > 0 such that its L2 covering number is bounded as follows: for all ǫ > 0,
log(N2(FW, ǫ, n)) ≤ 2K²X²W∗² / (σǫ²)
This bound is sharper than those that could be derived from the N∞ covering number bounds of Zhang [2002].
5 Discussion: Relations to Online, Regret Minimizing Algorithms
In this section, we make the further assumption that the loss ℓ(⟨w, x⟩, y) is convex in its first argument. We now show that, in the online setting, the regret bounds for linear prediction closely match our risk bounds.
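A classical member of the family of regret-minimizing online updates for linear prediction is exponentiated gradient, the multiplicative update arising from an entropic regularizer on the probability simplex. A minimal sketch (the data-generating process, step size, and squared loss here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, eta = 5, 2000, 0.5
w = np.full(d, 1.0 / d)                 # start at the uniform point of the simplex

losses = []
for t in range(T):
    x = rng.uniform(0.0, 1.0, size=d)
    y = x[0]                            # target depends only on coordinate 0
    err = w @ x - y
    losses.append(0.5 * err * err)
    # Multiplicative (exponentiated gradient) update, then renormalise.
    w = w * np.exp(-eta * err * x)
    w = w / w.sum()

early = float(np.mean(losses[:100]))
late = float(np.mean(losses[-100:]))
print(early, late)                      # average loss drops as w concentrates on coord 0
```

The renormalisation keeps w on the simplex at every round, which is exactly the constraint the entropic regularizer encodes.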
The algorithm we consider performs the update
wt+1 = (∇F)⁻¹(∇F(wt) − η ∇w ℓ(⟨wt, xt⟩, yt))   (9)
This update captures gradient updates, multiplicative updates, and updates based on the Lp norms, through appropriate choices of F. See Shalev-Shwartz [2007] for discussion. For the algorithm given by the above update, the following theorem bounds the cumulative regret. It is a corollary of Theorem 1 in Shalev-Shwartz and Singer [2006] (and also of Corollary 1 in Shalev-Shwartz [2007]), applied to our linear case.
Corollary 10. (Shalev-Shwartz and Singer [2006]) Let S be a closed convex set and let F : S → R be σ-strongly convex w.r.t. ∥·∥∗. Further, let X = {x : ∥x∥ ≤ X} and W = {w ∈ S : F(w) ≤ W∗²}. Then, for the update given by Equation 9, if we start with w1 = argmin_w F(w), we have that for all sequences {(xt, yt)}, t = 1, . . . , n,
Σ_{t=1}^n ℓ(⟨wt, xt⟩, yt) − min_{w∈W} Σ_{t=1}^n ℓ(⟨w, xt⟩, yt) ≤ LℓXW∗ √(2n/σ)
For completeness, we provide a direct proof in the Appendix. Interestingly, the regret bound above precisely matches our complexity bounds (when Lℓ = 1). Also, our risk bounds are a factor of 2 worse, essentially due to the symmetrization step used in proving Theorem 1.
References
P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
P. L. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern classifiers. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods – Support Vector Learning, pages 43–54. MIT Press, 1999.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization error of combined classifiers. Annals of Statistics, 30(1):1–50, 2002.
J. Langford and J. Shawe-Taylor. PAC-Bayes & margins.
In Advances in Neural Information Processing Systems 15, pages 423–430, 2003. J. Langford, M. Seeger, and Nimrod Megiddo. An improved predictive accuracy bound for averaging classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 290–297, 2001. M. Ledoux and M. Talagrand. Probability in Banach spaces: Isoperimetry and processes, volume 23 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3). Springer-Verlag, 1991. David A. McAllester. Simplified PAC-Bayesian margin bounds. In Proceedings of the Sixteenth Annual Conference on Computational Learning Theory, pages 203–215, 2003. David A. McAllester. PAC-Bayesian model averaging. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 164–170, 1999. Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839–860, 2003. A.Y. Ng. Feature selection, l1 vs. l2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004. David Pollard. Convergence of Stochastic Processes. Springer-Verlag, 1984. R.E. Schapire, Y. Freund, P. Bartlett, and W.S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, October 1998. S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007. S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. In Advances in Neural Information Processing Systems 20, 2006. M. Warmuth and A. K. Jagota. Continuous versus discrete-time non-linear gradient descent: Relative loss bounds and convergence. In Fifth International Symposium on Artificial Intelligence and Mathematics, 1997. T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
2008
Bayesian Exponential Family PCA Shakir Mohamed Katherine Heller Zoubin Ghahramani Department of Engineering, University of Cambridge Cambridge, CB2 1PZ, UK {sm694,kah60,zoubin}@eng.cam.ac.uk Abstract Principal Components Analysis (PCA) has become established as one of the key tools for dimensionality reduction when dealing with real valued data. Approaches such as exponential family PCA and non-negative matrix factorisation have successfully extended PCA to non-Gaussian data types, but these techniques fail to take advantage of Bayesian inference and can suffer from problems of overfitting and poor generalisation. This paper presents a fully probabilistic approach to PCA, which is generalised to the exponential family, based on Hybrid Monte Carlo sampling. We describe the model which is based on a factorisation of the observed data matrix, and show performance of the model on both synthetic and real data. 1 Introduction In Principal Components Analysis (PCA) we seek to reduce the dimensionality of a D-dimensional data vector to a smaller K-dimensional vector, which represents an embedding of the data in a lower dimensional space. The traditional PCA algorithm is non-probabilistic and defines the eigenvectors corresponding to the K-largest eigenvalues as this low dimensional embedding. In probabilistic approaches to PCA, such as probabilistic PCA (PPCA) and Bayesian PCA [1], the data is modelled by unobserved latent variables, and these latent variables define the low dimensional embedding. In these models both the data and the latent variables are assumed to be Gaussian distributed. This Gaussian assumption may not be suitable for all data types, especially in the case where data is binary or integer valued. 
Models such as Non-negative Matrix Factorisation (NMF) [2], Discrete Components Analysis (DCA) [3], Exponential Family PCA (EPCA) [4] and Semi-parametric PCA (SP-PCA) [5], have been developed that endow PCA the ability to handle data for which Bernoulli or Poisson distributions may be more appropriate. These general approaches to PCA involve the representation of the data matrix X as a product of smaller matrices: the factor score matrix V, representing the reduced vectors; and a data independent part Θ, known as the factor loading matrix. In the original data matrix, there are N × D entries, and in the matrix factorisation there are (N + D) × K entries, which is a reduction in the number of parameters if K ≪N, D [3]. Models such as PCA, NMF and EPCA are from the class of deterministic latent variable models [6], since their latent variables are set to their maximum a posteriori (MAP) values. Welling et al. [6] argue that the resulting model essentially assigns zero probability to all input configurations that are not in the training set. This problem stems from the use of an inappropriate objective function, and can be remedied by using an alternate approximate inference scheme. In this paper, we propose a fully Bayesian approach to PCA generalised to the exponential family. Our approach follows the method of factorising the data matrix into two lower rank matrices using an exponential family distribution for the data with conjugate priors. The exponential family of distributions is reviewed in section 2, and the complete specification for the model is given in section 3. Learning and inference in the model is performed using the Hybrid Monte Carlo approach, which is appropriate due to the continuous nature of variables in the model. The connections to existing generalised PCA methods, such as NMF and EPCA are discussed in section 4. We present results on the performance of our Bayesian exponential family PCA model in section 5. 
We report performance using both a synthetic data set, to highlight particular model properties, and two real datasets: the CEDAR Buffalo digits dataset and data on cardiac SPECT images. The Bayesian approach gives us many samples of the final low dimensional embedding of the data, and techniques for determining a single low dimensional embedding are discussed in section 6. In section 7 we conclude and present a survey of possible future work.
2 Exponential Family Models
In the exponential family of distributions, the conditional probability of a value xn given a parameter value θ takes the following form:
p(xn|θ) = exp{s(xn)⊤θ + h(xn) + g(θ)}   (1)
where s(xn) are the sufficient statistics, θ is a vector of natural parameters, h(xn) is a function of the data and g(θ) is a function of the parameters. In this paper, the natural representation of the exponential family likelihood is used, such that s(xn) = xn. It is convenient to represent a variable xn drawn from an exponential family distribution using the notation xn ∼ Expon(θ), with natural parameters θ. Probability distributions that belong to the exponential family also have natural conjugate prior distributions p(θ). The conjugate prior distribution for the exponential family distribution of equation (1) is:
p(θ) ∝ exp{λ⊤θ + νg(θ) + f(λ)}   (2)
where λ and ν are hyperparameters of the prior distribution. In this case we use the notation θ ∼ Conj(λ, ν) as shorthand for the conjugate distribution. As an example, for binary data an appropriate data distribution is the Bernoulli distribution. This distribution is usually written as p(x|µ) = µ^x (1 − µ)^(1−x), with µ ∈ [0, 1]. The exponential family form of this distribution, using the terms in equation (1), is: h(x) = 0, θ = ln(µ/(1 − µ)) and g(θ) = −ln(1 + e^θ). The natural parameters can be mapped to the parameter values of the distribution using the link function, which is the logistic sigmoid in the case of the Bernoulli distribution.
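This Bernoulli correspondence is easy to verify numerically: with θ = ln(µ/(1 − µ)) and g(θ) = −ln(1 + e^θ), the exponential-family form exp{xθ + g(θ)} recovers µ^x(1 − µ)^(1−x), and the logistic sigmoid inverts the link. A quick sketch:

```python
import numpy as np

mu = 0.3
theta = np.log(mu / (1.0 - mu))          # natural parameter (log-odds)
g = -np.log1p(np.exp(theta))             # log-partition term g(theta)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for x in (0.0, 1.0):
    expfam = np.exp(x * theta + g)       # h(x) = 0 for the Bernoulli
    standard = mu**x * (1.0 - mu)**(1.0 - x)
    assert abs(expfam - standard) < 1e-12
assert abs(sigmoid(theta) - mu) < 1e-12  # the link maps theta back to mu
print("Bernoulli exponential-family form verified")
```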
The terms of the conjugate distribution can also be derived easily.
3 Bayesian Exponential Family PCA
We can consider Bayesian Exponential Family PCA (BXPCA) as a method of searching for two matrices V and Θ, and we define the product matrix P = VΘ. In traditional PCA, the elements of the matrix P, which are the means of Gaussians, lie in the same space as the data X. In the case of BXPCA and other methods for non-Gaussian PCA such as EPCA [4], this matrix represents the natural parameters of the exponential family distribution of the data. We represent the observed data as an N × D matrix X = {x1, . . . , xN}, with an individual data point xn = [xn1, . . . , xnD]. N is the number of data points and D is the number of input features. Θ is a K × D matrix with rows θk. V is an N × K matrix V = {v1, . . . , vN}, whose rows vn = [vn1, . . . , vnK] are K-dimensional vectors of continuous values in R. K is the number of latent factors, representing the dimensionality of the reduced space.
3.1 Model Specification
The generative process for the BXPCA model is described in figure 1. Let m and S be hyperparameters representing a K-dimensional vector of initial mean values and an initial covariance matrix respectively. Let α and β be the hyperparameters corresponding to the shape and scale parameters of an inverse Gamma distribution. We start by drawing µ from a Gaussian distribution and the elements σk² of the diagonal matrix Σ from an inverse Gamma distribution:
µ ∼ N(µ|m, S),   σk² ∼ iG(α, β)   (3)
Figure 1: Graphical Model for Bayesian Exponential Family PCA.
For each data point n, we draw the K-dimensional entry vn of the factor score matrix:
vn ∼ N(vn|µ, Σ)   (4)
The data is described by an exponential family distribution with natural parameters θk. The exponential family distribution modelling the data, and the corresponding prior over the model parameters, is:
xn | vn, Θ ∼ Expon(Σ_k vnk θk)
θk ∼ Conj(λ, ν)   (5)
We denote Ω = {V, Θ, µ, Σ} as the set of unknown parameters, with hyperparameters Ψ = {m, S, α, β, λ, ν}. Given the graphical model, the joint probability of all parameters and variables is:
p(X, Ω|Ψ) = p(X|V, Θ) p(Θ|λ, ν) p(V|µ, Σ) p(µ|m, S) p(Σ|α, β)   (6)
Using the model specification given by equations (3)–(5) and assuming that the parameter ν = 1, the log-joint probability distribution is:
ln p(X, Ω|Ψ) = Σ_{n=1}^N [ (Σ_k vnk θk)⊤ xn + h(xn) + g(Σ_k vnk θk) ]
  + Σ_{k=1}^K [ λ⊤θk + g(θk) + f(λ) ]
  + Σ_{n=1}^N [ −(K/2) ln(2π) − (1/2) ln|Σ| − (1/2)(vn − µ)⊤ Σ⁻¹ (vn − µ) ]
  − (K/2) ln(2π) − (1/2) ln|S| − (1/2)(µ − m)⊤ S⁻¹ (µ − m)
  + Σ_{i=1}^K [ α ln β − ln Γ(α) + (α − 1) ln σi² − β σi² ]   (7)
where the functions h(·), g(·) and f(·) correspond to the functions of the chosen conjugate distribution for the data.
3.2 Learning
The model parameters Ω = {V, Θ, µ, Σ} are learned from the data using Hybrid Monte Carlo (HMC) sampling [7]. While the parameters Ψ = {m, S, α, β, λ, ν} are treated as fixed hyperparameters, these could also be learned from the data. Hybrid Monte Carlo is a suitable sampler for this model since all the variables are continuous and it is possible to compute the derivative of the log-joint probability. HMC is also an attractive sampling scheme since it avoids the random-walk behaviour of the Metropolis or Gibbs sampling algorithms [7]. Hybrid Monte Carlo is an auxiliary variable sampler in which we sample from an augmented distribution p(x, u), rather than the target distribution p(x), since it is easier to sample from this augmented distribution [8]. HMC utilises the gradient of the target distribution to improve mixing in high dimensions. In BXPCA, the target distribution is E(Ω|Ψ) = −ln p(X, Ω|Ψ), which represents the potential energy function. The auxiliary variable u is Gaussian and is used to define the kinetic energy K = (1/2) u⊤u. Furthermore, we define the gradient vector ∆(X, Ω) ≜ ∂E(Ω)/∂Ω, which can be computed using equation (7).
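To make these ingredients concrete, here is a minimal, generic HMC transition for a toy target (a standard Gaussian); this is a sketch only, not the BXPCA energy. The Gaussian auxiliary variable u supplies the kinetic energy, leapfrog simulation proposes a move, and the change in total energy decides acceptance:

```python
import numpy as np

def hmc_step(x, log_p, grad_log_p, rng, eps=0.1, n_leap=20):
    """One HMC transition: draw momentum, leapfrog, Metropolis accept/reject."""
    u = rng.normal(size=x.shape)                     # kinetic energy 0.5 * u.u
    x_new, u_new = x.copy(), u.copy()
    u_new += 0.5 * eps * grad_log_p(x_new)           # half step for momentum
    for i in range(n_leap):
        x_new += eps * u_new                         # full step for position
        if i < n_leap - 1:
            u_new += eps * grad_log_p(x_new)         # full step for momentum
    u_new += 0.5 * eps * grad_log_p(x_new)           # final half step
    h_old = -log_p(x) + 0.5 * u @ u                  # Hamiltonian = potential + kinetic
    h_new = -log_p(x_new) + 0.5 * u_new @ u_new
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

# Toy target: standard normal in 2 dimensions.
log_p = lambda x: -0.5 * x @ x
grad_log_p = lambda x: -x

rng = np.random.default_rng(3)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, log_p, grad_log_p, rng)
    samples.append(x)
samples = np.array(samples)
print(samples.mean(axis=0), samples.var(axis=0))     # roughly 0 and 1
```

For BXPCA the same skeleton applies with log_p replaced by the (negated) energy of equation (7) and its gradient ∆(X, Ω).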
The sum of the kinetic and the potential energy defines the Hamiltonian. Samples of Ω and u are obtained by combining the Hamiltonian with the gradient information in the simulation of so-called "leapfrog" steps. These details and the general pseudocode for HMC can be found in MacKay [9]. One key feature of HMC is that the dynamics are simulated in an unconstrained space. Therefore, to apply HMC correctly to this model, we must ensure that all constrained variables are transformed to an unconstrained space, perform the dynamics in this unconstrained space, and then transform the variables back to the original constrained space. The only constrained variable in BXPCA is Σ, where each diagonal element σk² > 0. Each σk² can be transformed to a corresponding unconstrained variable ξk using the transformation σk² = e^ξk. This transformation requires that we apply the chain rule for differentiation and include the determinant of the Jacobian of the transformation, which is: |J| = |∂σk²/∂ξk| = |e^ξk| = σk². We also extended the HMC procedure to handle missing inputs in a principled manner, by analytically integrating them out. In practice, this implies working with missing data under the Missing at Random (MAR) assumption. Here, we divide the data into the sets of observed and missing data, X = {Xobs, Xmissing}, and use the set Xobs in the inference.
4 Related Work
Exponential Family PCA: Exponential family PCA (EPCA) [4] is a general class of PCA algorithms that allows the ideas of PCA to be applied to any data that can be modelled by a distribution in the exponential family. Like BXPCA, it is based on a factorisation of the data into a factor score matrix V and a factor loading matrix Θ. The algorithm is based on the optimisation of a loss function given by the Bregman divergence between the data and the learned reconstruction of the data.
The learning is based on an alternating minimisation procedure where the two matrices V and Θ are optimised in turn, and each optimisation is a convex function. The EPCA objective function can be seen as the likelihood function of a probabilistic model, and hence this optimisation corresponds to maximum a posteriori (MAP) learning. The use of MAP learning makes EPCA a deterministic latent variable model [6], since the latent variables are set to their MAP values. In both our model and EPCA, the product P = VΘ represents the natural parameters of the distribution over the data, and must be transformed using the link function to get to the parameter space of the associated data distribution. Our model is different from EPCA in that it is a fully probabilistic model in which all parameters can be integrated out by MCMC. Furthermore, EPCA does not include any form of regularisation and is prone to overfitting the data, which is avoided in the Bayesian framework. We will compare BXPCA to EPCA throughout this paper. Non-negative Matrix Factorisation: Non-negative Matrix Factorisation (NMF) [2] is a technique of factorising a matrix into the product of two positive lower rank matrices. In NMF, the matrix product P approximates the mean parameters of the data distribution, and is thus in the same space as the data. A mean parameter for example, is the rate λ if the data is modelled as a Poisson distribution, or is the probability of data being a 1 if the data is modelled as a Bernoulli. In NMF, V and Θ are restricted to be positive matrices, and inference corresponds to maximum likelihood learning with a Poisson likelihood. Similarly to EPCA, this learning method places NMF in the class of deterministic latent variable methods. 
Discrete Components Analysis: The Discrete Components Analysis (DCA) [3] is a family of probabilistic algorithms that deals with the application of PCA to discrete data and is a unification of the existing theory relating to dimensionality reduction with discrete distributions. In DCA the product P = VΘ is the mean parameter of the appropriate distribution over that data, as with NMF, and also constrains V and Θ to be non-negative. The various algorithms of the DCA family are simulated using either Gibbs sampling or variational approximations. Bayesian Partial Membership: The Bayesian Partial Membership (BPM) model is a clustering technique that allows data points to have fractional membership in multiple clusters. The model is derived from a finite mixture model which allows the usual indicator variables to take on any value in the range [0,1]. The resulting model has the same form as the model shown in figure 1, but instead of the model variable V being modelled as a Gaussian with unknown mean and covariance, it is instead modelled as a Dirichlet distribution. This difference is important, since it affects the interpretation of the results. In the BXPCA, we interpret the matrix V as a lower dimensional embedding of the data which can be used for dimensionality reduction. In contrast, the corresponding matrix for the BPM model, whose values are restricted to [0,1], is the partial membership of each data point and represents the extent to which each data point belongs to each of the K clusters. 5 Results and Discussion Synthetic Data: Synthetic data was generated by creating three 16-bit prototype vectors with each bit being generated with a probability of 0.5. Each of the three prototypes is replicated 200 times, resulting in a 600-point data set. We then flip bits in the replicates with a probability of 0.1, as in Tipping [10], thus adding noise about each of the prototypes. 
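The synthetic-data recipe above is simple to reproduce; a sketch (the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
prototypes = (rng.random((3, 16)) < 0.5).astype(int)  # three random 16-bit prototypes
clean = np.repeat(prototypes, 200, axis=0)            # 200 replicates each -> 600 points
flips = rng.random(clean.shape) < 0.1                 # flip each bit with probability 0.1
data = np.bitwise_xor(clean, flips.astype(int))

print(data.shape)                                     # (600, 16)
print((data != clean).mean())                         # close to the 0.1 flip rate
```

The flip noise makes the three clusters overlap slightly, which is what gives the model something non-trivial to recover.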
BXPCA inference was run using this data for 4000 iterations, using the first half as burn-in. Figure 2 demonstrates the learning process of BXPCA. In the initial phase of the sampling, the energy decreases slowly and the model is unable to learn any useful structure from the data. Around sample 750, the energy function decreases and some useful structure has been learnt. By sample 4000 the model has learnt the original data well, as can be seen by comparing sample 4000 and the original data. To evaluate the performance of BXPCA, we define training and test data from the available data.
Figure 2: Reconstruction of data from samples at various stages of the sampling. The top plot shows the change in the energy function. The lower plots show the reconstructions and the original data.
Figure 3: Boxplots comparing the NLP and RMSE of BXPCA and EPCA for various latent factors.
The test data was created by randomly selecting 10% of the data points. These test data points were set as missing values in the training data.
Inference is then run using BXPCA, which has been extended to consider missing data. This method of using missing data is a natural way of testing these algorithms, since both are generative models. We calculate the negative log probability (NLP) and the root mean squared error (RMSE) using the testing data. We evaluate the same metrics for EPCA, which is also trained considering missing data. This missing data testing methodology is also used in the experiments on real data that are described later. In figure 3a and 3b, the RMSE and NLP of the two algorithms are compared respectively, for various choices of the latent factor K. EPCA shows characteristic underfitting for K = 1 and demonstrates severe overfitting for large K. This overfitting is seen by the very large values of NLP for EPCA. If we examine the RMSE on the training data shown in figure 3c, we see the overfitting problem highlighted further, where the error on the training set is almost zero for EPCA, whereas BXPCA manages to avoid this problem. We expect that a random model would have a NLP = 10% × 600 × 16 = 960 bits, but the NLP values for EPCA are significantly larger than this. This is because as EPCA begins to overfit, it becomes highly confident in its predictions and the proportion of bits which it believes are 1, for example, but which are actually 0, increases. This is shown in figure 3d, where we show the frequency of incorrect predictions, where the error between the predicted and actual bits is greater than 0.95. BXPCA, based on a Bayesian approach thus avoids overfitting and gives improved predictions. Digits Data: BXPCA was applied to the CEDAR Buffalo digits dataset. The digit 2 was used, and consists of 700 greyscale images with 64 attributes. The digits were binarised by thresholding at a greyscale value of 128 from the 0 to 255 greyscale range. 
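The two held-out metrics can be sketched as follows, with x the true binary values at the held-out entries and p the model's predicted Bernoulli means there (the names are invented for illustration). NLP is measured in bits, matching the 960-bit chance baseline computed above:

```python
import numpy as np

def nlp_bits(x, p, tiny=1e-12):
    """Negative log probability (base 2) of held-out bits under Bernoulli means p."""
    p = np.clip(p, tiny, 1.0 - tiny)
    return float(-np.sum(x * np.log2(p) + (1 - x) * np.log2(1 - p)))

def rmse(x, p):
    """Root mean squared error between held-out bits and predicted means."""
    return float(np.sqrt(np.mean((x - p) ** 2)))

# A chance-level predictor (p = 0.5) pays exactly one bit per held-out entry.
x_test = np.array([0, 1, 1, 0, 1, 0, 0, 1], dtype=float)
p_chance = np.full_like(x_test, 0.5)
print(nlp_bits(x_test, p_chance))   # 8.0 bits for 8 entries
print(rmse(x_test, p_chance))       # 0.5
```

The clipping guards against infinite NLP when an overconfident model (as EPCA becomes at large K) predicts the wrong bit with probability near 1.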
Table 1 compares the performance of BXPCA and EPCA, using the same method of creating training and testing data sets as for the synthetic data. BXPCA has lower RMSE and NLP than EPCA and also does not exhibit overfitting at large K, which can be seen for EPCA in the large value of NLP at K = 5.
SPECT Data: The data set describes the diagnosis of cardiac Single Proton Emission Computed Tomography (SPECT) images [11]. The data consists of 267 SPECT image sets, which have been processed to yield 22 binary attributes. Table 2 compares the performance of BXPCA and EPCA. This dataset demonstrates that EPCA quickly overfits the data, as shown by the rapidly increasing values of NLP, and that the two algorithms perform equally well for low values of K.
Table 1: Comparison of BXPCA and EPCA on the digit 2 dataset.
            K        2        3        4        5
  BXPCA  NLP    2032.3   2022.9   2002.4   2032.0
         RMSE    0.389    0.385    0.380    0.383
  EPCA   NLP    2125.5   2482.1   2990.2   4708.8
         RMSE    0.392    0.393    0.399    0.402
Table 2: Comparison of BXPCA and EPCA on the SPECT dataset.
            K        1        2        3        4        5        6        7        8
  BXPCA  NLP    348.67   343.40   325.94   331.47   291.75   305.22   310.36   319.06
         RMSE    0.441    0.433    0.405    0.419    0.377    0.393    0.383    0.396
  EPCA   NLP    388.18   516.78   507.79   1096.6   1727.4   4030.0   4209.0   4330.0
         RMSE    0.439    0.427    0.413    0.439    0.487    0.517    0.528    0.560
6 Choice of Final Embedding
For the purposes of dimensionality reduction, PCA is used to search for a low dimensional embedding V of the data points. In EPCA, the alternating minimisation returns a single V that is the low dimensional representation. In BXPCA, though, we do not get a single V, but rather many samples which represent the variation in the embedding. Furthermore, we cannot simply take the average of these samples to obtain a single V, since we have not included any identifiability constraints in the model. This lack of identifiability subjects V to permutations of the columns and to rotations of the matrix, making an average of the samples meaningless.
There are several approaches to obtaining a single low dimensional representation from the set of samples. The simplest approach is to choose from the set of available samples, the best global configuration, {V∗, Θ∗} = arg maxΩ(s) p(X, Ω(s)|Ψ), and use this V∗. A second approach aims to give further information about the variability of the embedding. We begin by fixing the model parameters to {Θ∗, µ∗, Σ∗}. These can be set using the sample chosen in the first approach. We then sample V from the conditional distribution: V ∼p(V|X, Θ∗, µ∗, Σ∗) ∝p(X|V, Θ∗)p(V|µ∗, Σ∗) (8) where equation (8) is obtained using Bayes theorem and the joint probability distribution given in equation (6). We can now average these samples to obtain a single embedding since the problems of rotation and permutation have been removed by constraining the variables {Θ∗, µ∗, Σ∗}. We demonstrate this procedure using the synthetic data described in the previous section for K = 2. Figure 4 shows the embedding in the 2D space for 10 data points and 20 independent samples drawn according to equation (8). The graph shows that there is some mean value and also gives us an understanding of the variation that is possible, in this 2D embedding. The drawback of this last approach is that it does not give any indication of the effect of variation in Θ. To gain some understanding of this effect, we can further extend this approach by choosing Q random samples, Θ∗= {Θ∗(1), Θ∗(2), . . . , Θ∗(Q)}, at convergence of the HMC sampler. We then repeat the aforementioned procedure for these various Θ∗(q). This then gives an understanding of the variability of the final embedding, in terms of both Θ and V. 7 Conclusions and Future Work We have described a Bayesian approach to PCA which is generalised to the exponential family. We have employed a Hybrid Monte Carlo sampling scheme with an energy based on the log-joint probability of the model. 
In particular, we have demonstrated the ability of BXPCA to learn the structure of the data while avoiding overfitting problems, which are experienced by other maximum likelihood approaches to exponential family PCA. We have demonstrated this using both synthetic and real data.

Figure 4: Variation in final embedding for 10 data points and various samples of V.

In future, the model can be extended by considering an alternate distribution for the factor score matrix V. Instead of considering a Gaussian distribution, a Laplacian or other heavy-tailed distribution could be used, which would allow us to determine the lower dimensional embedding of the data, and also give the model a sparseness property. We could also specifically include restrictions on the form of the score and the loading matrices, V and Θ respectively, to ensure identifiability. This makes learning in the model more complex, since we must ensure that the restrictions are maintained. Also, it will prove interesting to consider alternate forms of inference, specifically the techniques of sequential Monte Carlo to allow for online inference.

Acknowledgements: We thank Peter Gehler for the EPCA implementation. SM thanks the NRF SA and the Commonwealth Commission for support. KH was supported by an EPSRC Postdoctoral Fellowship (grant no. EP/E042694/1).

References
[1] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics, Springer, August 2006.
[2] D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Advances in Neural Information Processing Systems, vol. 13, pp. 556–562, MIT Press, Cambridge, MA, 2001.
[3] W. Buntine and A. Jakulin, “Discrete components analysis,” in Subspace, Latent Structure and Feature Selection, vol. 3940/2006, pp. 1–33, Springer (LNCS), 2006.
[4] M. Collins, S. Dasgupta, and R.
Schapire, “A generalization of principal components to the exponential family,” in Advances in Neural Information Processing Systems, vol. 14, pp. 617–624, MIT Press, Cambridge, MA, 2002.
[5] Sajama and A. Orlitsky, “Semi-parametric exponential family PCA,” in Advances in Neural Information Processing Systems, vol. 17, pp. 1177–1184, MIT Press, Cambridge, MA, 2004.
[6] M. Welling, C. Chemudugunta, and N. Sutter, “Deterministic latent variable models and their pitfalls,” in SIAM Conference on Data Mining (SDM), pp. 196–207, 2008.
[7] R. M. Neal, “Probabilistic inference using Markov Chain Monte Carlo methods,” Tech. Rep. CRG-TR-93-1, University of Toronto, Department of Computer Science, 1993.
[8] C. Andrieu, N. De Freitas, A. Doucet, and M. I. Jordan, “An introduction to MCMC for machine learning,” Machine Learning, vol. 50, pp. 5–43, 2003.
[9] D. J. C. MacKay, Information Theory, Inference & Learning Algorithms. Cambridge University Press, June 2002.
[10] M. E. Tipping, “Probabilistic visualisation of high dimensional binary data,” in Advances in Neural Information Processing Systems, vol. 11, pp. 592–598, MIT Press, Cambridge, MA, 1999.
[11] “UCI machine learning repository.” http://archive.ics.uci.edu/ml/datasets/.
2008
Sequential effects: Superstition or rational behavior?

Angela J. Yu, Department of Cognitive Science, University of California, San Diego, ajyu@ucsd.edu
Jonathan D. Cohen, Department of Psychology, Princeton University, jdc@princeton.edu

Abstract

In a variety of behavioral tasks, subjects exhibit an automatic and apparently suboptimal sequential effect: they respond more rapidly and accurately to a stimulus if it reinforces a local pattern in stimulus history, such as a string of repetitions or alternations, compared to when it violates such a pattern. This is often the case even if the local trends arise by chance in the context of a randomized design, such that stimulus history has no real predictive power. In this work, we use a normative Bayesian framework to examine the hypothesis that such idiosyncrasies may reflect the inadvertent engagement of mechanisms critical for adapting to a changing environment. We show that prior belief in non-stationarity can induce experimentally observed sequential effects in an otherwise Bayes-optimal algorithm. The Bayesian algorithm is shown to be well approximated by linear-exponential filtering of past observations, a feature also apparent in the behavioral data. We derive an explicit relationship between the parameters and computations of the exact Bayesian algorithm and those of the approximate linear-exponential filter. Since the latter is equivalent to a leaky-integration process, a commonly used model of neuronal dynamics underlying perceptual decision-making and trial-to-trial dependencies, our model provides a principled account of why such dynamics are useful. We also show that parameter-tuning of the leaky-integration process is possible, using stochastic gradient descent based only on the noisy binary inputs.
This is a proof of concept that not only can neurons implement near-optimal prediction based on standard neuronal dynamics, but that they can also learn to tune the processing parameters without explicitly representing probabilities.

1 Introduction

One common error human subjects make in statistical inference is that they detect hidden patterns and causes in what are genuinely random data. Superstitious behavior, or the inappropriate linking of stimuli or actions with consequences, can often arise in such situations, something also observed in non-human subjects [1, 2]. One common example in psychology experiments is that, despite a randomized experimental design that deliberately de-correlates stimuli from trial to trial, subjects pick up transient patterns such as runs of repetitions and alternations, and their responses are facilitated when a stimulus continues to follow a local pattern, and impeded when such a pattern is violated [3]. It has been observed in numerous experiments [3–5] that subjects respond more accurately and rapidly if a trial is consistent with the recent pattern (e.g. AAAA followed by A, BABA followed by B) than if it is inconsistent (e.g. AAAA followed by B, BABA followed by A). This sequential effect is more prominent when the preceding run has lasted longer. Figure 1a shows reaction time (RT) data from one such experiment [5]. Error rates follow a similar pattern, reflecting a true expectancy-based effect, rather than a shift in the RT-accuracy trade-off. A natural interpretation of these results is that local patterns lead subjects to expect a stimulus, whether explicitly or implicitly. They readily respond when a subsequent stimulus extends the local pattern, and are “surprised” and respond less rapidly and accurately when a subsequent stimulus violates the pattern.
When such local patterns persist longer, the subjects have greater confidence in the pattern, and are therefore more surprised and more strongly affected when the pattern is violated.

Figure 1: Bayesian modeling of sequential effects. (a) Median reaction time (RT) from Cho et al (2002), affected by the recent history of stimuli, in a task requiring subjects to discriminate a small “o” from a large “O” using button-presses. Along the abscissa are all possible four-trial sub-sequences, in terms of repetitions (R) and alternations (A). Each sequence, read from top to bottom, proceeds from the earliest stimulus progressively toward the present stimulus. As the effects were symmetric across the two stimulus types, A and B, each bin contains data from a pair of conditions (e.g. RRAR can be AAABB or BBBAA). RT was fastest when a pattern was reinforced (RRR followed by R, or AAA followed by A); it was slowest when an “established” pattern was violated (RRR followed by A, or AAA followed by R). (b) Assuming RT decreases with predicted stimulus probability (i.e. RT increases with 1−P(xt|xt−1), where xt is the actual stimulus seen), the FBM would predict much weaker sequential effects in the second half (blue: 720 simulated trials) than in the first half (red: 840 trials). (c) The DBM predicts persistently strong sequential effects in both the first half (red: 840 trials) and the second half (blue: 720 trials). Inset shows the prior over γ used; the same prior was also used for the FBM in (b). α = .77. (d) Sequential effects in behavioral data were equally strong in the first half (red: 7 blocks of 120 trials each) and the second half (blue: 6 blocks of 120 trials each). The green dashed line shows a linear transformation from the DBM prediction in probability space of (c) into RT space. The fit is very good given the error bars (SEM) in the data.

While such a strategy seems plausible, it is also sub-optimal. The experimental design consists of randomized stimuli, so all runs of repetitions or alternations are spurious, and any behavioral tendencies driven by such patterns are useless. However, compared to artificial experimental settings, truly random sequential events may be rare in the natural environment, where the laws of physics and biology dictate that both external entities and the observer’s viewpoint undergo mostly continuous transformations, leading to statistical regularities that persist over time on characteristic timescales. The brain may be primed to extract such statistical regularities, leading to what appears to be superstitious behavior in an artificially randomized experimental setting. In section 2, we use Bayesian probability theory to build formally rigorous models for predicting stimuli based on previous observations, and compare differentially complex models to subjects’ actual behavior. Our analyses imply that subjects assume the statistical contingencies in the task to persist over several trials but to be non-stationary on a longer time-scale, as opposed to being unknown but fixed throughout the experiment. We are also interested in understanding how the computations necessary for prediction and learning can be implemented by the neural hardware.
In section 3, we show that the Bayes-optimal learning and prediction algorithm is well approximated by a linear filter that weighs past observations exponentially, a computationally simpler algorithm that also seems to fit human behavior. Such an exponential linear filter can be implemented by standard models of neuronal dynamics. We derive an explicit relationship between the assumed rate of change in the world and the time constant of the optimal exponential linear filter. Finally, in section 4, we will show that meta-learning about the rate of change in the world can be implemented by stochastic gradient descent, and compare this algorithm with exact Bayesian learning.

2 Bayesian prediction in fixed and changing worlds

One simple internal model that subjects may have about the nature of the stimulus sequence in a 2-alternative forced choice (2AFC) task is that the statistical contingencies in the task remain fixed throughout the experiment. Specifically, they may believe that the experiment is designed such that there is a fixed probability γ, throughout the experiment, of encountering a repetition (xt = 1) on any given trial t (thus probability 1−γ of seeing an alternation, xt = 0). What they would then learn about the task over the time course of the experiment is the appropriate value of γ. We call this the Fixed Belief Model (FBM).

Figure 2: Bayesian inference assuming fixed and changing Bernoulli parameters. (a) Graphical model for the FBM. γ ∈ [0, 1], xt ∈ {0, 1}. The numbers in circles show example values for the variables. (b) Graphical model for the DBM. p(γt|γt−1) = αδ(γt − γt−1) + (1−α)p0(γt), where we assume the prior p0 to be a Beta distribution. The numbers in circles show example values for the variables. (c) Grayscale shows the evolution of posterior probability mass over γ for the FBM (darker color indicates concentration of mass), given a sequence of truly random (P(xt) = .5) binary data (blue dots). The mean of the distribution, in cyan, is also the predicted stimulus probability: P(xt = 1|xt−1) = ⟨γ|xt−1⟩. (d) Evolution of posterior probability mass for the DBM (grayscale) and predictive probability P(xt = 1|xt−1) (cyan); they perpetually fluctuate with transient runs of repetitions or alternations.

Bayes’ Rule tells us how to compute the posterior:

p(γ|xt) ∝ P(xt|γ) p(γ) = γ^(rt+a−1) (1 − γ)^(t−rt+b−1)

where rt denotes the number of repetitions observed so far (up to t), xt is the set of binary observations (x1, . . . , xt), and the prior distribution p(γ) is assumed to be a beta distribution: p(γ) = p0(γ) = Beta(a, b). The predicted probability of seeing a repetition on the next trial is the mean of this posterior distribution: P(xt+1 = 1|xt) = ∫ γ p(γ|xt) dγ = ⟨γ|xt⟩.

A more complex internal model that subjects may entertain is that the relative frequency of repetition (versus alternation) can undergo discrete changes at unsignaled times during the experimental session, such that repetitions are more prominent at times, and alternations more prominent at other times. We call this the Dynamic Belief Model (DBM), in which γt has a Markovian dependence on γt−1: with probability α, γt = γt−1, and with probability 1 − α, γt is redrawn from a fixed distribution p0(γt) (the same Beta distribution as the prior). The observation xt is still assumed to be drawn from a Bernoulli process with rate parameter γt. Stimulus predictive probability is now the mean of the iterative prior, P(xt = 1|xt−1) = ⟨γt|xt−1⟩, where

p(γt = γ|xt−1) = α p(γt−1 = γ|xt−1) + (1 − α) p0(γt = γ)
p(γt|xt) ∝ P(xt|γt) p(γt|xt−1)

Figures 2a;b illustrate the two graphical models. Figures 2c;d demonstrate how the two models respond differently to the exact same sequence of truly random binary observations (γ = .5).
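The FBM and DBM updates above can be sketched numerically. This is a minimal illustration of ours (the function names and the grid discretization of γ are not from the paper): the FBM predictive probability is the Beta posterior mean, and the DBM is implemented as a grid filter over γ following the iterative-prior equations in the text.

```python
import numpy as np

def fbm_predict(x, a=1.0, b=1.0):
    """FBM: Beta(a, b) prior on a fixed gamma; returns the predictive
    mean P(x_t = 1 | x_1..x_{t-1}) before each observation."""
    preds = []
    r, n = 0, 0                                # repetitions seen, trials seen
    for xt in x:
        preds.append((r + a) / (n + a + b))    # posterior mean of gamma
        r += xt
        n += 1
    return np.array(preds)

def dbm_predict(x, alpha, n_grid=200):
    """DBM: grid filter. gamma_t sticks with prob. alpha, else is redrawn
    from p0 (uniform here, i.e. Beta(1,1))."""
    g = (np.arange(n_grid) + 0.5) / n_grid     # gamma grid
    p0 = np.full(n_grid, 1.0 / n_grid)
    post = p0.copy()
    preds = []
    for xt in x:
        prior_t = alpha * post + (1 - alpha) * p0   # iterative prior on gamma_t
        preds.append(np.sum(g * prior_t))           # P(x_t = 1 | past) = <gamma_t>
        post = prior_t * (g if xt == 1 else 1 - g)  # Bayes step
        post /= post.sum()
    return np.array(preds)
```

Run on a long run of repetitions, the FBM estimate steadily sharpens, while the DBM prediction saturates below 1 because a fraction 1−α of the prior mass is perpetually redrawn from p0.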
While inference in the FBM leads to a less variable and more accurate estimate of the underlying bias as the number of samples increases, inference in the DBM is perpetually driven by local transients. Relating back to the experimental data, we plot the probability of not observing the current stimulus for each type of 5-stimulus sequence in Figure 1 for (b) FBM and (c) DBM, since RT is known to lengthen with reduced stimulus expectancy. Comparing the first half of a simulated experimental session (red) with the second half (blue), matched to the number of trials for each subject, we see that sequential effects significantly diminish in the FBM, but persist in the DBM. A re-analysis of the experimental data (Figure 1d) shows that sequential effects also persist in human behavior, confirming that Bayesian prediction based on a (Markovian) changeable world can account for behavioral data, while that based on a fixed world cannot. In Figure 1d, the green dashed line shows that a linear transformation of the DBM sequential effect (from Figure 1c) is quite a good fit of the behavioral data. It is also worth noting that in the behavioral data there is a slight overall preference (shorter RT) for repetition trials. This is easily captured by the DBM by assuming p0(γt) to be skewed toward repetitions (see Figure 1c inset). The same skewed prior cannot produce a bias in the FBM, however, because the prior only figures into Bayesian inference once at the outset, and is very quickly overwhelmed by the accumulating observations.

Figure 3: Exponential discounting is a good descriptive and normative model.
(a) For each of the six subjects, we regressed RR on repetition trials against past observations, RR ≈ C + b1xt−1 + b2xt−2 + . . ., where xτ is assigned 0 if it was a repetition, and 1 if an alternation, the idea being that recent repetition trials should increase expectation of repetition and decrease RR, and recent alternation should decrease expectation of repetition and increase RR on a repetition trial. Separately, we also regressed RR’s on alternation trials against past observations (assigning 0 to alternation trials, and 1 to repetitions). The two sets of coefficients did not differ significantly and were averaged together (red: average across subjects, error bars: SEM). The blue line shows the best exponential fit to these coefficients. (b) We regressed Pt, obtained from exact Bayesian DBM inference, against past observations, and obtained a set of average coefficients (red); blue is the best exponential fit. (c) For different values of α, we repeat the process in (b) and obtain the best exponential decay parameter β (blue). Optimal β closely tracks the 2/3 rule for a large range of values of α. β is .57 in (a), so α = .77 was used to generate (b). (d) Both the optimal exponential fit (red) and the 2/3 rule (blue) approximate the true Bayesian Pt well (green dashed line shows perfect match). α = .77. For smaller values of α, the fit is even better; for larger α, the exponential approximation deteriorates (not shown). (e) For repetition trials, the greater the predicted probability of seeing a repetition (xt = 1), the faster the RT, whether trials are categorized by Bayesian predictive probabilities (red: α = .77, p0 = Beta(1.6, 1.3)) or by linear exponential filtering (blue). For alternation trials, RT’s increase with increasing predicted probability of seeing a repetition. Inset: for the biases b ∈ [.2, .8], the log prior ratio (shift in the initial starting point, and therefore change in the distance to decision boundary) is approximately linear.
3 Exponential filtering both normative and descriptive

While Bayes’ Rule tells us in theory what the computations ought to be, the neural hardware may only implement a simpler approximation. One potential approximation is suggested by related work showing that monkeys’ choices, when tracking reward contingencies that change at unsignaled times, depend linearly on previous observations that are discounted approximately exponentially into the past [6]. This task explicitly examines subjects’ ability to track unsignaled statistical regularities, much like the kind we hypothesize to be engaged inadvertently in sequential effects. First, we regressed the subjects’ reward rate (RR) against past observations and saw that the linear coefficients decay approximately exponentially into the past (Figure 3a). We define reward rate as mean accuracy/mean RT, averaged across subjects; we thus take into account both effects in RT and accuracy as a function of past experiences. We next examined whether there is also an element of exponential discounting embedded in the DBM inference algorithm. Linear regression of the predictive probability Pt ≜ P(xt = 1|xt−1), which should correlate positively with RR (since it correlates positively with accuracy and negatively with RT), against previous observations xt−1, xt−2, . . . yields coefficients that also decay exponentially into the past (Figure 3b): Pt ≈ C + η Σ_{τ=1}^{t−1} β^τ xt−τ. Linear exponential filtering thus appears to be both a good descriptive model of behavior, and a good normative model approximating Bayesian inference. An obvious question is how this linear exponential filter relates to exact Bayesian inference, in particular how the rate of decay relates to the assumed rate of change in the world (parameterized by α). We first note that the linear exponential filter has an equivalent iterative form:

Pt ≜ P(xt = 1|xt−1) = C + η Σ_{τ=1}^{t−1} β^τ xt−τ = C(1 − β) + ηβ xt−1 + β Pt−1 .
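The equivalence between the sum form and the iterative form can be checked directly. In this sketch (function names are ours), the recursion is initialized at P_1 = C, our choice for the empty history:

```python
import numpy as np

def exp_filter_direct(x, C, eta, beta):
    """P_t = C + eta * sum_{tau=1}^{t-1} beta^tau * x_{t-tau}:
    weigh the past with exponentially decaying coefficients."""
    P = np.empty(len(x))
    for t in range(len(x)):
        past = x[:t][::-1]                          # x_{t-1}, x_{t-2}, ...
        P[t] = C + eta * np.sum(beta ** np.arange(1, t + 1) * past)
    return P

def exp_filter_iterative(x, C, eta, beta):
    """Equivalent one-step recursion:
    P_t = C(1 - beta) + eta*beta*x_{t-1} + beta*P_{t-1}, with P_1 = C."""
    P = np.empty(len(x))
    P[0] = C                                        # no history yet
    for t in range(1, len(x)):
        P[t] = C * (1 - beta) + eta * beta * x[t - 1] + beta * P[t - 1]
    return P
```

The iterative form is the one that maps onto leaky neuronal integration later in the paper: the past never needs to be stored, only the running value P.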
We then note that the nonlinear Bayesian update rule can also be written as:

Pt+1 = (1/2)(1 − α) + xt α (Kt − Pt²)/(Pt − Pt²) + α Pt (1 − Kt/Pt)/(1 − Pt) ≈ (1/2)(1 − α) + (1/3)α xt + (2/3)α Pt   (1)

where Kt ≜ ⟨γt²|xt−1⟩, and we approximate Pt by its mean value ⟨Pt⟩ = 1/2, and Kt by its mean value ⟨Kt⟩ = 1/3. These expected values are obtained by expanding Pt and Kt in their iterative forms and assuming ⟨Pt⟩ = ⟨Pt−1⟩ and ⟨Kt⟩ = ⟨Kt−1⟩, and also assuming that p0 is the uniform distribution. We verified numerically (data not shown) that this mean approximation is quite good for a large range of α (though it gets progressively worse when α ≈ 1, probably because the equilibrium assumptions deviate farther from reality as changes become increasingly rare). Notably, our calculations imply β ≈ (2/3)α, which makes intuitive sense, since slower changes should result in a longer integration time window, whereas faster changes should result in shorter memory. Figure 3c shows that the best numerically obtained β (by fitting an exponential to the linear regression coefficients) for different values of α (blue) is well approximated by the 2/3 rule (black dashed line). For the behavioral data in Figure 3a, β was found to be .57, which implies α = .77; the simulated data in Figure 3b are in fact obtained by assuming α = .77, hence the remarkably good fit between data and model. Figure 3d shows that reconstructed Pt based on the numerically optimal linear exponential filter (red) and the 2/3 rule (blue) both track the true Bayesian Pt very well. In the previous section, we saw that exact Bayesian inference for the DBM is a good model of behavioral data. In this section, we saw that linear exponential filtering also seems to capture the data well. To compare which of the two better explains the data, we need a more detailed account of how stimulus history-dependent probabilities translate into reaction times.
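The quality of the mean-field approximation in equation (1) can be checked by simulation. The grid implementation of exact DBM inference and the parameter choices below are ours; p0 is taken to be uniform, matching the assumptions behind ⟨Pt⟩ = 1/2 and ⟨Kt⟩ = 1/3.

```python
import numpy as np

def dbm_exact(x, alpha, n_grid=400):
    """Exact DBM predictive probabilities via a grid filter over gamma,
    with a uniform p0 (Beta(1, 1))."""
    g = (np.arange(n_grid) + 0.5) / n_grid
    p0 = np.full(n_grid, 1.0 / n_grid)
    post = p0.copy()
    preds = np.empty(len(x))
    for t, xt in enumerate(x):
        prior_t = alpha * post + (1 - alpha) * p0   # iterative prior
        preds[t] = np.sum(g * prior_t)              # P(x_t = 1 | past)
        post = prior_t * (g if xt else 1 - g)
        post /= post.sum()
    return preds

def dbm_linear(x, alpha):
    """Linearized update of equation (1):
    P_{t+1} ~= (1/2)(1 - alpha) + (1/3) alpha x_t + (2/3) alpha P_t."""
    P = np.empty(len(x))
    P[0] = 0.5
    for t in range(1, len(x)):
        P[t] = 0.5 * (1 - alpha) + alpha * x[t - 1] / 3 + 2 * alpha * P[t - 1] / 3
    return P

rng = np.random.default_rng(2)
x = (rng.random(2000) < 0.5).astype(int)            # truly random bits
exact, approx = dbm_exact(x, 0.77), dbm_linear(x, 0.77)
corr = np.corrcoef(exact, approx)[0, 1]
```

At α = .77 (the value implied by the behavioral data), the linearized trajectory tracks the exact predictive probability closely, in line with Figure 3d.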
A growing body of psychological [7] and physiological [8] data support the notion that some form of evidence integration up to a fixed threshold underlies binary perceptual decision making, which both optimizes an accuracy-RT trade-off [9] and seems to be implemented in some form by cortical neurons [8]. The idealized, continuous-time version of this, the drift-diffusion model (DDM), has a well-characterized mean stopping time [10], Td = (z/A) tanh(Az/c²), where A and c are the mean and standard deviation of the unit-time fluctuation, and z is the distance between the starting point and the decision boundary. The vertical axis for the DDM is in units of log posterior ratio, log[P(s0|xt)/P(s1|xt)]. An unbiased (uniform) prior over s implies a stochastic trajectory that begins at 0 and drifts until it hits one of the two boundaries ±z. When the prior is biased at b ≠ .5, it has an additive effect in the log posterior ratio space and moves the starting point to log(b/(1−b)). For the relevant range of b (.2 to .8), the shift in starting point is approximately linear in b (Figure 3e inset), so that the new distance to the boundary is approximately z + kb. Thus, the new mean decision time is ((z + kb)/A) tanh((Az + Akb)/c²). Typically in DDM models of decision-making, the signal-to-noise ratio is small, i.e. A ≪ c, such that tanh is highly linear in the relevant range. We therefore have Td(b) ≈ z²/c² + (2zk/c²)b, implying that the change in mean decision time is linear in the bias b, in units of probability. This linear relationship between RT and b was already borne out by the good fit between sequential effects in behavioral data and for the DBM in Figure 1d. To examine this more closely, we run the exact Bayesian DBM algorithm and the linear exponential filter on the actual sequences of stimuli observed by the subjects, and plot median RT against predicted stimulus probabilities.
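The near-linearity of the mean decision time in b can be checked numerically. The parameter values below (z, k, A, c) are illustrative choices of ours, with A ≪ c and kb ≪ z so that both the tanh nonlinearity and the quadratic term in b are negligible.

```python
import numpy as np

def mean_decision_time(b, z=1.0, k=0.1, A=0.05, c=1.0):
    """T_d(b) = ((z + k*b)/A) * tanh(A*(z + k*b)/c**2): mean DDM stopping
    time when the prior bias b shifts the starting point by ~ k*b."""
    d = z + k * b
    return (d / A) * np.tanh(A * d / c ** 2)

b = np.linspace(0.2, 0.8, 25)
T = mean_decision_time(b)
# Compare against the best linear fit: residuals should be tiny relative
# to the overall change in T_d across the range of b.
resid = T - np.polyval(np.polyfit(b, T, 1), b)
nonlinearity = np.max(np.abs(resid)) / (T.max() - T.min())
```

With these parameters the residual nonlinearity is a fraction of a percent of the total change in Td, consistent with the Td(b) ≈ z²/c² + (2zk/c²)b approximation.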
In Figure 3e, we see that for both the exact Bayesian (red) and exponential (blue) algorithms, RT’s decrease on repetition stimuli as the predicted probability of repetition increases; conversely, RT’s increase on alternation trials as the predicted probability of repetition increases (and therefore the predicted probability of alternation decreases). For both Bayesian inference and linear exponential filtering, the relationship between RT and stimulus probability is approximately linear. The linear fit in fact appears better for the exponential algorithm than for exact Bayesian inference, which, conditioned on the DDM being an appropriate model for binary decision making, implies that the former may be a better model of sequential adaptation than exact Bayesian inference. Further experimentation is underway to examine this prediction more carefully. Another implication of the SPRT or DDM formulation of perceptual decision-making is that incorrect prior bias, such as due to sequential effects in a randomized stimulus sequence, induces a net cost in accuracy (even though the RT effects wash out due to the linear dependence on prior bias). The error rate with a bias x0 in starting point is 1/(1 + e^(2za)) − (1 − e^(−2ax0))/(e^(2az) − e^(−2az)) [10], implying that the error rate rises monotonically with bias in either direction. This is a quantitative characterization of our claim that extraneous prior bias, such as due to sequential effects, induces suboptimality in decision-making.

Figure 4: Meta-learning about the rate of change. (a) Graphical model for exact Bayesian learning. Numbers are example values for the variables.
(b) Mean of the posterior p(α|xt) as a function of timesteps, averaged over 30 sessions of simulated data, each set generated from different true values of α (see legend; color-coded dashed lines indicate true α). Inset shows the prior over α, p(α) = Beta(17, 3). The time-course of learning is not especially sensitive to the exact form of the prior (not shown). (c) Stochastic gradient descent with a learning rate of .01 produces estimates of α (thick lines, width denotes SEM) that converge to the true values of α (dashed lines). The initial estimate of α, before seeing any data, is .9. Learning is based on 50 sessions of 5000 trials for each value of α. (d) Marginal posterior distributions over α (top panel) and γt (bottom panel) on a sample run, where probability mass is color-coded: brighter color is more mass.

4 Neural implementation and learning

So far, we have seen that exponential discounting of the past not only approximates exact Bayesian inference, but fits human behavioral data. We now note that it has the additional appealing property of being equivalent to standard models of neuronal dynamics. This is because the iterative form of the linear exponential filter in Equation 1 has a similar form to a large class of leaky integration neuronal models, which have been used extensively to model perceptual decision-making on a relatively fast time-scale [8,11–15], as well as trial-to-trial interactions on a slower time-scale [16–20]. It is also related to the concept of the eligibility trace in reinforcement learning [21], which is important for the temporal credit assignment problem of relating outcomes to the states or actions that were responsible for them. Here, we provided the computational rationale for this exponential discounting of the past: it approximates Bayesian inference under DBM-like assumptions.
Viewed as a leaky-integrating neuronal process, the parameters of Equation 1 have the following semantics: (1/2)(1−α) can be thought of as a constant bias, (1/3)α xt−1 as the feed-forward input, and (2/3)α Pt−1 as the leaky recurrent term. Equation 1 suggests that neurons utilizing a standard form of integration dynamics can implement near-optimal Bayesian prediction under the non-stationary assumption, as long as the relative contributions of the different terms are set appropriately. A natural question to ask next is how neurons can learn to set the weights appropriately. We first note that xt is a sample from the distribution P(xt|xt−1). Since P(xt|xt−1) has the approximate linear form in Equation 1, with dependence on a single parameter α, learning about near-optimal predictions can potentially be achieved by estimating the value of α via the stochastic samples x1, x2, . . .. We implement a stochastic gradient descent algorithm, in which α̂ is adjusted incrementally on each trial in the direction of the gradient, which should bring α̂ closer to the true α:

α̂t = α̂t−1 + ε (xt − P̂t) dPt/dα

where α̂t is the estimate of α after observing xt, and P̂t is the estimate of Pt using the estimate α̂t−1 (before seeing xt). Figure 4c shows that learning via the binary samples is indeed possible: for different true values of α (dashed lines) that generated different data sets, stochastic gradient descent produced estimates of α̂ that converge to the true values, or close to them (thick lines; widths denote SEM estimated from 50 sessions of learning). A key challenge for future work is to clarify whether and how the gradient, dPt/dα, can be computed by neural machinery (perhaps approximately). For comparison, we also implement the exact Bayesian learning algorithm, which augments the DBM architecture by representing α as a hidden variable instead of a fixed known parameter:

p(α, γt|xt) ∝ p(α|xt−1) P(xt|γt) p(γt|α, xt−1) .

Figure 4a illustrates this augmented model graphically.
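The stochastic gradient scheme can be sketched end-to-end. In this sketch of ours, dPt/dα is obtained by differentiating the linearized update of equation (1) rather than the exact Bayesian recursion (a simplification), and the learning rate, initialization, and run lengths are illustrative.

```python
import numpy as np

def simulate_dbm(alpha, T, rng):
    """Generate bits from the DBM generative model: gamma sticks with
    probability alpha, else is redrawn uniformly on [0, 1]."""
    x = np.empty(T, dtype=int)
    gamma = rng.random()
    for t in range(T):
        if rng.random() > alpha:
            gamma = rng.random()
        x[t] = rng.random() < gamma
    return x

def sgd_alpha(x, a_hat=0.9, lr=0.01):
    """a_hat <- a_hat + lr * (x_t - P_t) * dP_t/da, with P_t and its
    gradient g_t propagated through the linearized update of Eq. (1):
    P_{t+1} = (1/2)(1-a) + (1/3) a x_t + (2/3) a P_t."""
    P, g = 0.5, 0.0                      # prediction and dP/da
    for xt in x:
        a_hat = float(np.clip(a_hat + lr * (xt - P) * g, 0.0, 1.0))
        g = -0.5 + xt / 3 + 2 * P / 3 + 2 * a_hat * g / 3
        P = 0.5 * (1 - a_hat) + a_hat * xt / 3 + 2 * a_hat * P / 3
    return a_hat

rng = np.random.default_rng(3)
estimates = [sgd_alpha(simulate_dbm(0.5, 8000, rng)) for _ in range(10)]
```

Starting from the same initialization as Figure 4c (α̂ = .9), the estimates drift toward the true α that generated the data, though individual runs remain noisy.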
Figure 4b shows the evolution of the mean of the posterior distribution over α, or ⟨α|xt⟩. Based on sets of 30 sessions of 5000 trials, generated from each of four different true values of α, the mean value of α under the posterior distribution tends toward the true α over time. The prior we assume for α is a beta distribution (Beta(17, 3), shown in the inset of Figure 4b). Compared to exact Bayesian learning, stochastic gradient descent has a similar learning rate. But larger values of α (e.g. α=.6) tend to be under-estimated, possibly due to the fact that the analytical approximation for β is under-estimated for larger α. For data that were generated from a fixed Bernoulli process with rate .5, an equivalently appropriate model is the DBM with α=0; stochastic gradient descent produced estimates of α (thick red line) that converge to 0 on the order of 50000 trials (details not shown). Figure 4d shows that the posterior inference about α and γt undergoes distinct phases when the true α = 0 and there is no correlation between one timestep and the next. There is an initial phase where marginal posterior mass for α tends toward high values of α, while marginal posterior mass for γt fluctuates around .5. Note that this combination is an alternative, equally valid generative model for a completely randomized sequence of inputs. However, this joint state is somehow unstable, and α tends toward 0 while γt becomes broad and fluctuates wildly. This is because as the inferred α gets smaller, there is almost no information about γt from past observations, thus the marginal posterior over γt tends to be broad (high uncertainty) and fluctuates along with each data point. α can only decrease slowly because so little information about the hidden variables is obtained from each data point.
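Because α is constant within a session, its exact posterior can be computed by running one DBM filter per candidate α and accumulating the predictive likelihood of the observed bits, p(α|x1..xT) ∝ p(α) Π_t P(xt|x1..xt−1, α). The grid discretizations and the uniform priors in this sketch are our simplifications of the augmented model (the paper uses a Beta(17, 3) prior on α).

```python
import numpy as np

def simulate_dbm_bits(alpha, T, rng):
    """DBM generative model: gamma sticks w.p. alpha, else redrawn uniformly."""
    x = np.empty(T, dtype=int)
    gamma = rng.random()
    for t in range(T):
        if rng.random() > alpha:
            gamma = rng.random()
        x[t] = rng.random() < gamma
    return x

def alpha_posterior(x, alphas, n_grid=100):
    """p(alpha|x) ∝ p(alpha) * prod_t P(x_t|x_1..x_{t-1}, alpha): one DBM
    grid filter per candidate alpha, accumulating log predictive likelihood
    (uniform priors on both alpha and gamma)."""
    g = (np.arange(n_grid) + 0.5) / n_grid
    p0 = np.full(n_grid, 1.0 / n_grid)
    loglik = np.zeros(len(alphas))
    for i, a in enumerate(alphas):
        post = p0.copy()
        for xt in x:
            prior_t = a * post + (1 - a) * p0
            p1 = np.sum(g * prior_t)                 # P(x_t = 1 | past, a)
            loglik[i] += np.log(p1 if xt else 1.0 - p1)
            post = prior_t * (g if xt else 1 - g)
            post /= post.sum()
    w = np.exp(loglik - loglik.max())                # normalize stably
    return w / w.sum()

rng = np.random.default_rng(4)
alphas = np.linspace(0.05, 0.95, 10)
post_alpha = alpha_posterior(simulate_dbm_bits(0.95, 3000, rng), alphas)
```

With strongly sticky data (true α near 1), the accumulated predictive likelihood concentrates the posterior mass on large α, mirroring the behavior in Figure 4b.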
For instance, it is very difficult to infer from what is believed to be an essentially random sequence whether the underlying Bernoulli rate really tends to change once every 1.15 trials or 1.16 trials. This may explain why subjects show no diminished sequential effects over the course of a few hundred trials (Figure 1d). While the stochastic gradient results demonstrate that, in principle, the correct values of α can be learned via the sequence of binary observations x1, x2, . . . , further work is required to demonstrate whether and how neurons could implement the stochastic gradient algorithm or an alternative learning algorithm.

5 Discussion

Humans and other animals constantly have to adapt their behavioral strategies in response to changing environments: growth or shrinkage in food supplies, development of new threats and opportunities, gross changes in weather patterns, etc. Accurate tracking of such changes allows the animals to adapt their behavior in a timely fashion. Subjects have been observed to readily alter their behavioral strategy in response to recent trends of stimulus statistics, even when such trends are spurious. While such behavior is sub-optimal for certain behavioral experiments, which interleave stimuli randomly or pseudo-randomly, it is appropriate for environments in which changes do take place on a slow timescale. It has been observed, in tasks where statistical contingencies undergo occasional and unsignaled changes, that monkeys weigh past observations linearly but with decaying coefficients (into the past) in choosing between options [6]. We showed that human subjects behave very similarly in 2AFC tasks with randomized design, and that such discounting gives rise to the frequently observed sequential effects found in such tasks [5].
We showed that such exponential discounting approximates optimal Bayesian inference under assumptions of statistical non-stationarity, and derived an analytical, approximate relationship between the parameters of the optimal linear exponential filter and the statistical assumptions about the environment. We also showed how such computations can be implemented by leaky integrating neuronal dynamics, and how the optimal tuning of the leaky integration process can be achieved without explicit representation of probabilities. Our work provides a normative account of why exponential discounting is observed in both stationary and non-stationary environments, and how it may be implemented neurally. The relevant neural mechanisms seem to be engaged both in tasks where the environmental contingencies are truly changing at unsignaled times, and in tasks in which the underlying statistics are stationary but chance patterns masquerade as changing statistics (as seen in sequential effects). This work bridges and generalizes previous descriptive accounts of behavioral choice under non-stationary task conditions [6], as well as mechanistic models of how neuronal dynamics give rise to trial-to-trial interactions such as priming or sequential effects [5, 13, 18–20]. Based on the relationship we derived between the rate of behavioral discounting and the subjects' implicit assumptions about the rate of environmental changes, we were able to "reverse-engineer" the subjects' internal assumptions. Subjects appear to assume α=.77, i.e. a change about once every four trials. This may have implications for understanding why working memory has the observed capacity of 4-7 items. In a recent human fMRI study [22], subjects appeared to have different learning rates in two phases of slower and faster changes, but notably the first phase contained no changes, while the second phase contained frequent ones. 
This is a potential confound, as it has been observed that adaptive responses change significantly upon the first switch but then settle into a more stable regime [23]. It is also worth noting that different levels of sequential effects/adaptive response appear to take place at different time-scales [4,23], and different neural areas seem to be engaged in processing different types of temporal patterns [24]. In the context of our model, this may imply that sequential adaptation happens at different levels of processing (e.g. sensory, cognitive, motor), and that their different time-scales reflect different characteristic rates of change at these levels. A related issue is that the brain need not have an explicit representation of the rate of environmental change, which is instead implicitly encoded in the "leakiness" of neuronal integration over time. This is consistent with the observation of sequential effects even when subjects are explicitly told that the stimuli are random [4]. An alternative explanation is that subjects do not have complete faith in the experimenter's instructions [25]. Further work is needed to clarify these issues. We used both a computationally optimal Bayesian learning algorithm and a simpler stochastic gradient descent algorithm to learn the rate of change (1−α). Both algorithms were especially slow at learning in the case α=0, which corresponds to truly randomized inputs. This implies that completely random statistics are difficult to internalize when the observer is searching over a much larger hypothesis space that contains many possible models of statistical regularity, which can change over time. This is consistent with previous work [26] showing that discerning "randomness" from binary observations may require surprisingly many samples when statistical regularities are presumed to change over time. 
Although this earlier work used a different model for what kind of statistical regularities are allowed and how they change over time (temporally causal and Markovian in ours, an acausal correlation function in theirs), as well as a different inference task (on-line in our setting, off-line in theirs), the underlying principles and conclusions are similar: it is very difficult to discriminate a truly randomized sequence, which by chance would contain runs of repetitions and alternations, from one that has changing biases for repetitions and alternations over time.
References
[1] Skinner, B F (1948). J. Exp. Psychol. 38: 168-72.
[2] Ecott, C L & Critchfield, T S (2004). J. App. Beh. Analysis 37: 249-65.
[3] Laming, D R J (1968). Information Theory of Choice-Reaction Times, Academic Press, London.
[4] Soetens, E, Boer, L C, & Hueting, J E (1985). JEP: HPP 11: 598-616.
[5] Cho, R, et al (2002). Cognitive, Affective, & Behavioral Neurosci. 2: 283-99.
[6] Sugrue, L P, Corrado, G S, & Newsome, W T (2004). Science 304: 1782-7.
[7] Smith, P L & Ratcliff, R. Trends Neurosci. 27: 161-8.
[8] Gold, J I & Shadlen, M N (2002). Neuron 36: 299-308.
[9] Wald, A & Wolfowitz, J (1948). Ann. Math. Statist. 19: 326-39.
[10] Bogacz, R, et al (2006). Psychological Review 113: 700-65.
[11] Cook, E P & Maunsell, J H R (2002). Nat. Neurosci. 5: 985-94.
[12] Grice, G R (1972). Perception & Psychophysics 12: 103-7.
[13] McClelland, J L. Attention & Performance XIV: 655-88. MIT Press.
[14] Smith, P L (1995). Psychol. Rev. 10: 567-93.
[15] Yu, A J (2007). Adv. in Neur. Info. Proc. Systems 19: 1545-52.
[16] Dayan, P & Yu, A J (2003). IETE J. Research 49: 171-81.
[17] Kim, C & Myung, I J (1995). 17th Ann. Meeting of Cog. Sci. Soc.: 472-7.
[18] Mozer, M C, Colagrosso, M D, & Huber, D E (2002). Adv. in Neur. Info. Proc. Systems 14: 51-57.
[19] Mozer, M C, Kinoshita, S, & Shettel, M (2007). Integrated Models of Cog. Sys.: 180-93.
[20] Simen, P, Cohen, J D, & Holmes, P (2006). Neur. Netw. 19: 1013-26.
[21] Sutton, R S & Barto, A G (1998). Reinforcement Learning: An Introduction, MIT Press.
[22] Behrens, T E J, Woolrich, M W, Walton, M E, & Rushworth, M F S (2007). Nat. Neurosci. 10: 1214-21.
[23] Kording, K P, Tenenbaum, J B, & Shadmehr, R (2007). Nat. Neurosci. 10: 779-86.
[24] Huettel, S A, Mack, P B, & McCarthy, G (2002). Nat. Neurosci. 5: 485-90.
[25] Hertwig, R & Ortmann, A (2001). Behavioral & Brain Sciences 24: 383-403.
[26] Bialek, W (2005). Preprint q-bio.NC/0508044, Princeton University.
PSDBoost: Matrix-Generation Linear Programming for Positive Semidefinite Matrices Learning Chunhua Shen†‡, Alan Welsh‡, Lei Wang‡ †NICTA Canberra Research Lab, Canberra, ACT 2601, Australia∗ ‡Australian National University, Canberra, ACT 0200, Australia Abstract In this work, we consider the problem of learning a positive semidefinite matrix. The critical issue is how to preserve positive semidefiniteness during the course of learning. Our algorithm is mainly inspired by LPBoost [1] and the general greedy convex optimization framework of Zhang [2]. We demonstrate the essence of the algorithm, termed PSDBoost (positive semidefinite Boosting), by focusing on a few different applications in machine learning. The proposed PSDBoost algorithm extends traditional Boosting algorithms in that its learned parameter is a trace-one positive semidefinite matrix rather than a classifier. PSDBoost is based on the observation that any trace-one positive semidefinite matrix can be decomposed into a linear convex combination of trace-one rank-one matrices, which serve as the base learners of PSDBoost. Numerical experiments are presented. 1 Introduction Column generation (CG) [3] is a technique widely used in linear programming (LP) for solving large-scale problems. Thus far it has mainly been applied to problems with linear constraints. The work proposed here, which we dub matrix generation (MG), extends the column generation technique to non-polyhedral semidefinite constraints. In particular, as an application, we show how to use it for solving a semidefinite metric learning problem. The fundamental idea is to rephrase a bounded semidefinite constraint as a polyhedral one with infinitely many variables. This construction opens possibilities for use of the highly developed linear programming technology. Given that current semidefinite programming (SDP) solvers struggle with large-scale problems, the work presented here is of importance for many real applications. 
The choice of a metric has a direct effect on the performance of many algorithms, from the simplest k-NN classifier to various clustering algorithms. Much effort has been spent on learning a good metric for pattern recognition and data mining. Clearly a good metric is task-dependent: different applications should use different measures of (dis)similarity between objects. We show how a Mahalanobis metric is learned from examples of proximity comparison among triples of training data. For example, assuming that we are given triples of images ai, aj and ak (ai and aj have the same label, ai and ak have different labels; ai ∈ RD), we want to learn a metric between pairs of images such that the distance from aj to ai (distij) is smaller than the distance from ak to ai (distik). Triplets like this are the input of our metric learning algorithm. By casting the problem as optimization over the inner product of the linear transformation matrix and its transpose, the formulation reduces to solving a semidefinite program. The algorithm finds an optimal linear transformation that maximizes the margin between the distances distij and distik. ∗NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Center of Excellence program. A major drawback of this formulation is that current SDP solvers utilizing interior-point (IP) methods do not scale well to large problems, with computational complexity roughly O(n^4.5) (n is the number of variables). On the other hand, linear programming is much better in terms of scalability. State-of-the-art solvers like CPLEX [4] can solve large problems with up to millions of variables and constraints. This motivates us to develop an LP approach to solve our SDP metric learning problem. 2 Related Work We overview some relevant work in this section. 
Column generation was first proposed by Dantzig and Wolfe [5] for solving specially structured linear programs with an extremely large number of variables. [3] has presented a comprehensive survey of this technique. The general idea of CG is that, instead of solving the original large-scale problem (the master problem), one works on a restricted master problem with a reasonably small subset of the variables at each step. The dual of the restricted master problem is solved by the simplex method, and the optimal dual solution is used to find the new column to be included in the restricted master problem. LPBoost [1] is a direct application of CG to Boosting. For the first time, LPBoost showed that, in an LP framework, unknown weak hypotheses can be learned from the dual even though the space of all weak hypotheses is infinitely large. This is the highlight of LPBoost, and it has directly inspired our work. Metric learning using convex optimization has attracted a lot of attention recently [6–8]. These works have made it possible to learn distance functions that are more appropriate for a specific task, based on partially labeled data or proximity constraints. These techniques improve classification or clustering accuracy by taking advantage of prior information. There is plenty of related work; we list the pieces most relevant to ours. [6] learns a Mahalanobis metric for clustering using convex optimization to minimize the distance between examples belonging to the same class, while at the same time restricting examples in different classes from being too close. The work in [7] also learns a Mahalanobis metric using SDP, by optimizing a modified k-NN classifier. They use first-order alternating projection algorithms, which are faster than generic SDP solvers. The authors of [8] learn a Mahalanobis metric by considering proximity relationships of training examples. The final formulation is also an SDP. 
They replace the positive semidefinite (PSD) conic constraint by a sequence of linear constraints, using the fact that a diagonally dominant matrix must be PSD (but not vice versa). In other words, the conic constraint is replaced by a stricter one. The feasibility set shrinks, and the solution obtained is not necessarily a solution of the original SDP. 3 Preliminaries We begin with some notational conventions and basic definitions that will be useful. A bold lower case letter x represents a column vector and an upper case letter X is a matrix. We denote the space of D × D symmetric matrices by SD, and positive semidefinite matrices by SD+. Tr(·) is the trace of a square matrix and ⟨X, Z⟩ = Tr(XZ⊤) = Σ_{ij} Xij Zij calculates the inner product of two matrices. An element-wise inequality between two vectors is written u ≤ v, which means ui ≤ vi for all i. We use X ≽ 0 to indicate that the matrix X is positive semidefinite. For a matrix X ∈ SD, the following statements are equivalent: (1) X ≽ 0 (X ∈ SD+); (2) all eigenvalues of X are nonnegative (λi(X) ≥ 0, i = 1, · · · , D); and (3) ∀u ∈ RD, u⊤Xu ≥ 0. 3.1 Extreme Points of Trace-one Semidefinite Matrices Before we present our main results, we prove an important theorem that serves as the basis of the proposed algorithm. Definition 3.1 For any positive integer M, given a set of points {x1, ..., xM} in a real vector or matrix space Sp, the convex hull of Sp spanned by M elements of Sp is defined as: convM(Sp) = { Σ_{i=1}^M θi xi | θi ≥ 0, Σ_{i=1}^M θi = 1, xi ∈ Sp }. Define the convex hull¹ of Sp as: conv(Sp) = ∪M convM(Sp) = { Σ_{i=1}^M θi xi | θi ≥ 0, Σ_{i=1}^M θi = 1, xi ∈ Sp, M ∈ Z+ }. Here Z+ denotes the set of all positive integers. Definition 3.2 Let us define Γ1 to be the space of all positive semidefinite matrices X ∈ SD+ with trace equaling one: Γ1 = {X | X ≽ 0, Tr(X) = 1};² and Ω1 to be the space of all positive semidefinite matrices with both trace and rank equaling one: Ω1 = {Z | Z ≽ 0, Tr(Z) = 1, rank(Z) = 1}. 
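As a quick numerical sanity check of these definitions (our own sketch, not part of the paper), a random convex combination of trace-one rank-one matrices from Ω1 indeed lands in Γ1:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 5, 8

# Random elements of Omega_1: Z = u u^T with ||u||_2 = 1
# (PSD, trace one, rank one).
us = rng.standard_normal((M, D))
us /= np.linalg.norm(us, axis=1, keepdims=True)
Zs = [np.outer(u, u) for u in us]

# Random convex weights theta on the simplex.
theta = rng.uniform(size=M)
theta /= theta.sum()

X = sum(t * Z for t, Z in zip(theta, Zs))

assert np.isclose(np.trace(X), 1.0)            # trace one
assert np.linalg.eigvalsh(X).min() >= -1e-12   # positive semidefinite
```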
We also define Γ2 as the convex hull of Ω1, i.e., Γ2 = conv(Ω1). Lemma 3.3 Let Ω2 be the convex polytope Ω2 = {λ ∈ RD | λk ≥ 0, ∀k = 1, · · · , D, Σ_{k=1}^D λk = 1}; then the points with exactly one element equaling one and all others zero are the extreme points (vertices) of Ω2. No other points can be extreme points. Proof: Without loss of generality, consider the point λ′ = (1, 0, · · · , 0). If λ′ is not an extreme point of Ω2, then it can be expressed as a convex combination of other points in Ω2: λ′ = Σ_{i=1}^M θi λ^i, with θi > 0, Σ_{i=1}^M θi = 1 and λ^i ≠ λ′. Then we have the equations Σ_{i=1}^M θi λ^i_k = 0, ∀k = 2, · · · , D. It follows that λ^i_k = 0 for all i and k = 2, · · · , D, which means λ^i_1 = 1 for all i. This contradicts λ^i ≠ λ′. Therefore no such convex combination exists and λ′ must be an extreme point. It is trivial to see that any λ with more than one active element is a convex combination of the above-defined extreme points, so such points cannot be extreme points. □ Theorem 3.4 Γ1 equals Γ2; i.e., Γ1 is also the convex hull of Ω1. In other words, the set of all Z ∈ Ω1 forms the set of extreme points of Γ1. Proof: It is easy to check that any convex combination Σi θi Zi with Zi ∈ Ω1 resides in Γ1, using the following two facts: (1) a convex combination of PSD matrices is still a PSD matrix; (2) Tr(Σi θi Zi) = Σi θi Tr(Zi) = 1. Denoting by λ1 ≥ · · · ≥ λD ≥ 0 the eigenvalues of a Z ∈ Γ1, we know that λ1 ≤ 1 because Σ_{i=1}^D λi = Tr(Z) = 1. Therefore all eigenvalues of Z satisfy λi ∈ [0, 1], ∀i = 1, · · · , D, and Σ_{i=1}^D λi = 1. By looking at the eigenvalues of Z and using Lemma 3.3, it is immediate that a matrix Z with Z ≽ 0, Tr(Z) = 1 and rank(Z) > 1 cannot be an extreme point of Γ1. The only candidates for extreme points are the rank-one matrices (λ1 = 1 and λ2,··· ,D = 0). 
Moreover, it is not possible that some rank-one matrices are extreme points while others are not, because the other two constraints, Z ≽ 0 and Tr(Z) = 1, do not distinguish between different rank-one matrices. Hence the set of all Z ∈ Ω1 forms the set of extreme points of Γ1. Furthermore, Γ1 is a convex and compact set, which must have extreme points. The Krein-Milman Theorem [9] tells us that a convex and compact set equals the convex hull of its extreme points. □ This theorem is a special case of results from [10] in the context of eigenvalue optimization. A different proof of the above theorem's general version can also be found in [11]. In the context of SDP optimization, what is of interest about Theorem 3.4 is the following: it tells us that a bounded PSD matrix constraint X ∈ Γ1 can be equivalently replaced with a set of constraints describing membership in Γ2. At first glance this is a highly counterintuitive proposition, because Γ2 involves many more complicated constraints: both θi and Zi (∀i = 1, · · · , M) are unknown variables, and, even worse, M could be extremely (or even indefinitely) large. ¹Strictly speaking, the union of convex hulls may not be a convex hull in general; it is a linear convex span. ²Such a matrix X is called a density matrix, which is one of the main concepts in quantum physics. A density matrix of rank one is called a pure state, and a density matrix of rank higher than one is called a mixed state. 3.2 Boosting Boosting is an example of ensemble learning, where multiple learners are trained to solve the same problem. Typically a boosting algorithm [12] creates a single strong learner by incrementally adding base (weak) learners to the final strong learner. The base learner has an important impact on the strong learner. In general, a boosting algorithm builds on a user-specified base learning procedure and runs it repeatedly on modified data that are outputs from the previous iterations. 
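Theorem 3.4 is in fact constructive: the eigen-decomposition of any trace-one PSD matrix exhibits it as a convex combination of trace-one rank-one matrices, with the eigenvalues playing the role of the weights θi. A small numerical sketch (our own, with random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 6

# Build an arbitrary trace-one PSD matrix X.
A = rng.standard_normal((D, D))
X = A @ A.T
X /= np.trace(X)

# Eigen-decomposition gives X = sum_i lambda_i u_i u_i^T with
# lambda_i >= 0 and sum_i lambda_i = Tr(X) = 1: the theta_i are the
# eigenvalues and the Z_i = u_i u_i^T are trace-one rank-one matrices.
lam, U = np.linalg.eigh(X)
X_rebuilt = sum(l * np.outer(U[:, i], U[:, i]) for i, l in enumerate(lam))

assert np.isclose(lam.sum(), 1.0)
assert np.allclose(X, X_rebuilt)
```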
The inputs to a boosting algorithm are a set of training examples x and their corresponding class labels y. The final output strong classifier takes the form Fθ(x) = Σ_{i=1}^M θi fi(x). (1) Here fi(·) is a base learner. From Theorem 3.4, we know that a matrix X ∈ Γ1 can be decomposed as X = Σ_{i=1}^M θi Zi, Zi ∈ Ω1. (2) Observing the similarity between Equations (1) and (2), we may view Zi as a weak classifier and the matrix X as the strong classifier we want to learn. This is exactly the problem that boosting methods have been designed to solve. This observation inspires us to solve a special type of SDP using boosting techniques. A sparse greedy approximation algorithm proposed by Zhang [2] is an efficient way of solving a class of convex problems, and it provides fast convergence rates. It is shown in [2] that boosting algorithms can be interpreted within this general framework. The main idea of sequential greedy approximation is as follows. Given an initialization u0 ∈ V, where V can be a subset of a linear vector space, a matrix space or a functional space, the algorithm finds ui ∈ V, i = 1, · · · , and 0 ≤ λ ≤ 1 such that the cost function F((1 − λ)ui−1 + λui) is approximately minimized; the solution is then updated as ui = (1 − λ)ui−1 + λui and the iteration continues. 4 Large-margin Semidefinite Metric Learning We consider the Mahalanobis metric learning problem as an example, although the proposed technique can be applied to many other problems in machine learning, such as nonparametric kernel matrix learning [13]. We are given a set of training examples ai ∈ RD, i = 1, 2, · · · . The task is to learn a distance metric such that, with the learned metric, classification or clustering achieves better performance on testing data. The information available is a set of relative distance comparisons. 
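The sequential greedy approximation scheme of [2] described above can be illustrated on a toy problem. The sketch below is our own (not from [2]): it minimizes an illustrative quadratic F(u) = ||u − c||² over the probability simplex, whose extreme points are the unit basis vectors, performing the update ui = (1 − λ)ui−1 + λu with an exact line search over λ:

```python
import numpy as np

def seq_greedy(c, n_iter=200):
    """Sequential greedy approximation (toy sketch): minimize
    F(u) = ||u - c||^2 over the probability simplex.  Each iteration
    greedily picks a vertex e_k and a step size lam in [0, 1], then
    performs the convex update u <- (1 - lam) * u + lam * e_k."""
    D = len(c)
    u = np.full(D, 1.0 / D)                 # initialization u0
    for _ in range(n_iter):
        best_val, best_v = np.inf, u
        for k in range(D):
            e = np.zeros(D)
            e[k] = 1.0
            d = e - u
            dd = d @ d
            if dd < 1e-12:
                continue
            lam = np.clip(d @ (c - u) / dd, 0.0, 1.0)  # exact line search
            v = u + lam * d                 # stays in the simplex
            val = np.sum((v - c) ** 2)
            if val < best_val:
                best_val, best_v = val, v
        u = best_v
    return u

c = np.array([0.5, 0.3, 0.2])               # target inside the simplex
u = seq_greedy(c)
```

Each update stays inside the feasible set by construction, which is exactly the property PSDBoost exploits with the set Γ1.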
Mathematically we are given a set S of training triplets: S = {(ai, aj, ak) | distij < distik}, where distij measures the distance between ai and aj under a certain metric. In this work we focus on the case where dist is the Mahalanobis distance. Equivalently, we are learning a linear transformation P ∈ RD×d such that dist is the Euclidean distance in the projected space: distij = ‖P⊤ai − P⊤aj‖₂² = (ai − aj)⊤PP⊤(ai − aj). It is not difficult to see that the inequalities in the set S are non-convex, because a difference of quadratic terms in P is involved. In order to convexify the inequalities in S, a new variable X = PP⊤ is used instead. This is a typical technique for modeling an SDP problem [14]. We wish to maximize the margin, defined as the difference between distij and distik: ρ = distik − distij = (ai − ak)⊤X(ai − ak) − (ai − aj)⊤X(ai − aj). One may also use a soft margin to tolerate noisy data. Putting these thoughts together, the final convex program we want to optimize is: max_{ρ,X,ξ} ρ − C Σ_{r=1}^{|S|} ξr s.t. X ≽ 0, Tr(X) = 1, ξ ≥ 0, (3) (ai − ak)⊤X(ai − ak) − (ai − aj)⊤X(ai − aj) ≥ ρ − ξr, ∀(ai, aj, ak) ∈ S. Here r indexes the training set S and |S| denotes its size. C is a trade-off parameter that balances the training error and the margin. As in support vector machines, the slack variables ξ ≥ 0 correspond to the soft-margin hinge loss. Note that the constraint Tr(X) = 1 removes the scale ambiguity, because the distance inequalities are scale invariant. To simplify our exposition, we write Ar = (ai − ak)(ai − ak)⊤ − (ai − aj)(ai − aj)⊤. (4) The last constraint in (3) is then written ⟨Ar, X⟩ ≥ ρ − ξr, ∀Ar built from S; r = 1, · · · , |S|. (5) Problem (3) is a typical SDP, since it has a linear cost function and linear constraints plus a PSD conic constraint. It can therefore be solved using off-the-shelf SDP solvers like CSDP [15]. As mentioned, general interior-point SDP solvers do not scale well to large problems. 
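The identity behind Equations (4) and (5), namely that the inner product ⟨Ar, X⟩ equals the margin distik − distij, can be checked numerically. The following sketch (our own, with random illustrative data) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
ai, aj, ak = rng.standard_normal((3, D))

# A Mahalanobis metric X = P P^T (here a random PSD example).
P = rng.standard_normal((D, D))
X = P @ P.T

def mahalanobis(a, b, X):
    """Squared Mahalanobis distance (a - b)^T X (a - b)."""
    d = a - b
    return d @ X @ d

# A_r from Equation (4); its inner product with X is the margin
# dist_ik - dist_ij appearing in constraint (5).
A_r = np.outer(ai - ak, ai - ak) - np.outer(ai - aj, ai - aj)
rho = np.trace(A_r @ X)

assert np.isclose(rho, mahalanobis(ai, ak, X) - mahalanobis(ai, aj, X))
```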
Current solvers can only handle problems with up to a few thousand variables, which makes many applications intractable. For example, in face recognition, if the inputs are 30 × 30 images, then D = 900 and there would be about 0.41 million variables. Next we show how to reformulate the above SDP as an LP. 5 Boosting via Matrix-Generation Linear Programming Using Theorem 3.4, we can replace the PSD conic constraint in (3) with a linear convex combination of trace-one rank-one PSD matrices: X = Σ_{i=1}^M θi Zi. Substituting X in Problem (3), we obtain max_{ρ,θ,ξ,Z} ρ − C Σ_{r=1}^{|S|} ξr s.t. ξ ≥ 0, ⟨Ar, Σ_{i=1}^M θi Zi⟩ = Σ_{i=1}^M ⟨Ar, Zi⟩ θi ≥ ρ − ξr, ∀Ar built from S; r = 1, · · · , |S|, (P1) Σ_{i=1}^M θi = 1, θ ≥ 0, Zi ∈ Ω1, i = 1, · · · , M. This problem is still very hard to solve, since it has non-convex rank constraints and an indefinite number of variables (M is indefinite because there are indefinitely many rank-one matrices). However, if we somehow knew the matrices Zi (i = 1, · · · ) a priori, we could drop all the constraints imposed on them and the problem would become a linear program; or, more precisely, a semi-infinite linear program (SILP), because it has an infinitely large set of variables θ. Column generation is a state-of-the-art method for optimally solving difficult large-scale optimization problems. It avoids considering all variables of a problem explicitly. If an LP has extremely many variables (columns) but far fewer constraints, CG can be very beneficial. The crucial insight behind CG is this: for an LP problem with many variables, the number of non-zero variables in the optimal solution equals the number of constraints, so although the number of possible variables may be large, only a small subset of them is needed in the optimal solution. CG works by considering only a small subset of the entire variable set. Once this restricted problem is solved, we ask: "Are there any other variables that can be included to improve the solution?". 
So we must be able to solve the subproblem: given a set of dual values, either identify a variable that has a favorable reduced cost, or show that no such variable exists. In essence, CG finds the variables with negative reduced costs without explicitly enumerating all variables. For a general LP this may not be possible, but for some types of problems it is. We now consider Problem (P1) as if all Zi (i = 1, · · · ) were known. The dual of (P1) is easily derived: min_{π,w} π s.t. Σ_{r=1}^{|S|} ⟨Ar, Zi⟩ wr ≤ π, i = 1, · · · , M, (D1) Σ_{r=1}^{|S|} wr = 1, 0 ≤ wr ≤ C, r = 1, · · · , |S|. For convex programs with strong duality, the duality gap is zero, which means the optimal values of the primal and dual problems coincide. For LPs and SDPs, strong duality holds under very mild conditions (almost always satisfied by the LPs and SDPs considered here). We now consider only a small subset of the variables in the primal; i.e., only a subset of Z (denoted by Z̃)³ is used. The LP solved using Z̃ is usually termed the restricted master problem (RMP). Because the primal variables correspond to the dual constraints, solving the RMP is equivalent to solving a relaxed version of the dual problem. With a finite Z̃, the first set of constraints in (D1) is finite, and we can solve the LP that satisfies all the existing constraints. If we can prove that, among all the constraints not yet added to the dual problem, no single constraint is violated, then we can conclude that solving the restricted problem is equivalent to solving the original problem. Otherwise, there exists at least one violated constraint. The violated constraints correspond to primal variables that are not in the RMP. Adding these variables to the RMP leads to a new RMP that needs to be re-optimized. In our case, by finding a violated constraint, we generate a rank-one matrix Z′. 
Hence, as in LPBoost [1], we have a base learning algorithm as an oracle that either finds a new Z′ such that Σ_{r=1}^{|S|} ⟨Ar, Z′⟩ w̃r > π̃, where π̃ is the solution of the current restricted problem, or guarantees that no such Z′ exists. To make convergence fast, we find the Z′ with the largest deviation; that is, Z′ = argmax_Z { Σ_{r=1}^{|S|} ⟨Ar, Z⟩ w̃r, s.t. Z ∈ Ω1 }. (B1) Again, here the w̃r (r = 1, · · · , |S|) are obtained by solving the current restricted dual problem (D1). Let us denote by Opt(B1) the optimal value of the optimization problem (B1). We now have a criterion that guarantees that the optimal convex combination over all Z's satisfying the constraints in Γ2 has been found: if Opt(B1) ≤ π̃, then we are done; we have solved the original problem. The presented algorithm is a variant of the CG technique. At each iteration a new matrix is generated, hence the name matrix generation. 5.1 Base Learning Algorithm In this section, we show that the optimization problem (B1) can be solved exactly and efficiently using eigen-decomposition. From Z ≽ 0 and rank(Z) = 1, we know that Z has the form Z = uu⊤, u ∈ RD; and Tr(Z) = 1 means ‖u‖₂ = 1. We have Σ_{r=1}^{|S|} ⟨Ar, Z⟩ w̃r = ⟨Σ_{r=1}^{|S|} w̃r Ar, Z⟩ = u⊤(Σ_{r=1}^{|S|} w̃r Ar)u. Denoting H̃ = Σ_{r=1}^{|S|} w̃r Ar, (6) the optimization in (B1) equals: max_u u⊤H̃u, subject to ‖u‖₂ = 1. (7) Clearly, the largest eigenvalue of H̃, λmax(H̃), and its corresponding eigenvector u1 give the solution to this problem. Note that H̃ is symmetric. Therefore we have the solution of the original problem (B1): Opt(B1) = λmax(H̃) and Z′ = u1u1⊤. There are approximate eigenvalue solvers which guarantee that, for a symmetric matrix U and any ε > 0, a vector v is found such that v⊤Uv ≥ λmax − ε. Approximately finding the largest eigenvalue and eigenvector can be done very efficiently using the Lanczos or power method. We use the MATLAB function eigs to calculate the largest eigenvector, which calls mex files of ARPACK. 
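A minimal sketch of this base learner: form H̃ as the weighted sum of the Ar and take the leading eigenpair. We use NumPy's dense eigh for clarity, where the paper uses ARPACK via MATLAB's eigs; all data here are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D, n_constraints = 5, 20

# Symmetric matrices A_r and dual weights w_r (random, illustrative).
As = []
for _ in range(n_constraints):
    B = rng.standard_normal((D, D))
    As.append(B + B.T)
w = rng.uniform(size=n_constraints)
w /= w.sum()

H = sum(wr * Ar for wr, Ar in zip(w, As))   # H~ of Equation (6)

# (B1): the maximizer of u^T H u over ||u||_2 = 1 is the leading
# eigenvector, so Opt(B1) = lambda_max(H) and Z' = u1 u1^T.
lam, U = np.linalg.eigh(H)                  # eigenvalues in ascending order
u1 = U[:, -1]
Z_new = np.outer(u1, u1)

# <H, Z'> = u1^T H u1 = lambda_max(H).
assert np.isclose(np.trace(H @ Z_new), lam[-1])
```

For large D one would swap np.linalg.eigh for an iterative Lanczos-type solver (e.g. scipy.sparse.linalg.eigsh, which also wraps ARPACK), since only the top eigenpair is needed.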
ARPACK is a collection of Fortran subroutines designed to solve large-scale eigenvalue problems. When the input matrix is symmetric, this software uses a variant of the Lanczos process called the implicitly restarted Lanczos method [16]. ³We also use θ̃, π̃, w̃, etc. to denote the solution of the current RMP and its dual.
Algorithm 1: PSDBoost for semidefinite metric learning.
Input: Training set triplets (ai, aj, ak) ∈ S; calculate Ar, r = 1, · · · , from S using Equation (4).
Initialization:
1. M = 1 (no bases selected);
2. θ = 0 (all primal coefficients are zeros);
3. π = 0;
4. wr = 1/|S|, r = 1, · · · , |S| (uniform dual weights).
while true do
1. Find a new base Z′ by solving Problem (B1), i.e., the eigen-decomposition of H̃ in (6);
2. if Opt(B1) ≤ π then break (problem solved);
3. Add Z′ to the restricted master problem, which corresponds to a new constraint in Problem (D1);
4. Solve the dual (D1) to obtain updated π and wr (r = 1, · · · , |S|);
5. M = M + 1 (base count).
end
Output:
1. Calculate the primal variable θ from the optimality conditions and the last solved dual LP;
2. The learned PSD matrix X ∈ RD×D, X = Σ_{i=1}^M θi Zi.
Putting all the above analysis together, we summarize our PSDBoost algorithm for metric learning in Algorithm 1. Note that, in practice, we can relax the convergence criterion by setting a small positive threshold ε′ > 0 in order to obtain a good approximation quickly; namely, the convergence criterion becomes Opt(B1) ≤ π + ε′. The algorithm has some appealing properties. At each iteration the solution is provably better than the preceding one, and its rank grows by at most one. Hence after M iterations the algorithm attains a solution of rank at most M. The algorithm preserves CG's property that each iteration improves the quality of the solution. The bounded rank follows from the fact that rank(A + B) ≤ rank(A) + rank(B) for all matrices A and B. 
An advantage of the proposed PSDBoost algorithm over standard boosting schemes is the totally-corrective weight update in each iteration, which leads to faster convergence. The coordinate descent optimization employed by standard boosting algorithms is known to have a slow convergence rate in general. However, the price of this totally-corrective update is obvious: PSDBoost spans the space of the parameter X incrementally, and the computational cost of solving the subproblem grows with the number of linear constraints, which increases by one at each iteration. It also needs more and more memory to store the generated base learners Zi, represented as a series of unit vectors. To alleviate this problem, one can use a selection and compression mechanism, as in the aggregation step of bundle methods [17]. When the size of the bundle becomes too large, bundle methods select columns to be discarded, and the selected information is aggregated into a single column. It can be shown that, as long as the aggregated column is kept in the bundle, the bundle algorithm remains convergent, although different selections of discarded columns may lead to different convergence speeds. See [17] for details. 6 Experiments In the first experiment, we artificially generated 600 points in 24 dimensions; the learned metric is therefore of size 24 × 24. The triplets are obtained as follows: for a point ai, we find its nearest neighbor in the same class, aj, and its nearest neighbor in a different class, ak. We subsample to obtain 550 triplets for training. To show the convergence, we have plotted the optimal values of the dual problem (D1) at each iteration in Figure 1. We see that PSDBoost quickly converges to a near-optimal solution. We have observed the so-called tailing-off effect of CG on large datasets: while a near-optimal solution is approached considerably fast, only little progress per iteration is made close to the optimum. 
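The triplet-generation scheme used in the experiments (nearest same-class neighbor and nearest different-class neighbor for each point) can be sketched as follows; this is our own brute-force illustration, not the authors' code:

```python
import numpy as np

def make_triplets(A, y):
    """For each point a_i, pair it with its nearest neighbor of the same
    class (a_j) and its nearest neighbor of a different class (a_k).
    Euclidean distance, brute force for clarity."""
    # Pairwise squared distances, shape (n, n).
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)
    triplets = []
    for i in range(len(A)):
        same = np.where(y == y[i])[0]
        same = same[same != i]               # exclude the point itself
        diff = np.where(y != y[i])[0]
        if len(same) == 0 or len(diff) == 0:
            continue                         # no valid triplet for this point
        j = same[np.argmin(d2[i, same])]
        k = diff[np.argmin(d2[i, diff])]
        triplets.append((i, j, k))
    return triplets

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
y = rng.integers(0, 2, size=30)
trips = make_triplets(A, y)
```

Each resulting triplet (i, j, k) satisfies the label pattern of the set S: i and j share a class, while k belongs to a different one.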
Stabilization techniques have been introduced to partially alleviate this problem [3]. However, approximate solutions are sufficient for most machine learning tasks. Moreover, for many problems such as metric and kernel learning, we are usually interested not in the numerical accuracy of the solution but in the test error. The second experiment uses the Pendigits data from the UCI repository, which contains handwritten samples of the digits 1, 5, 7, 9. The data for each digit are 16-dimensional. 80 samples per digit are used for training and 500 per digit for testing. The results show that PSDBoost converges quickly and the learned metric is very similar to the one obtained by a standard SDP solver. The classification errors on testing data with a 1-nearest-neighbor classifier are identical using the metrics learned by PSDBoost and by a standard SDP solver: both are 1.3%. 7 Conclusion We have presented a new boosting algorithm, PSDBoost, for learning a positive semidefinite matrix. In particular, as an example, we use PSDBoost to learn a distance metric for classification. PSDBoost can also be used to learn a kernel matrix, which is of interest in machine learning. We are currently exploring new applications of PSDBoost, and we also want to characterize what kinds of SDP optimization problems can be approximately solved by PSDBoost.
References
[1] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Mach. Learn., 46(1-3):225–254, 2002.
[2] T. Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Trans. Inf. Theory, 49(3):682–691, 2003.
[3] M. E. Lübbecke and J. Desrosiers. Selected topics in column generation. Operations Res., 53(6):1007–1023, 2005.
[4] ILOG, Inc. CPLEX 11.1, 2008. http://www.ilog.com/products/cplex/.
[5] G. B. Dantzig and P. Wolfe. Decomposition principle for linear programs. Operations Res., 8(1):101–111, 1960.
[6] E. Xing, A. Ng, M. Jordan, and S. Russell. 
Distance metric learning, with application to clustering with side-information. In Proc. Adv. Neural Inf. Process. Syst. MIT Press, 2002.
[7] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Proc. Adv. Neural Inf. Process. Syst., pages 1473–1480, 2005.
[8] R. Rosales and G. Fung. Learning sparse metrics via linear programming. In Proc. ACM Int. Conf. Knowledge Discovery & Data Mining, pages 367–373, Philadelphia, PA, USA, 2006.
[9] M. Krein and D. Milman. On extreme points of regular convex sets. Studia Mathematica, 9:133–138, 1940.
[10] M. L. Overton and R. S. Womersley. On the sum of the largest eigenvalues of a symmetric matrix. SIAM J. Matrix Anal. Appl., 13(1):41–45, 1992.
[11] P. A. Fillmore and J. P. Williams. Some convexity theorems for matrices. Glasgow Math. Journal, 12:110–117, 1971.
[12] R. E. Schapire. Theoretical views of boosting and applications. In Proc. Int. Conf. Algorithmic Learn. Theory, pages 13–25, London, UK, 1999. Springer-Verlag.
[13] B. Kulis, M. Sustik, and I. Dhillon. Learning low-rank kernel matrices. In Proc. Int. Conf. Mach. Learn., pages 505–512, Pittsburgh, Pennsylvania, 2006.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[15] B. Borchers. CSDP, a C library for semidefinite programming. Optim. Methods and Softw., 11(1):613–623, 1999.
[16] D. Calvetti, L. Reichel, and D. C. Sorensen. An implicitly restarted Lanczos method for large symmetric eigenvalue problems. Elec. Trans. Numer. Anal., 2:1–21, Mar 1994. http://etna.mcs.kent.edu.
[17] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects (1st edition). Springer-Verlag, Berlin, 2003.
Figure 1: The objective value of the dual problem (D1) on the first (left) and second (right) experiment. The dashed line shows the ground truth obtained by directly solving the original primal SDP (3) using interior-point methods.
Kernel-ARMA for Hand Tracking and Brain-Machine Interfacing During 3D Motor Control Lavi Shpigelman1 , Hagai Lalazar 2 and Eilon Vaadia 3 Interdisciplinary Center for Neural Computation The Hebrew University of Jerusalem, Israel 1shpigi@gmail.com, 2hagai@alice.nc.huji.ac.il, 3eilonv@ekmd.huji.ac.il Abstract Using machine learning algorithms to decode intended behavior from neural activity serves a dual purpose. First, these tools allow patients to interact with their environment through a Brain-Machine Interface (BMI). Second, analyzing the characteristics of such methods can reveal the relative significance of various features of neural activity, task stimuli, and behavior. In this study we adapted, implemented and tested a machine learning method called Kernel Auto-Regressive Moving Average (KARMA), for the task of inferring movements from neural activity in primary motor cortex. Our version of this algorithm is used in an online learning setting and is updated after a sequence of inferred movements is completed. We first used it to track real hand movements executed by a monkey in a standard 3D reaching task. We then applied it in a closed-loop BMI setting to infer intended movement, while the monkey’s arms were comfortably restrained, thus performing the task using the BMI alone. KARMA is a recurrent method that learns a nonlinear model of output dynamics. It uses similarity functions (termed kernels) to compare between inputs. These kernels can be structured to incorporate domain knowledge into the method. We compare KARMA to various state-of-the-art methods by evaluating tracking performance and present results from the KARMA based BMI experiments. 1 Introduction Performing a behavioral action such as picking up a sandwich and bringing it to one’s mouth is a motor control task achieved easily every day by millions of people. This simple action, however, is impossible for many patients with motor deficits. 
In the future, patients with enough cortical activity remaining may benefit from Brain Machine Interfaces that will restore motor control with agility, precision, and degrees of freedom comparable to natural movements. Such high quality BMIs are not yet available. The BMI framework involves recording neural activity, typically using chronically implanted electrodes, which is fed in real-time to a decoding algorithm. Such algorithms attempt to infer the subject's intended behavior. The algorithm's predictions can be used to artificially control an end-effector: a cursor on a screen, a prosthetic arm, a wheelchair, or the subject's own limbs by stimulation of their muscles. This study focuses on the algorithmic component. Motor control is a dynamic process involving many feedback loops, relevant time frames, and constraints of the body and neural processing. Neural activity in primary motor cortex (MI) is part of this process. An early approach to decoding movement from MI activity for BMI (see [1]) was rather simplistic. Instantaneous velocity of the cursor, across a set of movements, was linearly regressed against neuronal spike rates. This algorithm (known as the Population Vector Algorithm) is equivalent to modeling each neuron as a cosine function of movement velocity. This method is still used today for BMIs [2], and has become the standard model in many studies of encoding and learning in MI. Our understanding of motor cortex has progressed, and many other factors have been shown to correlate with neuronal activity, but are typically overlooked in modeling. For example, MI activity has been shown to encode arm posture [3], the dynamic aspects of the movement (such as current acceleration, or interaction forces) and the interactions between neurons and their dynamics [4]. State-of-the-art movement decoding methods typically involve improved modeling of behavior, neural activity, and the relations between them.
For example, Kalman filtering (see [5]) has been used to model the system state as being comprised of current hand position, velocity and acceleration. Thus, the hand movement is assumed to have roughly constant acceleration (with added Gaussian noise and, consequently, minimal jerk) and the neural activity is assumed to be a linear function of the hand state (with added Gaussian noise). Particle filtering, which relaxes some of the linearity and Gaussian assumptions, has also been applied in an offline setting (see [6]). Support Vector Regression (SVR) from neural activity to current hand velocity (see [7]) has the advantage of allowing for extraction of nonlinear information from neuronal interactions, but is missing a movement model. One of our previous studies ([8]) combines a linear movement model (as in Kalman filtering) with SVR-based nonlinear regression from neural activity. KARMA (see [9] for one of its first appearances, or [10] for a more recent one) is a kernelized version of the ARMA method [11]. It performs ARMA in a kernel-induced feature space (for a comprehensive explanation of this kernel-trick, see [12]). It estimates the next system state as a function of both the time window of previous state estimates (the Auto-Regressive part) and the time window of previous observations (the Moving-Average part). In our application, we extend its formulation to the Multi-Input Multi-Output (MIMO) case, allowing for better modeling of the system state. We apply it in an online learning paradigm, and by limiting the number of support vectors turn it into an adaptive method. This allows for real-time inference, as is necessary for BMI. In section 2 we explain the lab setup, the problem setting, and introduce our notation. In section 3 we describe KARMA, and our online and adaptive version of it, in detail. 
We explain the available modeling options and how they can be used to either improve performance or to test ideas regarding motor control and neural encoding. Section 4 describes KARMA's performance in tracking hand movements and compares it with other state-of-the-art methods. Section 5 presents results from our BMI experiment using KARMA and, finally, we summarize in section 6.

2 Lab setup and problem setting

In our experiments, a monkey performed a visuomotor control task that involved moving a cursor on a screen from target to target in 3D virtual space. Neuronal activity from a population of single and multi-units was recorded with a chronically implanted array of 96 electrodes (Cyberkinetics, Inc.) in MI, and used to calculate spike rates (spike counts in 50ms time bins smoothed by a causal filter). In hand-control (open-loop) mode, the monkey used its hand (continuously tracked by an optical motion capture system; Phoenix Tech., Inc.) to move the cursor. Data collected from these sessions is used here to assess algorithm performance, by using the real arm movements as the target trajectories. In hand-control, behavior is segmented into sequences of continuous recordings, separated by time periods during which the monkey's hand is not in view of the tracking device (e.g. when the monkey stops to scratch itself). Each sequence is made up of target-to-target reaching trials (some successful and some not). The targets appeared randomly at the 27 corners, centers of faces, and middle of a virtual cube whose side was 6cm. The target radii were 2.4cm. A successful trial consisted of one reaching movement that started at rest in one target and ended at rest in the next target (with required target hold periods during which the cursor must not leave the target). The next target appears at some point during the hold period of the previous target. Food reward is provided through a tube after each success.
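The spike rates above are 50 ms bin counts passed through a causal filter. The paper does not specify the filter, so as an illustration here is a one-sided exponential smoother, a common causal choice; the smoothing constant `alpha` is an assumption, not the experiment's value.

```python
import numpy as np

def causal_smooth(counts, alpha=0.3):
    """One-sided exponential smoothing of binned spike counts: each
    output uses only the current and past bins, so the filter is causal
    and can run in real time."""
    out = np.empty(len(counts), dtype=float)
    acc = float(counts[0])
    for t, c in enumerate(counts):
        acc = alpha * c + (1 - alpha) * acc
        out[t] = acc
    return out
```

Causality matters here because in the closed-loop BMI the rate at bin t must be computable before bin t+1 arrives.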
In case of failure, the cursor disappears for 1-2 seconds (failure period). During this time the monkey's hand is still tracked. In the BMI (closed-loop) setting, the monkey's hands were comfortably restrained and the KARMA algorithm's inference was used to move the cursor. Trial failures occurred if the reaching movement took longer than 6 seconds or if the cursor was not kept inside the target during a 0.25s hold period. During trial-failure periods the inference was stopped and at the next trial the cursor reappeared where it left off. The trial-failure period was also used to pass the latest recorded trial sequence to the model-learning process, and to replace the working model with an updated one, if available. In this text, $\mathbf{X}$ (capital, bold) is a matrix, $\mathbf{x}$ (bold) is a vector (so are $\mathbf{x}^i$, $\mathbf{x}_t$ and $\mathbf{x}^i_t$) and $x$ is a scalar. $(\mathbf{x})^T$ signifies transposition. We use $\mathbf{x}^i_t \in \mathbb{R}^q$ to designate the neural activity (of $q$ cortical units) at time bin $t$ in behavior sequence $i$, which we refer to as observations. Given a window size $s$, $\mathbf{x}^i_{t-s+1:t} = [(\mathbf{x}^i_{t-s+1})^T, \ldots, (\mathbf{x}^i_t)^T]^T \in \mathbb{R}^{sq}$ is an $sq$-long vector comprising a concatenated window of observations ending at time $t$ of trajectory $i$. $\mathbf{x}^i$ is short-hand notation for $\mathbf{x}^i_{1:t_{f_i}}$, where $t_{f_i}$ is the number of steps in the whole $i$th trajectory. Similarly, $\mathbf{y}^i_t \in \mathbb{R}^d$ designates cursor position ($d = 3$). We refer to $\mathbf{y}^i$ as the state trajectory. Given a window size $r$, $\mathbf{y}^i_{t-r:t-1} \in \mathbb{R}^{rd}$ is a concatenated vector of states. Estimated states are differentiated from true states (or desired states, as will be explained later) by the addition of a hat: $\hat{\mathbf{y}}^i_t$. Furthermore (given $s$ and $r$) we use $\hat{\mathbf{v}}^i_t = [(\hat{\mathbf{y}}^i_{t-r:t-1})^T, (\mathbf{x}^i_{t-s+1:t})^T]^T \in \mathbb{R}^{rd+sq}$ to concatenate windows of estimated states and of neural observations, and $\mathbf{v}^i_t$ to concatenate true (rather than estimated) state values. In the hand-control setting, we are given a (fully observed) data-set of neural activities and state trajectories: $\{\mathbf{x}^i, \mathbf{y}^i\}_{i=1}^{n}$.
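The concatenated input vector of the notation above (a window of r previous states followed by the s most recent observations) can be built mechanically; a small sketch, with 0-based indexing and an illustrative helper name:

```python
import numpy as np

def build_v(y, x, t, r, s):
    """Build v_t = [y_{t-r:t-1}; x_{t-s+1:t}]: the r previous states
    concatenated with the s most recent observations.  With 0-based
    indices, y[t-r:t] are the r states before step t and x[t-s+1:t+1]
    is the observation window ending at step t."""
    assert t - r >= 0 and t - s + 1 >= 0
    return np.concatenate([y[t - r:t].ravel(), x[t - s + 1:t + 1].ravel()])
```

The result lives in R^(rd+sq), matching the dimension stated in the text.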
Our goal is to learn to reconstruct the state trajectories from the neural activities. We adhere to the online learning paradigm in which at each step, $i$, of the process we are given one observation sequence, $\mathbf{x}^i$, predict $\hat{\mathbf{y}}^i$, then receive the true $\mathbf{y}^i$ and update the model. This allows the model to adapt to changes in the input-output relation that occur over time. In BMI mode, since hand movements were not performed, we do not know the correct cursor movement. Instead, during learning we use the cursor movement generated in the BMI and the positions of the targets that the monkey was instructed to reach to guess a desired cursor trajectory, which is used to replace the missing true trajectory as feedback. The illustration on the right shows the BMI setup from an algorithmic point of view.

3 KARMA, modeling options, and online learning

As stated earlier, KARMA is a kernelized ARMA. In ARMA: $\mathbf{y}^i_t = \sum_{k=1}^{r} A_k \mathbf{y}^i_{t-k} + \sum_{l=1}^{s} B_l \mathbf{x}^i_{t-l+1} + \mathbf{e}^i_t$, where $\{A_k\}_{k=1}^{r}$ and $\{B_l\}_{l=1}^{s}$ are the respective Auto-Regressive (AR) and Moving Average (MA) parameters and $\mathbf{e}^i_t$ are residual error terms. Given these model parameters and initial state values, the rest of the state trajectory can be estimated from the observations by recursive application, replacing true state values with the estimated ones. Thus, ARMA inference is essentially application of a linear (MIMO) IIR filter. Defining $W = [A_r, \ldots, A_1, B_s, \ldots, B_1]$, the next state estimate is simply $\hat{\mathbf{y}}^i_t = W \hat{\mathbf{v}}^i_t$ (see notation section). Kernelizing ARMA involves application of the kernel trick. A kernel function $k(\mathbf{v}_1, \mathbf{v}_2): \mathbb{R}^{rd+sq} \times \mathbb{R}^{rd+sq} \to \mathbb{R}$ is introduced, which, conceptually, can be viewed as a dot product of feature vectors: $k(\mathbf{v}_1, \mathbf{v}_2) = \phi^T(\mathbf{v}_1)\phi(\mathbf{v}_2)$, where the features are possibly complicated functions of both states and (neural) observations. Inference takes the form $\hat{\mathbf{y}}^i_t = \sum_{j,\tau} \boldsymbol{\alpha}^j_\tau \, k(\hat{\mathbf{v}}^i_t, \mathbf{v}^j_\tau)$, where $\boldsymbol{\alpha}^j_\tau \in \mathbb{R}^d$ are learned weight vectors and $\mathbf{v}^j_\tau$ are examples from a training set, known as the support set.
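The recursive kernel expansion just described can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes a Gaussian kernel, a given support set, and weights `alpha` as the SVR solver would produce them; the initial states are supplied rather than inferred.

```python
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def karma_infer(x, y_init, support_v, alpha, r, s, sigma=1.0):
    """Recursive KARMA inference: each next state is a kernel expansion
    over the support set, evaluated on the concatenation of previously
    *estimated* states and the current observation window.
    alpha has shape (n_support, d); y_init supplies the first
    max(r, s-1) states."""
    y_hat = [np.asarray(yi, dtype=float) for yi in y_init]
    for t in range(len(y_hat), len(x)):
        v = np.concatenate([np.concatenate(y_hat[t - r:t]),
                            x[t - s + 1:t + 1].ravel()])
        k = np.array([gauss_kernel(v, sv, sigma) for sv in support_v])
        y_hat.append(alpha.T @ k)   # weighted sum of kernel evaluations
    return np.array(y_hat)
```

Note how the loop feeds estimated states, not true ones, back into the next input vector; this is exactly the recursion that makes KARMA an IIR-like filter.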
Conceptually, KARMA inference can be viewed as $\hat{\mathbf{y}}^i_t = W_\phi \phi(\hat{\mathbf{v}}^i_t)$ where, compared with ARMA, $\hat{\mathbf{v}}^i_t$ is replaced by its feature vector, $W$ is replaced by $W_\phi = \sum_{j,\tau} \boldsymbol{\alpha}^j_\tau \phi^T(\mathbf{v}^j_\tau)$, and each recursive step of KARMA is linear regression in the feature space of observations and states. The weights $\{\boldsymbol{\alpha}^j_\tau\}$ are learned so as to solve the following optimization problem (presented in its primal form): $\arg\min_{W_\phi} \|W_\phi\|^2 + c \sum_{i,t,k} |(\mathbf{y}^i_t)_k - (W_\phi \phi(\mathbf{v}^i_t))_k|_\epsilon$, where $\|W\|^2 = \sum_{a,b} (W)_{ab}^2$ is the Frobenius matrix norm, the sum in the second term is over all trials, times and state dimensions of the examples in the training set, $|v|_\epsilon = \max\{0, |v| - \epsilon\}$ is the $\epsilon$-insensitive absolute error, and $c$ is a constant that determines the relative trade-off between the first (regularization) term and the second (error) term. Note that during learning, the states are estimated using the true / desired previous state values as input instead of the estimated ones (contrary to what is done during inference). "Luckily", this optimization problem reduces to standard SVR where $\mathbf{x}^i_t$ is replaced with $\mathbf{v}^i_t$. This replacement can be done as a preprocessing step in learning, and a standard SVR solver can then be used to find the weights. Inference requires plugging in the previously estimated state values as part of the inputs between iterative calls to SVR inference. Application of KARMA to a specific domain entails setting some key hyper-parameters that may have drastic effects on performance. The relatively simple ones are the window sizes ($r$ and $s$) and the trade-off parameter $c$. Beyond the necessary selection of the cursor trajectories as the states, augmented state dimensions (whose values are known at training and inferred during model testing) can be added in order to make the model use them as explicit features. This idea was tried in our hand tracking experiments using features such as absolute velocity and current trial state (reaching target, holding at target and trial-failure time).
But since results did not improve significantly, we discontinued this line of research. The kernel function and its parameters must also be chosen. Note that the kernel in this algorithm is over structured data, which opens the door to a plethora of choices. Depending on one's view this can be seen as an optimization curse or as a modeling blessing. It obviously complicates the search for effective solutions, but it allows one to introduce domain knowledge (or assumptions) into the problem. It can also be used as a heuristic for testing the relative contribution of the assumptions behind the modeling choices. For example, by choosing $r = 0$ the algorithm reduces to SVR and the state model (and its dynamics) are ignored. By selecting a kernel which is a linear sum of two kernels, one for states and one for observations, the user assumes that states and observations have no "synergy" (i.e. each series can be read without taking the other into account). This is because summing of kernels is equivalent to calculating the features on their inputs separately and then concatenating the feature vectors. Selecting linear kernels reduces KARMA to ARMA (using its regularized loss function). In online learning, one may change the learned model between consecutive inferences of (whole) time series. At learning step $k$, all of $\{\mathbf{x}^i, \mathbf{y}^i\}_{i=1}^{k}$ are available for training. A naive solution would be to re-learn the model at every step; however, this would not be the best solution if one believes that the source of the input-output relation is changing (for example, in our BMI, cortical units may change their response properties, or units may appear or disappear during a recording session). Also, it may not be feasible to store all the sequences, or learning may take too long (opening up a delay between data acquisition and a new model being ready). If the resulting model has too many support vectors, too much time is required for each inference step (which is less than 50ms in our BMI setup).
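The sum-of-kernels construction discussed above can be written directly on the concatenated vector v = [state window; observation window]. This is a generic sketch with illustrative names and bandwidths, not the paper's tuned configuration:

```python
import numpy as np

def gauss(a, b, sigma):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def sum_kernel(v1, v2, rd, sigma_y=1.0, sigma_x=1.0):
    """Structured kernel: a Gaussian on the first rd entries (the state
    window) plus a Gaussian on the rest (the spike-rate window).  The
    sum is equivalent to concatenating the two feature vectors, i.e. it
    encodes the no-synergy assumption described in the text."""
    return (gauss(v1[:rd], v2[:rd], sigma_y)
            + gauss(v1[rd:], v2[rd:], sigma_x))
```

Replacing either Gaussian with a linear kernel recovers the lin-y-KARMA and ARMA special cases mentioned later in the text.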
We deal with all the issues above by limiting the number of examples ($\mathbf{v}^i_t$) that are kept in memory (to 5000 in hand-control tracking and 3000 for real-time use in the BMI). At each online learning iteration, the latest data is added to the pool of examples one example at a time, and if the limit has been reached another example is selected at random (uniformly over the data-set) and thrown out. This scheme gives more mass to recent observations while allowing for a long tail of older observations. For a 3000-sized database and examples coming in at the rate of one per 50ms, the cache is filled after the first 150 seconds. Afterwards, the half-life (the time required for an example to have a 50% chance of being thrown out) of an example is approximately 104 seconds; conversely, at each point, approximately 63% of the examples in the database are from the last 150 seconds and the rest are earlier ones. This scheme keeps the inference time constant and seems reasonable in terms of rate of adaptation. We chose 5000 for the tracking (hand-control) experiments since in those experiments there is no real-time inference constraint and the performance improves a bit (suggesting that the 3000 size is not optimal in terms of inference quality). The similarity between consecutive examples is rather high, as they share large parts of their time windows (when the ARMA parameters $r$ or $s$ are large). Throwing away examples at random has the desired effect of lessening the dependency between remaining examples.

4 Open-loop hand tracking testing

To test various parametrizations of KARMA and to compare its performance to other methods we used data from 11 hand-control recording days. These sessions vary in length from 80 to 120 minutes of relatively continuous performance on the part of the monkey. Success rates in this task were in the 65-85% range. Cortical units were sorted in an automated manner every day with additional experimenter tuning.
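As an aside, the 104-second half-life and the 63% figure quoted for the random-eviction cache of section 3 follow from elementary survival probabilities, and can be checked directly (N and the bin width are the values stated in the text):

```python
import math

N = 3000      # support-vector cache size used in the real-time BMI
dt = 0.05     # one new example per 50 ms time bin

# Once the cache is full, each stored example survives one step with
# probability 1 - 1/N, so its half-life in steps solves
# (1 - 1/N)**k = 0.5; convert to seconds with the bin width.
k_half = math.log(0.5) / math.log(1 - 1 / N)
half_life_s = k_half * dt

# Expected fraction of the cache coming from the last N arrivals
# (i.e. the last 150 s): the sum of survival probabilities over
# ages 0..N-1, divided by the cache size N.
frac_recent = 1 - (1 - 1 / N) ** N
```

Both numbers match the text: roughly 104 s and roughly 63% (the latter is just 1 - 1/e in the large-N limit).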
The number of very well-isolated single units ranged from 21 to 41. The remaining units consisted of 140-150 medium-quality and multi-units, which the sorting software often split into more than one channel. Most of the algorithms compared here have free hyper-parameters that need to be set (such as a Gaussian kernel width for spike rates, the maximal informative time window of neural activities $s$, and the trade-off parameter $c$). We had a rough estimate for some of these from previous experiments using similar data (see [8]). To fine-tune these parameters, a brute-force grid search was performed on data from one of the 11 sessions in a (batch) 5-fold cross validation scheme. Those parameters were then kept fixed. Earlier experiments showed the Gaussian kernel to be a good candidate for comparing neural spike rate vectors. It can also be calculated quickly, which is important for the BMI real-time constraint. We tested several variations of structured kernels on neuro-movement inputs. These variations consisted of all combinations of summing or multiplying Gaussian or linear kernels for the spike rates and movement states. Taking a sum of Gaussians or their product produced the best results (with no significant difference between these two options). We chose the sum (having the conceptual inference form $\hat{\mathbf{y}}^i_t = W_\psi \psi(\hat{\mathbf{y}}^i_{t-r:t-1}) + W_\phi \phi(\mathbf{x}^i_{t-s:t})$, where $\psi, \phi$ are the infinite feature vectors of the Gaussian kernel). The next best result was achieved by summing a Gaussian spike rate kernel and a linear movement kernel (which we will call lin-y-KARMA). The sum of linear kernels produces ARMA (which we also tested). The test results that are presented in this study are only for the remaining 10 recording sessions. The main performance measure that we use here is the (Pearson) correlation coefficient (CC) between true and estimated values of positions (in each of the 3 movement dimensions).
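The per-dimension CC used throughout the evaluation can be computed directly; a minimal version (the function name is ours):

```python
import numpy as np

def per_dim_cc(y_true, y_hat):
    """Pearson correlation between true and estimated trajectories,
    computed independently for each movement dimension (column)."""
    yt = y_true - y_true.mean(axis=0)
    yh = y_hat - y_hat.mean(axis=0)
    return (yt * yh).sum(0) / np.sqrt((yt ** 2).sum(0) * (yh ** 2).sum(0))
```

The running-window variant mentioned next would simply apply this to slices of consecutive sequences.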
To gauge changes in prediction quality over time we use CC in a running window of sequences (window size is chosen so as to decrease the noise in the CC estimate). In other cases, the CC for a whole data-set is computed. To illustrate KARMA's performance we provide a movie (see video 1 in supplementary material) showing the true hand position (in black) and KARMA's tracking estimate (in blue) during a continuous 150 second sequence of target-to-target reach movements. This clip is of a model that was learned in an online manner using the previous (180) sequences, using a support vector cache of 3000 (as described in section 3). The initial position of the cursor is not given to the algorithm. Instead, the average start position from previous sequences is given as the starting point. The CC in the time window of the previous 40 sequences (0.9, 0.92 and 0.96 for the 3 dimensions) is given to provide a feeling of what such CCs look like. Similarly, Figure 1.B shows tracking and true position values for an 80 second segment towards the end of a different session. KARMA achieves good performance and it does so with relatively small amounts of data. Figure 1.A shows tracking quality in terms of a running window of CCs over time. CCs for the first sequences are calculated on predictions made up to those times. While these CCs are more noisy, it is clear that a useful model is reached within the first 3 minutes (CCs all higher than 0.6) and close to peak performance is already available within the first 10 minutes.

Figure 1: KARMA performance: (A) Correlation coefficients in a running window of 20 sequences (or less at the session start) for a whole 95 minute session (mean CC window size is 9.7 minutes). (B) True (gray) vs. tracked positions in an 80 second segment at minute 90 of the session.
(C) Effect of losing recording electrodes: tracking was performed over a full recording session using randomly chosen subsets of electrodes. For each selected number of electrodes (from the full 92 available down to 5 electrodes), 50 repetitions were run. CCs were calculated per run over the last two thirds of the session (to avoid effects of initial performance transients) and averaged over the 3 movement dimensions. Their distributions (over repetitions) are shown in terms of the median CC (red center line), quartile values (skirt containing 50% of the CCs) and extremal values (whiskers) for each number of electrodes. (D) Effect of delay time between training data and test data: for the session shown in (A), marked 'day 1', and for another session (marked 'day 2'), hand movement in 20 minute time windows towards the session ends was reconstructed in an online manner, but instead of using the same sequences as training data (with a 1 step delay), earlier sequences were used. (A) shows the time window that corresponded to opening a maximal time difference of 60 minutes between the last inferred sequence (at minute 90) and the last learned sequence (at minute 30). CCs for the test windows (averaged over movement dimensions) are shown as a function of delay time for the two days.

Figure 2: Algorithm comparisons: 10 hand-movement sessions were each divided into 3 equally long blocks of 25-35 minutes (the last few minutes were discarded since during this time the monkey often stopped paying attention to the task) to create 30 data-sets. The following algorithms were run on each data-set in an online manner: KARMA, lin-y-KARMA, ARMA and SVR. All four were implemented as versions of KARMA by varying its parameters. In all cases a support vector cache of 5000 was enforced as described in section 4.
A Kalman Filter was also implemented so as to allow for a window of observations as input and a window of movement positions as the state (this version was previously shown to outperform the standard Kalman Filter, which has $r = s = 1$). It was also learned in an online manner, replacing inverses with pseudo-inverses where necessary to avoid non-invertible matrices when data-sets are too small. Results are shown as scatter plots of CC values (30 data-sets and 3 movement dimensions produce 90 points per scatter plot). Each point compares KARMA to another algorithm in a specific data-set and movement dimension pair. Points above the diagonal mean a higher score for KARMA. The graph on the left shows win-scores for each pair of algorithms. Win-score is defined as the percentage of times one algorithm obtained a higher CC than the other. Edge direction points to the loser. The movement reconstruction on the right (horizontal position only) shows KARMA vs. SVR in a sample 18 second window. Probably the largest source of variability in BMI performance across subjects is the location and number of electrodes in the brain. To test how performance with KARMA would degrade if electrodes were lost, we simulated an electrode-dropping experiment (see figure 1.C). Let's consider a CC of 0.7 as a minimal reasonable performance quality. Let's also assume that, with 50 repetitions, minimal values roughly represent a worst-case scenario in terms of mishaps that do not involve movement of the whole array. Then it seems that we can get by easily with only a third of the array (28 electrodes) operational. In terms of medians (average bad luck) we could do with less. Maximal values are relevant in case we need to choose the good electrodes. This may be relevant in situations involving implanted chips that extract and wirelessly transmit neural activity and may have constraints in terms of energy expenditure or bandwidth. Most BMI experiments (e.g.
[2, 13] with the exception of [1]) use fixed models that are learned once at the beginning of a session. Our version of KARMA is adaptive. In order to check the importance of adapting to changes in the recorded neural activity we ran an experiment in which variable delays were opened between the inference times and the latest available data for learning; i.e. after inference of sequence $\mathbf{y}^i$ from $\mathbf{x}^i$, the sequence pair $(\mathbf{x}^{i-k}, \mathbf{y}^{i-k})$, where $k > 0$, was first made available. Figure 1.D shows a degradation in performance during the test periods as the delay grows for two recording sessions. This suggests that adaptability of the algorithm is important for keeping high performance levels. There are two possible reasons for the observed degradation. One is changes in the neural activity within the brain. The other is changes in the process that extracts the neural activity in the BMI (e.g. electrode movements). Differentiating between the two options is a subject of future work. In the BMI setting, feedback is involved. The subject might be able to effortlessly modulate its neural activity and keep it in good fit with the algorithm. In section 5 we address this issue by running BMI sessions in which the model was frozen. Comparison of KARMA to other methods is shown in figure 2. It is clear that KARMA performs much better than ARMA and the Kalman Filter, suggesting that a nonlinear interpretation of neural activity is helpful. While KARMA is statistically significantly better than SVR, the differences in CC values are not very big (note the scaling difference of the scatter plots). Looking at the movement reconstruction comparison, it seems that SVR has a good average estimate of the current position; however, missing a movement model (SVR has the form $\hat{\mathbf{y}}^i_t = W_\phi \phi(\mathbf{x}^i_{t-s:t})$), it fluctuates rapidly around the true value. This fluctuation may not be very apparent in the CC values; however, it would make a BMI much more difficult to control.
Lin-y-KARMA uses a linear movement model (and has the form $\hat{\mathbf{y}}^i_t = A \hat{\mathbf{y}}^i_{t-r:t-1} + W_\phi \phi(\mathbf{x}^i_{t-s:t})$). Its performance is inferior to full KARMA. Having a nonlinear movement model means that different areas of the state-space get treated in locally relevant fashion. This might explain why full KARMA outperforms it. Note that the difference between lin-y-KARMA and SVR is not very significant (win-score of only 65.6%). Comparison to the Population Vector algorithm was also done; however, the PVA achieved especially bad results for our long sequences since it accumulates errors without any decay (this is less of a problem in BMI since the subject can correct accumulated errors). We therefore omit showing them here.

Figure 3: All graphs show success rates. The light, noisy plots are success rates in consecutive bins of 10 trials while the smoother plots are the result of running a smoothing filter on the noisy plots. Mixing mode was used on day 1 and part of day 2. Afterwards we switched to full BMI (mixing factor 100%). Hands were allowed to move freely until day 4. On day 4 both hands were comfortably restrained for the first time and though performance levels dropped, the monkey did not attempt to move its hands. On day 5 an attempt to freeze the model was made. When performance dropped and the monkey became agitated we restarted the BMI from scratch and performance improved rapidly. Day 6 consists of full BMI but with the targets not as far apart as with hand-control. This makes the task a bit easier and allowed for higher success rates. On days 7 and 8 a BMI block was interleaved with hand control blocks.
Only the BMI blocks are shown in the top graph. The full three blocks of day 8 are shown in the bottom right graph. The bottom left graph shows a recording session during which the model was frozen.

5 BMI experiment

The BMI experiment was conducted after approximately four months of neural recordings during which the monkey learned and performed the hand-control task. Figure 3 shows a trace of the first days of the BMI task. A movie showing footage from these days is in the supplementary material (video 2). To make the transition to BMI smooth, the first 1.5 days consisted of work in a mixed mode, during which control of the cursor was a linear combination of the hand position and KARMA's predictions. We did this in order to see if the monkey would accept a noisier control scheme than it was used to. During the next 1.5 days the cursor was fully controlled by KARMA, but the monkey kept moving as if it was doing hand-control, i.e. it made movements and corrections with its hands. On day 4 the monkey's free hand was comfortably restrained. Despite our concerns that the monkey would stop performing, it seemed impervious to this change, except that control over the cursor became more difficult. On days 5 and 6 we made some tweaks to the algorithm (tried to freeze learning and change the way in which correct movement trajectories are generated) and the task (tried to decrease target size and the distance between targets), which had some effect on task difficulty and on success rates. On days 8 and 9 we interleaved a BMI block with hand-control blocks. We saw that performance is better in hand-control than in BMI, but not drastically so. In the following sessions we discontinued all tweaking of the algorithm and we have seen some steady improvement in performance. We repeated the freezing of model learning on two days (one of these sessions appears in figure 3).
In all cases where we froze the model, we noticed that the monkey started experiencing difficulty in controlling the cursor after a period of 10-15 minutes, and stopped working completely when this happened. As stated earlier, in most BMI experiments the subjects interact with fixed models. One possible explanation for the deterioration is that because our monkey was trained with an adaptive model, it has no experience in conforming to a fixed model's constraints. Instead, the opposite burden (that of following a drifting source of neural activity) falls on the algorithm's shoulders. As mentioned in Section 2, in BMI mode no hand movements are performed, and model learning is therefore based on our guess of the desired cursor trajectory (the monkey's intended cursor movement). We chose to design the desired trajectory as a time-varying linear combination of the cursor trajectory that the monkey saw and the target location: $y^i_t = \left(1 - \frac{t}{t^i_f}\right)\hat{y}^i_t + \frac{t}{t^i_f}\,\tilde{y}^i$, where $\tilde{y}^i$ is the target location on trial $i$. Note that this trajectory starts at the observed cursor location at the trial start and ends at the target location (regardless of where the cursor actually was at the end of the trial).

6 Summary

This study was motivated by the view that lifting overly simplifying assumptions and integrating domain knowledge into machine learning algorithms can yield significant improvements in BMI technology. In turn, this technology can be used as a testbed for improved modeling of the interaction between the brain and the environment, especially in visuomotor control. We showed in open-loop hand-tracking experiments that incorporating a nonlinear movement model, interpreting the neural activity as a whole (rather than as the sum of contributions made by single neurons), and allowing the model to adapt to changes results in better predictions. The comparison was made against similar models that lack one or more of these properties.
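The time-varying blend used for the desired trajectory above is a one-liner to compute; the trial length and positions below are made-up values for illustration, not recorded data.

```python
import numpy as np

def desired_trajectory(y_hat, y_target, t_f):
    """Blend observed cursor positions toward the target:
    y_t = (1 - t/t_f) * y_hat_t + (t/t_f) * y_target."""
    t = np.arange(len(y_hat))[:, None]
    w = t / t_f
    return (1.0 - w) * y_hat + w * y_target

y_hat = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])  # observed cursor path
y_target = np.array([4.0, 4.0])                          # target location
y_des = desired_trajectory(y_hat, y_target, t_f=2)
print(y_des)
```

By construction the blended path starts at the observed cursor location and ends exactly at the target, matching the description in the text.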
Finally, we showed that this model can be used successfully in a real-time BMI setting, and that the added mathematical 'complications' result in a very intuitive and high quality interface.
Structured Ranking Learning using Cumulative Distribution Networks

Jim C. Huang, Probabilistic and Statistical Inference Group, University of Toronto, Toronto, ON, Canada M5S 3G4, jim@psi.toronto.edu
Brendan J. Frey, Probabilistic and Statistical Inference Group, University of Toronto, Toronto, ON, Canada M5S 3G4, frey@psi.toronto.edu

Abstract

Ranking is at the heart of many information retrieval applications. Unlike standard regression or classification, in which we predict outputs independently, in ranking we are interested in predicting structured outputs, so that misranking one object can significantly affect whether we correctly rank the other objects. In practice, the problem of ranking involves a large number of objects to be ranked, and either approximate structured prediction methods are required, or assumptions of independence between object scores must be made in order to make the problem tractable. We present a probabilistic method for learning to rank using the graphical modelling framework of cumulative distribution networks (CDNs), where we can take into account the structure inherent to the problem of ranking by modelling the joint cumulative distribution functions (CDFs) over multiple pairwise preferences. We apply our framework to the problem of document retrieval in the case of the OHSUMED benchmark dataset. We will show that the RankNet, ListNet and ListMLE probabilistic models can be viewed as particular instances of CDNs and that our proposed framework allows for the exploration of a broad class of flexible structured loss functionals for learning to rank.

1 Introduction

Ranking is the central problem for many information retrieval applications such as web search, collaborative filtering and document retrieval [8].
In these problems, we are given a set of objects to be ranked and a series of observations, where each observation consists of some subset of the objects, a feature vector, and some ordering of the objects in which highly ranked objects correspond to a higher relevance or degree of importance. The goal is then to learn a model which allows us to assign a score to new test objects; this often takes the form of a ranking function [2, 4] which assigns a higher score to objects with higher rankings. Unlike the canonical problems of regression or classification, in which we predict outputs independently of one another, in ranking we are interested in predicting structured outputs, as the rank of one item can only be determined given the scores of all other items, so that complex inter-dependencies exist between outputs. This requires measures of loss which are multivariate and structured. However, such ranking measures are typically difficult to optimize directly [3], making the problem of learning difficult. One previous approach has been to treat the problem as one of structured prediction [7], where the aim is to directly optimize ranking measures. Another approach has been to approximate these ranking measures with smooth differentiable loss functionals by formulating probabilistic models on pairwise preferences between objects (RankNet; [2]), or on ordered lists of objects (ListNet and ListMLE; [4, 13]). In practice, these methods either require approximating a learning problem with an intractable number of constraints, require observations containing complete orderings over the objects to be ranked, or must make independence assumptions on pairwise preferences.
In practice, however, we can take advantage of the fact that each observation in the training set only provides preference information about a small subset of the objects to be ranked, so that a sensible probabilistic representation is the probability of observing a partial ordering over nodes for a given observation. We will show that 1) a probability over orderings is equivalent to a probability over pairwise inequalities between objects to be ranked, and 2) this amounts to specifying a joint cumulative distribution function (CDF) over pairwise object preferences. We will present a framework for ranking using the recently developed probabilistic graphical modelling framework of CDNs, which compactly represents this joint CDF as a product of local functions [5]. While the problem of inference in CDNs was addressed in [5], here we address the problem of learning in CDNs in the context of ranking learning, where we estimate model parameters under a structured loss functional that accounts for dependencies between pairwise object preferences. We will then test the proposed framework on the OHSUMED dataset [8], a benchmark dataset used in information retrieval research. Finally, we will show that the frameworks proposed by [2, 4, 13] can be viewed as particular types of CDNs, so that novel classes of flexible structured loss functionals for ranking learning can be specified under our framework.

2 Cumulative distribution networks

The CDN [5] is an undirected graphical model in which the joint CDF F(z) over a set of random variables is represented as a product over functions defined over subsets of these variables. More formally,

$$F(\mathbf{z}) = \prod_{c \in C} \phi_c(\mathbf{z}_c), \qquad (1)$$

where $\phi_c(\mathbf{z}_c)$ is a function defined over some subset of variables. An example of a CDN is shown in Figure 1(a), along with an example bivariate density obtained by differentiating a product of 2 Gaussian CDF functions (Figure 1(b)).
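To make Equation (1) concrete, here is a minimal sketch of a joint CDF built as a product of local functions, each itself a product of univariate Gaussian CDFs (so each local function is a valid CDF over its arguments); the graph structure and parameters are invented for illustration, not taken from the paper.

```python
import math

def gauss_cdf(z, mu=0.0, sigma=1.0):
    """Univariate Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((z - mu) / (sigma * math.sqrt(2.0))))

# Local CDN functions over subsets of z = (z1, z2, z3). Since each phi_c is
# bounded in [0, 1] and monotone in its arguments, so is their product F(z).
def phi_a(z1, z2):
    return gauss_cdf(z1) * gauss_cdf(z2, mu=1.0)

def phi_b(z2, z3):
    return gauss_cdf(z2, sigma=2.0) * gauss_cdf(z3)

def F(z1, z2, z3):
    return phi_a(z1, z2) * phi_b(z2, z3)

print(F(0.0, 0.0, 0.0))
# "Marginalizing" z3 amounts to evaluating the CDF in the limit z3 -> +inf:
F_marg = F(0.0, 0.0, 50.0)   # approx F_{z1,z2}(0, 0)
print(F_marg)
```

The constant-time marginalization mentioned in the text falls out directly: sending a variable to infinity makes its CDF factors saturate at 1.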
In contrast to undirected models for probability density functions, the global normalization constraint on the CDF does not require computing a partition function and can be enforced locally for each φc(zc). Thus, in order for the CDN to represent a valid CDF, it is sufficient that each of the local functions φc satisfy all of the properties of a multivariate CDF. These properties include the requirements that each CDN function φc be bounded between 0 and 1, and that each φc is monotonically non-decreasing with respect to all of its argument variables zc, so that the joint CDF F(z) is also bounded between 0 and 1 and is monotonically non-decreasing with respect to any and all subsets of variables. In a CDN, disjoint sets of variables A, B are marginally independent if they share no functions in common, and disjoint sets of variables A, B are conditionally independent given variable set C if no path linking any variable in A to any variable in B passes through C. In addition, marginalization of variables in a CDN can be done in constant time via a trivial maximization of the joint CDF with respect to the variables being marginalized. The problem of inference in a CDN can be solved efficiently using a message-passing algorithm called derivative-sum-product. For detailed derivations of the properties of CDNs, including marginal and conditional independence properties, we refer the reader to [5]. The CDN framework provides us with a means to compactly represent multivariate joint CDFs over many variables: in the next section we will formulate a loss functional for learning to rank which takes on such a form.

Figure 1: a) Cumulative distribution network representing the joint CDF F(z1, z2, z3, z4, z5) = φa(z2)φb(z1, z2, z3)φc(z3)φd(z4)φe(z3, z4, z5)φf(z5); b) example of a bivariate density P(x, y) corresponding to differentiating a CDF F(x, y) obtained from taking the product of 2 Gaussian bivariate CDFs.
3 Structured loss functionals for ranking learning

We now proceed to formulate the problem of learning to rank in a structured setting. Suppose we wish to rank N nodes in the set V = {V1, · · · , VN} and we are given a set of observations D1, · · · , DT . Each observation Dt consists of an ordering over the nodes in a subset Vt ⊆ V, where each node is provided with a corresponding feature vector x ∈ RL which may be specific to the given observation. The orderings could be provided in the form of ordinal node labels, or in the form of pairwise node preferences. The orderings can be represented as a directed graph over the nodes in which a directed edge e = (Vi → Vj) is drawn between two nodes Vi, Vj iff Vi is preferred to node Vj, which we denote as Vi ≻ Vj. In general, we assume that for any given observation, we observe a partial ordering over nodes, with complete orderings being a special case. We denote the above graph consisting of edges e = (Vi → Vj) ∈ Et and the node set Vt as the order graph Gt = (Vt, Et) for observation Dt, so that $D_t = \{G_t, \{\mathbf{x}^t_n\}_{V_n \in V_t}\}$. A toy example of an observation over 4 nodes is shown in Figure 2(a). Note that under this framework, the absence of an edge between two nodes Vi, Vj in the order graph indicates we cannot assert any preference between the two nodes for the given observation.

Figure 2: a) An example of an order graph over 4 nodes V1, V2, V3, V4 corresponding to the objects to be ranked. The graph represents the set of preference relationships V1 ≻ V2, V1 ≻ V3, V1 ≻ V4, V2 ≻ V4, V3 ≻ V4; b) learning the ranking function from training data. The training data consists of a set of order graphs over subsets of the objects to be ranked. For each order graph, the ranking function ρ maps each node to the real line. The goal is to learn ρ such that we minimize our probability of misranking on test observations.
We now define ρ : V → R as a ranking function which assigns scores to nodes via their feature vectors, so that for node Vi,

Si = ρ(Vi) + πi, (2)

where Si is a scalar and πi is a random variable specific to node Vi. We wish to learn such a function given multiple observations D1, · · · , DT so that we minimize the probability of misranking on test observations (Figure 2(b)). The above model allows us to account for the fact that the amount of uncertainty about a node's rank may depend on unobserved features for that node (e.g., documents associated with certain keywords might have less variability in their rankings than other documents). Under this model, the preference relation Vi ≻ Vj is completely equivalent to

ρ(Vi) + πi ≥ ρ(Vj) + πj ⇔ πij = πj − πi ≤ ρ(Vi) − ρ(Vj), (3)

where we have defined πij as a preference variable between nodes Vi, Vj. For each edge e = (Vi → Vj) ∈ Et in the order graph, we can define r(ρ; e, Dt) ≡ ρ(Vi) − ρ(Vj) and collect these into the vector r(ρ; Gt) ∈ R|Et|. Similarly, let πe ≡ πij. Having defined the preferences, we must select an appropriate loss measure. A sensible metric here [13] is the joint probability of observing the order graph Gt = (Vt, Et) corresponding to the partial ordering of nodes in Vt. (It is crucial to note that node labels may in general not be directly comparable with one another from one observation to the next; e.g., documents with the same rating might not truly have the same degree of relevance for different queries, or the scale of the labels may be arbitrary.) From Equation (3), this will take the form of a probability measure over events of the type πe ≤ r(ρ; e, Dt), so that we obtain

$$\Pr\{E_t \mid V_t, \rho\} = \Pr\Big\{\bigcap_{e \in E_t} \big[\pi_e \le r(\rho; e, D_t)\big]\Big\} = F_\pi\big(r(\rho; G_t)\big), \qquad (4)$$

where Fπ is the joint CDF over the preference variables πe. Given an observation Dt, the goal is to learn the ranking function ρ by maximizing Equation (4).
Note that under this framework, the set of edges Et corresponding to the set of pairwise preferences are treated as random variables which may have a high degree of dependence between one another, so that Fπ(r(ρ; Gt)) is a joint CDF over multiple pairwise preferences. The problem of learning the ranking function then consists of scoring multiple nodes simultaneously whilst accounting for dependencies between node scores. Now, if we are given multiple independent (but not necessarily identically distributed) observations D = {D1, · · · , DT }, we can define a structured loss functional

$$L(\rho, F_\pi, D) = -\sum_{t=1}^{T} \log F_\pi\big(r(\rho; G_t)\big), \qquad (5)$$

where each term in the loss functional depends on multiple preference relationships specified by the order graph for observation t. The problem of learning then consists of solving the optimization problem

$$\inf_{\rho, F_\pi} L(\rho, F_\pi, D). \qquad (6)$$

In general, the above structured loss functional may be difficult to specify, as it takes on the form of a joint CDF over many random variables with a high degree of inter-dependency, which may require a large number of parameters to specify. We can, however, compactly represent this using the CDN framework, as we will now show.

3.1 Transforming order graphs into CDNs

Figure 3: Transforming the order graph Gt into a CDN. For each edge e = (Vi → Vj) in the order graph (left), a preference variable πij is created. All such random variables are then connected to one another in a CDN (right), allowing for complex dependencies between preferences.

The representation of the structured loss functional in Equation (5) as a CDN consists of transforming the order graph Gt for each observation into a set of variable nodes in a CDN. More precisely, for each edge e = (Vi → Vj) in the order graph, the preference variable πij is created.
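A sketch of Equations (4) and (5): given node scores and an order graph's edges, form the vector r and evaluate a negative log joint CDF. The fully factorized logistic CDF used for Fπ here is a placeholder corresponding to the independent-preferences special case, not the connected CDN of Section 3.1; scores and edges are made up.

```python
import math

def r_vector(scores, edges):
    """r_e = rho(V_i) - rho(V_j) for each preferred pair (i, j)."""
    return [scores[i] - scores[j] for (i, j) in edges]

def neg_log_cdf(r, w=1.0):
    """Structured loss -log F_pi(r) with a fully factorized logistic CDF,
    i.e. independent preference variables (placeholder for a general CDN)."""
    return -sum(math.log(1.0 / (1.0 + math.exp(-w * re))) for re in r)

scores = {0: 2.0, 1: 1.0, 2: 0.5}   # rho(V_i) for three nodes
edges = [(0, 1), (0, 2), (1, 2)]    # order graph: V0 > V1, V0 > V2, V1 > V2
r = r_vector(scores, edges)
loss = neg_log_cdf(r)
print(r, loss)
```

Larger margins r_e drive each factor toward 1 and the loss toward 0, which is exactly the behavior a ranking loss should have.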
All such variables are then connected to one another in a CDN (Figure 3), where the pattern of connectivity used will determine the set of dependencies between these preferences πij, as given by the marginal and conditional independence properties of CDNs [5]. Thus, for any given CDN topology, each preference node πe is a member of some neighborhood of preference nodes πe′, so that neighboring preference nodes are marginally dependent on one another. One possible concern here is that we may require a fully connected CDN topology over all possible pairwise preferences between all nodes in order to capture all of these dependencies, leading to a model which is cumbersome to learn. In practice, because any observation only conveys information about a small subset of the nodes in V, and because we observe partial orderings between these, the order graph is sparse, and so the number of preference nodes in the CDN for the given observation will be much smaller than the worst-case number of all possible pairwise preferences between nodes. Furthermore, we do not have to store a large CDN in memory during training, as we only need to store a single CDN over a relatively small number of preference variables for the current observation. We can thus perform ranking learning in an online fashion by constructing a single CDN for each observation Dt and optimizing the loss −log Fπ(r(ρ; Gt)) defined by that CDN for the given observation.

4 StructRank: a probabilistic model for structured ranking learning with node labels

Suppose now that each node in the training set is provided with an ordinal node label y along with a feature vector x. For any given order graph over some subset of the nodes, the node labels y allow us to establish edges in the order graph, so that an edge Vi → Vj exists between two nodes Vi, Vj iff yi > yj.
We can then parametrically model the ranking function $\rho(V) \equiv \rho(\mathbf{x}; \mathbf{a})$ (where $\mathbf{a}$ is a set of parameters) using a Nadaraya-Watson [10, 12] local estimator with a Gaussian kernel, so that

$$\rho(\mathbf{x}; \mathbf{a}) = \frac{\sum_i K(\mathbf{x}_i, \mathbf{x}; \mathbf{a})\, y_i}{\sum_i K(\mathbf{x}_i, \mathbf{x}; \mathbf{a})}, \qquad K(\tilde{\mathbf{x}}, \mathbf{x}; \mathbf{a}) = \exp\Big(-\tfrac{1}{2}(\mathbf{x} - \tilde{\mathbf{x}})^T A (\mathbf{x} - \tilde{\mathbf{x}})\Big), \qquad (7)$$

where the summations are taken over all feature vector-label pairs in the training set, with $A = \mathrm{diag}(a_1^2, \cdots, a_L^2)$. Consider now an edge $e = (V_i \to V_j)$ in the order graph and define $r_e \equiv r_e(\mathbf{a}; D_t) = \rho(\mathbf{x}^t_i; \mathbf{a}) - \rho(\mathbf{x}^t_j; \mathbf{a})$. For a given order graph, the structured loss functional $L(\theta; D_t)$ is given by

$$L(\theta; D_t) = -\log F_\pi\big(r(\rho; G_t)\big) = -\sum_{e, e'} \log \phi\big(r_e(\mathbf{a}; D_t), r_{e'}(\mathbf{a}; D_t)\big), \qquad (8)$$

where $\theta = [\mathbf{a}\ w_1\ w_2]^T$ is the parameter vector and the function $\phi(r_1, r_2)$ is set to a multivariate sigmoidal function, so that

$$\phi(r_1, r_2) = \frac{1}{1 + \exp(-w_1 r_1) + \exp(-w_2 r_2)}, \qquad w_1, w_2 \geq 0, \qquad (9)$$

where $w_1, w_2$ are weights parameterizing the CDN function $\phi(r_1, r_2)$. It can be readily shown that this choice of CDN function, when combined with the constraints $w_1, w_2 > 0$, satisfies all of the necessary and sufficient conditions required for the CDN to represent a valid CDF, as $0 \leq \phi(r_1, r_2) \leq 1$ and $\phi$ is monotonically non-decreasing with respect to all of its arguments. For the given CDN and ranking functions, the learning problem for the current observation $D_t$ then becomes

$$\inf_{\theta} \sum_t \sum_{e, e'} \log\Big(1 + \exp\big(-w_1 r_e(\mathbf{a}; D_t)\big) + \exp\big(-w_2 r_{e'}(\mathbf{a}; D_t)\big)\Big) \quad \text{s.t.} \quad \theta \geq 0,\ \|\theta\|_1 \leq t, \qquad (10)$$

where we have introduced a regularizer in the form of an L1-norm constraint. Notice that our model has one parameter per data feature and 2 parameters defining the CDN for any given observation. The gradient $\nabla_{\mathbf{a}} L(\theta; D_t)$ and the derivatives with respect to the CDN function weights $w_1, w_2$ for a given observation $D_t$ are provided in the Supplementary Information.
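A sketch of the Nadaraya-Watson ranking function of Equation (7) and the bivariate sigmoid CDN function of Equation (9); the training pairs and parameter values below are fabricated for illustration, not from the OHSUMED experiments.

```python
import numpy as np

def rho(x, X_train, y_train, a):
    """Nadaraya-Watson local estimator with a diagonal Gaussian kernel."""
    A = np.diag(a ** 2)
    d = X_train - x                               # (n, L) differences
    k = np.exp(-0.5 * np.sum((d @ A) * d, axis=1))
    return np.dot(k, y_train) / np.sum(k)

def phi(r1, r2, w1, w2):
    """Bivariate sigmoid CDN function; a valid 2-D CDF for w1, w2 >= 0."""
    return 1.0 / (1.0 + np.exp(-w1 * r1) + np.exp(-w2 * r2))

X_train = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
y_train = np.array([0.0, 1.0, 2.0])   # ordinal relevance labels
a = np.array([1.0, 1.0])

s = rho(np.array([1.0, 1.0]), X_train, y_train, a)
print(s)
print(phi(2.0, 2.0, w1=1.0, w2=1.0))
```

Querying at the middle training point returns its own label by symmetry, and phi increases toward 1 as the preference margins r1, r2 grow, as required for a CDF.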
5 Results

To compare the performance of our proposed framework to other methods, we use the following three metrics commonly used in information retrieval research: Precision, Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) [6]. The NDCG accounts for the fact that less relevant documents are less likely to be examined by a user, by putting more weight on highly relevant documents than on marginally relevant ones. We downloaded the OHSUMED dataset provided as part of the LETOR 2.0 benchmark [8]. The dataset consists of a set of 106 queries, with a feature vector and relevance judgment provided for each query-document pair, where queries correspond to medical searches associated with patient and topic information. There are a total of 16,140 query-document pairs with relevance judgments provided by humans on three ordinal levels: definitely relevant, partially relevant or not relevant. For any given query, we used the ordinal labels y for each document in the query in order to establish preferences between documents for that query. Each node in the order graph is provided with 25 query-specific features including term frequency, document length, BM25 and LMIR features, as well as combinations thereof [1, 11, 14]. In accordance with the nomenclature above, we use the terms query and observation interchangeably. The OHSUMED dataset is provided in the form of 5 training/validation/test splits of sizes 63/21/22 observations each. To ensure that features are comparable across all observations, we normalized each feature vector within each observation as described in [8].

Figure 4: a) Average NDCG as a function of truncation level n for the OHSUMED dataset. NDCG values are averaged over 5 cross-validation splits; b) mean average precision (MAP) as a function of truncation level n; c) mean average precision value for several methods.
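A minimal sketch of the NDCG metric mentioned above, using the common exponential-gain/log-discount convention [6]; the relevance labels here are made up for illustration.

```python
import math

def dcg(relevances, n):
    """Discounted cumulative gain at truncation level n."""
    return sum((2 ** rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:n]))

def ndcg(relevances, n):
    """NDCG@n: DCG of the predicted ranking divided by the ideal DCG."""
    ideal = dcg(sorted(relevances, reverse=True), n)
    return dcg(relevances, n) / ideal if ideal > 0 else 0.0

ranked_rels = [2, 0, 1, 2, 0]   # relevance labels in predicted rank order
print(ndcg(ranked_rels, 3))
```

Because the discount shrinks with position, swapping a highly relevant document into an early rank raises NDCG far more than fixing a mistake far down the list, which is the property the text appeals to.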
We performed learning of our model using a constrained stochastic gradient algorithm in which, for each observation, we prevent updates from violating the inequality constraints in the optimization problem defined by Equation (10) by reducing the learning rate α until the update becomes feasible. We set the default learning rate to α = 0.5 and randomly initialized the model parameters a, w1, w2 in the range [0, 1]. This optimization was run for 10 epochs (passes through the training set), and α was scaled by 1/√2 at the end of each epoch. We set the regularization parameter using the validation set for a given data split. Due to the nonconvex nature of the optimization problem, for each cross-validation split we performed learning using 3 random initializations, and we then selected the model which achieved the best MAP score on the validation set. We tested a fully connected CDN, which models full interdependence between preferences, and a completely disconnected CDN, which models preferences independently of one another. The above 3 performance metrics are shown in Figures 4(a), 4(b) and 4(c), in addition to the performances of seven state-of-the-art methods which are part of the LETOR 2.0 benchmarks. At the time of submission, numerical performance scores for ListMLE [13] were not available and so were not included in these plots. With the exception of ListNet and ListMLE, none of the above methods explicitly model dependencies between pairwise preferences. As can be seen, accounting for dependencies between pairwise preferences provides a significant gain in performance compared to modelling preferences as being independent. Additional results on the TREC2004 dataset from LETOR 2.0 are provided in the Supplemental Information.

6 Discussion

We have proposed here a novel framework for ranking learning using structured loss functionals. We have shown that the problem of learning to rank can be reduced to maximizing a joint CDF over multiple pairwise preferences.
We have shown how to compactly represent this using the CDN framework and have applied it to the OHSUMED benchmark dataset. We have demonstrated that representing the dependencies between pairwise preferences leads to improved performance over modelling preferences as being independent of one another.

6.1 Relation to RankNet and ListNet/ListMLE

The probability models for ranking proposed by [2, 4, 13] can all be expressed as special cases of models defined by different CDNs. In the case of RankNet [2], the corresponding probability over a given pairwise preference Vi ≻ Vj is modelled by a logistic function of ρ(xi) − ρ(xj), and the model was optimized using cross-entropy loss. The joint probability of preferences can thus be represented as a completely disconnected CDN with logistic functions, in which all pairwise object preferences are treated as being independent. In the case of ListNet [4] and ListMLE [13], the probability of observing a complete ordering V1 ≻ · · · ≻ VN over N objects is defined as a product of functions of the type

$$P(V_1 \succ \cdots \succ V_N \mid D) = \prod_{i=1}^{N} \frac{\exp(\rho(\mathbf{x}_i))}{\sum_{k=i}^{N} \exp(\rho(\mathbf{x}_k))} = \prod_{i=1}^{N} \frac{1}{1 + \sum_{k=i+1}^{N} \exp\big(-(\rho(\mathbf{x}_i) - \rho(\mathbf{x}_k))\big)} = \prod_{i=1}^{N} \phi_i(\mathbf{r}_i),$$

which we see is equivalent to a CDN with N multivariate sigmoids. As noted by the authors of [13], the above model is also an example of the Plackett-Luce class of probability models over object scores [9]. In addition, the ListNet/ListMLE frameworks both require a complete ordering over objects by definition: under the CDN framework, we can model partial orderings, with complete orderings as a special case. The connections between RankNet, ListNet and ListMLE and the CDN framework are illustrated in Supplementary Figure 2. Our proposed framework unifies the above views of ranking as different instantiations of a joint CDF over pairwise preferences and hence as particular types of CDNs.
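The Plackett-Luce form above is easy to verify numerically: the softmax product and the multivariate-sigmoid product are algebraically identical. The scores below are arbitrary.

```python
import math

def plackett_luce(scores):
    """P(ordering) for scores listed in rank order (best first):
    prod_i exp(s_i) / sum_{k >= i} exp(s_k)."""
    p = 1.0
    for i in range(len(scores)):
        p *= math.exp(scores[i]) / sum(math.exp(s) for s in scores[i:])
    return p

def sigmoid_product(scores):
    """Equivalent rewriting as a product of multivariate sigmoids."""
    p = 1.0
    for i in range(len(scores)):
        p *= 1.0 / (1.0 + sum(math.exp(-(scores[i] - scores[k]))
                              for k in range(i + 1, len(scores))))
    return p

s = [2.0, 1.0, 0.5]
print(plackett_luce(s), sigmoid_product(s))  # the two forms agree
```

Dividing each softmax term's numerator and denominator by exp(s_i) gives the sigmoid form directly, which is the step the displayed equation performs.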
This allows us to consider flexible joint CDFs defined over different subsets of object preferences and over different families of CDN functions, so as to capture various data-specific properties.

6.2 Future directions

Our work here suggests several future directions for research. In [13], it was shown that the log-likelihood corresponding to the probability of an ordering is a good surrogate for the 0-1 loss between the predicted ordering and the true ordering, as the former is differentiable and penalizes mis-orderings in a sensible way. One could investigate connections between the structured loss functionals proposed in this paper and other ranking measures such as NDCG. Another possible direction is to generalize StructRank to products over Gaussian multivariate CDFs or other classes of functions which satisfy the requirements of CDN functions, as in this paper we have elected to use a product of bivariate sigmoids φ(re, re′) to represent our loss functional. Also, it may be fruitful to investigate different CDN topologies: for example, we found that averages of randomly connected CDNs are very fast to learn and perform comparably to the fully connected CDN we used in this paper (data not shown). In addition, we have only investigated representing the loss functional using a single CDN function: this could easily be generalized to K functions. Lastly, alternatives to the Nadaraya-Watson local estimator, such as the neural networks used in [2, 4, 13], can be investigated.

References

[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern information retrieval. Addison Wesley, 1999. [2] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton and G. Hullender. Learning to rank using gradient descent. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML), 2005. [3] C.J.C. Burges, R. Ragno and Q.V. Le. Learning to rank with nonsmooth cost functions.
In Proceedings of the Nineteenth Annual Conference on Neural Information Processing Systems (NIPS), 2007. [4] Z. Cao, T. Qin, T.Y. Liu, M.F. Tsai and H. Li. Learning to rank: from pairwise approach to listwise approach. In Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML), 2007. [5] J.C. Huang and B.J. Frey. Cumulative distribution networks and the derivative-sum-product algorithm. In Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI), 2008. [6] K. Jarvelin and J. Kekalainen. Cumulated evaluation of IR techniques, ACM Information Systems, 2002. [7] T. Joachims. A support vector method for multivariate performance measures. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML), 2005. [8] T.Y. Liu, J. Xu, T. Qin, W. Xiong and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. LR4IR 2007, in conjunction with SIGIR 2007, 2007. [9] J. I. Marden. Analyzing and modeling rank data. CRC Press, 1995. [10] E.A. Nadaraya. On estimating regression. Theory of Probability and its Applications 9(1), pp. 141-142, 1964. [11] S.E. Robertson. Overview of the OKAPI projects. Journal of Documentation 53 (1), pp. 3-7, 1997. [12] G.S. Watson. Smooth regression analysis. The Indian Journal of Statistics. Series A 26, pp. 359-372, 1964. [13] F. Xia, T.Y. Liu, J. Wang, W. Zhang and H. Li. Listwise approach to learning to rank - theory and algorithm. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML), 2008. [14] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR 2001, 2001.
Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of ℓ1-regularized MLE

Pradeep Ravikumar†, Garvesh Raskutti†, Martin J. Wainwright†∗ and Bin Yu†∗
Department of Statistics†, Department of EECS∗, University of California, Berkeley
{pradeepr,garveshr,wainwright,binyu}@stat.berkeley.edu

Abstract

We consider the problem of estimating the graph structure associated with a Gaussian Markov random field (GMRF) from i.i.d. samples. We study the performance of the ℓ1-regularized maximum likelihood estimator in the high-dimensional setting, where the number of nodes in the graph p, the number of edges in the graph s, and the maximum node degree d are allowed to grow as a function of the number of samples n. Our main result provides sufficient conditions on (n, p, d) for the ℓ1-regularized MLE estimator to recover all the edges of the graph with high probability. Under some conditions on the model covariance, we show that model selection can be achieved for sample sizes n = Ω(d² log(p)), with the error decaying as O(exp(−c log(p))) for some constant c. We illustrate our theoretical results via simulations and show good correspondence between the theoretical predictions and the behavior observed in simulations.

1 Introduction

The area of high-dimensional statistics deals with estimation in the "large p, small n" setting, where p and n correspond, respectively, to the dimensionality of the data and the sample size. Such high-dimensional problems arise in a variety of applications, among them remote sensing, computational biology and natural language processing, where the model dimension may be comparable to or substantially larger than the sample size. It is well known that such high-dimensional scaling can lead to dramatic breakdowns in many classical procedures. In the absence of additional model assumptions, it is frequently impossible to obtain consistent procedures when p ≫ n.
Accordingly, an active line of statistical research is based on imposing various restrictions on the model—for instance, sparsity, manifold structure, or graphical model structure—and then studying the scaling behavior of different estimators as a function of sample size n, ambient dimension p, and additional parameters related to these structural assumptions. In this paper, we study the problem of estimating the graph structure of a Gauss Markov random field (GMRF) in the high-dimensional setting. This graphical model selection problem can be reduced to the problem of estimating the zero-pattern of the inverse covariance or concentration matrix Θ*. A line of recent work [1, 2, 3, 4] has studied estimators based on minimizing the Gaussian negative log-likelihood penalized by the ℓ1 norm of the entries (or the off-diagonal entries) of the concentration matrix. The resulting optimization problem is a log-determinant program, which can be solved in polynomial time with interior point methods [5], or by faster coordinate descent algorithms [3, 4]. In recent work, Rothman et al. [1] have analyzed some aspects of high-dimensional behavior, in particular establishing consistency in Frobenius norm under certain conditions on the model covariance and under certain scalings of the sparsity, sample size, and ambient model dimension. The main contribution of this paper is to provide sufficient conditions for model selection consistency of the ℓ1-regularized Gaussian maximum likelihood estimator. It is worth noting that such a consistency result for structure learning of Gaussian graphical models cannot be derived from Frobenius norm consistency alone. For any concentration matrix Θ, denote the set of its non-zero off-diagonal entries by E(Θ) = {(s, t) | s ≠ t, Θ_st ≠ 0}. (As will be clarified below, the notation E alludes to the fact that this set corresponds to the edges in the graph defining the GMRF.)
Under certain technical conditions to be specified, we prove that the ℓ1-regularized (on off-diagonal entries of Θ) Gaussian MLE recovers this edge set with high probability, meaning that P[E(Θ̂) = E(Θ*)] → 1. In many applications of graphical models (e.g., protein networks, social network analysis), it is this edge structure itself, as opposed to the weights Θ*_st on the edges, that is of primary interest. Moreover, we note that model selection consistency is useful even when one is interested in convergence in spectral or Frobenius norm; indeed, having extracted the set E(Θ*), we could then restrict to this subset, and estimate the non-zero entries of Θ* at the faster rates applicable to the reduced dimension. The remainder of this paper is organized as follows. In Section 2, we state our main result, discuss its connections to related work, and some of its consequences. Section 3 provides an outline of the proof. In Section 4, we provide some simulations that illustrate our results. Notation For the convenience of the reader, we summarize here notation to be used throughout the paper. Given a vector u ∈ R^d and parameter a ∈ [1, ∞], we use ∥u∥_a to denote the usual ℓ_a norm. Given a matrix U ∈ R^{p×p} and parameters a, b ∈ [1, ∞], we use |||U|||_{a,b} to denote the induced matrix-operator norm max_{∥y∥_a = 1} ∥Uy∥_b; see [6] for background. Three cases of particular importance in this paper are the spectral norm |||U|||_2, corresponding to the maximal singular value of U; the ℓ∞/ℓ∞-operator norm, given by
|||U|||_∞ := max_{j=1,...,p} Σ_{k=1}^{p} |U_{jk}|,   (1)
and the ℓ1/ℓ1-operator norm, given by |||U|||_1 = |||U^T|||_∞. Finally, we use ∥U∥_∞ to denote the element-wise maximum max_{i,j} |U_{ij}|; note that this is not a matrix norm, but rather a norm on the vectorized form of the matrix. For any matrix U ∈ R^{p×p}, we use vec(U) ∈ R^{p²} to denote its vectorized form, obtained by stacking up the rows of U.
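As a concrete illustration of the norms defined in the Notation paragraph, the following numpy sketch (the example matrix U and all variable names are ours, not the paper's) computes each quantity for a small case:

```python
import numpy as np

# Hypothetical 2x2 example matrix (ours, for illustration only).
U = np.array([[1.0, -2.0],
              [3.0,  4.0]])

# Spectral norm |||U|||_2: maximal singular value of U.
spectral = np.linalg.norm(U, 2)

# l_inf/l_inf operator norm of eq. (1): maximum absolute row sum.
op_inf = np.abs(U).sum(axis=1).max()

# l_1/l_1 operator norm: |||U^T|||_inf, i.e. maximum absolute column sum.
op_one = np.abs(U).sum(axis=0).max()

# Element-wise maximum ||U||_inf (a norm on vec(U), not a matrix norm).
elem_max = np.abs(U).max()

# vec(U): stack the rows of U into a vector in R^{p^2}.
vec_U = U.reshape(-1)
```

For this U, the maximum absolute row sum is 3 + 4 = 7 and the maximum absolute column sum is 2 + 4 = 6, which matches the identity |||U|||_1 = |||U^T|||_∞.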
We use ⟨⟨U, V⟩⟩ := Σ_{i,j} U_{ij} V_{ij} to denote the trace inner product on the space of symmetric matrices. Note that this inner product induces the Frobenius norm |||U|||_F := (Σ_{i,j} U_{ij}²)^{1/2}. Finally, for asymptotics, we use the following standard notation: we write f(n) = O(g(n)) if f(n) ≤ c g(n) for some constant c < ∞, and f(n) = Ω(g(n)) if f(n) ≥ c′ g(n) for some constant c′ > 0. The notation f(n) ≍ g(n) means that f(n) = O(g(n)) and f(n) = Ω(g(n)). 2 Background and statement of main result In this section, we begin by setting up the problem, with some background on Gaussian MRFs and ℓ1-regularization. We then state our main result, and discuss some of its consequences. 2.1 Gaussian MRFs and ℓ1-penalized estimation Consider an undirected graph G = (V, E) with p = |V| vertices, and let X = (X_1, . . . , X_p) denote a p-dimensional Gaussian random vector, with variate X_i identified with vertex i ∈ V. A Gauss-Markov random field (MRF) is described by a density of the form
f(x_1, . . . , x_p; Θ*) = det(Θ*)^{1/2} / (2π)^{p/2} · exp(−(1/2) x^T Θ* x).   (2)
As illustrated in Figure 1, Markov structure is reflected in the sparsity pattern of the inverse covariance or concentration matrix Θ*, a p × p symmetric matrix. In particular, by the Hammersley–Clifford theorem [7], it must satisfy Θ*_{ij} = 0 for all (i, j) ∉ E. Consequently, the problem of graphical model selection is equivalent to estimating the off-diagonal zero-pattern of the concentration matrix—that is, the set E(Θ*) := {(i, j) ∈ V × V | i ≠ j, Θ*_{ij} ≠ 0}. In this paper, we study the minimizer of the ℓ1-penalized Gaussian negative log-likelihood. Letting ⟨⟨A, B⟩⟩ := Σ_{i,j} A_{ij} B_{ij} be the trace inner product on the space of symmetric matrices, this objective function takes the form
Θ̂ = arg min_{Θ ⪰ 0} { ⟨⟨Θ, Σ̂⟩⟩ − log det(Θ) + λ_n ∥Θ∥_{1,off} } = arg min_{Θ ⪰ 0} g(Θ; Σ̂, λ_n).   (3)
Figure 1. (a) Simple undirected graph.
A Gauss Markov random field has a Gaussian variable X_i associated with each vertex i ∈ V. This graph has p = 5 vertices, maximum degree d = 3 and s = 6 edges. (b) Zero pattern of the inverse covariance Θ* associated with the GMRF in (a). The set E(Θ*) corresponds to the off-diagonal non-zeros (white blocks); the diagonal is also non-zero (grey squares), but these entries do not correspond to edges. The black squares correspond to non-edges, or zeros in Θ*. Here Σ̂ denotes the sample covariance—that is, Σ̂ := (1/n) Σ_{ℓ=1}^{n} X^{(ℓ)} [X^{(ℓ)}]^T, where each X^{(ℓ)} is drawn in an i.i.d. manner according to the density (2). The quantity λ_n > 0 is a user-defined regularization parameter, and ∥Θ∥_{1,off} := Σ_{i≠j} |Θ_{ij}| is the off-diagonal ℓ1 regularizer; note that it does not include the diagonal. Since the negative log-determinant is a strictly convex function [5], this problem always has a unique solution, so that there is no ambiguity in equation (3). We let E(Θ̂) = {(i, j) | i ≠ j, Θ̂_{ij} ≠ 0} denote the edge set associated with the estimate. Of interest in this paper is studying the probability P[E(Θ*) = E(Θ̂)] as a function of the graph size p (which serves as the “model dimension” for the Gauss-Markov model), the sample size n, and the structural properties of Θ*. In particular, we define both the sparsity index
s := |E(Θ*)| = |{(i, j) ∈ V × V | i ≠ j, Θ*_{ij} ≠ 0}|,   (4)
corresponding to the total number of edges, and the maximum degree or row cardinality
d := max_{j=1,...,p} |{i | Θ*_{ij} ≠ 0}|,   (5)
corresponding to the maximum number of non-zeros in any row of Θ*, or equivalently the maximum degree in the graph G, where we include the diagonal in the degree count. 2.2 Statement of main result Our assumptions involve the Hessian with respect to Θ of the objective function g defined in equation (3), evaluated at the true model Θ*.
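The objective in equation (3) is straightforward to evaluate numerically. Below is a minimal numpy sketch (the function name and test values are ours, not the paper's) that computes g(Θ; Σ̂, λ_n):

```python
import numpy as np

def g(Theta, Sigma_hat, lam):
    """Penalized negative Gaussian log-likelihood of eq. (3):
    <<Theta, Sigma_hat>> - log det(Theta) + lam * ||Theta||_{1,off}."""
    trace_inner = np.sum(Theta * Sigma_hat)     # <<A, B>> = sum_ij A_ij B_ij
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    # Off-diagonal l1 penalty: total abs sum minus the diagonal abs sum.
    off_l1 = np.abs(Theta).sum() - np.abs(np.diag(Theta)).sum()
    return trace_inner - logdet + lam * off_l1

# Sanity check at Theta = Sigma_hat = I_p: g = p - 0 + 0 = p.
p = 3
val = g(np.eye(p), np.eye(p), lam=0.1)
```

At the identity, the trace inner product contributes p, the log-determinant is 0, and the off-diagonal penalty vanishes, so the value is exactly p.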
Using standard results on matrix derivatives [5], it can be shown that this Hessian takes the form
Γ* := ∇²_Θ g(Θ)|_{Θ=Θ*} = (Θ*)^{−1} ⊗ (Θ*)^{−1},   (6)
where ⊗ denotes the Kronecker matrix product. By definition, Γ* is a p² × p² matrix indexed by vertex pairs, so that entry Γ*_{(j,k),(ℓ,m)} corresponds to the second partial derivative ∂²g / (∂Θ_{jk} ∂Θ_{ℓm}), evaluated at Θ = Θ*. When X has a multivariate Gaussian distribution, Γ* is the Fisher information of the model, and by standard results on cumulant functions in exponential families [8], we have the more specific expression Γ*_{(j,k),(ℓ,m)} = cov{X_j X_k, X_ℓ X_m}. For this reason, Γ* can be viewed as an edge-based counterpart to the usual covariance matrix Σ*. We define the set of non-zero off-diagonal entries in the model concentration matrix Θ*:
S(Θ*) := {(i, j) ∈ V × V | i ≠ j, Θ*_{ij} ≠ 0},   (7)
and let S̄(Θ*) := S(Θ*) ∪ {(1, 1), . . . , (p, p)} be the augmented set including the diagonal. We let S^c(Θ*) denote the complement of S̄(Θ*) in the set {1, . . . , p} × {1, . . . , p}, corresponding to all pairs (ℓ, m) for which Θ*_{ℓm} = 0. When it is clear from context, we shorten our notation for these sets to S and S^c, respectively. Finally, for any two subsets T and T′ of V × V, we use Γ*_{TT′} to denote the |T| × |T′| matrix with rows and columns of Γ* indexed by T and T′ respectively. We require the following conditions on the Fisher information matrix Γ*: [A1] Incoherence condition: This condition captures the intuition that variable pairs which are non-edges cannot exert an overly strong effect on variable pairs which form edges of the Gaussian graphical model:
|||Γ*_{S^c S} (Γ*_{SS})^{−1}|||_∞ ≤ 1 − α, for some fixed α > 0.   (8)
We note that similar conditions arise in the analysis of the Lasso in linear regression [9, 10, 11]. [A2] Covariance control: There exist constants K_{Σ*}, K_{Γ*} < ∞ such that |||(Θ*)^{−1}|||_∞ ≤ K_{Σ*}, and |||(Γ*_{SS})^{−1}|||_∞ ≤ K_{Γ*}.
(9) These assumptions require that the covariance elements along any row of (Θ*)^{−1} and (Γ*_{SS})^{−1} have bounded ℓ1 norms. Note that similar assumptions are also required for consistency in Frobenius norm [1]. Recall from equations (4) and (5) the definitions of the sparsity index s and maximum degree d, respectively. With this notation, we have: Theorem 1. Consider a Gaussian distribution with concentration matrix Θ* that satisfies conditions (A1) and (A2). Suppose the penalty is set as λ_n = C₁ √(log p / n), and the minimum edge weight Θ*_min := min_{(i,j)∈S} |Θ*_{ij}| scales as Θ*_min > C₂ √(log p / n) for some constants C₁, C₂ > 0. Further, suppose the triple (n, d, p) satisfies the scaling
n > L d² log(p),   (10)
for some constant L > 0. Then the edge set E(Θ̂) specified by the estimator equals the true edge set w.h.p.—in particular,
P[E(Θ̂) = E(Θ*)] ≥ 1 − exp(−c log p) → 1,   (11)
for some constant c > 0. Remarks: Rothman et al. [1] prove that the error of the estimator in Frobenius norm obeys the bound |||Θ̂ − Θ*|||²_F = O(((s + p) log p)/n), with high probability. We note that model selection consistency does not follow from this result, since an estimate may be close in Frobenius norm while differing substantially in terms of zero-pattern. In one sense, the model selection criterion is more demanding, since given knowledge of the edge set E(Θ*), one could restrict estimation procedures to this subset, and so achieve faster rates. On the other hand, Theorem 1 requires incoherence conditions [A1] on the covariance matrix, which are not required for Frobenius norm consistency [1]. 2.3 Comparison to neighbor-based graphical model selection It is interesting to compare the estimator to the Gaussian neighborhood regression method studied by Meinshausen and Bühlmann [9], in which each node is linearly regressed, with an ℓ1 penalty (Lasso), on the rest of the nodes, and the location of the non-zero regression weights is taken as the neighborhood estimate of that node.
These neighborhoods are then combined, by either an OR rule or an AND rule, to estimate the full graph. Wainwright [12] shows that the rate n ≍ d log p is a sharp threshold for the success/failure of neighborhood selection by the Lasso. By a union bound over the p nodes, it follows that this threshold holds for the Meinshausen and Bühlmann approach as well. This is superior to the scaling in our result (10). However, the two methods rely on slightly different underlying assumptions, and the current form of the neighborhood-based approach requires solving a total of p Lasso programs, as opposed to a single log-determinant problem. Below we show two cases where the Lasso irrepresentability condition holds, while the log-determinant requirement fails. However, in general, we do not know whether the log-determinant irrepresentability strictly dominates its analog for the Lasso. 2.3.1 Illustration of irrepresentability: Diamond graph Consider the following Gaussian MRF example from [13]. Figure 2(a) shows a diamond-shaped graph G = (V, E), with vertex set V = {1, 2, 3, 4} and edge set given by the fully connected graph over V with the edge (1, 4) removed. Figure 2: (a) Graph of the example discussed by [13]. (b) A simple 4-node star graph. The covariance matrix Σ* is parameterized by the correlation parameter ρ ∈ [0, 1/√2]: the diagonal entries are set to Σ*_{ii} = 1, for all i ∈ V; the entries corresponding to edges are set to Σ*_{ij} = ρ for (i, j) ∈ E \ {(2, 3)}, Σ*_{23} = 0; and finally the entry corresponding to the non-edge is set as Σ*_{14} = 2ρ². For this model, [13] showed that the ℓ1-regularized MLE Θ̂ fails to recover the graph structure for any sample size if ρ > −1 + (3/2)^{1/2} ≈ 0.23. It is instructive to compare this necessary condition to the sufficient condition provided in our analysis, namely the incoherence Assumption [A1] as applied to the Hessian Γ*.
For this particular example, a little calculation shows that Assumption [A1] is equivalent to the constraint 4|ρ|(|ρ| + 1) < 1, an inequality which holds for all ρ ∈ (−0.2017, 0.2017). Note that the upper value 0.2017 is just below the necessary threshold discussed by [13]. On the other hand, the irrepresentability condition for the Lasso requires only that 2|ρ| < 1, i.e., ρ ∈ (−0.5, 0.5). Thus, in the regime |ρ| ∈ [0.2017, 0.5), the Lasso irrepresentability condition holds while our log-determinant counterpart fails. 2.3.2 Illustration of irrepresentability: Star graphs A second interesting example is the star-shaped graphical model, illustrated in Figure 2(b), which consists of a single hub node connected to the rest of the spoke nodes. We consider a four-node graph, with vertex set V = {1, 2, 3, 4} and edge set E = {(1, s) | s ∈ {2, 3, 4}}. The covariance matrix Σ* is parameterized by the correlation parameter ρ ∈ [−1, 1]: the diagonal entries are set to Σ*_{ii} = 1, for all i ∈ V; the entries corresponding to edges are set to Σ*_{ij} = ρ for (i, j) ∈ E; while the non-edge entries are set as Σ*_{ij} = ρ² for (i, j) ∉ E. Consequently, for this particular example, Assumption [A1] reduces to the constraint |ρ|(|ρ| + 2) < 1, which holds for all ρ ∈ (−0.414, 0.414). The irrepresentability condition for the Lasso, on the other hand, allows the full range ρ ∈ (−1, 1). Thus there is again a regime, |ρ| ∈ [0.414, 1), where the Lasso irrepresentability condition holds while the log-determinant counterpart fails. 3 Proof outline Theorem 1 follows as a corollary to Theorem 2 in Ravikumar et al. [14], an extended and more general version of this paper. There we consider the more general problem of estimating the covariance matrix of a random vector (that need not be Gaussian) from i.i.d. samples, and we relax Assumption [A2], allowing the quantities K_{Σ*}, K_{Γ*} to grow with sample size n.
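The two worked examples reduce both irrepresentability conditions to scalar inequalities in ρ, which makes the regimes easy to check numerically. A small sketch (function names are ours) encoding the reductions stated above:

```python
def logdet_ok_diamond(rho):
    # Assumption [A1] for the diamond graph reduces to 4|rho|(|rho| + 1) < 1.
    return 4 * abs(rho) * (abs(rho) + 1) < 1

def lasso_ok_diamond(rho):
    # Lasso irrepresentability for the diamond graph: 2|rho| < 1.
    return 2 * abs(rho) < 1

def logdet_ok_star(rho):
    # Assumption [A1] for the 4-node star reduces to |rho|(|rho| + 2) < 1.
    return abs(rho) * (abs(rho) + 2) < 1

def lasso_ok_star(rho):
    # Lasso irrepresentability for the star allows the full range |rho| < 1.
    return abs(rho) < 1
```

For instance, at ρ = 0.3 the diamond-graph log-determinant condition fails (4 · 0.3 · 1.3 = 1.56 > 1) while the Lasso condition still holds, exhibiting the regime |ρ| ∈ [0.2017, 0.5) discussed above.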
We provide here a high-level outline of the proof of Theorem 1, deferring details to the extended version [14]. Our proofs are based on a technique that we call a primal-dual witness method, used previously in analysis of the Lasso [12]. It involves following a specific sequence of steps to construct a pair (Θ̃, Z̃) of symmetric matrices that together satisfy the optimality conditions associated with the convex program (3) with high probability. Thus, when the constructive procedure succeeds, Θ̃ is equal to the unique solution Θ̂ of the convex program (3), and Z̃ is an optimal solution to its dual. In this way, the estimator Θ̂ inherits from Θ̃ various optimality properties in terms of its distance to the truth Θ* and its recovery of the signed sparsity pattern. To be clear, our procedure for constructing Θ̃ is not a practical algorithm for solving the log-determinant problem (3), but rather is used as a proof technique for certifying the behavior of the ℓ1-regularized MLE (3). 3.1 Primal-dual witness approach At the core of the primal-dual witness method are the standard convex optimality conditions that characterize the optimum Θ̂ of the convex program (3). For future reference, we note that the subdifferential of the norm ∥·∥_{1,off} evaluated at some Θ consists of the set of all symmetric matrices Z ∈ R^{p×p} such that
Z_{ij} = 0 if i = j;  Z_{ij} = sign(Θ_{ij}) if i ≠ j and Θ_{ij} ≠ 0;  Z_{ij} ∈ [−1, +1] if i ≠ j and Θ_{ij} = 0.   (12)
Lemma 1. For any λ_n > 0 and sample covariance Σ̂ with strictly positive diagonal, the ℓ1-regularized log-determinant problem (3) has a unique solution Θ̂ ≻ 0 characterized by
Σ̂ − Θ̂^{−1} + λ_n Z̃ = 0,   (13)
where Z̃ is an element of the subdifferential ∂∥Θ̂∥_{1,off}. Based on this lemma, we construct the primal-dual witness solution (Θ̃, Z̃) as follows: (a) We determine the matrix Θ̃ by solving the restricted log-determinant problem
Θ̃ := arg min_{Θ ≻ 0, Θ_{S^c} = 0} { ⟨⟨Θ, Σ̂⟩⟩ − log det(Θ) + λ_n ∥Θ∥_{1,off} }.   (14)
Note that by construction, we have Θ̃ ≻ 0, and moreover Θ̃_{S^c} = 0.
(b) We choose Z̃_S as a member of the subdifferential of the regularizer ∥·∥_{1,off}, evaluated at Θ̃. (c) We set
Z̃_{S^c} = (1/λ_n) ( −Σ̂_{S^c} + [Θ̃^{−1}]_{S^c} ),   (15)
which ensures that the constructed matrices (Θ̃, Z̃) satisfy the optimality condition (13). (d) We verify the strict dual feasibility condition |Z̃_{ij}| < 1 for all (i, j) ∈ S^c. To clarify the nature of the construction, steps (a) through (c) suffice to obtain a pair (Θ̃, Z̃) that satisfies the optimality condition (13), but do not guarantee that Z̃ is an element of the subdifferential ∂∥Θ̃∥_{1,off}. By construction, step (b) ensures that the entries of Z̃ in S satisfy the subdifferential conditions, since Z̃_S is a member of the subdifferential ∂∥Θ̃_S∥_{1,off}. The purpose of step (d), then, is to verify that the remaining elements of Z̃ satisfy the necessary conditions to belong to the subdifferential. If the primal-dual witness construction succeeds, then it acts as a witness to the fact that the solution Θ̃ of the restricted problem (14) is equivalent to the solution Θ̂ of the original (unrestricted) problem (3). We exploit this fact in our proof of Theorem 1: we first show that the primal-dual witness technique succeeds with high probability, from which we can conclude that the support of the optimal solution Θ̂ is contained within the support of the true Θ*. The next step requires checking that none of the entries in Θ̃_S constructed in Equation (14) are zero. It is to verify this that we require the lower bound assumption in Theorem 1 on the minimum value Θ*_min. 4 Experiments In this section, we describe some experiments which illustrate the model selection rates in Theorem 1. We solved the ℓ1-penalized log-determinant optimization problem using the “glasso” program [4], which builds on the block coordinate descent algorithm of [3]. We report experiments for star-shaped graphs, which consist of one node connected to the rest of the nodes.
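The stationarity condition (13) can be sanity-checked numerically on a case where the solution is known in closed form: for a diagonal sample covariance, Θ̂ = Σ̂^{-1} is optimal with subgradient Z = 0 (valid in (12), since all off-diagonal entries of Θ̂ are zero). A minimal sketch under our own naming:

```python
import numpy as np

def stationarity_residual(Theta_hat, Sigma_hat, Z, lam):
    """Residual of the optimality condition (13):
    Sigma_hat - Theta_hat^{-1} + lam * Z, which vanishes at the optimum."""
    return Sigma_hat - np.linalg.inv(Theta_hat) + lam * Z

# Diagonal sample covariance: the penalty only touches off-diagonal entries,
# so the unpenalized solution Theta_hat = Sigma_hat^{-1} remains optimal.
Sigma_hat = np.diag([2.0, 0.5, 1.0])
Theta_hat = np.linalg.inv(Sigma_hat)
Z = np.zeros_like(Sigma_hat)
res = stationarity_residual(Theta_hat, Sigma_hat, Z, lam=0.25)
```

Here the residual is exactly zero, confirming that (Θ̂, Z) jointly satisfy (13) for this toy case.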
These graphs allow us to vary both d and p, since the degree of the central hub can be varied between 1 and p − 1. Applying the algorithm to these graphs should therefore provide some insight into how the required number of samples n is related to d and p. We tested varying graph sizes p from p = 64 upwards to p = 375. The edge-weights were set as entries in the inverse of a covariance matrix Σ* with diagonal entries set as Σ*_{ii} = 1 for all i = 1, . . . , p, and Σ*_{ij} = 2.5/d for all (i, j) ∈ E, so that the quantities (K_{Σ*}, K_{Γ*}, α) remain constant. Dependence on graph size: Figure 3. Simulations for a star graph with varying number of nodes p, fixed maximal degree d = 40, and edge covariances Σ*_{ij} = 1/16 for all edges. Plots of probability of correct signed edge-set recovery versus the sample size n in panel (a), and versus the rescaled sample size n/log p in panel (b). Each point corresponds to the average over N = 100 trials. Panel (a) of Figure 3 plots the probability of correct signed edge-set recovery against the sample size n for star-shaped graphs of several different sizes p. For each curve, the probability of success starts at zero (for small sample sizes n), but then transitions to one as the sample size is increased. As would be expected, it is more difficult to perform model selection for larger graph sizes, so that (for instance) the curve for p = 375 is shifted to the right relative to the curve for p = 64. Panel (b) of Figure 3 replots the same data, with the horizontal axis rescaled by (1/log p). This scaling was chosen because our theory predicts that the sample size should scale logarithmically with p (see equation (10)).
Consistent with this prediction, when plotted against the rescaled sample size n/log p, the curves in panel (b) all stack up. Consequently, the ratio (n/log p) acts as an effective sample size in controlling the success of model selection, consistent with the predictions of Theorem 1. Dependence on the maximum node degree: Panel (a) of Figure 4 plots the probability of correct signed edge-set recovery against the sample size n for star-shaped graphs; each curve corresponds to a different choice of maximum node degree d, allowing us to investigate the dependence of the sample size on this parameter. So as to control these comparisons, we fixed the number of nodes to p = 200. Observe how the plots in panel (a) shift to the right as the maximum node degree d is increased, showing that star-shaped graphs with higher degrees are more difficult. In panel (b) of Figure 4, we plot the same data versus the rescaled sample size n/d. Recall that if all the curves were to stack up under this rescaling, then it would mean the required sample size n scales linearly with d. These plots are closer to aligning than the unrescaled plots, but the agreement is not perfect. In particular, observe that the right-most curve in panel (a) remains a bit to the right in panel (b), which suggests that a somewhat more aggressive rescaling—perhaps n/d^γ for some γ ∈ (1, 2)—is appropriate. Figure 4. Simulations for star graphs with fixed number of nodes p = 200, varying maximal (hub) degree d, and edge covariances Σ*_{ij} = 2.5/d. Plots of probability of correct signed edge-set recovery versus the sample size n in panel (a), and versus the rescaled sample size n/d in panel (b). The sufficient condition from Theorem 1, as summarized
in equation (10), is n = Ω(d² log p), which appears to be overly conservative based on these data. Thus, it might be possible to tighten our theory under certain regimes. References [1] A.J. Rothman, P.J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Statist., 2:494–515, 2008. [2] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007. [3] A. d'Asprémont, O. Banerjee, and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl., 30(1):56–66, 2008. [4] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostat., 9(3):432–441, 2007. [5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004. [6] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985. [7] S. L. Lauritzen. Graphical Models. Oxford University Press, Oxford, 1996. [8] L.D. Brown. Fundamentals of Statistical Exponential Families. Institute of Mathematical Statistics, Hayward, CA, 1986. [9] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Ann. Statist., 34(3):1436–1462, 2006. [10] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Trans. Info. Theory, 51(3):1030–1051, 2006. [11] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006. [12] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity using the Lasso. Technical Report 709, UC Berkeley, May 2006. To appear in IEEE Trans. Info. Theory. [13] N. Meinshausen. A note on the Lasso for graphical Gaussian model selection. Statistics and Probability Letters, 78(7):880–884, 2008. [14] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu.
High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Technical Report 767, Department of Statistics, UC Berkeley, November 2008.
Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning Gediminas Lukšys1,2, Carmen Sandi2, Wulfram Gerstner1 1Laboratory of Computational Neuroscience 2Laboratory of Behavioural Genetics Ecole Polytechnique Fédérale de Lausanne (EPFL) Lausanne, CH-1015, Switzerland {gediminas.luksys,carmen.sandi,wulfram.gerstner}@epfl.ch Abstract Suppose we train an animal in a conditioning experiment. Can one predict how a given animal, under given experimental conditions, would perform the task? Since various factors such as stress, motivation, genetic background, and previous errors in task performance can influence animal behaviour, this appears to be a very challenging aim. Reinforcement learning (RL) models have been successful in modeling animal (and human) behaviour, but their success has been limited because of uncertainty as to how to set meta-parameters (such as learning rate, exploitation-exploration balance and future reward discount factor) that strongly influence model performance. We show that a simple RL model whose meta-parameters are controlled by an artificial neural network, fed with inputs such as stress, affective phenotype, previous task performance, and even neuromodulatory manipulations, can successfully predict mouse behaviour in the "hole-box" - a simple conditioning task. Our results also provide important insights on how stress and anxiety affect animal learning, performance accuracy, and discounting of future rewards, and on how noradrenergic systems can interact with these processes. 1 Introduction Animal behaviour is guided by rewards that can be received in different situations and by modulatory factors, such as stress and motivation. It is known that acute stress can affect learning and memory by modulating plasticity through stress hormones and neuromodulators [1, 2, 3], but their role in high-level processes such as learning, memory, and action selection is not well understood.
A number of interesting conceptual and computational models have been proposed relating neuromodulatory systems, cognitive processes, and abstract statistical quantities characterizing the environment [4, 5]. While such models provide great mechanistic insights, they alone are often unable to accurately predict animal behaviour in a realistic situation due to a great number of diverse modulatory factors. Stress [2], genotype [6], affective traits such as anxiety and impulsivity [7], motivation [8], and evaluation of performance errors [9] can all influence individual performance in any single task, yet it may prove difficult and inefficient to explicitly model each factor in order to accurately predict animal behaviour. Instead, we propose a method which could account for the influence of arbitrary modulatory factors on behaviour as control parameters of a general behavioural model. In modeling reward-based behavioural learning, approaches based on the formal theory of reinforcement learning (RL) have been the most successful. The basic idea of RL is that animals (or artificial agents) select their actions based on predicted future rewards that could be acquired upon taking these actions. The expected values of future rewards for different actions (Q-values) can be gradually learned by observing rewards received under different state-action combinations. An efficient way to do this is temporal difference (TD) learning [10], which uses an error signal that correlates with the activity of dopaminergic neurons in the Substantia Nigra [11]. TD models have been successfully applied to explain a wide range of experimental data, including animal conditioning [8], human decision-making [12], and even addiction [13]. Learning and action selection in TD models can be strongly influenced by the choice of model meta-parameters such as the learning rate, the future reward discounting, and the exploitation-exploration balance.
While in most modeling studies they have received relatively little attention, it has been proposed that RL meta-parameters are related to specific neuromodulators - noradrenaline, serotonin, acetylcholine [14] - and to neural activity occurring in different brain regions - notably amygdala, striatum, and anterior cingulate [15]. Modulatory factors such as stress, anxiety, and impulsivity often act through the same brain systems, which suggests that in RL models their effects could be expressed through changes in meta-parameter values. In the present study, we tested mouse behaviour in a simple conditioning task - the hole-box - and showed how various modulatory factors could control a simple RL model to accurately predict animal behaviour. We used food-deprived mice of two genetic strains - 'calm' C57BL/6 and 'anxious' DBA/2 [6] - half of which were exposed to an additional stressor - sitting on an elevated platform before each experimental session. We formalized animal behaviour using a simple RL model, and trained an artificial neural network that could control RL meta-parameters using information about stress, motivation, individual affective traits, and previous learning success. We demonstrate that such a model can successfully predict mouse behaviour in the hole-box task and that the resulting model meta-parameters provide useful insights into how animals adjust their performance throughout the course of a learning experience, and how they respond to stressors and motivational demands. Finally, using systemic manipulations of the noradrenergic system, we show how noradrenaline interacts with stress and anxiety in regulating performance accuracy and temporal discounting. 2 Description of the hole-box experiment In our hole-box experiments, we used 64 male mice (32 of C57BL/6 strain and 32 of DBA/2 strain) that were 10 weeks old at the beginning of the experiment. During an experimental session, each animal was placed into the hole-box (Figure 1a).
The mice had to learn to make a nose poke into the hole upon the onset of lights and not to make it under the condition of no light. After a response to light, the animals (which were food deprived to 87.3+/-1.0% of their initial weight) received a reward in the form of a food pellet (Figure 1b). The inter-trial interval (ITI) between subsequent trials was varying: the probability of a new trial during each 0.5 sec long time step was 1/30, resulting in an average ITI of 15 sec. The total session duration was 500 sec, equivalent to 1000 time steps. Figure 1: a. Scheme of the hole-box. b. Protocol of the hole-box experiment. c. Hole-box state-action chart. Rectangles are states, thin arrows are actions. During 2 days of habituation (when the food delivery was not paired with light) the mice learned that food could be delivered from the boxes. After this, they were trained for 8 consecutive days, during which half of the mice were exposed to extrinsic stress (30 min on the elevated platform) before each training session. On training days 3, 6, and 8, animals were injected i.p. (5 ml/kg, 30 min before the experimental session) with either saline (1/2 of mice), or the adrenergic alpha-2 agonist clonidine (1/4 of mice, 0.05 mg/kg), which reduces brain noradrenaline levels, or the adrenergic alpha-2 antagonist yohimbine (1/4 of mice, 1 mg/kg), which increases brain noradrenaline levels. Mice of each strain were treated equivalently with respect to pharmacological and stress conditions. Stress and pharmacological treatment groups were the same during all training days. 3 Challenges of behavioural analysis To quantify animal performance in the hole-box experiment, we used 7 different performance measures (PMs). These were behavioural statistics, calculated for each daily session: number of trials (within 500 sec), number of ITI pokes, mean response time (after light onset), mean nose poke duration, number of uneaten food pellets, "TimePreference"¹, and "DurationPreference"².
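The trial-onset process described in the protocol above is geometric: with a new-trial probability of 1/30 per 0.5 sec time step, the expected ITI is 30 steps, i.e. 15 sec. A small sketch (variable and function names are ours) verifying this both analytically and by simulation:

```python
import random

# Each 0.5 s time step starts a new trial with probability 1/30 (protocol
# above), so the ITI is geometrically distributed with mean 30 steps = 15 s.
P_NEW_TRIAL = 1.0 / 30.0
STEP_SEC = 0.5
mean_iti_sec = STEP_SEC / P_NEW_TRIAL   # analytic mean: 15 s

def sample_iti_steps(rng):
    """Draw one inter-trial interval, in 0.5 s steps (geometric sampling)."""
    steps = 1
    while rng.random() >= P_NEW_TRIAL:
        steps += 1
    return steps

rng = random.Random(0)
sim = sum(sample_iti_steps(rng) for _ in range(20000)) / 20000 * STEP_SEC
```

With 20,000 samples, the Monte Carlo mean lands within a fraction of a second of the analytic value of 15 sec.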
Different PMs reflected different aspects of behaviour - learning to respond, associating responses with light, overcoming anxiety to make sufficiently long nose pokes, etc. For this reason, during the process of learning the PMs exhibited a variety of dynamics: slowly increasing numbers of trials, rapidly decreasing mean response times, first increasing and later decreasing numbers of ITI pokes. Figure 2: a. Development of selected PMs with learning for C57BL/6 mice. b. Results of the PCA applied to all PMs: eigenvalues and loadings for the first 3 components. When comparing the PMs between different experimental groups (Figure 2a), it is often hard to interpret the differences, as each PM describes an unknown mixture of cognitive processes such as learning, memory, performance intensity and accuracy. In some cases, performing a principal component analysis (PCA) or using similar tools may be suitable for reducing the behavioural measures to a few main components that can be easily interpreted [16]. However, more often than not this is not the case - for instance, in our experiment, the first 3 principal components are not sufficient to explain even 75% of the variation, and the composition of the components is not easy to interpret (Figure 2b). As an alternative to conventional behavioural analysis, we propose that a computational model of behaviour, based on reinforcement learning, could be sufficiently flexible to fit a wide range of behavioural effects, and, in contrast to the PMs, RL meta-parameters can be easily interpreted in cognitive terms. 4 Modeling the hole-box using reinforcement learning We used a simple temporal difference RL model to formalize the behaviour. Conceptually, the model had 4 states: [ITI, trial] x [animal outside, making a nose poke], and 2 actions: move (in or out) and stay. However, to make the model's performance realistic, several extensions had to be introduced (Figure 1c).
Footnote 1: TimePreference = (average time between adjacent ITI pokes) / (average response time). Footnote 2: DurationPreference = (average trial response poke duration) / (average ITI poke duration). First of all, the state animal outside was divided into 6 states corresponding to different places in the box which the animal could occupy, adding additional actions for the transitions between these new states (moving around the box). Secondly, we observed that when our animals made too short trial responses (with nose poke duration under 0.5 sec), they often could not pick up the delivered food. Conversely, when the nose pokes were longer than 1.5 sec, animals nearly always managed to pick up the delivered food immediately. To account for this, the state making a nose poke was divided into 5 states, representing different nose poke durations, with increasing probability of picking up the reward (to keep things simple, we chose a linear increase: from p = 0.2 for the first state to p = 1.0 for the fifth). Note that a food pellet is delivered at the start of each trial response, irrespective of whether the animal picks it up during that nose poke or not. Unconsumed pellets could be eaten during later (sufficiently long) ITI nose pokes. The Q-values, defined as Q(s_t, a_t) = E[r(t) + γ r(t+1) + γ² r(t+2) + ... | s_t, a_t], were updated based on the temporal difference error: ΔQ(s_t, a_t) = α [r(t) + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t)], (1) where r(t) is the reward at time t, s_t the state, a_t the action, α the learning rate, and γ the future reward discount factor. High γ values (close to 1) signified that future rewards were given high weight, while low γ values (0-0.5) meant that immediate rewards were preferred. Actions were chosen probabilistically, based on Q-values and the exploitation factor β, as follows: p(a_i|s) = exp(β Q(s, a_i)) / Σ_{k∈A(s)} exp(β Q(s, a_k)) (2) where A(s) are the actions available at state s.
Low β values implied that actions were chosen more or less randomly (exploration), while high β values strongly biased the choice towards the action(s) with the highest Q-value (exploitation). Q-values were initialized as zeros before the first training day, and the starting state was always ITI / outside, near the hole. 5 Predicting mouse behaviour using dynamic control of model meta-parameters To compare the model with animal behaviour we used the following goodness-of-fit function [17]: χ² = Σ_{k=1}^{N_PM} (PM_k^exp − PM_k^mod(α, β, γ))² / (σ_k^exp)², (3) where PM_k^exp and PM_k^mod are the PMs calculated for each animal and the model, respectively, and N_PM = 7 is the number of PMs. PM_k^mod(α, β, γ) were calculated after simulation of one session (averaged over multiple runs) with fixed values of the meta-parameters. To evaluate whether our model is sufficiently flexible to fit a wide range of animal behaviours (including effects of stress, strain, and noradrenaline), we performed an estimation procedure of daily meta-parameters. Using stochastic gradient descent from multiple starting points, we minimized (3) with respect to α, β, γ for each session separately, systematically varying the meta-parameters in the following ranges: α, γ ∈ [0.03, 0.99] and β ∈ [10^−1, 10^1.5]. To evaluate how well the model fits the experimental data we used a χ²-test with ν = N_PM − 3 degrees of freedom (since our model has 3 free parameters). The P(χ², ν) value, defined as the probability that a realization of a chi-square-distributed random variable would exceed χ² by chance, was calculated for each session separately. Generally, values of P(χ², ν) > 0.01 correspond to a fairly good model [17]. Even if our RL model with estimated meta-parameters is capable of reproducing the behaviour of different experimental groups in the hole-box, this does not tell us how, given a new animal in an arbitrary experimental condition, we should set daily meta-parameters to predict its behaviour.
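Equations (1)-(3) can be sketched in a few lines. This is a minimal illustration with hypothetical function names and a tabular state/action encoding, not the authors' implementation:

```python
import numpy as np

def softmax_policy(q_values, beta, rng):
    """Eq. (2): p(a_i|s) proportional to exp(beta * Q(s, a_i))."""
    prefs = beta * np.asarray(q_values, dtype=float)
    prefs -= prefs.max()                       # subtract max for numerical stability
    p = np.exp(prefs) / np.exp(prefs).sum()
    return rng.choice(len(q_values), p=p)

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """Eq. (1): on-policy TD update of the Q-table; returns the TD error."""
    td_error = r + gamma * Q[s_next, a_next] - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error

def chi_squared(pm_exp, pm_mod, sigma_exp):
    """Eq. (3): goodness of fit between experimental and model PMs."""
    return sum((e - m) ** 2 / s ** 2 for e, m, s in zip(pm_exp, pm_mod, sigma_exp))
```

With a large β, `softmax_policy` almost always returns the highest-valued action (exploitation); with β near 0 the choice is close to uniform (exploration).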
However, information about an animal's affective phenotype, its experimental condition, and its recent task performance may be helpful in determining these meta-parameter settings, and thus, predicting behaviour. For this purpose, we trained an artificial neural network (NN) model (Figure 3b), whose outputs would be the predicted values of α, β, and γ. The inputs of the model included the following information: the animal's genetic strain (0 for C57BL/6, 1 for DBA/2), its anxiety (% of time it spends in the center of the open field - a separate experiment for characterization of affective traits), its novelty response (% of time it spends in the center of the field once a novel object is introduced there), stress prior to a training session (0 or 1), motivation (% of initial weight, correlating with hunger), noradrenergic manipulation (-1 for NA reduction, 1 for NA increase, and 0 for control), and two important measures describing performance on the previous day - the number of food pellets eaten ('rewards'), and the number of nose pokes during which no food was consumed ('misses'). Our NN had merely 4 hidden-layer "neurons" (to prevent over-fitting, as we only had 762 samples of data for training and validation). Its target outputs were the daily estimated meta-parameter sets, and after normalizing inputs and targets to zero mean and unit variance, the network was trained (100 times) using the Levenberg-Marquardt method [18]. Because of the normalization, the resulting mean square errors (MSEs) directly indicated how much variance in the meta-parameters could not be explained by the NN. Using the 10 trained networks with the lowest MSEs, we performed simulations to analyze how much different input factors affect each meta-parameter. For this purpose we simulated the NN 10^6 times, linearly varying 1 or 2 selected inputs, while all the remaining inputs were given random values with zero mean and unit variance.
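The normalization and network shape described above can be sketched as follows. This is a stand-in illustration: the function names are hypothetical, and plain batch gradient descent replaces the Levenberg-Marquardt optimizer used in the paper, purely to keep the example self-contained:

```python
import numpy as np

def normalize(X):
    """Zero-mean, unit-variance scaling, as applied to the NN inputs and targets."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

def train_tiny_mlp(X, Y, n_hidden=4, lr=0.05, epochs=3000, seed=0):
    """Fit an (inputs -> 4 tanh hidden units -> linear outputs) network; return final MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, Y.shape[1])); b2 = np.zeros(Y.shape[1])
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden activations
        P = H @ W2 + b2                   # predicted outputs (e.g. alpha, beta, gamma)
        G = 2.0 * (P - Y) / len(X)        # gradient of MSE w.r.t. P
        GH = (G @ W2.T) * (1.0 - H**2)    # back-propagate through tanh
        W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)
    return float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
```

Because the targets are normalized, an MSE of 1.0 corresponds to a trivial constant predictor, so values well below 1 indicate explained variance, which is how the paper reads off the explainable fraction per meta-parameter.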
Then we could plot the mean resulting meta-parameter values corresponding to different values of the selected inputs. The range of meta-parameter variation and the relative noise in such plots indicated how strongly the selected inputs (compared to the other inputs) influenced the resulting meta-parameters. Finally, to predict the performance of selected animals and the differences between experimental groups, we simulated the NN with the input values of each animal and analyzed the resulting meta-parameters. Figure 3: a. Comparison of model performance and animal behaviour. b. Scheme of the NN model. c. Comparison of daily estimated meta-parameters and outputs of the trained NN model. In a and c arbitrary performance measures and experimental groups were selected for comparison. 6 Results The results of daily meta-parameter estimation indicated a good fit between the model and animal performance (Figure 3a). The condition P(χ², ν) > 0.01 was satisfied for 92% of estimated parameter sets. The mean χ² value was ⟨χ²⟩ = 5.4, or only ⟨χ²⟩ = 0.77 per PM. Figure 4: Estimated daily meta-parameter values and differences between experimental conditions. a. Exploitation factors β, strain, and stress. b. Reward discount factors γ and mouse strain. c. Effects of noradrenergic manipulations (on days 3, 6, and 8). Meta-parameters, estimated for each daily session, indicated interesting dynamics as well as some profound differences depending on stress condition, animal strain, and noradrenergic manipulation. During the process of learning, the estimated exploitation-exploration factors β and future reward discount factors γ showed a progressive increase (Figure 4a,b; regression p < 0.001), meaning that the better the animals learn the task, the more accurately they use their knowledge for selecting actions, and the longer the time horizon they can take into account.
In addition, extrinsic stress increased exploitation factors β for calm C57BL/6 mice (ANOVA p < 0.01) but not for anxious DBA/2 mice (Figure 4a). Reward discount factors γ were higher for C57BL/6 mice (Figure 4b, ANOVA p < 0.001), indicating that the anxious DBA/2 mice act more impulsively. The dynamics of the learning rates and the effects of stress on future reward discounting showed certain trends; however, for these daily estimated values they were not significant. For the pharmacological manipulations, two results were significant (Figure 4c): a decrease in noradrenaline led to reduced exploitation factors for the anxious DBA/2 mice (ANOVA p < 0.001), and to increased reward discount factors for C57BL/6 mice (on day 3, t-test p < 0.01), suggesting that decreasing NA levels counteracts anxiety and impulsivity. A problem of daily estimated meta-parameters is their excessive flexibility, allowing them to follow the everyday ups and downs of individual animal behaviour, many of which happen because of factors unknown to the experimenter. This "noise" often makes it difficult to see the effects that known factors (such as stress and strain) have on meta-parameter dynamics. Results of the trained NN model for prediction of daily meta-parameters indicated that only about 25% of their variation could be explained. However, the resulting meta-parameter averages for experimental groups indicated a very good fit with the estimated daily meta-parameters (Figure 3c). It is also evident that different meta-parameters can be predicted to a different extent: for the learning rates only a small part of the variation can be explained (MSE(α) = 0.92), while for exploitation and reward discount factors a substantial part can (MSE(β) = 0.72, MSE(γ) = 0.62), showing that their values are more reliable and more sensitive to modulatory influences. The comparison of NN training and validation errors (Figure 5a) indicated that the effects of over-fitting were negligible. Figure 5: a.
Typical training and validation errors for the NN model. b. Model simulations: interactions between anxiety and noradrenaline in affecting exploitation factors β and reward discount factors γ. c. Model simulations: interactions between rewards and misses in task performance. In b and c light colors represent high meta-parameter values, dark colors low values. The meta-parameter prediction model allows us to analyze how (and how much) each modulatory factor affects meta-parameters and what the interactions between factors are. This is particularly useful for studying possibly non-linear interactions between continuous-valued factors, such as anxiety, motivation, and previous task performance. The results in Figure 5b,c describe such interactions. The level of noise in the color plots indicates that previous task performance (Fig. 5c) has a relatively strong influence on meta-parameters, compared to that of anxiety (Fig. 5b). Future reward discounting is mainly affected by received rewards, while for exploitation factors misses also have a significant effect, supporting the observation that well-trained animals (who receive many rewards and make few misses) decrease their effort to perform quickly and accurately (Fig. 5c). Finally, anxiety and high noradrenaline levels act additively in lowering the reward discount factors, while their effects on exploitation factors are more complex: for calm animals an NA increase leads to higher exploitation, but for highly anxious animals (whose NA levels are already presumably high) increasing NA does not improve their performance accuracy (Fig. 5b). When comparing meta-parameter averages between various experimental conditions, the output of the NN model fits the daily estimated values well (Figure 3c); however, the dynamics become much smoother and the error bars much smaller, since they account only for the known factors included in the NN input.
While all meta-parameter effects observed when comparing daily estimated values are reproduced, excluding unpredicted variability makes some additional effects statistically significant. For instance, it is evident (Figure 6a) that extrinsic stress decreases future reward discount factors for the DBA/2 mice (ANOVA p < 0.01) and that the learning rates slightly decrease with learning, particularly for the C57BL/6 mice (regression p < 0.01). The effects of the pharmacological manipulations of the noradrenergic system have been "denoised" as well, and several additional effects become evident (Figure 6b). For C57BL/6 mice, stress plays an important role in modulating the effects of NA: non-stressed mice increase their exploitation upon an increased NA level (ANOVA p < 0.01), and slightly decrease it upon decreased NA levels. Stressed mice do not show significant changes in exploitation factors. For DBA/2 mice, stimulating noradrenergic function does not lead to higher exploitation factors (similarly to stressed C57BL/6 mice), but their future reward discounting is sensitive to NA changes - the lower the NA, the higher their γ values (ANOVA, p < 0.01). Figure 6: "Denoised" meta-parameters: outputs of the trained neural network model. Several additional differences between experimental conditions become evident. a. Meta-parameters, stress, and strain. b. Effects of noradrenergic manipulations (on days 3, 6, and 8). 7 Discussion In this paper, we demonstrated that a simple RL model, whose parameters are controlled by a neural network that uses information about various modulatory influences, can successfully predict mouse behaviour in the hole-box conditioning task. Compared to the conventional performance measures, the resulting meta-parameters of our model showed more pronounced effects between experimental groups, and they have the additional advantage of being easier to relate to cognitive processes.
Moreover, the results of pharmacological manipulations provided supporting evidence that RL meta-parameters are indeed related to neuromodulators such as noradrenaline. The progressive increase of exploitation factors β and the decrease of learning rates α are consistent with how the meta-parameters of artificial agents should presumably be controlled to achieve optimal performance [14]. The increase in reward discount factors γ may have fundamental reasons too, e.g. when exposed to a new environment, hungry animals may become anxious about the uncertainty of the situation (whether they will be able to find food to survive), which makes them prefer immediate rewards to delayed ones. However, it may also be related to the specific reward structure in the model. In order to stay in the hole for longer than 1 time step (and thus have a higher chance to pick up the food), γ values should be much larger than 0.5. In addition, to avoid making unnecessary ITI pokes (given that food is usually picked up during the trial response), γ values close to 1.0 are necessary. For this reason, animal behavioural dynamics (e.g. when the mice start making sufficiently long nose pokes, and when, if at all, they learn to avoid making ITI pokes) could determine (or be determined by) the prevailing dynamics of the γ-s. Our specific results provide insights into the biological mechanisms of stress, anxiety, and behavioural performance, and how they relate to formal RL quantities. Stress increased performance accuracy (β factors) for the calm C57BL/6 mice, but not for the anxious DBA/2 mice. Similarly, increasing noradrenaline levels had a positive effect on β-s only for the non-stressed C57BL/6 mice, but not for the other groups, while decreasing NA levels had the strongest negative effect on β-s for the anxious DBA/2 mice. This suggests that within a certain range (which is dependent on an animal's anxiety) performance accuracy is determined by NA level.
Outside this range, NA effects get saturated or may even be reversed, as suggested by the inverse-U-shaped-relation theory of arousal/stress effects on cognition [4]. The effects of stress, strain, and NA on future reward discounting indicate that stress, high anxiety, and elevated noradrenaline are all detrimental for learning delayed future rewards. However, since the effects of NA and stress on reward discount factors are more pronounced for DBA/2 mice, γ-s might be sensitive to noradrenaline at higher levels than β-s are. It is also likely that serotonin, mPFC, and other brain systems often implicated in the processing of delayed rewards [15, 19] may be interacting with stress and NA in controlling future reward discounting. Although the basis of our hole-box behavioural prediction is a simple RL model with discrete states and actions, it is not obvious that such a model could predict animal behaviour in other, significantly more complex tasks. However, even in more complex models (involving continuous state-action spaces, episodic memories, etc.), a RL-like module is likely to be central to their performance, and a similar approach could be applied for controlling its meta-parameters based on numerous modulatory influences. Further studies relating such meta-parameters to other neuromodulatory systems and activation patterns of specific brain areas could provide interesting insights and may prove to be an ultimate test-box for the biological relevance of such an approach. References [1] J. J. Kim and K. S. Yoon. Stress: metaplastic effects in the hippocampus. TINS, 21(12):505–9, 1998. [2] C. Sandi, M. Loscertales, and C. Guaza. Experience-dependent facilitating effect of corticosterone on spatial memory formation in the water maze. Eur J Neurosci., 9(4):637–42, Apr 1997. [3] M. Joels, Z. Pu, O. Wiegert, M. S. Oitzl, and H. J. Krugers. Learning under stress: how does it work? Trends Cogn Sci., 10(4):152–8, Apr 2006. [4] G. Aston-Jones, J. Rajkowski, and J. Cohen.
Locus coeruleus and regulation of behavioral flexibility and attention. Prog Brain Res., 126:165–82, 2000. [5] A. J. Yu and P. Dayan. Uncertainty, Neuromodulation, and Attention. Neuron, 46:681–92, May 19 2005. [6] A. Holmes, C. C. Wrenn, A. P. Harris, K. E. Thayer, and J. N. Crawley. Behavioral profiles of inbred strains on novel olfactory, spatial and emotional tests for reference memory in mice. Genes Brain Behav., 1(1):55–69, Jan 2002. [7] M. J. Kreek, D. A. Nielsen, E. R. Butelman, and K. S. LaForge. Genetic influences on impulsivity, risk taking, stress responsivity and vulnerability to drug abuse and addiction. Nat Neurosci., 8:1450–7, 2005. [8] P. Dayan and B. W. Balleine. Reward, Motivation, and Reinforcement Learning. Neuron, 36:285–98, 2002. [9] M. M. Botvinick, T. S. Braver, C. S. Carter, D. M. Barch, and J. D. Cohen. Conflict monitoring and cognitive control. Psychol Review, 108(3):624–52, Mar 2001. [10] R. Sutton and A. G. Barto. Reinforcement Learning - An Introduction. MIT Press, 1998. [11] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593–9, Mar 14 1997. [12] S. C. Tanaka, K. Doya, G. Okada, K. Ueda, Y. Okamoto, and S. Yamawaki. Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nat Neurosci., 7:887–93, Jul 2004. [13] A. D. Redish. Addiction as a Computational Process Gone Awry. Science, 306(5703):1944–7, 2004. [14] K. Doya. Metalearning and neuromodulation. Neural Netw, 15(4-6):495–506, Jun-Jul 2002. [15] K. Doya. Modulators of decision making. Nat Neurosci., 11:410–6, Apr 2008. [16] Y. Clement, C. Joubert, C. Kopp, E. M. Lepicard, P. Venault, R. Misslin, M. Cadot, and G. Chapouthier. Anxiety in Mice: A Principal Component Analysis Study. Neural Plast., 35457, Mar 21 2007. [17] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992. [18] D.
Marquardt. An algorithm for least squares estimation of nonlinear parameters. SIAM J. Appl. Math, 11:431–441, 1963. [19] J. Amat, M. V. Baratta, E. Paul, S. T. Bland, L. R. Watkins, and S. F. Maier. Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus. Nat Neurosci., 8(3):365–71, Mar 2005.
How memory biases affect information transmission: A rational analysis of serial reproduction Jing Xu Thomas L. Griffiths Department of Psychology University of California, Berkeley Berkeley, CA 94720-1650 {jing.xu,tom griffiths}@berkeley.edu Abstract Many human interactions involve pieces of information being passed from one person to another, raising the question of how this process of information transmission is affected by the capacities of the agents involved. In the 1930s, Sir Frederic Bartlett explored the influence of memory biases in “serial reproduction” of information, in which one person’s reconstruction of a stimulus from memory becomes the stimulus seen by the next person. These experiments were done using relatively uncontrolled stimuli such as pictures and stories, but suggested that serial reproduction would transform information in a way that reflected the biases inherent in memory. We formally analyze serial reproduction using a Bayesian model of reconstruction from memory, giving a general result characterizing the effect of memory biases on information transmission. We then test the predictions of this account in two experiments using simple one-dimensional stimuli. Our results provide theoretical and empirical justification for the idea that serial reproduction reflects memory biases. 1 Introduction Most of the facts that we know about the world are not learned through first-hand experience, but are the result of information being passed from one person to another. This raises a natural question: how are such processes of information transmission affected by the capacities of the agents involved? Decades of memory research have charted the ways in which our memories distort reality, changing the details of experiences and introducing events that never occurred (see [1] for an overview). 
We might thus expect that these memory biases would affect the transmission of information, since such a process relies on each person remembering a fact accurately. The question of how memory biases affect information transmission was first investigated in detail in Sir Frederic Bartlett’s “serial reproduction” experiments [2]. Bartlett interpreted these studies as showing that people were biased by their own culture when they reconstruct information from memory, and that this bias became exaggerated through serial reproduction. Serial reproduction has become one of the standard methods used to simulate the process of cultural transmission, and several subsequent studies have used this paradigm (e.g., [3, 4]). However, this phenomenon has not been systematically and formally analyzed, and most of these studies have used complex stimuli that are semantically rich but hard to control. In this paper, we formally analyze and empirically evaluate how information is changed by serial reproduction and how this process relates to memory biases. In particular, we provide a rational analysis of serial reproduction (in the spirit of [5]), considering how information should change when passed along a chain of rational agents. Biased reconstructions are found in many tasks. For example, people are biased by their knowledge of the structure of categories when they reconstruct simple stimuli from memory. One common effect of this kind is that people judge stimuli that cross boundaries of two different categories to be further apart than those within the same category, although the distances between the stimuli are the same in the two situations [6]. However, biases need not reflect suboptimal performance. If we assume that memory is solving the problem of extracting and storing information from the noisy signal presented to our senses, we can analyze the process of reconstruction from memory as a Bayesian inference. 
Under this view, reconstructions should combine prior knowledge about the world with the information provided by noisy stimuli. Use of prior knowledge will result in biases, but these biases ultimately make memory more accurate [7]. If this account of reconstruction from memory is true, we would expect the same inference process to occur at every step of serial reproduction. The effects of memory biases should thus be accumulated. Assuming all participants share the same prior knowledge about the world, serial reproduction should ultimately reveal the nature of this knowledge. Drawing on recent work exploring other processes of information transmission [8, 9], we show that a rational analysis of serial reproduction makes exactly this prediction. To test the predictions of this account, we explore the special case where the task is to reconstruct a one-dimensional stimulus using the information that it is drawn from a fixed Gaussian distribution. In this case we can precisely characterize behavior at every step of serial reproduction. Specifically, we show that this defines a simple first-order autoregressive, or AR(1), process, allowing us to draw on a variety of results characterizing such processes. We use these predictions to test the Bayesian models of serial reproduction in two laboratory experiments and show that the predictions hold for serial reproduction both between- and within-subjects. The plan of the paper is as follows. Section 2 lays out the Bayesian account of serial reproduction. In Section 3 we show how this Bayesian account corresponds to the AR(1) process. Sections 4 and 5 present two experiments testing the model's prediction that serial reproduction reveals memory biases. Section 6 concludes the paper.
2 A Bayesian view of serial reproduction We will outline our Bayesian approach to serial reproduction by first considering the problem of reconstruction from memory, and then asking what happens when the solution to this problem is repeated many times, as in serial reproduction. 2.1 Reconstruction from memory Our goal is to give a rational account of reconstruction from memory, considering the underlying computational problem and finding the optimal solution to that problem. We will formulate the problem of reconstruction from memory as a problem of inferring and storing accurate information about the world from noisy sensory data. Given a noisy stimulus x, we seek to recover the true state of the world µ that generated that stimulus, storing an estimate ˆµ in memory. The optimal solution to this problem is provided by Bayesian statistics. Previous experience provides a “prior” distribution on possible states of the world, p(µ). On observing x, this can be updated to a “posterior” distribution p(µ|x) by applying Bayes' rule p(µ|x) = p(x|µ) p(µ) / ∫ p(x|µ) p(µ) dµ (1) where p(x|µ) – the “likelihood” – indicates the probability of observing x if µ is the true state of the world. Having computed p(µ|x), a number of schemes could be used to select an estimate ˆµ to store. Perhaps the simplest such scheme is sampling from the posterior, with ˆµ ∼ p(µ|x). This analysis provides a general schema for modeling reconstruction from memory, applicable for any form of x and µ. A simple example is the special case where x and µ vary along a single continuous dimension. In the experiment presented later in the paper we take this dimension to be the width of a fish, showing people a fish and asking them to reconstruct its width from memory, but the dimension of interest could be any subjective quantity such as the perceived length, loudness, duration, or brightness of a stimulus.
Assume that previous experience establishes that µ has a Gaussian distribution, with µ ∼ N(µ_0, σ_0²), and that the noise process means that x has a Gaussian distribution centered on µ, x|µ ∼ N(µ, σ_x²). In this case, we can use standard results from Bayesian statistics [10] to show that the outcome of Equation 1 is also a Gaussian distribution, with p(µ|x) being N(λx + (1 − λ)µ_0, λσ_x²), where λ = 1/(1 + σ_x²/σ_0²). The analysis presented in the previous paragraph makes a clear prediction: that the reconstruction ˆµ should be a compromise between the observed value x and the mean of the prior µ_0, with the terms of the compromise being set by the ratio of the noise in the data σ_x² to the uncertainty in the prior σ_0². This model thus predicts a systematic bias in reconstruction that is not a consequence of an error of memory, but the optimal solution to the problem of extracting information from a noisy stimulus. Huttenlocher and colleagues [7] have conducted several experiments testing this account of memory biases, showing that people's reconstructions interpolate between observed stimuli and the mean of a trained distribution as predicted. Using a similar notion of reconstruction from memory, Hemmer and Steyvers [11] have conducted experiments to show that people formed appropriate Bayesian reconstructions for realistic stimuli such as images of fruit, and seemed capable of drawing on prior knowledge at multiple levels of abstraction in doing so. 2.2 Serial reproduction With a model of how people might approach the problem of reconstruction from memory in hand, we are now in a position to analyze what happens in serial reproduction, where the stimuli that people receive on one trial are the results of a previous reconstruction. On the nth trial, a participant sees a stimulus x_n. The participant then computes p(µ|x_n) as outlined in the previous section, and stores a sample ˆµ from this distribution in memory.
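This Gaussian reconstruction step can be sketched in a few lines of standard-library code; the function name is hypothetical, not from the paper:

```python
import random

def reconstruct(x, mu0, var0, var_x, rng):
    """Sample a reconstruction mu_hat from the posterior p(mu|x).

    Prior: mu ~ N(mu0, var0); likelihood: x|mu ~ N(mu, var_x).
    Posterior: N(lam*x + (1-lam)*mu0, lam*var_x), lam = 1/(1 + var_x/var0).
    """
    lam = 1.0 / (1.0 + var_x / var0)
    post_mean = lam * x + (1.0 - lam) * mu0
    post_var = lam * var_x
    return rng.gauss(post_mean, post_var ** 0.5)
```

With var0 = var_x we get λ = 1/2, so reconstructions are centered exactly halfway between the stimulus and the prior mean, illustrating the compromise described above.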
When asked to produce a reconstruction, the participant generates a new value x_{n+1} from a distribution that depends on ˆµ. If the likelihood, p(x|µ), reflects perceptual noise, then it is reasonable to assume that x_{n+1} will be sampled from this distribution, substituting ˆµ for µ. This value of x_{n+1} is the stimulus for the next trial. Viewed from this perspective, serial reproduction defines a stochastic process: a sequence of random variables evolving over time. In particular, it is a Markov chain, since the reconstruction produced on the current trial depends only on the value produced on the preceding trial (e.g. [12]). The transition probabilities of this Markov chain are p(x_{n+1}|x_n) = ∫ p(x_{n+1}|µ) p(µ|x_n) dµ (2) being the probability that x_{n+1} is produced as a reconstruction for the stimulus x_n. If this Markov chain is ergodic (see [12] for details) it will converge to a stationary distribution π(x), with p(x_n|x_1) tending to π(x_n) as n → ∞. That is, after many reproductions, we should expect the probability of seeing a particular stimulus being produced as a reproduction to stabilize to a fixed distribution. Identifying this distribution will help us understand the consequences of serial reproduction. The transition probabilities given in Equation 2 have a special form, being the result of sampling a value from the posterior distribution p(µ|x_n) and then sampling a value from the likelihood p(x_{n+1}|µ). In this case, it is possible to identify the stationary distribution of the Markov chain [8, 9]. The stationary distribution of this Markov chain is the prior predictive distribution π(x) = ∫ p(x|µ) p(µ) dµ (3) being the probability of observing the stimulus x when µ is sampled from the prior. This happens because this Markov chain is a Gibbs sampler for the joint distribution on x and µ defined by multiplying p(x|µ) and p(µ) [9].
This gives a clear characterization of the consequences of serial reproduction: after many reproductions, the stimuli being produced will be sampled from the prior distribution assumed by the participants. Convergence to the prior predictive distribution provides a formal justification for the traditional claims that serial reproduction reveals cultural biases, since those biases would be reflected in the prior. In the special case of reconstruction of stimuli that vary along a single dimension, we can also analytically compute the probability density functions for the transition probabilities and stationary distribution. Applying Equation 2 using the results summarized in the previous section, we have xn+1|xn ∼N(µn, (σ2 x + σ2 n)), where µn = λxn + (1 −λ)µ0, and σ2 n = λσ2 x. Likewise, Equation 3 indicates that the stationary distribution is N(µ0, (σ2 x + σ2 0)). The rate at which the Markov chain converges to the stationary distribution depends on the value of λ. When λ is close to 1, convergence is slow since µn is close to xn. As λ gets closer to 0, µn is more influenced by µ0 and convergence is faster. Since λ = 1/(1 + σ2 x/σ2 0), the convergence rate thus depends on the ratio of the participant’s perceptual noise and the variance of the prior distribution, σ2 x/σ2 0. More perceptual noise results in faster convergence, since the specific value of xn is trusted less; while more uncertainty in the prior results in slower convergence, since xn is given greater weight. 3 Serial reproduction of one-dimensional stimuli as an AR(1) process The special case of serial reproduction of one-dimensional stimuli can also give us further insight into the consequences of modifying our assumptions about storage and reconstruction from memory, by exploiting a further property of the underlying stochastic process: that it is a first-order autoregressive process, abbreviated to AR(1). The general form of an AR(1) process is xn+1 = c + φxn + ǫn+1 (4) where ǫn+1 ∼N(0, σ2 ǫ ). 
Equation 4 has the familiar form of a regression equation, predicting one variable as a linear function of another, plus Gaussian noise. It defines a stochastic process because each variable is being predicted from that which precedes it in sequence. AR(1) models are widely used to model timeseries data, being one of the simplest models for capturing temporal dependency. Just as showing that a stochastic process is a Markov chain provides information about its dynamics and asymptotic behavior, showing that it reduces to an AR(1) process provides access to a number of results characterizing the properties of these processes. If φ < 1 the process has a stationary distribution that is Gaussian with mean c/(1 −φ) and variance σ2 ǫ /(1 −φ2). The autocovariance at a lag of n is φnσ2 ǫ /(1 −φ2), and thus decays geometrically in φ. An AR(1) process thus converges to its stationary distribution at a rate determined by φ. It is straightforward to show that the stochastic process defined by serial reproduction where a sample from the posterior distribution on µ is stored in memory and a new value x is sampled from the likelihood is an AR(1) process. Using the results in the previous section, at the (n + 1)th iteration xn+1 = (1 −λ)µ0 + λxn + ǫn+1 (5) where λ = 1/(1 + σ2 x/σ2 0) and ǫn+1 ∼N(0, (σ2 x + σ2 n)) with σ2 n = λσ2 x. This is an AR(1) process with c = (1 −λ)µ0, φ = λ, and σ2 ǫ = σ2 x + σ2 n. Since λ is less than 1 for any σ2 0 and σ2 x, we can find the stationary distribution by substituting these values into the expressions given above. Identifying serial reproduction for single-dimensional stimuli as an AR(1) process allows us to relax our assumptions about the way that people are storing and reconstructing information. 
The AR(1) model can accommodate different assumptions about memory storage and reconstruction.1 All these ways of characterizing serial reproduction lead to the same basic prediction: that repeatedly reconstructing stimuli from memory will result in convergence to a distribution whose mean corresponds to the mean of the prior. In the remainder of the paper we test this prediction. In the following sections, we present two serial reproduction experiments conducted with stimuli that vary along only one dimension (width of fish). The first experiment follows previous research in using a between-subjects design, with the reconstructions of one participant serving as the stimuli for the next. The second experiment uses a within-subjects design in which each person reconstructs stimuli that they themselves produced on a previous trial, testing the potential of this design to reveal the memory biases of individuals. 4 Experiment 1: Between-subjects serial reproduction This experiment directly tested the basic prediction that the outcome of serial reproduction will reflect people’s priors. Two groups of participants were trained on different distributions of a onedimensional quantity – the width of a schematic fish – that would serve as a prior for reconstructing 1In the memorization phase, the participant’s memory ˆµ can be 1) a sample from the posterior distribution p(µ|xn), as assumed above, or 2) a value such that ˆµ = argmaxµ p(µ|xn), which is also the expected value of the Gaussian posterior, p(µ|xn). In the reproduction phase, the participant’s reproduction xn+1 can be 1) a noisy reconstruction, which is a sample from the likelihood p(xn+1|ˆµ), as assumed above, or 2) a perfect reconstruction from memory, such that xn+1 = ˆµ. This defines four different models of serial reproduction, all of which correspond to AR(1) processes that differ only in the variance σ2 ǫ (although maximizing p(µ|xn) and then storing a perfect reconstruction is degenerate, with σ2 ǫ = 0). 
In all four cases serial reproduction thus converges to a Gaussian stationary distribution with mean µ0, but with different variances. similar stimuli from memory. The two distributions differed in their means, allowing us to examine whether the mean of the distribution produced by serial reproduction is affected by the prior. 4.1 Method The experiment followed the same basic procedure as Bartlett’s classic experiments [2]. Participants were 46 members of the university community. Stimuli were the same as those used in [7]: fish with elliptical bodies and fan-shaped tails. All the fish stimuli varied only in one dimension, the width of the fish, ranging from 2.63cm to 5.76cm. The stimuli were presented on an Apple iMac computer by a Matlab script using PsychToolBox extensions [13, 14]. Participants were first trained to discriminate fish-farm and ocean fish. The width of the fish-farm fish was normally distributed and that of the ocean fish was uniformly distributed between 2.63 and 5.75cm. Two groups of participants were trained on one of the two distributions of fish-farm fish (prior distributions A and B), with different means and same standard deviations. In condition A, µ0 = 3.66cm, σ0 = 1.3cm; in condition B, µ0 = 4.72cm, σ0 = 1.3cm. In the training phase, participants first received a block of 60 trials. On each trial, a stimulus was presented at the center of a computer monitor and participants tried to predict which type of fish it was by pressing one of the keys on the keyboard and they received feedback about the correctness of the prediction. The participants were then tested for 20 trials on their knowledge of the two types of fish. The procedure was the same as the training block except there was no feedback. The trainingtesting loop was repeated until the participants reached 80% correct in using the optimal decision strategy. If a participant could not pass the test after five iterations, the experiment halted. 
In the reproduction phase, the participants were told that they were to record fish sizes for the fish farm. On each trial, a fish stimulus was flashed at the center of the screen for 500ms and then disappeared. Another fish of random size appeared at one of four possible positions near the center of screen and the participants used the up and down arrow keys to adjust the width of the fish until they thought it matched the fish they just saw. The fish widths seen by the first participant in each condition were 120 values randomly sampled from a uniform distribution from 2.63 to 5.75cm. The first participant tried to memorize these random samples and then gave the reconstructions. Each subsequent participant in each condition was then presented with the data generated by the previous participant and they again tried to reconstruct those fish widths. Thus, each participant’s data constitute one slice of time in 120 serial reproduction chains. At the end of the experiment, the participants were given a final 50-trial test to check if their prior distributions had drifted. Ten participants’ data were excluded from the chains based on three criteria: 1) final testing score was less than 80% of optimal performance; 2) the difference between the reproduced value and stimulus shown was greater than the difference between the largest and the smallest stimuli in the training distribution on any trial; 3) there were no adjustments from the starting value of the fish width for more than half of the trials. 4.2 Results and Discussion There were 18 participants in each condition, resulting in 18 generations of serial reproduction. Figure 1 shows the initial and final distributions of the reconstructions, together with the autoregression plots for the two conditions. The mean reconstructed fish widths produced by the first participants in conditions A and B were 4.22 and 4.21cm respectively, which were not statistically significantly different (t(238) = 0.09, p = 0.93). 
For the final participants in each chain, the mean reconstructed fish widths were 3.20 and 3.68cm respectively, a statistically significant difference (t(238) = 6.93, p < 0.001). The difference in means matches the direction of the difference in the training provided in conditions A and B, although the overall size of the difference is reduced and the means of the stationary distributions were lower than those of the distributions used in training. The autoregression plots provide a further quantitative test of the predictions of our Bayesian model. The basic prediction of the model is that reconstruction should look like regression, and this is exactly what we see in Figure 1. The correlation between the stimulus xn and its reconstruction xn+1 is the correlation between the AR(1) model’s predictions and the data, and this correlation was high in both conditions, being 0.91 and 0.86 (p < 0.001) for conditions A and B respectively. Finally, we examined whether the Markov assumption underlying our analysis was valid, by computing the 0.02 0.04 0.06 Fish width (m) Initial distribution Stimuli Condition A Condition B 0.02 0.04 0.06 Fish width (m) Final distribution 0.02 0.04 0.06 0.01 0.02 0.03 0.04 0.05 0.06 0.07 xn xn+1 Autoregression Figure 1: Initial and final distributions for the two conditions in Experiment 1. (a) The distribution of stimuli and Gaussian fits to reconstructions for the first participants in the two conditions. (b) Gaussian fits to reconstructions generated by the 18th participants in each condition. (c) Autoregression plot for xn+1 as a function of xn for the two conditions. correlation between xn+1 and xn−1 given xn. The resulting partial correlation was low for both conditions, being 0.04 and 0.01 in conditions A and B respectively (both p < 0.05). 
5 Experiment 2: Within-subjects serial reproduction The between-subjects design allows us to reproduce the process of information transmission, but our analysis suggests that serial reproduction might also have promise as a method for investigating the memory biases of individuals. To explore the potential of this method, we tested the model with a within-subjects design, in which a participant’s reproduction in the current trial became the stimulus for that same participant in a later trial. Each participant’s responses over the entire experiment thus produced a chain of reproductions. Each participant produced three such chains, starting from widely separated initial values. Control trials and careful instructions were used so that the participants would not realize that some of the stimuli were their own reproductions. 5.1 Method Forty-six undergraduates from the university research participation pool participated the experiment. The basic procedure was the same as Experiment 1, except in the reproduction phase. Each participant’s responses in this phase formed three chains of 40 trials. The chains started with three original stimuli with width values of 2.63cm, 4.19cm, and 5.76cm, then in the following trials, the stimuli participants saw were their own reproductions in the previous trials in the same chain. To prevent participants from realizing this fact, chain order was randomized and the Markov chain trials were intermixed with 40 control trials in which widths were drawn from the prior distribution. 5.2 Results and Discussion Participants’ data were excluded based on the same criteria as used in Experiment 1, with a lower testing score of 70% of optimal performance and one additional criterion relevant to the withinsubjects case: participants were also excluded if the three chains did not converge, with the criterion for convergence being that the lower and upper chains must cross the middle chain. 
After these screening procedures, 40 participants’ data were accepted, with 21 in condition A and 19 in condition B. It took most participants about 20 trials for the chains to converge, so only the second half of the chains (trials 21-40) were analyzed further. The locations of the stationary distributions were measured by computing the means of the reproduced fish widths for each participant. For conditions A (3.66cm) and B (4.72cm), the average of these means was 3.32 and 4.01cm respectively (t(38) = 2.41, p = 0.021). The right panel of Figure Figure 2: Stimuli, training distributions and stationary distributions for Experiment 2. Each data point in the right panel shows the mean of the last 20 iterations for a single participant. Boxes show the 95% confidence interval around the mean for each condition. 0 10 20 30 40 0.02 0.04 0.06 Fish Width (m) condition A Serial Reproduction Training Gaussian Fit Auto Regression xt+1 xt 0 10 20 30 40 0.02 0.04 0.06 Fish Width (m) Iteration condition B xt+1 xt Figure 3: Chains and stationary distributions for individual participants from the two conditions. (a) The three Markov chains generated by each participant, starting from three different values. (b) Training distributions for each condition. (c) Gaussian fits for the last 20 iterations of each participant’s data. (d) Autoregression for the last 20 iterations of each participant’s data. 2 shows the mean values for these two conditions. The basic prediction of the model was borne out: participants converged to distributions that differed significantly in their means when they were exposed to data suggesting a different prior. However, the means were in general lower than those of the prior. This effect was less prominent in the control trials, which produced means of 3.63 and 4.53cm respectively.2 Figure 3 shows the chains, training distributions, the Gaussian fits and the autoregression for the second half of the Markov chains for two participants in the two conditions. 
Correlation analysis showed that the AR(1) model’s predictions are highly correlated with the data generated by each participant, with mean correlations being 0.90 and 0.81 for conditions A and B respectively. The 2Since both experiments produced stationary distributions with means lower than those of the training distributions, we conducted a separate experiment examining the reconstructions that people produced without training. The mean fish width produced by 20 participants was 3.43cm, significantly less than the mean of the initial values of each chain, 4.19cm (t(19) = 3.75, p < 0.01). This result suggested that people seem to have an a priori expectation that fish will have widths smaller than those used as our category means, suggesting that people in the experiments are using a prior that is a compromise between this expectation and the training data. correlations are significant for all participants. The mean partial correlation between xt+1 and xt−1 given xt was low, being 0.07 and 0.11 for conditions A and B respectively, suggesting that the Markov assumption was satisfied. The partial correlations were significant (p < 0.05) for only one participant in condition B. 6 Conclusion We have presented a Bayesian account of serial reproduction, and tested the basic predictions of this account using two strictly controlled laboratory experiments. The results of these experiments are consistent with the predictions of our account, with serial reproduction converging to a distribution that is influenced by the prior distribution established through training. Our analysis connects the biases revealed by serial reproduction with the more general Bayesian strategy of combining prior knowledge with noisy data to achieve higher accuracy [7]. It also shows that serial reproduction can be analyzed using Markov chains and first-order autoregressive models, providing the opportunity to draw on a rich body of work on the dynamics and asymptotic behavior of such processes. 
These connections allows us to provide a formal justification for the idea that serial reproduction changes the information being transmitted in a way that reflects the biases of the people transmitting it, establishing that this result holds under several different characterizations of the processes involved in storage and reconstruction from memory. Acknowledgments This work was supported by grant number 0704034 from the National Science Foundation. References [1] D. L. Schacter, J. T. Coyle, G. D. Fischbach, M. M. Mesulam, and L. E. Sullivan, editors. Memory distortion: How minds, brains, and societies reconstruct the past. Harvard University Press, Cambridge, MA, 1995. [2] F. C. Bartlett. Remembering: a study in experimental and social psychology. Cambridge University Press, Cambridge, 1932. [3] A. Bangerter. Transformation between scientific and social representations of conception: The method of serial reproduction. British Journal of Social Psychology, 39:521–535, 2000. [4] J. Barrett and M. Nyhof. Spreading nonnatural concepts: The role of intuitive conceptual structures in memory and transmission of cultural materials. Journal of Cognition and Culture, 1:69–100, 2001. [5] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990. [6] A. M. Liberman, F. S. Cooper, D. P. shankweiler, and M. Studdert-Kennedy. Perception of the speech code. Psychological Review, 74:431–461, 1967. [7] J. Huttenlocher, L. V. Hedges, and J. L. Vevea. Why do categories affect stimulus judgment? Journal of Experimental Psychology: General, pages 220–241, 2000. [8] T. L. Griffiths and M. L. Kalish. A Bayesian view of language evolution by iterated learning. In B. G. Bara, L. Barsalou, and M. Bucciarelli, editors, Proceedings of the Twenty-Seventh Annual Conference of the Cognitive Science Society, pages 827–832. Erlbaum, Mahwah, NJ, 2005. [9] T. L. Griffiths and M. L. Kalish. Language evolution by iterated learning with bayesian agents. 
Cognitive Science, 31:441–480, 2007. [10] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian data analysis. Chapman & Hall, New York, 1995. [11] P. Hemmer and M. Steyvers. A bayesian account of reconstructive memory. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, 2008. [12] J. R. Norris. Markov Chains. Cambridge University Press, Cambridge, UK, 1997. [13] D. H. Brainard. The Psychophysics Toolbox. Spatial Vision, 10:433–436, 1997. [14] D. G. Pelli. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10:437–442, 1997.
2008
92
3,584
Diffeomorphic Dimensionality Reduction Christian Walder and Bernhard Sch¨olkopf Max Planck Institute for Biological Cybernetics 72076 T¨ubingen, Germany first.last@tuebingen.mpg.de Abstract This paper introduces a new approach to constructing meaningful lower dimensional representations of sets of data points. We argue that constraining the mapping between the high and low dimensional spaces to be a diffeomorphism is a natural way of ensuring that pairwise distances are approximately preserved. Accordingly we develop an algorithm which diffeomorphically maps the data near to a lower dimensional subspace and then projects onto that subspace. The problem of solving for the mapping is transformed into one of solving for an Eulerian flow field which we compute using ideas from kernel methods. We demonstrate the efficacy of our approach on various real world data sets. 1 Introduction The problem of visualizing high dimensional data often arises in the context of exploratory data analysis. For many real world data sets this is a challenging task, as the spaces in which the data lie are often too high dimensional to be visualized directly. If the data themselves lie on a lower dimensional subspace however, dimensionality reduction techniques may be employed, which aim to meaningfully represent the data as elements of this lower dimensional subspace. The earliest approaches to dimensionality reduction are the linear methods known as principal components analysis (PCA) and factor analysis (Duda et al., 2000). More recently however, the majority of research has focussed on non-linear methods, in order to overcome the limitations of linear approaches—for an overview and numerical comparison see e.g. (Venna, 2007; van der Maaten et al., 2008), respectively. In an effort to better understand the numerous methods which have been proposed, various categorizations have been proposed. 
In the present case, it is pertinent to make the distinction between methods which focus on properties of the mapping to the lower dimensional space, and methods which focus on properties of the mapped data, in that space. A canonical example of the latter is multidimensional scaling (MDS), which in its basic form finds the minimizer with respect to y1, y2, . . . , ym of (Cox & Cox, 1994) m X i,j=1 (∥xi −xj∥−∥yi −yj∥)2 , (1) where here, as throughout the paper, the xi ∈Ra are input or high dimensional points, and the yi ∈Rb are output or low dimensional points, so that b < a. Note that the above term is a function only of the input points and the corresponding mapped points, and is designed to preserve the pairwise distances of the data set. The methods which focus on the mapping itself (from the higher to the lower dimensional space, which we refer to as the downward mapping, or the upward mapping which is the converse) are less common, and form a category into which the present work falls. Both auto-encoders (DeMers & Cottrell, 1993) and the Gaussian process latent variable model (GP-LVM) (Lawrence, 2004) also fall into this category, but we focus on the latter as it provides an appropriate transition into the 1 main part the paper. The GP-LVM places a Gaussian process (GP) prior over each high dimensional component of the upward mapping, and optimizes with respect to the set of low dimensional points—which can be thought of as hyper-parameters of the model—the likelihood of the high dimensional points. Hence the GP-LVM constructs a regular (in the sense of regularization, i.e. likely under the GP prior) upward mapping. By doing so, the model guarantees that nearby points in the low dimensional space should be mapped to nearby points in the high dimensional space—an intuitive idea for dimensionality reduction which is also present in the MDS objective (1), above. 
The converse is not guaranteed in the original GP-LVM however, and this has lead to the more recent development of the so-called back-constrained GP-LVM (Lawrence & Candela, 2006), which essentially places an additional GP prior over the downward mapping. By guaranteeing in this way that (the modes of the posterior distributions over) both the upward and downward mappings are regular, the back constrained GP-LVM induces something reminiscent of a diffeomorphic mapping between the two spaces. This leads us to the present work, in which we derive our new algorithm, Diffeomap, by explicitly casting the dimensionality reduction problem as one of constructing a diffeomorphic mapping between the low dimensional space and the subspace of the high dimensional space on which the data lie. 2 Diffeomorphic Mappings and their Practical Construction In this paper we use the following definition: Definition 2.1. Let U and V be open subsets of Ra and Rb, respectively. The mapping F : U →V is said to be a diffeomorphism if it is bijective (i.e. one to one), smooth (i.e. belonging to C∞), and has a smooth inverse map F −1. We note in passing the connection between this definition, our discussion of the GP-LVM, and dimensionality reduction. The GP-LVM constructs a regular upward mapping (analogous to F −1) which ensures that points nearby in Rb will be mapped to points nearby in Ra, a property referred to as similarity preservation in (Lawrence & Candela, 2006). The back constrained GP-LVM simultaneously ensures that the downward mapping (analogous to F) is regular, thereby additionally implementing what its authors refer to as dissimilarity preservation. Finally, the similarity between smoothness (required of F and F −1 in Definition 2.1) and regularity (imposed on the downward and upward mappings by the GP prior in the back constrained GP-LVM) complete the analogy. 
There is also an alternative, more direct motivation for diffeomorphic mappings in the context of dimensionality reduction, however. In particular, a diffeomorphic mapping has the property that it does not lose any information. That is, given the mapping itself and the lower dimensional representation of the data set, it is always possible to reconstruct the original data. There has been significant interest from within the image processing community, in the construction of diffeomorphic mappings for the purpose of image warping (Dupuis & Grenander, 1998; Joshi & Miller, 2000; Karac¸ali & Davatzikos, 2003). The reason for this can be understood as follows. Let I : U →R3 represent the RGB values of an image, where U ⊂R2 is the image plane. If we now define the warped version of I to be I ◦W, then we can guarantee that the warp is topology preserving, i.e. that it does not “tear” the image, by ensuring the W be a diffeomorphism U →U. The following two main approaches to constructing such diffeomorphisms have been taken by the image processing community, the first of which we mention for reference, while the second forms the basis of Diffeomap. It is a notable aside that there seem to be no image warping algorithms analogous to the back constrained GP-LVM, in which regular forward and inverse mappings are simultaneously constructed. 1. Enforcement of the constraint that |J(W)|, the determinant of the Jacobian of the mapping, be positive everywhere. This approach has been successfully applied to the problem of warping 3D magnetic resonance images (Karac¸ali & Davatzikos, 2003), for example, but a key ingredient of that success was the fact that the authors defined the mapping W numerically on a regular grid. For the high dimensional cases relevant to dimensionality reduction however, such a numerical grid is highly computationally unattractive. 2. Recasting the problem of constructing W as an Eulerian flow problem (Dupuis & Grenander, 1998; Joshi & Miller, 2000). 
This approach is the focus of the next section. 2 x φ(x, 1) = ψ(x) 0 s 1 t R (1, v(φ(x, s), s)) (s, φ(x, s)) R Figure 1: The relationship between v(·, ·), φ(·, ·) and ψ(·) for the one dimensional case ψ : R →R. 2.1 Diffeomorphisms via Flow Fields The idea here is to indirectly define the mapping of interest, call it ψ : Ra →Ra, by way of a “time” indexed velocity field v : Ra × R →Ra. In particular we write ψ(x) = φ(x, 1), where φ(x, t) = x + Z t s=0 v(φ(x, s), s)ds. (2) This choice of φ satisfies the following Eulerian transport equation with boundary conditions: ∂φ(x, s) ∂s = v(φ(x, s), s), φ(x, 0) = x. (3) The role of v is to transport a given point x from its original location at time 0 to its mapped location φ(x, 1) by way of a trajectory whose position and tangent vector at time s are given by φ(x, s) and v(φ(x, s), s), respectively (see Figure 1). The point of this construction is that if v satisfies certain regularity properties, then the mapping ψ will be a diffeomorphism. This fact has been proven in a number of places—one particularly accessible example is (Dupuis & Grenander, 1998), where the necessary conditions are provided for the three dimensional case along with a proof that the induced mapping is a diffeomorphism. Generalizing the result to higher dimensions is straightforward—this fact is stated in (Dupuis & Grenander, 1998) along with the basic idea of how to do so. We now offer an intuitive argument for the result. Consider Figure 1, and imagine adding a new starting point x′, along with its associated trajectory. It is clear that for the mapping ψ to be a diffeomorphism, then for any such pair of points x and x′, the associated trajectories must not collide. This is because the two trajectories would be identical after the collision, x and x′ would map to the same point, and hence the mapping would not be invertible. But if v is sufficiently regular then such collisions cannot occur. 
3 Diffeomorphic Dimensionality Reduction The framework of Eulerian flow fields which we have just introduced provides an elegant means of constructing diffeomorphic mappings Ra →Ra, but for dimensionality reduction we require additional ingredients, which we now introduce. The basic idea is to construct a diffeomorphic mapping in such a way that it maps our data set near to a subspace of Ra, and then to project onto this subspace. The subspace we use, call it Sb, is the b-dimensional one spanned by the first b canonical basis vectors of Ra. Let P(a→b) : Ra →Rb be the projection operator which extracts the first b components of the vector it is applied to, i.e. P(a→b)x = (I Z) x, (4) where I ∈Ra×a is the identity matrix and Z ∈Ra×b−a is a matrix of zeros. We can now write the mapping ϕ : Ra →Rb which we propose for dimensionality reduction as ϕ(x) = P(a→b)φ(x, 1), (5) 3 where φ is given by (2). We choose each component of v at each time to belong to a reproducing kernel Hilbert Space (RKHS) H, so that v(·, t) ∈Ha, t ∈[0, 1]. If we define the norm1 ∥v(·, t)∥2 Ha ≜ a X j=1 [v(·, t)]j 2 H , (6) then ∥v(·, t)∥2 Ha < ∞, ∀t ∈[0, 1] is a sufficient condition which guarantees that ψ is a diffeomorphism, provided that some technical conditions are satisfied (Dupuis & Grenander, 1998; Joshi & Miller, 2000). In particular v need not be regular in its second argument. For dimensionality reduction we propose to construct v as the minimizer of O = λ Z 1 t=0 ∥v(·, t)∥2 Hd dt + m X j=1 L (ψ(xj)) , (7) where λ ∈R+ is a regularization parameter. Here, L measures the squared distance to our b dimensional linear subspace of interest Sb, i.e. L(x) = a X d=b+1 [x]2 d . (8) Note that this places special importance on the first b dimensions of the input space of interest— accordingly we make the natural and important preprocessing step of applying PCA such that as much as possible of the variance of the data is captured in these first b dimensions. 
3.1 Implementation One can show that the minimizer in v of (7) takes the form [v(·, t)]d = m X j=1 [αd(t)]j k(φ(xj, t), ·), d = 1 . . . a, (9) where k is the reproducing kernel of H and αd is a function [0, 1] →Rm. This was proven directly for a similar specific case (Joshi & Miller, 2000), but we note in passing that it follows immediately from the celebrated representer theorem of RKHS’s (Sch¨olkopf et al., 2001), by considering a fixed time t. Hence, we have simplified the problem of determining v to one of determining m trajectories φ(xj, ·). This is because not only does (9) hold, but we can use standard manipulations (in the context of kernel ridge regression, for example) to determine that for a given set of such trajectories, αd(t) = K(t)−1ud(t), d = 1, 2, . . . , a, (10) where t ∈[0, 1], K(t) ∈Rm×m, ud(t) ∈Rm and we have let [K(t)]j,k = k(φ(xj, t), φ(xk, t)) along with [ud(t)]j = ∂tφ(xj, t). Note that the invertibility of K(t) is guaranteed for certain kernel functions (including the Gaussian kernel which we employ in all our Experiments, see Section 4), provided that the set φ(xj, t) are distinct. Hence, one can verify using (9), (10) and the reproducing property of k in H (i.e. the fact that ⟨f, k(x, ·)⟩H = f(x), ∀f ∈H), that for the optimal v, ∥v(·, t)∥2 Ha = a X d=1 ud(t)⊤K(t)−1ud(t). (11) This allows us to write our objective (7) in terms of the m trajectories mentioned above: O = λ Z 1 t=0 a X d=1 ud(t)⊤K(t)−1ud(t) + m X j=1 a X d=b+1 [φ(xj, 1)]2 d . (12) So far no approximations have been made, and we have constructed an optimal finite dimensional basis for v(·, t). The second argument of v is not so easily dealt with however, so as an approximate by discretizing the interval [0, 1]. In particular, we let tk = kδ, k = 0, 1, . . . , p, where δ = 1/p, and make the approximation ∂t=tkφ(xj, t) = (φ(xj, tk) −φ(xj, tk−1)) /δ. By making the further 1Square brackets w/ subscripts denote matrix elements, and colons denote entire rows or columns. 
approximation
$$\int_{t_{k-1}}^{t_k} K(t)^{-1}\, dt \approx \delta K(t_{k-1})^{-1},$$
and substituting into (12), we obtain the first form of our problem which is finite dimensional and hence readily optimized, i.e. the minimization of
$$\frac{\lambda}{\delta} \sum_{d=1}^{a} \sum_{k=1}^{p} (\Phi_{k,d} - \Phi_{k-1,d})^\top K(t_{k-1})^{-1} (\Phi_{k,d} - \Phi_{k-1,d}) + \sum_{d=b+1}^{a} \|\Phi_{p,d}\|^2 \qquad (13)$$
with respect to Φ_{k,d} ∈ R^m for k = 1, 2, ..., p and d = 1, 2, ..., a, where [Φ_{k,d}]_j = [φ(x_j, t_k)]_d.

Figure 2: Dimensionality reduction of motion capture data. (a) The data mapped from 102 to 2 dimensions using Diffeomap (the line shows the temporal order in which the input data were recorded). (b)-(d) Three rendered input points corresponding to the marked locations in (a).

3.2 A Practical Reduced Set Implementation

A practical problem with (13) is the computationally expensive matrix inverse. In practice we reduce this burden by employing a reduced set expansion which replaces the sum over 1, 2, ..., m in (9) with a sum over a randomly selected subset I, thereby using |I| = n basis functions to represent v(·, t). In this case it is possible to show using the reproducing property of k(·, ·) that the resulting objective function is identical to (13), but with the matrix K(t_{k-1})^{-1} replaced by the expression
$$K_{m,n} (K_{n,m} K_{m,n})^{-1} K_{n,n} (K_{n,m} K_{m,n})^{-1} K_{n,m}, \qquad (14)$$
where K_{m,n} = K_{n,m}^⊤ ∈ R^{m×n} is the sub-matrix of the kernel matrix formed by taking all of the rows, but only those columns given by I. Similarly, K_{n,n} ∈ R^{n×n} is the square sub-matrix formed by taking a subset of both the rows and columns, namely those given by I. For optimization we also use the gradients of the above expression, the derivation of which we have omitted for brevity. Note however that by factorizing appropriately, the computation of the objective function and its gradients can be performed with an asymptotic time complexity of O(n²(m + a)).
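As a sanity check on expression (14): when the subset I is the full index set, K_{m,n} = K and the expression collapses to K^{-1} exactly, so the reduced set expansion interpolates between exactness and the cheaper n × n inner inverse. A small sketch (`reduced_inverse` is a hypothetical helper name):

```python
import numpy as np

def reduced_inverse(K, idx):
    """Expression (14): K_mn (K_nm K_mn)^{-1} K_nn (K_nm K_mn)^{-1} K_nm,
    the reduced-set replacement for K^{-1}."""
    Kmn = K[:, idx]                   # all rows, columns in the subset I
    Knm = Kmn.T
    Knn = K[np.ix_(idx, idx)]         # rows and columns in the subset I
    inner = np.linalg.inv(Knm @ Kmn)  # the cheap n x n inverse
    return Kmn @ inner @ Knn @ inner @ Knm
```

With n ≪ m this trades accuracy for speed, which is where the O(n²(m + a)) complexity quoted above comes from.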
4 Experiments

It is difficult to objectively compare dimensionality reduction algorithms, as there is no universally agreed upon measure of performance. Algorithms which are generalizations or variations of older ones may be compared side by side with their predecessors, but this is not the case with our new algorithm, Diffeomap. Hence, in this section we attempt to convince the reader of the utility of our approach by visually presenting our results on as many and as varied realistic problems as space permits, while providing pointers to comparable results from other authors. For all experiments we fixed the parameters which trade off between computational speed and accuracy, i.e. we set the temporal resolution p = 20 and the number of basis functions n = 300. We used a Gaussian kernel function k(x, y) = exp(−∥x − y∥²/(2σ²)), and tuned the σ parameter manually along with the regularization parameter λ. For optimization we used a conjugate gradient type method (Carl Rasmussen's minimize.m, which is freely available from http://www.kyb.mpg.de/˜carl) fixed to 1000 iterations and with starting point [Φ_{k,d}]_j = [x_j]_d, k = 1, 2, ..., p.

Figure 3: Vowel data mapped from 24 to 2 dimensions using (a) PCA and (b)-(c) Diffeomap. Plots (b) and (c) differ only in the parameter settings of Diffeomap, with (b) corresponding to minimal one nearest neighbor errors in the low dimensional space; see Section 4.2 for details.

4.1 Motion Capture Data

The first data set we consider consists of the coordinates in R³ of a set of markers placed on a person breaking into a run, sampled at a constant frequency, resulting in m = 217 data points in a = 102 dimensions, which we mapped to b = 2 dimensions using Diffeomap (see Figure 2).
This data set is freely available from http://accad.osu.edu/research/mocap/mocap_data.htm as Figure 1 Run, and was also considered in (Lawrence & Candela, 2006), where it was shown that while the original GP-LVM fails to correctly discover the periodic component of the sequence, the back constrained version maps poses in the same part of the subject's step cycle nearby to each other, while simultaneously capturing variations in the inclination of the subject. Diffeomap also succeeded in this sense, and produced results which are competitive with those of the back constrained GP-LVM.

4.2 Vowel Data

In this next example we consider a data set of a = 24 features (cepstral coefficients and delta cepstral coefficients) of a single speaker performing nine different vowels 300 times per vowel, acquired as training data for a vocal joystick system (Bilmes et al., 2006), and publicly available in pre-processed form from http://www.dcs.shef.ac.uk/˜neil/fgplvm/. Once again we used Diffeomap to map the data to b = 2 dimensions, as depicted in Figure 3. We also depict the poor result of linear PCA, in order to rule out the hypothesis that it is merely the PCA based initialization of Diffeomap (mentioned after equation (8) on page 4) which does most of the work. The results in Figure 3 are directly comparable to those provided in (Lawrence & Candela, 2006) for the GP-LVM, back constrained GP-LVM, and Isomap (Tenenbaum et al., 2000). Visually, the Diffeomap result appears to be superior to those of the GP-LVM and Isomap, and comparable to that of the back constrained GP-LVM. We also measured the performance of a one nearest neighbor classifier applied to the mapped data in R². For the best choice of the parameters σ and λ, Diffeomap made 140 errors, which compares favorably with the figures quoted for Isomap (458), the GP-LVM (226) and the back constrained GP-LVM (155) in (Lawrence & Candela, 2006).
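The one nearest neighbor error count used as a quantitative score here can be computed with a simple leave-one-out rule; a sketch with a hypothetical helper, not the evaluation code used in the paper:

```python
import numpy as np

def one_nn_errors(Z, labels):
    """Count leave-one-out 1-nearest-neighbour errors of the embedded
    points Z (shape (m, b)) against their class labels."""
    Z = np.asarray(Z, float)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point itself
    nn = d2.argmin(axis=1)                # index of nearest neighbour
    labels = np.asarray(labels)
    return int((labels[nn] != labels).sum())
```

Lower is better; a perfect embedding of well-separated classes yields zero errors.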
We emphasize however that this measure of performance is at best a rough one, since by manually varying our choice of the parameters σ and λ, we were able to obtain a result (Figure 3 (c)) which, although it leads to a significantly higher number of such errors (418), is arguably superior from a qualitative perspective to the result with minimal errors (Figure 3 (b)).

4.3 USPS Handwritten Digits

We now consider the USPS database of handwritten digits (Hull, 1994). Following the methodology of the stochastic neighbor embedding (SNE) and GP-LVM papers (Hinton & Roweis, 2003; Lawrence, 2004), we take 600 images per class from the five classes corresponding to digits 0, 1, 2, 3, 4. Since the images are in gray scale and a resolution of 16 by 16 pixels, this results in a data set of m = 3000 examples in a = 256 dimensions, which we again mapped to b = 2 dimensions as depicted in Figure 4. The figure shows the individual points color coded according to class, along with a composite image formed by sequentially drawing each digit in random order at its mapped location, but only if it would not obscure a previously drawn digit.

Figure 4: USPS handwritten digits 0-4 mapped to 2 dimensions using Diffeomap. (a) Mapped points color coded by class label. (b) A composite image of the mapped data; see Section 4.3 for details.

Diffeomap manages to arrange the data in a manner which reveals such image properties as digit angle and stroke thickness. At the same time the classes are reasonably well separated, with the exception of the ones which are split into two clusters depending on the angle. Although unfortunate, we believe that this splitting can be explained by the fact that (a) the left- and right-pointing ones are rather dissimilar in input space, and (b) the number of fairly vertical ones which could help to connect the left- and right-pointing ones is rather small.
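The composite image is built by a greedy non-overlap rule: draw each digit in random order at its mapped location unless it would obscure a previously drawn digit. A minimal sketch of that rule, treating each digit as a square patch; `greedy_placement` and `half_width` are illustrative assumptions, not the paper's plotting code:

```python
def greedy_placement(points, half_width):
    """Keep a point only if its square patch (side 2 * half_width) does
    not overlap the patch of any previously kept point."""
    kept = []
    for (x, y) in points:
        clear = all(abs(x - kx) >= 2 * half_width or
                    abs(y - ky) >= 2 * half_width
                    for (kx, ky) in kept)
        if clear:
            kept.append((x, y))
    return kept
```

Shuffling `points` before the loop reproduces the "random order" aspect; the result depends on that order, which is why the composite image is only a visualization aid.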
Diffeomap seems to produce a result which is superior to that of the GP-LVM (Lawrence, 2004), for example, but may be inferior to that of the SNE (Hinton & Roweis, 2003). We believe this is due to the fact that the nearest neighbor graph used by SNE is highly appropriate to the USPS data set. This is indicated by the fact that a nearest neighbor classifier in the 256 dimensional input space is known to perform strongly, with numerous authors having reported error rates of less than 5% on the ten class classification problem. 4.4 NIPS Text Data Finally, we present results on the text data of papers from the NIPS conference proceedings volumes 0-12, which can be obtained from http://www.cs.toronto.edu/˜roweis/data.html. This experiment is intended to address the natural concern that by working in the input space rather than on a nearest neighbor graph, for example, Diffeomap may have difficulty with very high dimensional data. Following (Hinton & Roweis, 2003; Song et al., 2008) we represent the data as a word frequency vs. document matrix in which the author names are treated as words but weighted up by a factor 20 (i.e. an author name is worth 20 words). The result is a data set of m = 1740 papers represented in a = 13649 words + 2037 authors = 15686 dimensions. Note however that the input dimensionality is effectively reduced by the PCA preprocessing step to m −1 = 1739, that being the rank of the centered covariance matrix of the data. As this data set is difficult to visualize without taking up large amounts of space, we have included the results in the supplementary material which accompanies our NIPS submission. In particular, we provide a first figure which shows the data mapped to b = 2 dimensions, with certain authors (or groups of authors) color coded—the choice of authors and their corresponding color codes follows precisely those of (Song et al., 2008). 
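The "author name is worth 20 words" weighting amounts to scaling the author-indicator columns before stacking them next to the word counts; a sketch with hypothetical inputs and helper name:

```python
import numpy as np

def build_matrix(word_counts, author_counts, author_weight=20):
    """Stack a (documents x words) count matrix and a
    (documents x authors) indicator matrix into one feature matrix,
    with author columns weighted up by a factor of 20."""
    W = np.asarray(word_counts, dtype=float)
    A = np.asarray(author_counts, dtype=float) * author_weight
    return np.hstack([W, A])
```

For the NIPS corpus described above this would yield a 1740 × 15686 matrix, whose effective dimensionality PCA then reduces to the rank of the centered covariance, m − 1 = 1739.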
A second figure shows a plain marker drawn at the mapped locations corresponding to each of the papers. This second figure also contains, however, the paper title and authors of the corresponding papers, which are revealed when the user moves the mouse over the marked locations. Hence, this second figure allows one to browse the NIPS collection contextually. Since the mapping may be hard to judge, we note in passing that the correct classification rate of a one nearest neighbor classifier applied to the result of Diffeomap was 48%, which compares favorably to the rate of 33% achieved by linear PCA (which we use for preprocessing). To compute this score we treated authors as classes, and considered only those authors who were color coded both in our supplementary figure and in (Song et al., 2008).

5 Conclusion

We have presented an approach to dimensionality reduction which is based on the idea that the mapping between the lower and higher dimensional spaces should be diffeomorphic. We provided a justification for this approach, by showing that the common intuition that dimensionality reduction algorithms should approximately preserve pairwise distances of a given data set is closely related to the idea that the mapping induced by the algorithm should be a diffeomorphism. This realization allowed us to take advantage of established mathematical machinery in order to convert the dimensionality reduction problem into a so-called Eulerian flow problem, the solution of which is guaranteed to generate a diffeomorphism. Requiring that the mapping and its inverse both be smooth is reminiscent of the GP-LVM algorithm (Lawrence & Candela, 2006), but has the advantage in terms of statistical strength that we need not separately estimate a mapping in each direction.
We showed results of our algorithm, Diffeomap, on a relatively small motion capture data set, a larger vowel data set, the USPS image data set, and finally the rather high dimensional data set derived from the text corpus of NIPS papers, with successes in all cases. Since our new approach performs well in practice while being significantly different to all previous approaches to dimensionality reduction, it has the potential to lead to a significant new direction in the field.

References

Bilmes, J., et al. (2006). The Vocal Joystick. Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing. Toulouse, France.
Cox, T., & Cox, M. (1994). Multidimensional scaling. London, UK: Chapman & Hall.
DeMers, D., & Cottrell, G. (1993). Non-linear dimensionality reduction. NIPS 5 (pp. 580–587). Morgan Kaufmann, San Mateo, CA.
Duda, R. O., Hart, P. E., & Stork, D. G. (2000). Pattern classification. 2nd edition. New York: Wiley.
Dupuis, P., & Grenander, U. (1998). Variational problems on flows of diffeomorphisms for image matching. Quarterly of Applied Mathematics, LVI, 587–600.
Hinton, G., & Roweis, S. (2003). Stochastic neighbor embedding. In S. Becker, S. Thrun and K. Obermayer (Eds.), Advances in Neural Information Processing Systems 15, 833–840. Cambridge, MA: MIT Press.
Hull, J. J. (1994). A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell., 16, 550–554.
Joshi, S. C., & Miller, M. I. (2000). Landmark matching via large deformation diffeomorphisms. IEEE Transactions on Image Processing, 9, 1357–1370.
Karaçali, B., & Davatzikos, C. (2003). Topology preservation and regularity in estimated deformation fields. Information Processing in Medical Imaging (pp. 426–437).
Lawrence, N. D. (2004). Gaussian process latent variable models for visualisation of high dimensional data. In S. Thrun, L. Saul and B. Schölkopf (Eds.), NIPS 16. Cambridge, MA: MIT Press.
Lawrence, N. D., & Candela, J. Q. (2006).
Local distance preservation in the GP-LVM through back constraints. In International Conference on Machine Learning, 513–520. ACM.
Schölkopf, B., Herbrich, R., & Smola, A. J. (2001). A generalized representer theorem. Proc. of the 14th Annual Conf. on Computational Learning Theory (pp. 416–426). London, UK: Springer-Verlag.
Song, L., Smola, A., Borgwardt, K., & Gretton, A. (2008). Colored maximum variance unfolding. In J. Platt, D. Koller, Y. Singer and S. Roweis (Eds.), NIPS 20, 1385–1392. Cambridge, MA: MIT Press.
Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2319–2323.
van der Maaten, L. J. P., Postma, E., & van den Herik, H. (2008). Dimensionality reduction: A comparative review. In T. Ertl (Ed.), Submitted to Neurocognition. Elsevier.
Venna, J. (2007). Dimensionality reduction for visual exploration of similarity structures. Doctoral dissertation, Helsinki University of Technology.
Using Bayesian Dynamical Systems for Motion Template Libraries Silvia Chiappa, Jens Kober, Jan Peters Max-Planck Institute for Biological Cybernetics Spemannstraße 38, 72076 Tübingen, Germany {silvia.chiappa,jens.kober,jan.peters}@tuebingen.mpg.de Abstract Motor primitives or motion templates have become an important concept for both modeling human motor control as well as generating robot behaviors using imitation learning. Recent impressive results range from humanoid robot movement generation to timing models of human motions. The automatic generation of skill libraries containing multiple motion templates is an important step in robot learning. Such a skill learning system needs to cluster similar movements together and represent each resulting motion template as a generative model which is subsequently used for the execution of the behavior by a robot system. In this paper, we show how human trajectories captured as multi-dimensional time-series can be clustered using Bayesian mixtures of linear Gaussian state-space models based on the similarity of their dynamics. The appropriate number of templates is automatically determined by enforcing a parsimonious parametrization. As the resulting model is intractable, we introduce a novel approximation method based on variational Bayes, which is especially designed to enable the use of efficient inference algorithms. On recorded human Balero movements, this method is not only capable of finding reasonable motion templates but also yields a generative model which works well in the execution of this complex task on a simulated anthropomorphic SARCOS arm. 1 Introduction Humans demonstrate a variety and versatility of movements far beyond the reach of current anthropomorphic robots. It is widely believed that human motor control largely relies on a set of “mental templates” [1] better known as motor primitives or motion templates. 
This concept has gained increasing attention both in the human motor control literature [1, 2] as well as in robot imitation learning [3, 4]. The recent suggestion of Ijspeert et al. [3] to use dynamical systems as motor primitives has allowed this approach to scale in the domain of humanoid robot imitation learning and has yielded a variety of interesting applications as well as follow-up publications. However, up to now, the focus of motion template learning has largely been on single template acquisition and self-improvement. Future motor skill learning systems on the other hand need to be able to observe several different behaviors from human presenters and compile libraries of motion templates directly from these examples with as little predetermined structure as possible. An important part of such a motor skill learning system is the clustering of many presented movements into different motion templates. Human trajectories are recorded as multi-dimensional time-series of joint angles as well as joint velocities using either a marker-based tracking setup (e.g., a VICON™ setup), a sensing suit (e.g., a SARCOS SenSuit) or a haptic interface (e.g., an anthropomorphic master arm). Inspired by Ijspeert et al. [3], we intend to use dynamical systems as generative models of the presented trajectories, i.e., as motion templates. Our goal is to cluster these multi-dimensional time-series automatically into a small number of motion templates without pre-labeling of the trajectories or assuming an a priori number of templates. Thus, the system has to discover the underlying motion templates, determine the number of templates as well as learn the underlying skill sufficiently well for robot application. In principle, one could use a non-generative clustering approach (e.g., a type of K-means) with a method for selecting an appropriate number of clusters and, subsequently, fit a generative model to each cluster.
Here we prefer to take a different approach in which the clustering and learning of the underlying time-series dynamics are performed at the same time. This way we aim at ensuring that each obtained cluster can be modeled well by its representative generative model. To date the majority of the work on time-series clustering using generative models has focused on static mixture models. Clustering long or high-dimensional time-series is hard when approached with static models, such that collapsing the trajectories to a few relevant features is often required. This problem would be severe for a high-dimensional motor learning system where the data needs to be represented at high sampling rates in order to ensure the capturing of all relevant details for motor skill learning. In addition, it is difficult to ensure smoothness when the time-series display high variability and, therefore, to obtain accurate generative models with static approaches. A natural alternative is to use mixtures of temporal models which explicitly model the dynamics of the time-series. In this paper, we use Mixtures of Linear Gaussian State-Space Models (LGSSMs). LGSSMs are probabilistic temporal models which, despite their computational simplicity, can represent many natural dynamical processes [5]. As we will see later in this paper, LGSSMs are powerful enough to model our time-series sufficiently accurately. For determining the number of clusters, most probabilistic approaches in the past used to train a separate model for each possible cluster configuration, and then select the one which would optimize the trade-off between accuracy and complexity, as measured for example by the Bayesian Information Criterion [6, 7]. The drawback of these approaches is that training many separate models can lead to a large computational overhead, such that heuristics are often needed to restrict the number of possible cluster configurations [7]. 
A less computationally expensive alternative is offered by recent Bayesian approaches where the model parameters are treated as random variables and integrated out, yielding the marginal likelihood of the data. An appropriate prior distribution can be used to enforce a sparse representation, i.e., to select the smallest set of parameters that explains the data well by making the remaining parameters inactive. As a result, the structure selection can be achieved within the model, without the need to train and compare several separate models. As a Bayesian treatment of the Mixtures of Linear Gaussian State-Space Models is intractable, we introduce a deterministic approximation based on variational Bayes. Importantly, our approximation is especially designed to enable the use of standard LGSSM inference methods for the hidden state variables, which has the advantage of minimizing numerical instabilities. As a realistically difficult scenario in this first step towards large motor skill libraries, we have selected the game of dexterity Balero (also known as Ball-In-A-Cup or Kendama, see [8]) as an evaluation platform. Several substantially different types of movements exist for performing this task and humans tend to have a large variability in movement execution [9]. From a robotics point of view, Balero can be considered sufficiently complex as it involves movements in all major seven degrees of freedom of a human arm as well as an anthropomorphic robot arm. We are able to show that the presented method gives rise to a reasonable number of clusters representing quite distinct movements and that the resulting generative models can be used successfully as motion templates in physically realistic simulations. In the remainder of the paper, we will proceed as follows.
We will first introduce a generative approach for clustering and modeling multi-dimensional time-series with Bayesian Mixtures of LGSSMs and describe how this approach can be made tractable using a variational approximation. We will then show that the resulting model can be used to infer the motion templates underlying a set of human demonstrations, and give evidence that the generative model representing each motion template is sufficiently accurate for control in a mechanically plausible simulation of the SARCOS Master Arm.

2 Bayesian Mixtures of Linear Gaussian State-Space Models

Our goal is to model both human and robot movements in order to build motion template libraries. In this section, we describe our Bayesian modeling approach and discuss both the underlying assumptions as well as how the structure of the model is selected. As the resulting model is not tractable for analytical solution, we introduce an approximation method based on variational Bayes.

2.1 Modeling Approach

In our Bayesian approach to Mixtures of Linear Gaussian State-Space Models (LGSSMs), we are given a set of N time-series v^{1:N}_{1:T} of length T (we use v^{1:N}_{1:T} as a shorthand for v^1_1, ..., v^1_T, ..., v^N_1, ..., v^N_T), for which we define the following marginal likelihood:
$$p(v^{1:N}_{1:T} \mid \hat\Theta_{1:K}, \gamma) = \sum_{z^{1:N}} \int_{\Theta_{1:K}} p(v^{1:N}_{1:T} \mid z^{1:N}, \Theta_{1:K})\, p(\Theta_{1:K} \mid \hat\Theta_{1:K}) \int_{\pi} p(z^{1:N} \mid \pi)\, p(\pi \mid \gamma),$$
where z^n ∈ {1, ..., K} indicates which of a set of K LGSSMs generated the sequence v^n_{1:T}. The parameters of LGSSM k are denoted by Θ_k and have a prior distribution depending on hyperparameters Θ̂_k. The K-dimensional vector π includes the prior probabilities of the time-series generation for each LGSSM and has prior distribution hyperparameter γ. The optimal hyperparameters are estimated by type-II maximum likelihood [10], i.e., by maximizing the marginal likelihood over Θ̂_{1:K} and γ. Clustering can be performed by inferring the LGSSM that most likely generated the sequence v^n_{1:T}, i.e., by computing arg max_k p(z^n = k | v^{1:N}_{1:T}, Θ̂_{1:K}, γ).

Modeling p(v^{1:N}_{1:T} | z^{1:N}, Θ̂_{1:K}).
As a generative temporal model for each time-series, we employ a Linear Gaussian State-Space Model [5] that assumes that the observations v_{1:T}, with v_t ∈ R^V, are generated from a latent Markovian linear dynamical system with hidden states h_{1:T}, with h_t ∈ R^H, according to
$$v_t = B h_t + \eta^v_t, \quad \eta^v_t \sim \mathcal{N}(0_V, \Sigma_V), \qquad h_t = A h_{t-1} + \eta^h_t, \quad \eta^h_t \sim \mathcal{N}(\mu_t, \Sigma_H). \qquad (1)$$
(Here, N(m, S) denotes a Gaussian with mean m and covariance S, and 0_X denotes an X-dimensional zero vector. The initial latent state h_1 is drawn from N(µ_1, Σ).) Standard LGSSMs assume a zero-mean hidden-state noise (µ_t ≡ 0_H). In our application, the use of a time-dependent mean µ_t ≠ 0_H leads to a superior modeling accuracy. A probabilistic formulation of the LGSSM is given by
$$p(v_{1:T}, h_{1:T} \mid \Theta) = p(v_1 \mid h_1, \Theta)\, p(h_1 \mid \Theta) \prod_{t=2}^{T} p(v_t \mid h_t, \Theta)\, p(h_t \mid h_{t-1}, \Theta),$$
with p(h_t | h_{t−1}, Θ) = N(A h_{t−1} + µ_t, Σ_H), p(h_1 | Θ) = N(µ_1, Σ), p(v_t | h_t, Θ) = N(B h_t, Σ_V), and Θ = {A, B, Σ_H, Σ_V, µ_{1:T}, Σ}. Due to the simple structure of the model, performing inference, that is, computing quantities such as p(h_t | v_{1:T}, Θ), can be efficiently achieved in O(T) operations.

In the presented Bayesian approach, we define a prior distribution p(Θ | Θ̂) over the parameters Θ, where Θ̂ are the associated hyperparameters. More specifically, we define zero-mean Gaussians on the elements of A and on the columns of B by
$$p(A \mid \alpha, \Sigma_H^{-1}) = \prod_{i,j=1}^{H} \frac{\alpha_{ij}^{1/2}}{\sqrt{2\pi [\Sigma_H]_{ii}}}\, e^{-\frac{\alpha_{ij}}{2} [\Sigma_H^{-1}]_{ii} A_{ij}^2}, \qquad p(B \mid \beta, \Sigma_V^{-1}) = \prod_{j=1}^{H} \frac{\beta_j^{V/2}}{\sqrt{|2\pi \Sigma_V|}}\, e^{-\frac{\beta_j}{2} B_j^\top \Sigma_V^{-1} B_j},$$
where [X]_{ij} and X_j denote the ij-th element and the j-th column of matrix X respectively, and α and β are a set of hyperparameters which need to be optimized. We make the assumption that Σ_H^{-1}, Σ_V^{-1} and Σ^{-1} are diagonal and define Gamma distributions on them. For µ_1 we define a zero-mean Gaussian prior, while we formally treat µ_{2:T} as hyperparameters and determine their
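As a concrete illustration of Equation (1), including the time-dependent noise mean µ_t used here in place of the standard zero mean, the following sketch draws one observation sequence from an LGSSM. This is illustrative code with assumed argument names, not the paper's implementation:

```python
import numpy as np

def sample_lgssm(A, B, mu, Sigma_h, Sigma_v, Sigma1, T, rng):
    """Draw v_{1:T} from the LGSSM of Eq. (1).
    mu has shape (T, H); h_1 ~ N(mu[0], Sigma1), and for t > 1
    h_t = A h_{t-1} + eta_h with eta_h ~ N(mu[t], Sigma_H)."""
    V = B.shape[0]
    h = rng.multivariate_normal(mu[0], Sigma1)
    vs = [B @ h + rng.multivariate_normal(np.zeros(V), Sigma_v)]
    for t in range(1, T):
        h = A @ h + rng.multivariate_normal(mu[t], Sigma_h)
        vs.append(B @ h + rng.multivariate_normal(np.zeros(V), Sigma_v))
    return np.array(vs)
```

With all covariances set to zero the recursion becomes deterministic, which makes the role of the nonzero means µ_t easy to see.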
optimal values. (The dependency of the priors on Σ_H and Σ_V is chosen specifically to render a variational implementation feasible.) These choices are made in order to render our Bayesian treatment feasible and to obtain a sparse parametrization, as discussed in more detail below.

In the resulting mixture model, we consider a set of K such Bayesian LGSSMs. The joint distribution over all sequences given the indicator variables and hyperparameters is defined as
$$p(v^{1:N}_{1:T} \mid z^{1:N}, \hat\Theta_{1:K}) = \int_{\Theta_{1:K}} \Big\{ \prod_{n=1}^{N} p(v^n_{1:T} \mid z^n, \Theta_{1:K}) \Big\} \prod_{k=1}^{K} p(\Theta_k \mid \hat\Theta_k),$$
where p(v^n_{1:T} | z^n = k, Θ_{1:K}) ≡ p(v^n_{1:T} | Θ_k) denotes the probability of time-series v^n_{1:T} given that parameters Θ_k have been employed to generate it.

Modeling p(z^{1:N} | γ). As prior for π, we define a symmetric Dirichlet distribution
$$p(\pi \mid \gamma) = \frac{\Gamma(\gamma)}{\Gamma(\gamma/K)^K} \prod_{k=1}^{K} \pi_k^{\gamma/K - 1},$$
where Γ(·) is the Gamma function and γ denotes a hyperparameter that needs to be optimized. This distribution is conjugate to the multinomial, which greatly simplifies our Bayesian treatment. To model the joint indicator variables, we define
$$p(z^{1:N} \mid \gamma) = \int_{\pi} \Big\{ \prod_{n=1}^{N} p(z^n \mid \pi) \Big\}\, p(\pi \mid \gamma),$$
where p(z^n = k | π) ≡ π_k.

Such a Bayesian approach favors simple model structures. In particular, the priors on A^k and B^k enforce a sparse parametrization since, during learning, many α^k_{ij} and β^k_j get close to infinity, whereby (the posterior distribution of) A^k_{ij} and B^k_j get close to zero (see [11] for an analysis of this pruning effect). This enables us to achieve structure selection within the model. Specifically, this approach ensures that the unnecessary LGSSMs are pruned out from the model during training (for certain k, all elements of B^k are pruned out, such that LGSSM k becomes inactive, i.e., p(z^n = k | v^{1:N}_{1:T}, Θ̂_{1:K}, γ) = 0 for all n).

2.2 Model Intractability and Approximate Solution

The Bayesian treatment of the model is non-trivial as the integration over the parameters Θ_{1:K} and π renders the computation of the required posterior distributions intractable.
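The pruning effect of the α and β hyperparameters is the standard behavior of a zero-mean Gaussian prior: as the prior precision on a parameter grows, its posterior mean is shrunk toward zero and the parameter becomes inactive. A one-dimensional sketch of this mechanism (a hypothetical helper illustrating the shrinkage formula, not the model's update equations):

```python
def ridge_posterior_mean(xs, ys, beta, noise_prec=1.0):
    """Posterior mean of a scalar weight w in y = w * x + Gaussian noise,
    under a zero-mean Gaussian prior on w with precision beta.
    Large beta drives the estimate toward zero (pruning)."""
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return noise_prec * sxy / (noise_prec * sxx + beta)
```

In the mixture model this happens per element of A^k and per column of B^k, with the precisions themselves optimized, so the data decides which parameters, and ultimately which LGSSMs, survive.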
This problem results from the coupling in the posterior distributions between the hidden state variables h^{1:N}_{1:T} and the parameters Θ_{1:K}, as well as between the indicators z^{1:N} and π, Θ_{1:K}. To deal with this intractability, we use a deterministic approximation method based on variational Bayes.

Variational Approximation. In our variational approach we introduce a new distribution q and make the following approximation (here we describe a collapsed approximation over π [13]; to simplify the notation, we omit conditioning on v^{1:N}_{1:T}, Θ̂_{1:K}, γ for the q distribution):
$$p(z^{1:N}, h^{1:N}_{1:T}, \Theta_{1:K} \mid v^{1:N}_{1:T}, \hat\Theta_{1:K}, \gamma) \approx q(h^{1:N}_{1:T} \mid z^{1:N})\, q(z^{1:N})\, q(\Theta_{1:K}). \qquad (2)$$
That is, we approximate the posterior distribution of the hidden variables of the model by one in which the hidden states are decoupled from the parameters given the indicator variables, and in which the indicators are decoupled from the parameters. The approximation is achieved with a variational expectation-maximization algorithm which minimizes the KL divergence between the right and left hand sides of Equation (2), or, equivalently, maximizes a tractable lower bound on the log-likelihood, log p(v^{1:N}_{1:T} | Θ̂_{1:K}, γ) ≥ F(Θ̂_{1:K}, γ, q), with respect to q for fixed Θ̂_{1:K} and γ and vice-versa. The sequence v^n_{1:T} is then placed in the most likely LGSSM by computing arg max_k q(z^n = k).

Figure 1: This figure shows one of the Balero motion templates found by our clustering method, i.e., the cluster C2 in Figure 2. Here, a sideways movement with a subsequent catch is performed and the uppermost row illustrates this movement with a symbolic sketch. The middle row shows an execution of the movement generated with the LGSSM representing the cluster C2. The lowest row shows a recorded human movement which was attributed to cluster C2 by our method. Note that movements generated from LGSSMs representing other clusters differ significantly.

Resulting Updates.
While the space does not suffice for a complete derivation, we will briefly sketch the updates for q. Additional details and the updates for the hyperparameters can be found in [12]. The updates consist of a parameter update, an indicator variable update and a latent state update.

First, the approximate parameter posterior is given by
$$q(\Theta_k) \propto p(\Theta_k \mid \hat\Theta_k)\, e^{\sum_{n=1}^{N} q(z^n = k)\, \langle \log p(v^n_{1:T}, h^n_{1:T} \mid \Theta_k) \rangle_{q(h^n_{1:T} \mid z^n = k)}},$$
where ⟨·⟩_q denotes expectation with respect to q. The specific choice for p(Θ_k | Θ̂_k) makes the computation of this posterior relatively straightforward, since q(Θ_k) is a distribution of the same type.

Second, the approximate posterior over the indicator variables is given by
$$q(z^n = k) \propto e^{H_q(h^n_{1:T} \mid z^n = k) + \langle \log p(z^n = k \mid z^{\neg n}, \gamma) \rangle_{\prod_{m \neq n} q(z^m)}}\, e^{\langle \log p(v^n_{1:T}, h^n_{1:T} \mid \Theta_k) \rangle_{q(h^n_{1:T} \mid z^n = k)\, q(\Theta_k)}},$$
where H_q(x) denotes the entropy of the distribution q(x) and z^{¬n} includes all indicator variables except for z^n. Due to the choice of a Dirichlet prior, the term p(z^n = k | z^{¬n}, γ) = ∫_π p(z^n = k | z^{¬n}, π) p(π | γ) can be determined analytically. However, the required average over this term is computationally expensive, and, thus, we approximate it using a second order expansion [13].

The third and most challenging update is the one of the hidden states,
$$q(h^n_{1:T} \mid z^n = k) \propto e^{\langle \log p(v^n_{1:T}, h^n_{1:T} \mid \Theta_k) \rangle_{q(\Theta_k)}}. \qquad (3)$$
Whilst computing this joint density is relatively straightforward, the parameter and indicator variable updates require the non-trivial estimation of the posterior averages ⟨h^n_t⟩ and ⟨h^n_t h^{n⊤}_{t−1}⟩ with respect to this distribution. Following a similar approach to the one proposed in [14] for the Bayesian LGSSM, we reformulate the right hand side of Equation (3) as proportional to the distribution of an augmented LGSSM, such that standard inference routines for the LGSSM can be used.
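The indicator update above yields unnormalized log-scores per sequence and cluster; normalizing them gives the responsibilities q(z^n = k), and arg max over k gives the hard cluster assignment used for clustering. A small numerically stable sketch with hypothetical helper names:

```python
import numpy as np

def responsibilities(log_scores):
    """Turn unnormalised log q(z_n = k) values (one row per sequence,
    one column per LGSSM) into normalised responsibilities, subtracting
    the row maximum before exponentiating for numerical stability."""
    s = log_scores - log_scores.max(axis=1, keepdims=True)
    q = np.exp(s)
    return q / q.sum(axis=1, keepdims=True)

def assign(log_scores):
    """Hard assignment arg max_k q(z_n = k)."""
    return responsibilities(log_scores).argmax(axis=1)
```

The max-subtraction leaves the normalized result unchanged while avoiding overflow when log-likelihood terms of long sequences are large in magnitude.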
3 Results

In this section we show that the model presented in Section 2 can be used effectively both for inferring the motion templates underlying a set of human trajectories and for approximating motion templates with dynamical systems. For doing so, we take the difficult task of Balero, also known as Ball-In-A-Cup or Kendama, and collect human executions of this task using a motion capture setup. We show that the presented model successfully extracts meaningful human motion templates underlying Balero, and that the movements generated by the model are successful in simulation of the Balero task on an anthropomorphic SARCOS arm.

Figure 2: In this figure, we show nine plots where each plot represents one cluster found by our method. Each of the five shown trajectories in the respective clusters represents a different recorded Balero movement. For better visualization, we do not show joint trajectories here but rather the trajectories of the cup, which have an easier physical interpretation and, additionally, reveal the differences between the isolated clusters. All axes show units in meters.

3.1 Data Generation of Balero Motions

In the Balero game of dexterity, a human is given a toy consisting of a cup with a ball attached by a string. The goal of the human is to toss the ball into the cup. Humans perform a wide variety of different movements in order to achieve this task [9]. For example, three very distinct movements are: (i) swing the hand slightly upwards to the side and then go back to catch the ball, (ii) hold the cup high and then move very fast to catch the ball, and (iii) jerk the cup upwards and catch the ball in a fast downwards movement.
Whilst the difference between these three movements is significant and can easily be detected visually, there exist many other movements for which this is not the case. We collected 124 different Balero trajectories where the subject was free to select the employed movement. To do so, we used a VICON data collection system, which samples the trajectories at 200 Hz, to track both the cup and all seven major degrees of freedom of the human arm. For the evaluation of our method, we considered the seven joint angles of the human presenter as well as the corresponding seven estimated joint velocities. The bottom row of Figure 1 shows how the human motion is collected with the VICON motion tracking setup. As we will see later, this specific movement is assigned by our method to cluster C2, whose representative generative LGSSM can be used successfully to imitate this motion (middle row). A sketch of the represented movement is shown in the top row of Figure 1.

3.2 Clustering and Imitation of Motion Templates

We trained the variational method with different initial conditions, hidden dimension H = 35, and a number of clusters K varied from 20 to 50, in order to avoid suboptimal results due to local maxima. The resulting clustering contains nine active motion templates. These are plotted in Figure 2 where, instead of the 14-dimensional joint angles and velocities, we show the three-dimensional cup trajectories resulting from these joint movements, as it is easier for humans to make sense of Cartesian trajectories. Clusters C1, C2 and C3 are movements to the side which subsequently catch the ball. Here, C1 is a short jerk, C3 appears to be a circular, somewhat jerky movement, while C2 uses a longer but smoother movement to induce kinetic energy in the ball.
Motion templates C4 and C5 are dropping movements where the cup moves down fast for more than 1.2 m and then catches the ball.

Figure 3: (a) Time-series recorded from two executions of the Balero movement assigned by our model to cluster C1. The first and second rows show the positions [rad] and velocities [rad/s], respectively (for better visualization, each time-series component is plotted with its mean removed). (b) Two executions of the Balero movement generated by our trained model using the probability distributions of cluster C1.

The template C5 is a smoother movement than C4 with a wider catching movement. For C6 and C7, we observe a significantly different movement where the cup is jerked upwards, dragging the ball in this direction, and then catches the ball on the way down. Clusters C8 and C9 exhibit the most interesting movement, where the main motion is forwards-backwards and the ball swings into the cup. In C8 this is achieved by simultaneously moving upwards, while in C9 there is little loss of height.

To generate Balero movements with our trained model, we can use the recursive formulation of the LGSSM given by Equation 1 where, for each cluster k, $A^k$, $B^k$ and $\mu^k_1$ are replaced by the mean values of their inferred Gaussian q distributions, while the noise covariances are replaced by the modes of their Gamma q distributions. The initial hidden state $h_1$ and the noise elements $\eta^h_t$ and $\eta^v_t$ are sampled from their respective q distributions, whilst the inferred optimal values are used for $\mu^k_{2:T}$. In Figure 3 (a) we plot two recorded executions of the Balero task assigned by our model to cluster C1.
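The generative procedure just described reduces to ancestral sampling from a linear Gaussian state-space model once point estimates of the parameters are plugged in. A minimal sketch under generic parameter names (not the paper's inferred quantities, and omitting the time-dependent means $\mu^k_{2:T}$ for brevity):

```python
import numpy as np

def sample_lgssm(A, B, mu1, Sigma1, Sigma_h, Sigma_v, T, rng=np.random.default_rng(0)):
    """Ancestral sampling: h_1 ~ N(mu1, Sigma1), then
    h_t = A h_{t-1} + eta_h and v_t = B h_t + eta_v for t = 1..T."""
    h = rng.multivariate_normal(mu1, Sigma1)
    vs = []
    for t in range(T):
        if t > 0:
            h = rng.multivariate_normal(A @ h, Sigma_h)  # transition noise eta_h
        vs.append(rng.multivariate_normal(B @ h, Sigma_v))  # emission noise eta_v
    return np.array(vs)  # shape (T, dim of v)
```

Repeated calls with the same parameters produce time-series that share dynamics but differ through the sampled noise, mirroring the variability across generated executions in Figure 3 (b).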
As we can see, the two executions have similar dynamics but also display some differences due to human variability in performing the same type of movement. In Figure 3 (b) we plot two executions generated by our model using the learned distributions representing cluster C1. Our model generates time-series with dynamics very similar to those of the recorded time-series. To investigate the accuracy of the obtained motion templates, we used them to execute Balero movements on a simulated anthropomorphic SARCOS arm. Inspired by Miyamoto et al. [15], a small visual feedback term based on a Jacobian-transpose method was activated when the ball was within 3 cm, in order to ensure task fulfillment. We found that our motion templates are accurate enough to generate successful task executions. This can be seen in Figure 1 for cluster C2 (middle row) and in the video on the authors' website.

4 Conclusions

In this paper, we addressed the problem of automatically generating skill libraries for both robot learning and human motion analysis, cast as an unsupervised time-series clustering and learning problem based on human trajectories. We introduced a novel Bayesian temporal mixture model based on a variational approximation, designed to enable the use of efficient inference algorithms. We demonstrated that our model gives rise to a meaningful clustering of human executions of the difficult game of dexterity Balero, and that it is able to generate time-series which are very close to the recorded ones. Finally, we showed that the model can be used to obtain successful executions of the Balero movements on a physically realistic simulation of the SARCOS Master Arm.

5 Acknowledgments

The authors would like to thank David Barber for useful discussions and Betty Mohler for help with data collection.

References

[1] T. Flash and B. Hochner. Motor primitives in vertebrates and invertebrates. Current Opinion in Neurobiology, 15(6):660–666, 2005.
[2] B. Williams, M. Toussaint, and A. Storkey. Modelling motion primitives and their timing in biologically executed movements. In Advances in Neural Information Processing Systems 20, pages 1609–1616, 2008.
[3] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems 15, pages 1547–1554, 2003.
[4] S. Calinon, F. Guenter, and A. Billard. On learning, representing and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man and Cybernetics, Part B, 37(2):286–298, 2007.
[5] J. Durbin and S. J. Koopman. Time Series Analysis by State Space Methods. Oxford Univ. Press, 2001.
[6] Y. Xiong and D.-Y. Yeung. Mixtures of ARMA models for model-based time series clustering. In Proceedings of the IEEE International Conference on Data Mining, pages 717–720, 2002.
[7] C. Li and G. Biswas. A Bayesian approach to temporal data clustering using hidden Markov models. In Proceedings of the International Conference on Machine Learning, pages 543–550, 2000.
[8] J. Kober, B. Mohler, and J. Peters. Learning perceptual coupling for motor primitives. In International Conference on Intelligent Robots and Systems, pages 834–839, 2008.
[9] S. Fogel, J. Jacob, and C. Smith. Increased sleep spindle activity following simple motor procedural learning in humans. Actas de Fisiologia, 7(123), 2001.
[10] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge Univ. Press, 2003.
[11] D. Wipf, J. Palmer, and B. Rao. Perspectives on sparse Bayesian learning. In Advances in Neural Information Processing Systems 16, 2004.
[12] S. Chiappa and D. Barber. Dirichlet mixtures of Bayesian linear Gaussian state-space models: a variational approach. Technical Report 161, MPI for Biological Cybernetics, Tübingen, Germany, 2007.
[13] K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture models. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 2796–2801, 2007.
[14] D. Barber and S. Chiappa. Unified inference for variational Bayesian linear Gaussian state-space models. In Advances in Neural Information Processing Systems 19, pages 81–88, 2007.
[15] H. Miyamoto, S. Schaal, F. Gandolfo, Y. Koike, R. Osu, E. Nakano, Y. Wada, and M. Kawato. A Kendama learning robot based on bi-directional theory. Neural Networks, 9(8):1281–1302, 1996.
An Efficient Sequential Monte Carlo Algorithm for Coalescent Clustering

Dilan Görür, Gatsby Unit, University College London, dilan@gatsby.ucl.ac.uk
Yee Whye Teh, Gatsby Unit, University College London, ywteh@gatsby.ucl.ac.uk

Abstract

We propose an efficient sequential Monte Carlo inference scheme for the recently proposed coalescent clustering model [1]. Our algorithm has quadratic runtime while those in [1] are cubic. In experiments, we were surprised to find that, in addition to being more efficient, it is also a better sequential Monte Carlo sampler than the best in [1], when measured in terms of the variance of the estimated likelihood and the effective sample size.

1 Introduction

Algorithms for automatically discovering hierarchical structure from data play an important role in machine learning. In many cases the data itself has an underlying hierarchical structure whose discovery is of interest; examples include phylogenies in biology, object taxonomies in vision or cognition, and parse trees in linguistics. In other cases, even when the data is not hierarchically structured, such structures are still useful simply as a statistical tool to efficiently pool information across the data at different scales; this is the starting point of hierarchical modelling in statistics. Many hierarchical clustering algorithms have been proposed in the past for discovering hierarchies. In this paper we are interested in a Bayesian approach to hierarchical clustering [2, 3, 1], mainly due to the appeal of the Bayesian approach's ability to capture uncertainty in learned structures in a coherent manner. Unfortunately, inference in Bayesian models of hierarchical clustering is often very complex to implement, and computationally expensive as well. In this paper we build upon the work of [1], who proposed a Bayesian hierarchical clustering model based on Kingman's coalescent [4, 5].
[1] proposed both greedy and sequential Monte Carlo (SMC) based agglomerative clustering algorithms for inferring hierarchical clusterings, which are simpler to implement than Markov chain Monte Carlo methods. The algorithms work by starting with each data item in its own cluster and iteratively merging pairs of clusters until all clusters have been merged. The SMC based algorithm has computational cost O(n³) per particle, where n is the number of data items.

We propose a new SMC based algorithm for inference in the coalescent clustering of [1]. The algorithm is based on a different perspective on Kingman's coalescent than that in [1], in which the computations required to consider whether to merge each pair of clusters at each iteration are not discarded in subsequent iterations. This improves the computational cost to O(n²) per particle, allowing the algorithm to be applied to larger datasets. In experiments we show that our new algorithm achieves improved costs without sacrificing accuracy or reliability.

Kingman's coalescent originated in the population genetics literature, and there has been significant interest there in inference, including Markov chain Monte Carlo based approaches [6] and SMC approaches [7, 8]. The SMC approaches have an interesting relationship to our algorithm and to that of [1]. While ours and that of [1] integrate out the mutations on the coalescent tree and sample the coalescent times, [7, 8] integrate out the coalescent times and sample mutations instead. Because of this difference, ours and that of [1] will be more efficient on higher dimensional data, as well as in other cases where the state space is too large and sampling mutations would be inefficient.

In the next section, we review Kingman's coalescent and the existing SMC algorithms for inference in this model. In Section 3, we describe a cheaper SMC algorithm. We compare our method with that of [1] in Section 4 and conclude with a discussion in Section 5.
2 Hierarchical Clustering using Kingman's Coalescent

Kingman's coalescent [4, 5] describes the family relationship between a set of haploid individuals by constructing the genealogy backwards in time. Ancestral lines coalesce when the individuals share a common ancestor, and the genealogy is a binary tree rooted at the common ancestor of all the individuals under consideration. We briefly review the coalescent and the associated clustering model as presented in [1] before presenting a different formulation more suitable for our proposed algorithm.

Let π be the genealogy of n individuals. There are n−1 coalescent events in π; we order these events with i = 1 being the most recent one and i = n−1 the last event, when all ancestral lines have coalesced. Event i occurs at time $T_i < 0$ in the past and involves the coalescing of two ancestors, denoted $\rho_{l_i}$ and $\rho_{r_i}$, into one denoted $\rho_i$. Let $A_i$ be the set of ancestors right after coalescent event i, and $A_0$ the full set of individuals at the present time $T_0 = 0$. To draw a sample π from Kingman's coalescent we sample the coalescent events one at a time, starting from the present. At iteration i we pick the pair of individuals $\rho_{l_i}, \rho_{r_i}$ uniformly at random from the $n-i+1$ individuals available in $A_{i-1}$, pick a waiting time $\delta_i \sim \mathrm{Exp}\big(\binom{n-i+1}{2}\big)$ from an exponential distribution with rate $\binom{n-i+1}{2}$ equal to the number of pairs available, and set $A_i = A_{i-1} - \{\rho_{l_i}, \rho_{r_i}\} + \{\rho_i\}$, $T_i = T_{i-1} - \delta_i$. The probability of π is thus:

$$p(\pi) = \prod_{i=1}^{n-1} \exp\left(-\binom{n-i+1}{2}\,\delta_i\right). \qquad (1)$$

The coalescent can be used as a prior over binary trees in a model where we have a tree-structured likelihood function for observations at the leaves. Let $\theta_i$ be the subtree rooted at $\rho_i$ and $x_i$ the observations at the leaves of $\theta_i$.
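The generative process just described can be sketched directly (a toy sampler for illustration, not the paper's code; labeling the merged ancestors n, n+1, ... is our convention):

```python
import random

def sample_coalescent(n, rng=None):
    """Sample the events (T_i, rho_li, rho_ri, rho_i) of Kingman's
    coalescent for n individuals, most recent event first. Times are
    negative and strictly decreasing (further into the past)."""
    rng = rng or random.Random(0)
    ancestors = list(range(n))      # A_0: the n present-day individuals
    t, events = 0.0, []
    for i in range(1, n):
        k = len(ancestors)          # k = n - i + 1 lines remain
        rate = k * (k - 1) / 2      # number of available pairs, C(k, 2)
        t -= rng.expovariate(rate)  # delta_i ~ Exp(C(k, 2)); T_i = T_{i-1} - delta_i
        l, r = rng.sample(ancestors, 2)   # pair chosen uniformly at random
        ancestors.remove(l)
        ancestors.remove(r)
        ancestors.append(n + i - 1)       # the merged ancestor rho_i
        events.append((t, l, r, n + i - 1))
    return events
```

Note that the waiting-time rate shrinks as lines merge, so the last few coalescences typically take the longest; this matches eq. (1), whose i-th factor is the exponential survival term for $\binom{n-i+1}{2}$ competing pairs.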
[1] showed that by propagating messages up the tree, the likelihood function can be written in a sequential form:

$$p(x \mid \pi) = Z_0(x) \prod_{i=1}^{n-1} Z_{\rho_i}(x_i|\theta_i), \qquad (2)$$

where $Z_{\rho_i}$ is a function only of the coalescent times associated with $\rho_{l_i}, \rho_{r_i}, \rho_i$ and of the local messages sent from $\rho_{l_i}, \rho_{r_i}$ to $\rho_i$, and $Z_0(x)$ is an easily computed normalization constant in eq. (2). Each function has the form (see [1] for further details):

$$Z_{\rho_i}(x_i|\theta_i) = \int p_0(y_i) \prod_{c=l_i,r_i} \int p(y_c|y_i, \theta_i)\, M_{\rho_c}(y_c)\, dy_c\, dy_i \qquad (3)$$

where $M_{\rho_c}$ is the message from child $\rho_c$ to $\rho_i$. The posterior is proportional to the product of eq. (1) and eq. (2), and our aim is to compute this posterior efficiently. For this purpose, we give a different perspective on constructing the coalescent below, and describe our sequential Monte Carlo algorithm in Section 3.

2.1 A regenerative race process

In this section we describe a different formulation of the coalescent, based on the fact that each stage of the coalescent can be interpreted as a race between the $\binom{n-i+1}{2}$ pairs of individuals to coalesce. Each pair proposes a coalescent time, the pair with the most recent coalescent time "wins" the race and gets to coalesce, at which point the next stage starts with $\binom{n-i}{2}$ pairs in the race. Naïvely this race process would require a total of $O(n^3)$ pairs to propose coalescent times. We show that using the regenerative (memoryless) property of exponential distributions allows us to reduce this to $O(n^2)$.

Algorithm 1: A regenerative race process for constructing the coalescent
  inputs: number of individuals n; set starting time T_0 = 0 and A_0 the set of n individuals
  for all pairs of existing individuals ρ_l, ρ_r ∈ A_0 do
      propose coalescent time t_lr using eq. (4)
  end for
  for all coalescence events i = 1 : n−1 do
      find the pair to coalesce (ρ_li, ρ_ri) using eq. (5)
      set coalescent time T_i = t_{li ri} and update A_i = A_{i−1} − {ρ_li, ρ_ri} + {ρ_i}
      remove pairs with ρ_l ∈ {ρ_li, ρ_ri}, ρ_r ∈ A_{i−1}\{ρ_li, ρ_ri}
      for all new pairs with ρ_l = ρ_i, ρ_r ∈ A_i\{ρ_i} do
          propose coalescent time using eq. (4)
      end for
  end for

The same idea will allow us to reduce the computational cost of our SMC algorithm from O(n³) to O(n²). At stage i of the coalescent we have $n-i+1$ individuals in $A_{i-1}$, and $\binom{n-i+1}{2}$ pairs in the race to coalesce. Each pair $\rho_l, \rho_r \in A_{i-1}$, $\rho_l \neq \rho_r$, proposes a coalescent time

$$t_{lr}\,|\,T_{i-1} \sim T_{i-1} - \mathrm{Exp}(1), \qquad (4)$$

that is, by subtracting from the last coalescent time a waiting time drawn from an exponential distribution with rate 1. The pair $\rho_{l_i}, \rho_{r_i}$ with the most recent coalescent time wins the race:

$$(\rho_{l_i}, \rho_{r_i}) = \operatorname*{argmax}_{(\rho_l,\rho_r)} \big\{\, t_{lr} : \rho_l, \rho_r \in A_{i-1},\ \rho_l \neq \rho_r \,\big\} \qquad (5)$$

and coalesces into a new individual $\rho_i$ at time $T_i = t_{l_i r_i}$. At this point stage i+1 of the race begins, with some pairs dropping out of the race (specifically those with one half of the pair being either $\rho_{l_i}$ or $\rho_{r_i}$) and new ones entering (specifically those formed by pairing the new individual $\rho_i$ with an existing one). Among the pairs $(\rho_l, \rho_r)$ that neither dropped out nor just entered the race, consider the distribution of $t_{lr}$ conditioned on the fact that $t_{lr} < T_i$ (since $(\rho_l, \rho_r)$ did not win the race at stage i). By the memoryless property of the exponential distribution, $t_{lr}\,|\,T_i \sim T_i - \mathrm{Exp}(1)$; thus eq. (4) still holds and we need not redraw $t_{lr}$ for the stage i+1 race. In other words, once $t_{lr}$ is drawn, it can be reused for subsequent stages of the race until it either wins a race or drops out. The generative process is summarized in Algorithm 1.

We obtain the probability of the coalescent π as a product, over the i = 1, ..., n−1 stages of the race, of the probability of each event "$\rho_{l_i}, \rho_{r_i}$ wins stage i and coalesces at time $T_i$" given more recent stages.
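Algorithm 1 can be sketched with a priority queue over the proposed times; using a max-heap with lazy deletion of dropped-out pairs is our implementation choice here, a sketch rather than the paper's code:

```python
import heapq
import random

def race_coalescent(n, rng=None):
    """Regenerative-race construction of Kingman's coalescent.
    Each pair proposes a time only once (t_lr = T_entry - Exp(1));
    by memorylessness the proposal stays valid until the pair wins
    or drops out. A max-heap (negated times) finds each winner."""
    rng = rng or random.Random(0)
    alive = set(range(n))
    heap = []  # entries (-t_lr, l, r); stale pairs removed lazily

    def propose(l, r, T_entry):
        heapq.heappush(heap, (-(T_entry - rng.expovariate(1.0)), l, r))

    for l in alive:
        for r in alive:
            if l < r:
                propose(l, r, 0.0)          # all pairs enter at T_0 = 0

    events, next_label = [], n
    while len(alive) > 1:
        neg_t, l, r = heapq.heappop(heap)
        if l not in alive or r not in alive:
            continue                        # pair already dropped out
        T = -neg_t
        alive -= {l, r}
        for other in alive:
            propose(next_label, other, T)   # new pairs enter at time T
        alive.add(next_label)
        events.append((T, l, r, next_label))
        next_label += 1
    return events
```

Every pair proposes exactly once, so O(n²) proposals are made in total; the heap adds the log n factor mentioned in the paper's footnote on the priority queue.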
The probability at stage i is simply the probability that $t_{l_i r_i} = T_i$ and that all other proposed coalescent times $t_{lr} < T_i$, conditioned on the fact that the proposed coalescent times $t_{lr}$ for all pairs at stage i are all less than $T_{i-1}$. This gives:

$$p(\pi) = \prod_{i=1}^{n-1} \Big( p(t_{l_i r_i} = T_i \mid t_{l_i r_i} < T_{i-1}) \prod_{(\rho_l,\rho_r)\neq(\rho_{l_i},\rho_{r_i})} p(t_{lr} < T_i \mid t_{lr} < T_{i-1}) \Big) \qquad (6)$$

$$= \prod_{i=1}^{n-1} \Big( \frac{p(t_{l_i r_i} = T_i)}{p(t_{l_i r_i} < T_{i-1})} \prod_{(\rho_l,\rho_r)\neq(\rho_{l_i},\rho_{r_i})} \frac{p(t_{lr} < T_i)}{p(t_{lr} < T_{i-1})} \Big) \qquad (7)$$

where the second product runs over all pairs in stage i except the winning pair. Each pair that participated in the race has corresponding terms in eq. (7), starting at the stage when the pair entered the race and ending with the stage when the pair either dropped out or won. As these terms cancel, eq. (7) simplifies to

$$p(\pi) = \prod_{i=1}^{n-1} \Big( p(t_{l_i r_i} = T_i) \prod_{\rho_l\in\{\rho_{l_i},\rho_{r_i}\},\ \rho_r\in A_{i-1}\setminus\{\rho_{l_i},\rho_{r_i}\}} p(t_{lr} < T_i) \Big), \qquad (8)$$

where the second product runs only over those pairs that dropped out after stage i. The first term is the probability of pair $(\rho_{l_i}, \rho_{r_i})$ coalescing at time $T_i$ given its entrance time, and the second term is the probability of pair $(\rho_l, \rho_r)$ dropping out of the race at time $T_i$ given its entrance time. We can verify that this expression equals eq. (1) by plugging in the probabilities for exponential distributions. Finally, multiplying the prior eq. (8) and the likelihood eq. (2) we have

$$p(x, \pi) = Z_0(x) \prod_{i=1}^{n-1} \Big( Z_{\rho_i}(x_i|\theta_i)\, p(t_{l_i r_i} = T_i) \prod_{\rho_l\in\{\rho_{l_i},\rho_{r_i}\},\ \rho_r\in A_{i-1}\setminus\{\rho_{l_i},\rho_{r_i}\}} p(t_{lr} < T_i) \Big). \qquad (9)$$

3 Efficient SMC Inference on the Coalescent

Our sequential Monte Carlo algorithm for posterior inference is directly inspired by the regenerative race process described above. In fact the algorithm is structurally exactly as in Algorithm 1, but with each pair $\rho_l, \rho_r$ proposing a coalescent time from a proposal distribution $t_{lr} \sim Q_{lr}$ instead of from eq. (4). The idea is that the proposal distribution $Q_{lr}$ is constructed taking the observed data into account, so that Algorithm 1 produces better approximate samples from the posterior.
The overall probability of proposing π under the SMC algorithm can be computed similarly to eqs. (6)-(8), and is

$$q(\pi) = \prod_{i=1}^{n-1} \Big( q_{l_i r_i}(t_{l_i r_i} = T_i) \prod_{\rho_l\in\{\rho_{l_i},\rho_{r_i}\},\ \rho_r\in A_{i-1}\setminus\{\rho_{l_i},\rho_{r_i}\}} q_{lr}(t_{lr} < T_i) \Big), \qquad (10)$$

where $q_{lr}$ is the density of $Q_{lr}$. As both eq. (9) and eq. (10) can be computed sequentially, the weight w associated with each sample π can be computed "on the fly" as the coalescent tree is constructed:

$$w_0 = Z_0(x), \qquad w_i = w_{i-1}\, \frac{Z_{\rho_i}(x_i|\theta_i)\, p(t_{l_i r_i} = T_i)}{q_{l_i r_i}(t_{l_i r_i} = T_i)} \prod_{\rho_l\in\{\rho_{l_i},\rho_{r_i}\},\ \rho_r\in A_{i-1}\setminus\{\rho_{l_i},\rho_{r_i}\}} \frac{p(t_{lr} < T_i)}{q_{lr}(t_{lr} < T_i)}. \qquad (11)$$

Finally we address the choice of proposal distribution $Q_{lr}$. [1] noted that $Z_{\rho_i}(x_i|\theta_i)$ acts as a "local likelihood" term in eq. (9). We make use of this observation and use eq. (4) as a "local prior", i.e. the following density for the proposal distribution $Q_{lr}$:

$$q_{lr}(t_{lr}) \propto Z_{\rho_{lr}}(x_{lr}|t_{lr}, \rho_l, \rho_r, \theta_{i-1})\, p(t_{lr} \mid T_{c(lr)}) \qquad (12)$$

where $\rho_{lr}$ is a hypothetical individual resulting from coalescing $\rho_l$ and $\rho_r$, $T_{c(lr)}$ denotes the time when the pair $(\rho_l, \rho_r)$ enters the race, $x_{lr}$ is the data under $\rho_l$ and $\rho_r$, and $p(t_{lr} \mid T_{c(lr)}) = e^{t_{lr} - T_{c(lr)}}\, \mathbb{I}(t_{lr} < T_{c(lr)})$ is simply an exponential density with rate 1 that has been shifted and reflected. $\mathbb{I}(\cdot)$ is an indicator function returning 1 if its argument is true and 0 otherwise.

The proposal distribution in [1] also has a form similar to eq. (12), but with the exponential rate being $\binom{n-i+1}{2}$ instead, if the proposal was made in stage i of the race. This dependence means that at each stage of the race the coalescent time proposal distribution needs to be recomputed for each pair, leading to an $O(n^3)$ computation time. On the other hand, similar to the prior process, we need to propose a coalescent time for each pair only once, when it is first created. This results in $O(n^2)$ computational complexity per particle.¹

¹Technically the time cost is $O(n^2(m + \log n))$, where n is the number of individuals and m is the cost of sampling from and evaluating eq. (12). The additional $\log n$ factor arises because a priority queue must be maintained to determine the winner of each stage efficiently, but this is negligible compared to m.

Note that it may not always be possible (or efficient) to compute the normalizing constant of the density in eq. (12), even if we can sample from it efficiently. This means that the weight updates in eq. (11) cannot be computed. In that case, we can use an approximation $\tilde Z_{\rho_{lr}}$ to $Z_{\rho_{lr}}$ instead. In the following subsection we describe the independent-sites parent-independent model we used in the experiments, and how to construct $\tilde Z_{\rho_{lr}}$.

3.1 Independent-Sites Parent-Independent Likelihood Model

In our experiments we have only considered coalescent clustering of discrete data, though our approach applies more generally. Say each data item consists of a D-dimensional vector where each entry can take on one of K values. We use the independent-sites parent-independent mutation model over multinomial vectors of [1] as our likelihood model. Specifically, this model assumes that each point on the tree is associated with a D-dimensional multinomial vector, and each entry of this vector on each branch of the tree evolves independently (thus independent-sites), forward in time, with mutations occurring at rate $\lambda_d$ on entry d. When a mutation occurs, a new value for the entry is drawn from a distribution $\phi_d$, independently of the previous value at that entry (thus parent-independent). When a coalescent event is encountered, the mutation process evolves independently down both branches. Some calculations show that the transition probability matrix of the mutation process associated with entry d on a branch of length t is $e^{-\lambda_d t} I_K + (1 - e^{-\lambda_d t})\, \mathbf{1}_K \phi_d^\top$, where $I_K$ is the identity matrix, $\mathbf{1}_K$ is a vector of 1's, and we have implicitly represented the multinomial distribution $\phi_d$ as a vector of probabilities. The message for entry d from node $\rho_i$ on the tree to its parent is a vector $M^d_{\rho_i} = [M^{d1}_{\rho_i}, \ldots, M^{dK}_{\rho_i}]^\top$, normalized so that $\phi_d^\top M^d_{\rho_i} = 1$.
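The per-site transition matrix above is easy to form explicitly; a small sketch (function name ours), where each row is the distribution over the child's value given the parent's:

```python
import numpy as np

def transition_matrix(lam_d, t, phi_d):
    """P(t) = e^{-lam t} I_K + (1 - e^{-lam t}) 1_K phi^T for one site:
    with probability e^{-lam t} no mutation occurs on a branch of
    length t, otherwise the value is redrawn from phi, independently
    of the parent value."""
    phi_d = np.asarray(phi_d, dtype=float)
    K = phi_d.size
    a = np.exp(-lam_d * t)
    return a * np.eye(K) + (1.0 - a) * np.tile(phi_d, (K, 1))
```

At t = 0 this is the identity (no time for mutations), and as t grows every row converges to $\phi_d$, the stationary distribution of the mutation process.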
The local likelihood term is then:

$$Z^d_{\rho_{lr}}(x_{lr}|t_{lr}, \rho_l, \rho_r, \theta_{i-1}) = 1 - e^{\lambda_d(2t_{lr} - t_l - t_r)}\Big(1 - \sum_{k=1}^{K} \phi_{dk}\, M^{dk}_{\rho_l} M^{dk}_{\rho_r}\Big) \qquad (13)$$

The logarithm of the proposal density is then:

$$\log q_{lr}(t_{lr}) = \text{constant} + (t_{lr} - T_{c(lr)}) + \sum_{d=1}^{D} \log Z^d_{\rho_{lr}}(x_{lr}|t_{lr}, \rho_l, \rho_r, \theta_{i-1}) \qquad (14)$$

This is not of standard form, and we use an approximation $\log \tilde q_{lr}(t_{lr})$ instead. Specifically, we use a piecewise linear $\log \tilde q_{lr}(t_{lr})$, which can easily be sampled from and whose normalization term is easy to compute. The approximation is constructed as follows. Note that $\log Z^d_{\rho_{lr}}(x_{lr}|t_{lr}, \rho_l, \rho_r, \theta_{i-1})$, as a function of $t_{lr}$, is concave if the term inside the parentheses in eq. (13) is positive, convex if negative, and constant if zero. Thus eq. (14) is a sum of linear, concave and convex terms. Using the upper and lower envelopes developed for adaptive rejection sampling [9], we can construct piecewise linear upper and lower envelopes for $\log q_{lr}(t_{lr})$ by bounding the concave and convex parts separately. The upper and lower envelopes give exact bounds on the approximation error introduced, and we can efficiently refine the envelopes until a desired approximation error is achieved. Finally, we use the upper bound as our approximate $\log \tilde q_{lr}(t_{lr})$. Note that the same issue arises in the proposal distribution for SMC-PostPost, and we used the same piecewise linear approximation there. The details of this algorithm can be found in [10].

4 Experiments

The improved computational cost of inference makes it possible to do Bayesian inference with coalescent models on larger datasets. The SMC samplers converge to the exact solution in the limit of infinitely many particles. However, it is not enough to be more efficient per particle; the crucial point is how efficient the algorithm is overall. An important question is how many particles we need in practice. To address this question, we compared the performance of our algorithm SMC1 to SMC-PostPost on the synthetic data shown in Figure 1.
There are 15 binary 12-dimensional vectors in the dataset. There is overlap between the features of the data points; however, the data does not obey a tree structure, which results in a multimodal posterior. Both SMC1 and SMC-PostPost recover the structure with only a few particles. However, there is room for improvement, as the variance in the likelihood obtained from multiple runs decreases with an increasing number of particles. Since both SMC algorithms are exact in the limit, the values should converge as we add more particles. We can check convergence by observing the variance of the likelihood estimates over multiple runs: the variance should shrink as we increase the number of particles. (The comparison is done in the importance sampling setting, i.e. without using resampling, so as to compare the proposal distributions.)

Figure 1: The synthetic data features are shown on the left; each data point is a binary column vector. A sample tree from the SMC1 algorithm demonstrates that the algorithm can capture the similarity structure. The true covariance of the data (a) and the distance on the tree learned by the SMC1 algorithm, averaged over particles, (b) are shown, indicating that the overall structure was correctly captured. The results obtained from SMC-PostPost were very similar to SMC1 and are therefore not shown here.

Figure 2 shows the change in the estimated likelihood as a function of the number of particles. From this figure we conclude that the computationally cheaper algorithm SMC1 is also more efficient in the number of particles, as it gives more accurate answers with fewer particles.
Figure 2: The change in the likelihood (left) and the effective sample size (right) as a function of the number of particles for SMC1 (solid) and SMC-PostPost (dashed). The mean estimates of both algorithms are very close, with SMC1 having a much tighter variance. The variance of both algorithms shrinks and the effective sample size increases as the number of particles increases.

A quantity of interest in genealogical studies is the time to the most recent common ancestor (MRCA), which is the time of the last coalescence event. Although this quantity has no physical interpretation for hierarchical clustering, it gives us an indication of the variance of the particles. We can observe the variation in the time to the MRCA to assess convergence. Similar to the variance behaviour of the likelihood, with a small number of particles SMC-PostPost has higher variance than SMC1. However, with more particles, the results of the two algorithms almost overlap. The mean time for each step of coalescence, together with its variance over 7250 particles for both algorithms, is depicted in Figure 3. It is interesting that the first few coalescence times of SMC1 are shorter than those for SMC-PostPost.

Figure 3: Times for each coalescence step averaged over 7250 particles. Note that both algorithms almost converged to the same distribution when given enough resources. There is a slight difference in the mean coalescence time. It is interesting that the SMC1 algorithm proposes shorter times for the initial coalescence events.

The distribution of the particle weights is important for the efficiency of the importance sampler. Ideally, the weights would be uniform, so that each particle contributes equally to the posterior estimation. If only a few particles come from a high-probability region, the weights of those particles will be much larger than the rest, resulting in a low effective sample size. We will discuss this point more in the next section. Here, we note that for the synthetic dataset the effective sample size of SMC-PostPost is very poor, while that of SMC1 is much higher; see Figure 2.

5 Discussion

We described an efficient sequential Monte Carlo algorithm for inference in hierarchical clustering models that use Kingman's coalescent as a prior. Our method makes use of a regenerative perspective to construct the coalescent tree. Using this construction, we achieve quadratic run time per particle. By employing a tight upper bound on the local likelihood term, the proposed algorithm is applicable to general data generation processes.

We also applied our algorithm to inferring the structure in the phylolinguistic data used in [1]. We used the same Indo-European subset of the data, with the same subset of features, that is, 44 languages with 100 binary features. Three example trees with the largest weights out of 7750 samples are depicted in Figure 4. Unfortunately, on this dataset, the effective sample size of both algorithms is close to one. A usual method to circumvent the low effective sample size problem in sequential Monte Carlo algorithms is resampling, that is, detecting the particles that will not contribute much to the posterior from the partial samples and pruning them away, multiplying the promising samples. There are two stages to resampling: we need to decide at what point to prune away samples, and how to select which samples to prune. As shown by [11], different problems may require different resampling algorithms. We tried resampling using Algorithm 5.1 of [12]; however, this yielded only a small improvement in the final performance of both algorithms on this data set.
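The effective sample size referred to throughout can be computed from the unnormalized importance weights; a standard estimate (ours, not code from the paper) is $(\sum_i w_i)^2 / \sum_i w_i^2$:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = (sum w)^2 / sum w^2: equals N for uniform weights and
    approaches 1 when a single particle dominates."""
    w = np.asarray(weights, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()
```

This is the quantity that collapses towards one when, as on the phylolinguistic data, a single particle carries almost all of the normalized weight.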
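For reference, one common low-variance scheme is systematic resampling, sketched below. This is a generic sketch, not Algorithm 5.1 of [12]: a single uniform offset generates N evenly spaced positions that are matched against the cumulative weight distribution.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng(0)):
    """Return N particle indices drawn by systematic resampling."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    N = w.size
    positions = (rng.random() + np.arange(N)) / N   # evenly spaced in [0, 1)
    return np.searchsorted(np.cumsum(w), positions)
```

Because the positions are evenly spaced, a particle with normalized weight $w_i$ is copied either $\lfloor N w_i \rfloor$ or $\lceil N w_i \rceil$ times, which keeps the resampling variance low compared to multinomial resampling.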
Note that both algorithms use "local likelihoods" for calculating the weights; therefore the weights are not fully informative about the actual likelihood of the partial sample. Furthermore, in the recursive calculation of the weights in SMC1, we include the effect of a pair only when it either coalesces or ceases to exist, for the sake of saving computation. Therefore the partial weights are even less informative about the state of the sample, and the effective sample size cannot fully indicate whether the current sample is good or not. In fact, we observed oscillations in the effective sample size calculated on the weights along the iterations, i.e. starting off with a high value, decreasing to virtually 1, and increasing again before termination, which also indicates that it is not clear which of the particles will eventually be more effective. An open question is how to incorporate a resampling algorithm to improve the efficiency.

References

[1] Y. W. Teh, H. Daume III, and D. M. Roy. Bayesian agglomerative clustering with coalescents. In Advances in Neural Information Processing Systems, volume 20, 2008.
[Figure 4 graphics omitted: three inferred trees over the 44 Indo-European languages, with leaves grouped largely by family (Romance, Slavic, Baltic, Germanic, Celtic, Indic, Iranian, Armenian, Albanian, Greek); normalized weights 0.998921 (a, no resampling), 0.000379939 (b, no resampling), and 0.0151504 (c, with resampling).] Figure 4: Tree structures inferred from the WALS data. (a), (b) Samples from a run with 7750 particles without resampling. (c) Sample from a run with resampling. The values above the trees are normalized weights. Note that the weight of (a) is almost one, which means that the contribution from the rest of the particles is infinitesimal, although the tree structure in (b) also seems to capture the similarities between languages. [2] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0104, Department of Statistics, University of Toronto, 2001. [3] C. K. I. Williams. A MCMC approach to hierarchical mixture modelling. In Advances in Neural Information Processing Systems, volume 12, 2000. [4] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982. Essays in Statistical Science. [5] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235–248, 1982. [6] J. Felsenstein. Evolutionary trees from DNA sequences: a maximum likelihood approach. Journal of Molecular Evolution, 17:368–376, 1981. [7] R. C. Griffiths and S. Tavare. Simulating probability distributions in the coalescent. Theoretical Population Biology, 46:131–159, 1994. [8] M. Stephens and P. Donnelly. Inference in molecular population genetics. Journal of the Royal Statistical Society, 62:605–655, 2000. [9] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Applied Statistics, 41:337–348, 1992. [10] D. Görür and Y. W. Teh. Concave convex adaptive rejection sampling. Technical report, Gatsby Computational Neuroscience Unit, 2008. [11] Y. Chen, J. Xie, and J. Liu.
Stopping-time resampling for sequential Monte Carlo methods. Journal of the Royal Statistical Society, 67, 2005. [12] P. Fearnhead. Sequential Monte Carlo Method in Filter Theory. PhD thesis, Merton College, University of Oxford, 1998.
Bounding Performance Loss in Approximate MDP Homomorphisms Jonathan J. Taylor Dept. of Computer Science University of Toronto Toronto, Canada, M5S 3G4 jonathan.taylor@utoronto.ca Doina Precup School of Computer Science McGill University Montreal, Canada, H3A 2A7 dprecup@cs.mcgill.ca Prakash Panangaden School of Computer Science McGill University Montreal, Canada, H3A 2A7 prakash@cs.mcgill.ca Abstract We define a metric for measuring behavior similarity between states in a Markov decision process (MDP), which takes action similarity into account. We show that the kernel of our metric corresponds exactly to the classes of states defined by MDP homomorphisms (Ravindran & Barto, 2003). We prove that the difference in the optimal value function of different states can be upper-bounded by the value of this metric, and that the bound is tighter than previous bounds provided by bisimulation metrics (Ferns et al. 2004, 2005). Our results hold both for discrete and for continuous actions. We provide an algorithm for constructing approximate homomorphisms, by using this metric to identify states that can be grouped together, as well as actions that can be matched. Previous research on this topic is based mainly on heuristics. 1 Introduction Markov Decision Processes (MDPs) are a very popular formalism for decision making under uncertainty (Puterman, 1994). A significant problem is computing the optimal strategy when the state and action space are very large and/or continuous. A popular approach is state abstraction, in which states are grouped together in partitions, or aggregates, and the optimal policy is computed over these. Li et al. (2006) provide a nice comparative survey of approaches to state abstraction. The work we present in this paper bridges two such methods: bisimulation-based approaches and methods based on MDP homomorphisms. 
Bisimulation is a well-known, well-studied notion of behavioral equivalence between systems (Larsen & Skou, 1991; Milner, 1995), which has been specialized for MDPs by Givan et al. (2003). In recent work, Ferns et al. (2004, 2005, 2006) introduced (pseudo)metrics for measuring the similarity of states, which provide approximations to bisimulation. One of the disadvantages of bisimulation and the corresponding metrics is that they require the behavior to match for exactly the same actions. However, in many cases of practical interest, actions with the exact same label may not match, but the environment may contain symmetries and other types of special structure, which may allow correspondences between states by matching their behavior with different actions. This idea was formalized by Ravindran & Barto (2003) with the concept of MDP homomorphisms. MDP homomorphisms specify a map matching equivalent states as well as equivalent actions in such states. This matching can then be used to transfer policies between different MDPs. However, like any equivalence relation on probabilistic systems, MDP homomorphisms are brittle: a small change in the transition probabilities or the rewards can cause two previously equivalent state-action pairs to become distinct. This implies that such approaches do not work well in situations in which the model of the system is estimated from data. As a solution to this problem, Ravindran & Barto (2004) proposed using approximate homomorphisms, which allow aggregating states that are not exactly equivalent. They define an MDP over these partitions and quantify the approximation loss resulting from using this MDP, compared to the original system. As expected, the bound depends on the quality of the partition. Subsequent work (e.g. Wolfe & Barto, 2006) constructs such partitions heuristically. In this paper, we attempt to construct provably good, approximate MDP homomorphisms from first principles.
First, we relate the notion of MDP homomorphisms to the concept of lax bisimulation, explored recently in the process algebra literature (Arun-Kumar, 2006). This allows us to define a metric on states, similarly to existing bisimulation metrics. Interestingly, this approach works both for discrete and for continuous actions. We show that the difference in the optimal value function of two states is bounded above by this metric. This allows us to provide a state aggregation algorithm with provable approximation guarantees. We illustrate empirically the fact that this approach can provide much better state space compression than the use of existing bisimulation metrics. 2 Background A finite Markov decision process (MDP) is a tuple ⟨S,A,P,R⟩, where S is a finite set of states, A is a set of actions, P : S × A × S → [0,1] is the transition model, with P(s,a,s′) denoting the probability of transition from state s to s′ under action a, and R : S × A → R is the reward function, with R(s,a) being the reward for performing action a in state s. For the purpose of this paper, the state space S is assumed to be finite, but the action set A could be finite or infinite (as will be detailed later). We assume without loss of generality that rewards are bounded in [0,1]. A deterministic policy π : S → A specifies which action should be taken in every state. By following policy π from state s, an agent can expect a value of $V^\pi(s) = E\left[\sum_{t=1}^{\infty} \gamma^{t-1} r_t \mid s_0 = s, \pi\right]$, where γ ∈ (0,1) is a discount factor and $r_t$ is the sample reward received at time t. In a finite MDP, the optimal value function $V^*$ is unique and satisfies the following formulas, known as the Bellman optimality equations: $V^*(s) = \max_{a \in A}\left( R(s,a) + \gamma \sum_{s'} P(s,a,s')\, V^*(s') \right), \; \forall s \in S$. If the action space is continuous, we will assume that it is compact, so the max can be taken and the above results still hold (Puterman, 1994).
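Turning the Bellman optimality equation into an iterative update rule gives value iteration. A minimal sketch on a hypothetical two-state MDP (the state, action, and reward tables below are illustrative, not from the paper):

```python
def value_iteration(S, A, P, R, gamma=0.9, tol=1e-10):
    """Iterate V(s) <- max_a [ R(s,a) + gamma * sum_t P(s,a,t) V(t) ] to a fixed point."""
    V = {s: 0.0 for s in S}
    while True:
        V_new = {
            s: max(R[(s, a)] + gamma * sum(P[(s, a, t)] * V[t] for t in S) for a in A)
            for s in S
        }
        if max(abs(V_new[s] - V[s]) for s in S) < tol:
            return V_new
        V = V_new

# hypothetical two-state chain: 'goal' is absorbing with reward 1 under 'stay'
S = ['start', 'goal']
A = ['stay', 'go']
R = {('start', 'stay'): 0.0, ('start', 'go'): 0.0,
     ('goal', 'stay'): 1.0, ('goal', 'go'): 0.0}
P = {('start', 'stay', 'start'): 1.0, ('start', 'stay', 'goal'): 0.0,
     ('start', 'go', 'start'): 0.0, ('start', 'go', 'goal'): 1.0,
     ('goal', 'stay', 'start'): 0.0, ('goal', 'stay', 'goal'): 1.0,
     ('goal', 'go', 'start'): 1.0, ('goal', 'go', 'goal'): 0.0}
V = value_iteration(S, A, P, R)  # V['goal'] -> 1/(1-0.9) = 10, V['start'] -> 0.9*10 = 9
```

The fixed point satisfies the Bellman equations exactly: staying at 'goal' yields 1/(1−γ), and the best action at 'start' is 'go', worth γ times that.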
Given the optimal value function, an optimal policy is easily inferred by simply taking at every state the greedy action with respect to the one-step-lookahead value. It is well known that the optimal value function can be computed by turning the above equation into an update rule, which can be applied iteratively. Ideally, if the state space is very large, "similar" states should be grouped together in order to speed up this type of computation. Bisimulation for MDPs (Givan et al., 2003) is a notion of behavioral equivalence between states. A relation E ⊆ S × S is a bisimulation relation if: sEu ⇔ ∀a. (R(s,a) = R(u,a) and ∀X ∈ S/E. Pr(X|s,a) = Pr(X|u,a)), where S/E denotes the partition of S into E-equivalent subsets of states. The relation ∼ is the union of all bisimulation relations, and two states in an MDP are said to be bisimilar if s ∼ u. From this definition, it follows that bisimilar states can match each other's actions to achieve the same returns. Hence, bisimilar states have the same optimal value (Givan et al., 2003). However, bisimulation is not robust to small changes in the rewards or the transition probabilities. One way to avoid this problem is to quantify the similarity between states using a (pseudo)metric. Ferns et al. (2004) proposed a bisimulation metric, defined as the least fixed point of the following operator on the lattice of 1-bounded metrics d : S × S → [0,1]: $G(d)(s,u) = \max_a \left( c_r |R(s,a) - R(u,a)| + c_p K(d)(P(s,a,\cdot), P(u,a,\cdot)) \right)$ (1). The first term above measures reward similarity. The second term is the Kantorovich metric between the probability distributions of the two states.
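For small finite MDPs, the exact (strict) bisimulation partition S/E above can be computed by naive partition refinement: split blocks until rewards and block-transition probabilities agree, per action label, within every block. This is a generic sketch, not an algorithm from the paper; all names are hypothetical:

```python
def bisimulation_classes(S, A, R, P):
    """Naive partition refinement for strict MDP bisimulation.
    R[(s,a)] is a reward, P[(s,a,t)] a transition probability."""
    def signature(s, blocks):
        # per-action reward plus probability of jumping into each current block
        return tuple(
            (round(R[(s, a)], 9),
             tuple(round(sum(P[(s, a, t)] for t in block), 9) for block in blocks))
            for a in A)
    blocks = [sorted(S)]
    while True:
        refined = []
        for block in blocks:
            groups = {}
            for s in block:
                groups.setdefault(signature(s, blocks), []).append(s)
            refined.extend(groups.values())
        if len(refined) == len(blocks):   # no block split: fixed point reached
            return refined
        blocks = refined
```

On a toy MDP where two states 'x' and 'y' both jump deterministically to an absorbing rewarding state 'g', the procedure returns the two classes {x, y} and {g}. Crucially, actions must match under the *same* label here, which is exactly the restriction the lax variant in Section 3 removes.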
Given probability distributions P and Q over the state space S, and a semimetric d on S, the Kantorovich metric K(d)(P,Q) is defined by the following linear program: $\max_{v_i} \sum_{i=1}^{|S|} (P(s_i) - Q(s_i))\, v_i$ subject to: $\forall i,j.\; v_i - v_j \le d(s_i, s_j)$ and $\forall i.\; 0 \le v_i \le 1$, which has the following equivalent dual program: $\min_{\lambda_{kj}} \sum_{k,j=1}^{|S|} \lambda_{kj}\, d(s_k, s_j)$ subject to: $\forall k.\; \sum_j \lambda_{kj} = P(s_k)$, $\forall j.\; \sum_k \lambda_{kj} = Q(s_j)$ and $\forall k,j.\; \lambda_{kj} \ge 0$. Ferns et al. (2004) showed that by applying (1) iteratively, the least fixed point $e_{fix}$ can be obtained, and that s and u are bisimilar if and only if $e_{fix}(s,u) = 0$. In other words, bisimulation is the kernel of this metric. 3 Lax bisimulation In many cases of practical interest, actions with exactly the same label may not match, but the environment may contain symmetries and other types of special structure, which may allow correspondences between different actions at certain states. For example, consider the environment in Figure 1. Because of symmetry, going south in state N6 is "equivalent" to going north in state S6. However, no two states are bisimilar. Recent work in process algebra has rethought the definition of bisimulation to allow certain distinct actions to be essentially equivalent (Arun-Kumar, 2006). Here, we define lax bisimulation in the context of MDPs. Definition 1. A relation B is a lax (probabilistic) bisimulation relation if whenever sBu we have that: ∀a ∃b such that R(s,a) = R(u,b) and for all B-closed sets X we have that Pr(X|s,a) = Pr(X|u,b), and vice versa. The lax bisimulation ∼ is the union of all the lax bisimulation relations. It is easy to see that B is an equivalence relation and we denote the equivalence classes of S by S/B. Note that the definition above assumes that any action can be matched by any other action. However, the set of actions that can be used to match another action can be restricted based on prior knowledge. Lax bisimulation is very closely related to the idea of MDP homomorphisms (Ravindran & Barto, 2003).
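In one concrete special case the Kantorovich LP has a closed form: when the states embed on the real line and the ground semimetric is d(s_i, s_j) = |x_i − x_j|, the optimum equals the area between the two CDFs. A pure-Python sketch of that special case (for a general finite semimetric one would solve the dual transport LP above with an LP solver; this shortcut is an assumption about the ground metric, not the paper's setting):

```python
def kantorovich_1d(points, p, q):
    """Kantorovich (1-Wasserstein) distance between distributions p and q
    over real-valued points, with ground metric d(x, y) = |x - y|.
    Equals the integral of |F_P - F_Q| between consecutive support points."""
    order = sorted(range(len(points)), key=lambda i: points[i])
    total, cdf_gap = 0.0, 0.0
    for a, b in zip(order, order[1:]):
        cdf_gap += p[a] - q[a]                        # F_P - F_Q up to point a
        total += abs(cdf_gap) * (points[b] - points[a])
    return total
```

For example, moving a unit mass from 0 to 1 costs exactly 1, and shifting half the mass of a two-point distribution one step to the right also costs the transported mass times the distance.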
We now formally establish this connection. Definition 2. (Ravindran & Barto, 2003) A MDP homomorphism h from M = ⟨S,A,P,R⟩to M′ = ⟨S′,A′,P′,R′⟩is a tuple of surjections ⟨f,{gs : s ∈S}⟩with h(s,a) = (f(s),gs(a)), where f : S →S′ and gs : A →A′ such that R(s,a) = R′(f(s),gs(a)) and P(s,a, f −1(f(s′))) = P′(f(s),gs(a), f(s′)) Hence, a homomorphism puts in correspondence states, and has a state-dependent mapping between actions as well. We now show that homomorphisms are identical to lax probabilistic bisimulation. Theorem 3. Two states s and u are bisimilar if and only if they are related by some MDP homomorphism ⟨f,{gs : s ∈S}⟩in the sense that f(s) = f(u). Proof: For the first direction, let h be a MDP homomorphism and define the relation B such that sBu iff f(s) = f(u). Since gu is a surjection to A, there must be some b ∈A with gu(b) = gs(a). Hence, R(s,a) = R′(f(s),gs(a)) = R′(f(u),gu(b)) = R(u,b) Let X be a non-empty B-closed set such that f −1(f(s′)) = X for some s′. Then: P(s,a,X) = P′(f(s),gs(a), f(s′)) = P′(f(u),gu(b), f(s′)) = P(u,b,X) so B is a lax bisimulation relation. For the other direction, let B be a lax bisimulation relation. We will construct an MDP homomorphism in which sBu =⇒f(s) = f(u). Consider the partition S/B induced by the equivalence relation B on set S. For each equivalence class X ∈S/B, we choose a representative state sX ∈X and define f(sX) = sX and gsX (a) = a,∀a ∈A. Then, for any s ∼sX, we define f(s) = sX. From definition 1, we have that ∀a∃b s.t. Pr(X′|s,a) = Pr(X′|sX,b),∀X′ ∈S/B. Hence, we set gs(a) = b. Then, we have: P′(f(s),gs(a), f(s′)) = P′(f(sX),b′, f −1(f(s′)) = P(sX,b, f −1(f(s′)) = P(s,a, f −1(f(s′)) Also, R′(f(s),gs(a)) = R′(f(sX),b) = R(sX,a). Hence, we constructed a homomorphism. ⋄ 4 A metric for lax bisimulation We will now define a lax bisimulation metric for measuring similarity between state-action pairs, following the approach used by Ferns et al. (2004) for defining the bisimulation metric between states. 
We want to say that states s and u are close exactly when every action of one state is close to some action available in the other state. In order to capture this meaning, we first define similarity between state-action pairs, then we lift this to states using the Hausdorff metric (Munkres, 1999). Definition 4. Let $c_r, c_p \ge 0$ be constants with $c_r + c_p \le 1$. Given a 1-bounded semimetric d on S, the metric $\delta(d) : (S \times A) \times (S \times A) \to [0,1]$ is defined as follows: $\delta(d)((s,a),(u,b)) = c_r |R(s,a) - R(u,b)| + c_p K(d)(P(s,a,\cdot), P(u,b,\cdot))$. We now have to measure the distance between the set of actions at state s and the set of actions at state u. Given a metric between pairs of points, the Hausdorff metric can be used to measure the distance between sets of points. It is defined as follows. Definition 5. Given a finite 1-bounded metric space (M, d), let P(M) be the set of compact subsets of M (e.g., closed and bounded subsets of R). The Hausdorff metric $H(d) : P(M) \times P(M) \to [0,1]$ is defined as: $H(d)(X,Y) = \max\left(\sup_{x \in X} \inf_{y \in Y} d(x,y),\; \sup_{y \in Y} \inf_{x \in X} d(x,y)\right)$. Definition 6. Denote $X_s = \{(s,a) \mid a \in A\}$. Let M be the set of all semimetrics on S. We define the operator F : M → M as $F(d)(s,u) = H(\delta(d))(X_s, X_u)$. We note that the same definition can be applied both for discrete and for compact continuous action spaces. If the action set is compact then $X_s = \{s\} \times A$ is also compact, so the Hausdorff metric is still well defined. For simplicity, we consider the discrete case, so that max and min are defined. Theorem 7. F is monotonic and has a least fixed point $d_{fix}$ for which $d_{fix}(s,u) = 0$ iff s ∼ u. The proof is similar in flavor to (Ferns et al., 2004) and we omit it for lack of space. As both $e_{fix}$ and $d_{fix}$ quantify the difference in behaviour between states, it is not surprising to see that they constrain the difference in optimal value. Indeed, the bound below has previously been shown in (Ferns et al., 2004) for $e_{fix}$, but we also show that our metric $d_{fix}$ is tighter. Theorem 8.
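The Hausdorff lift of Definitions 5 and 6 can be sketched directly. For readability the sketch drops the transition term (i.e., takes c_p = 0), so state-action pairs are compared only through rewards; the reward table is a hypothetical miniature of the N6/S6 symmetry example, not data from the paper:

```python
def hausdorff(d, X, Y):
    """H(d)(X, Y) = max( max_x min_y d(x, y), max_y min_x d(x, y) )."""
    return max(max(min(d(x, y) for y in Y) for x in X),
               max(min(d(x, y) for x in X) for y in Y))

def lax_state_distance(R, actions, s, u, cr=1.0):
    """F(d)(s, u) with the transition term dropped (c_p = 0): every action of s
    must be matched in reward by *some* action of u, and vice versa."""
    return hausdorff(lambda a, b: cr * abs(R[(s, a)] - R[(u, b)]),
                     list(actions), list(actions))

# hypothetical rewards: moving toward the center pays 1, away pays 0
R = {('N6', 'north'): 0.0, ('N6', 'south'): 1.0,
     ('S6', 'north'): 1.0, ('S6', 'south'): 0.0}
```

Under the strict metric, comparing N6 and S6 action-by-action under the same label gives distance 1; under the Hausdorff lift, 'south' at N6 is matched by 'north' at S6, so the lax distance is 0, mirroring the symmetry argument in the text.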
Let efix be the metric defined in (Ferns et al., 2004). Then we have: cr|V ∗(s)−V ∗(u)| ≤d fix(s,u) ≤efix(s,u) Proof: We show via induction on n that for the sequence of iterates Vn encountered during value iteration, cr|Vn(s) −Vn(u)| ≤d fix(s,u) ≤efix(s,u), and then the result follows by merely taking limits. For the base case note that cr|V0(s)−V0(u)| = d0(s,u) = e0(s,u) = 0. Assume this holds for n. By the monotonicity of F, we have that F(dn)(s,u) ≤F(en)(s,u). Now, for any a, δ(en)((s,a),(u,a)) ≤G(en)(s,u), which implies: F(en)(s,u) ≤ max(max a δ(en)((s,a),(u,a)),max b δ(en)((s,b),(u,b)) ≤ max(max a G(en)(s,u),G(en)(s,u)) = G(en)(s,u) so dn+1 ≤en+1 Without loss of generality, assume that Vn+1(s) > Vn+1(u). Then: cr|Vn+1(s)−Vn+1(u)|=cr|max a (R(s,a)+γ∑ s′ P(s,a,s′)Vn(s′))−max b (R(u,b)+γ∑ s′ P(u,b,s′)Vn(s′))| =cr|(R(s,a′)+γ∑ s′ P(s,a′,s′)Vn(s′))−(R(t,b′)+γ∑ s′ P(u,b′,s′)Vn(s′))| =cr min b |(R(s,a′)+γ∑ s′ P(s,a′,s′)Vn(s′))−(R(u,b)+γ∑ s′ P(u,b,s′)Vn(s′))| ≤cr max a min b |(R(s,a)+γ∑ s′ P(s,a,s′)Vn(s′))−(R(t,b)+γ∑ s′ P(u,b,s′)Vn(s′))| ≤max a min b (cr|R(s,a)−R(u,b)|+cp|∑ s′ (P(s,a,s′)−P(u,b,s′))crγ cp Vn(s′)|) Now since γ ≤cp, we have 0 ≤crγ cp Vi(s′) ≤(1−cp)γ cp(1−γ) ≤1 and by the induction hypothesis crγ cp Vn(s)−crγ cp Vn(u) ≤cr|Vn(s)−Vn(u)| ≤dn(s,u) So { crγ cp Vn(s′) : s′ ∈S} is a feasible solution to the LP for K(dn)(P(s,a),P(t,b)). We then continue the inequality: cr|Vn+1(s) −Vn+1(u)| ≤maxa minb(cr|R(s,a) −R(u,b)| + cpK(dn)(P(s,a),P(u,b))) = F(dn)(s,u) = dn+1(s,u)⋄ 5 State aggregation We now show how we can use this notion of lax bisimulation metrics to construct approximate MDP homomorphisms. First, if we have an MDP homomorphism, we can use it to provide a state space aggregation, as follows. Definition 9. 
Given a MDP M and a homomorphism, an aggregated MDP M′ is given by (S′,A,{P(C,a,D) : a ∈A;C,D ∈S′},{R(C,a) : a ∈A,C ∈S′},ρ,gs : s ∈S) where S′ is a partition of S, ρ : S →S′ maps states to their aggregates, each gs : A →A relabels the action set and we have that ∀C,D ∈S′ and a ∈A, P(C,a,D) = 1 |C| ∑ s∈C P(s,gs(a),D) and R(C,a) = 1 |C| ∑ s∈C R(s,gs(a)) Note that all the states in a partition have actions that are relabelled specifically so they can exactly match each other’s behaviour. Thus, a policy in the aggregate MDP can be lifted to the original MDP by using this relabeling. Definition 10. If M′ is an aggregation of MDP M and π′ is a policy in M′, then the lifted policy is defined by π(s) = gs(π′(s′)). Using a lax bisimulation metric, it is possible to choose appropriate re-labelings so that states within a partition can approximately match each other’s actions. Definition 11. Given a lax bisimulation metric d and a MDP M, we say that an aggregated MDP M′ is d-consistent if each aggregated class C has a state s ∈C, called the representative of C, such that: ∀u ∈C,δ(d)((s,gs(a)),(u,gu(a))) ≤F(d)(s,u) When the re-labelings are chosen in this way, we can solve for the optimal value function of the aggregated MDP and be assured that for each state, its true optimal value is close to the optimal value of the partition in which it is contained. Theorem 12. If M′ is a dζ-consistent aggregation of a MDP M and n ≤ζ, then ∀s ∈S we have: cr|Vn(ρ(s))−Vn(s)| ≤m(ρ(s))+M n−1 ∑ k=1 γn−k. where m(C) = 2maxu∈C dζ(s′,u), s′ denotes the representative state of C and M = maxC m(C). Furthermore, if π′ is a policy in M′ and π is the corresponding lifted policy in M, then: cr|V π′ n (ρ(s))−V π n (s)| ≤m(ρ(s))+M n−1 ∑ k=1 γn−k Proof: |Vn+1(ρ(s))−Vn+1(s)| = =|max a (R(ρ(s),a)+γ ∑ D∈S′ P(ρ(s),a,D)Vn(D))−max a (R(s,a)+γ∑ s′ P(s,a,s′)Vn(s′))| ≤ 1 |ρ(s)| ∑ u∈ρ(s) max a |R(u,gu(a))−R(s,gs(a))|+γ| ∑ D∈S′ P(u,gu(a),D)Vn(D)−∑ s′ P(s,gs(a),s′)Vn(s′)| ! 
≤ 1 |ρ(s)| ∑ u∈ρ(s) max a |R(u,gu(a))−R(s,gs(a))|+γ|∑ s′ (P(u,gu(a),s′)Vn(ρ(s′))−P(s,gs(a),s′)Vn(s′))| ! ≤ 1 |ρ(s)| ∑ u∈ρ(s) max a (|R(u,gu(a))−R(s,gs(a))|+γ|∑ s′ (P(u,gu(a),s′)−P(s,gs(a),s′))Vn(s′) + γ|∑ s′ P(u,gu(a),s′)(Vn(ρ(s′))−Vn(s′))|) ≤ 1 cr|ρ(s)| ∑ u∈ρ(s) max a (cr|R(s,gs(a))−R(u,gu(a))| +cp|∑ s′ (P(u,gu(a),s′)−P(s,gs(a),s′))crγ cp Vn(s′)|)+ γ |ρ(s)| ∑ u∈ρ(s) max a ∑ s′ P(u,gu(a),s′)|Vn(ρ(s′))−Vn(s′)| From Theorem 8, we know that { crγ cp Vn(s′) : s′ ∈S} is a feasible solution to the primal LP for K(dn)(P(s,gs(a)),P(u,gu(a))). Let z be the representative used for ρ(s). Then we can continue as follows: ≤cr|R(s,gs(a)−R(u,gu(a))|+cpK(dn)(P(s,gs(a)),P(u,gu(a))) ≤cr|R(s,gs(a))−R(u,gu(a))|+cpK(dζ)(P(s,gs(a)),P(u,gu(a))) ≤cr|R(s,gs(a))−R(z,gz(a))|+cpK(dζ)(P(s,gs(a)),P(z,gz(a))) + cr|R(z,gz(a))−R(u,gu(a))|+cpK(dζ)(P(z,gz(a)),P(u,gu(a))) = dζ(s,z)+dζ(z,u) ≤m(ρ(s)) We continue with the original inequality using these two results: ≤ 1 cr ∑ u∈ρ(s) (cr|R(s,gs(a))−R(u,gu(a))|+cpK(dn)(P(s,gs(a)),P(u,gu(a)))) + γ |ρ(s)| ∑ u∈ρ(s) max a ∑ s′ P(u,gu(a),s′)max s′′ |Vn(ρ(s′′))−Vn(s′′)| ≤ 1 cr|ρ(s)| ∑ u∈ρ(s) m(ρ(s))+γmax s′ |Vn(ρ(s′))−Vn(s′)| ≤m(ρ(s)) cr +γmax s′′ m(ρ(s)) cr +M n−1 ∑ k=1 γn−k ! ≤ 1 cr m(ρ(s))+γmax s′ m(ρ(s′))+M n−1 ∑ k=1 γn+1−k ! ≤1 cr m(ρ(s))+M n ∑ k=1 γ(n+1)−k ! The second proof is nearly identical except that instead of maximizing over actions, the action selected by the policy, a = π′(ρ(s)), and the lifted policy, gs(a) = π(s) are used. ⋄ By taking limits we get the following theorem: Theorem 13. If M′ is a d fix-consistent aggregation of a MDP M, then ∀s ∈S we have: cr|V ∗(ρ(s))−V ∗(s)| ≤m(ρ(s))+ γ 1 −γM Furthermore, if π′ is any policy in M′ and π is the lifted policy to M then cr|V π′(ρ(s))−V π(s)| ≤m(ρ(s))+ γ 1 −γM where m(C) = 2maxu∈C d fix(s′,u), s′ is the representative state of C and M = maxC m(C). One appropriate way to aggregrate states is to choose some desired error bound ε > 0 and ensure that the states in each partition are within an ε-ball. 
A simple way to do this is to pick states at random and add to a partition each state within the ε-ball. Of course, better clustering heuristics can be used here as well. It has been noted that when the above condition holds, then under the unlaxed bisimulation metric $e_{fix}$, we can be assured that for each state s, $|V^*(\rho(s)) - V(s)|$ is bounded by $\frac{2\varepsilon}{c_r(1-\gamma)}$. The theorem above shows that under the lax bisimulation metric $d_{fix}$ this difference is actually bounded by $\frac{4\varepsilon}{c_r(1-\gamma)}$. However, as we illustrate in the next section, a massive reduction in the size of the state space can be achieved by moving from $e_{fix}$ to $d_{fix}$, even when using $\varepsilon' = \varepsilon/2$. For large systems, it might not be feasible to compute the metric $e_{fix}$ in the original MDP. In this case, we might want to use some sort of heuristic or prior knowledge to create an aggregation. Ravindran & Barto (2003) provided, based on a result from Whitt (1978), a bound on the difference in values between the optimal policy in the aggregated MDP and the lifted policy in the original MDP. We now show that our metric can be used to tighten this bound. Theorem 14. If M′ is an aggregation of an MDP M, π′ is an optimal policy in M′, π is the policy lifted from π′ to M, and $d'_{fix}$ corresponds to our metric computed on M′, then $|V^\pi(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1-\gamma}\left( \max_{s,a} |R(s,g_s(a)) - R(\rho(s),a)| + \frac{\gamma}{c_r} \max_{s,a} K(d'_{fix})(P(s,g_s(a)), P(\rho(s),a)) \right)$ [Figure 1 graphics omitted: the cross-shaped grid environment with arms N1–N6, S1–S6, E1–E6, W1–W6 around center C, and a plot of the number of lumped states (0–30) against ε (0.0–1.0) comparing the unlaxed and laxed metrics.] Figure 1: Example environment exhibiting symmetries (left).
Aggregation performance (right). Proof: We have: $|V^\pi(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1-\gamma} \max_{s,a} \left| R(s,g_s(a)) - R(\rho(s),a) + \gamma \sum_C (P(s,g_s(a),C) - P(\rho(s),a,C))\, V^{\pi'}(C) \right| \le \frac{2}{1-\gamma} \left( \max_{s,a} |R(s,g_s(a)) - R(\rho(s),a)| + \gamma \max_{s,a} \left| \sum_C (P(s,g_s(a),C) - P(\rho(s),a,C))\, V^{\pi'}(C) \right| \right) \le \frac{2}{1-\gamma} \left( \max_{s,a} |R(s,g_s(a)) - R(\rho(s),a)| + \max_{s,a} \frac{\gamma}{c_r} K(d'_{fix})(P(s,g_s(a)), P(\rho(s),a)) \right)$. The first inequality originally comes from (Whitt, 1978) and is applied to MDPs in (Ravindran & Barto, 2003). The last inequality holds since π′ is an optimal policy and thus by Theorem 8 we know that $\{c_r V^{\pi'}(C) : C \in S'\}$ is a feasible solution. ⋄ As a corollary, we can get the same bound as in (Ravindran & Barto, 2003) by bounding the Kantorovich metric by the total variation metric. Definition 15. Given two finite distributions P and Q, the total variation metric TV(P,Q) is defined as: $TV(P,Q) = \frac{1}{2} \sum_s |P(s) - Q(s)|$. Corollary 16. Let $\Delta = \max_{C,a} R(C,a) - \min_{C,a} R(C,a)$ be the maximum difference in rewards in the aggregated MDP. Then: $|V^\pi(s) - V^{\pi'}(\rho(s))| \le \frac{2}{1-\gamma} \left( \max_{s,a} |R(s,g_s(a)) - R(\rho(s),a)| + \frac{\gamma}{1-\gamma}\, \Delta \cdot TV(P(s,g_s(a)), P(\rho(s),a)) \right)$. Proof: This follows from the fact that: $\max_{C,D} d'_{fix}(C,D) \le c_r\Delta + c_p \max_{C,D} d'_{fix}(C,D) \le \cdots \le \frac{c_r\Delta}{1-c_p} \le \frac{c_r\Delta}{1-\gamma}$ and, using the total variation as an approximation (Gibbs & Su, 2002), we have: $K(d'_{fix})(P(s,g_s(a)), P(\rho(s),a)) \le \max_{C,D} d'_{fix}(C,D) \cdot TV(P(s,g_s(a)), P(\rho(s),a))$. ⋄ 6 Illustration Consider the cross-shaped MDP displayed in Figure 1. There is a reward of 1 in the center, and the probability of the agent moving in the intended direction is 0.8. For a given ε, we used the random partitioning algorithm outlined earlier to create a state aggregation. The graph plots the size of the aggregated MDPs obtained against ε, using the lax and the non-lax bisimulation metrics. In the case of the lax metric, we used ε′ = ε/2 to compensate for the factor of 2 difference in the error bound. It is very revealing that the number of partitions drops very quickly and levels off at around 6 or 7 for our algorithm.
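The randomized ε-ball partitioning used for this illustration can be sketched generically, given any precomputed state metric d; the function name, seed, and inputs are illustrative assumptions rather than the authors' implementation:

```python
import random

def epsilon_aggregate(states, d, eps, seed=0):
    """Randomized epsilon-ball partitioning: repeatedly pick an unassigned
    state as representative and absorb every unassigned state within eps of it."""
    rng = random.Random(seed)
    remaining = list(states)
    partitions = []
    while remaining:
        rep = remaining.pop(rng.randrange(len(remaining)))
        block = [rep] + [s for s in remaining if d(rep, s) <= eps]
        remaining = [s for s in remaining if d(rep, s) > eps]
        partitions.append(block)
    return partitions
```

With a tiny ε every state ends up in its own block; with a large ε everything collapses into one block. Between these extremes, the number of blocks produced depends on how much the metric (lax or unlaxed) conflates states, which is exactly what the plot in Figure 1 compares.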
This is because the MDP is collapsing to a state space close to the natural choice of {{C}}∪{{Ni,Si,Wi,Ei} : i ∈{1,2,3,4,5,6}}. Under the unlaxed metric, this is not likely to occur, and thus the first states to be partitioned together are the ones neighbouring each other (which can actually have quite different behaviours). 7 Discussion and future work We defined a metric for measuring the similarity of state-action pairs in a Markov Decision Process and used it in an algorithm for constructing approximate MDP homomorphisms. Our approach works significantly better than the bisimulation metrics of Ferns et al., as it allows capturing different regularities in the environment. The theoretical bound on the error in the value function presented in (Ravindran & Barto, 2004) can be derived using our metric. Although the metric is potentially expensive to compute, there are domains in which having an accurate aggregation is worth it. For example, in mobile device applications, one may have big computational resources initially to build an aggregation, but may then insist on a very coarse, good aggregation, to fit on a small device. The metric can also be used to find subtasks in a larger problem that can be solved using controllers from a pre-supplied library. For example, if a controller is available to navigate single rooms, the metric might be used to lump states in a building schematic into “rooms”. The aggregate MDP can then be used to solve the high level navigational task using the controller to navigate specific rooms. An important avenue for future work is reducing the computational complexity of this approach. Two sources of complexity include the quadratic dependence on the number of actions, and the evaluation of the Kantorovich metric. The first issue can be addressed by sampling pairs of actions, rather than considering all possibilities. 
We are also investigating the possibility of replacing the Kantorovich metric (which is very convenient from the theoretical point of view) with a more practical approximation. Finally, the extension to continuous states is very important. We currently have preliminary results on this issue, using an approach similar to (Ferns et al., 2005), which assumes lower-semi-continuity of the reward function. However, the details are not yet fully worked out. Acknowledgements: This work was funded by NSERC and CFI. References Arun-Kumar, S. (2006). On bisimilarities induced by relations on actions. SEFM '06: Proceedings of the Fourth IEEE International Conference on Software Engineering and Formal Methods (pp. 41–49). Washington, DC, USA: IEEE Computer Society. Ferns, N., Castro, P. S., Precup, D., & Panangaden, P. (2006). Methods for computing state similarity in Markov Decision Processes. Proceedings of the 22nd UAI. Ferns, N., Panangaden, P., & Precup, D. (2004). Metrics for finite Markov decision processes. Proceedings of the 20th UAI (pp. 162–169). Ferns, N., Panangaden, P., & Precup, D. (2005). Metrics for Markov decision processes with infinite state spaces. Proceedings of the 21st UAI (pp. 201–209). Gibbs, A., & Su, F. (2002). On choosing and bounding probability metrics. Givan, R., Dean, T., & Greig, M. (2003). Equivalence notions and model minimization in Markov Decision Processes. Artificial Intelligence, 147, 163–223. Larsen, K. G., & Skou, A. (1991). Bisimulation through probabilistic testing. Inf. Comput., 94, 1–28. Li, L., Walsh, T. J., & Littman, M. L. (2006). Towards a unified theory of state abstraction for MDPs. Proceedings of the International Symposium on Artificial Intelligence and Mathematics. Milner, R. (1995). Communication and concurrency. Prentice Hall International (UK) Ltd. Munkres, J. (1999). Topology. Prentice Hall. Puterman, M. L. (1994). Markov decision processes: discrete stochastic dynamic programming. Wiley. Ravindran, B., & Barto, A. G.
(2003). Relativized options: Choosing the right transformation. Proceedings of the 20th ICML (pp. 608–615). Ravindran, B., & Barto, A. G. (2004). Approximate homomorphisms: A framework for non-exact minimization in Markov Decision Processes. Proceedings of the Fifth International Conference on Knowledge Based Computer Systems. Whitt, W. (1978). Approximations of dynamic programs, I. Mathematics of Operations Research, 3, 231–243. Wolfe, A. P., & Barto, A. G. (2006). Decision tree methods for finding reusable MDP homomorphisms. Proceedings of AAAI.
Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks Alex Graves TU Munich, Germany graves@in.tum.de J¨urgen Schmidhuber IDSIA, Switzerland and TU Munich, Germany juergen@idsia.ch Abstract Offline handwriting recognition—the automatic transcription of images of handwritten text—is a challenging task that combines computer vision with sequence learning. In most systems the two elements are handled separately, with sophisticated preprocessing techniques used to extract the image features and sequential models such as HMMs used to provide the transcriptions. By combining two recent innovations in neural networks—multidimensional recurrent neural networks and connectionist temporal classification—this paper introduces a globally trained offline handwriting recogniser that takes raw pixel data as input. Unlike competing systems, it does not require any alphabet specific preprocessing, and can therefore be used unchanged for any language. Evidence of its generality and power is provided by data from a recent international Arabic recognition competition, where it outperformed all entries (91.4% accuracy compared to 87.2% for the competition winner) despite the fact that neither author understands a word of Arabic. 1 Introduction Offline handwriting recognition is generally observed to be harder than online handwriting recognition [14]. In the online case, features can be extracted from both the pen trajectory and the resulting image, whereas in the offline case only the image is available. Nonetheless, the standard recognition process is essentially the same: a sequence of features are extracted from the data, then matched to a sequence of labels (usually characters or sub-character strokes) using either a hidden Markov model (HMM) [9] or an HMM-neural network hybrid [10]. 
The main drawback of this approach is that the input features must meet the stringent independence assumptions imposed by HMMs (these assumptions are somewhat relaxed in the case of hybrid systems, but long-range input dependencies are still problematic). In practice this means the features must be redesigned for every alphabet, and, to a lesser extent, for every language. For example it would be impossible to use the same system to recognise both English and Arabic. Following our recent success in transcribing raw online handwriting data with recurrent networks [6], we wanted to build an offline recognition system that would work on raw pixels. As well as being alphabet-independent, such a system would have the advantage of being globally trainable, with the image features optimised along with the classifier. The online case was relatively straightforward, since the input data formed a 1D sequence that could be fed directly to a recurrent network. The long short-term memory (LSTM) network architecture [8, 3] was chosen for its ability to access long-range context, and the connectionist temporal classification [5] output layer allowed the network to transcribe the data with no prior segmentation. The offline case, however, is more challenging, since the input is no longer one-dimensional. A naive approach would be to present the images to the network one vertical line at a time, thereby transforming them into 1D sequences. However such a system would be unable to handle distortions along the vertical axis; for example the same image shifted up by one pixel would appear completely different. A more flexible solution is offered by multidimensional recurrent neural networks (MDRNNs) [7].

Figure 1: Two dimensional MDRNN. The thick lines show connections to the current point (i, j). The connections within the hidden layer plane are recurrent. The dashed lines show the scanning strips along which previous points were visited, starting at the top left corner.
MDRNNs, which are a special case of directed acyclic graph networks [1], generalise standard RNNs by providing recurrent connections along all spatio-temporal dimensions present in the data. These connections make MDRNNs robust to local distortions along any combination of input dimensions (e.g. image rotations and shears, which mix vertical and horizontal displacements) and allow them to model multidimensional context in a flexible way. We use multidimensional LSTM because it is able to access long-range context. The problem remains, though, of how to transform two-dimensional images into one-dimensional label sequences. Our solution is to pass the data through a hierarchy of MDRNN layers, with blocks of activations gathered together after each level. The heights of the blocks are chosen to incrementally collapse the 2D images onto 1D sequences, which can then be labelled by the output layer. Such hierarchical structures are common in computer vision [15], because they allow complex features to be built up in stages. In particular our multilayered structure is similar to that used by convolution networks [11], although it should be noted that because convolution networks are not recurrent, they cannot be used for cursive handwriting recognition without presegmented inputs. The method is described in detail in Section 2, experimental results are given in Section 3, and conclusions and directions for future work are given in Section 4.

2 Method

The three components of our recognition system are: (1) multidimensional recurrent neural networks, and multidimensional LSTM in particular; (2) the connectionist temporal classification output layer; and (3) the hierarchical structure. In what follows we describe each component in turn, then show how they fit together to form a complete system.
For a more detailed description of (1) and (2) we refer the reader to [4].

2.1 Multidimensional Recurrent Neural Networks

The basic idea of multidimensional recurrent neural networks (MDRNNs) [7] is to replace the single recurrent connection found in standard recurrent networks with as many connections as there are spatio-temporal dimensions in the data. These connections allow the network to create a flexible internal representation of surrounding context, which is robust to localised distortions. An MDRNN hidden layer scans through the input in 1D strips, storing its activations in a buffer. The strips are ordered in such a way that at every point the layer has already visited the points one step back along every dimension. The hidden activations at these previous points are fed to the current point through recurrent connections, along with the input. The 2D case is illustrated in Fig. 1. One such layer is sufficient to give the network access to all context against the direction of scanning from the current point (e.g. to the top and left of (i, j) in Fig. 1). However we usually want surrounding context in all directions. The same problem exists in 1D networks, where it is often useful to have information about the future as well as the past. The canonical 1D solution is bidirectional recurrent networks [16], where two separate hidden layers scan through the input forwards and backwards. The generalisation of bidirectional networks to $n$ dimensions requires $2^n$ hidden layers, starting in every corner of the $n$ dimensional hypercube and scanning in opposite directions. For example, a 2D network has four layers, one starting in the top left and scanning down and right, one starting in the bottom left and scanning up and right, etc. All the hidden layers are connected to a single output layer, which therefore receives information about all surrounding context.
The error gradient of an MDRNN can be calculated with an $n$-dimensional extension of backpropagation through time. As in the 1D case, the data is processed in the reverse order of the forward pass, with each hidden layer receiving both the output derivatives and its own $n$ 'future' derivatives at every timestep. Let $a^p_j$ and $b^p_j$ be respectively the input and activation of unit $j$ at point $p = (p_1, \ldots, p_n)$ in an $n$-dimensional input sequence $x$ with dimensions $(D_1, \ldots, D_n)$. Let $p^-_d = (p_1, \ldots, p_d - 1, \ldots, p_n)$ and $p^+_d = (p_1, \ldots, p_d + 1, \ldots, p_n)$. Let $w_{ij}$ and $w^d_{ij}$ be respectively the weight of the feedforward connection from unit $i$ to unit $j$ and the recurrent connection from $i$ to $j$ along dimension $d$. Let $\theta_h$ be the activation function of hidden unit $h$, and for some unit $j$ and some differentiable objective function $O$ let $\delta^p_j = \partial O / \partial a^p_j$. Then the forward and backward equations for an $n$-dimensional MDRNN with $I$ input units, $K$ output units, and $H$ hidden summation units are as follows:

Forward Pass
$$a^p_h = \sum_{i=1}^{I} x^p_i w_{ih} + \sum_{\substack{d=1\\ p_d > 0}}^{n} \sum_{\hat{h}=1}^{H} b^{p^-_d}_{\hat{h}} w^d_{\hat{h}h}, \qquad b^p_h = \theta_h(a^p_h)$$

Backward Pass
$$\delta^p_h = \theta'_h(a^p_h)\left(\sum_{k=1}^{K} \delta^p_k w_{hk} + \sum_{\substack{d=1\\ p_d < D_d - 1}}^{n} \sum_{\hat{h}=1}^{H} \delta^{p^+_d}_{\hat{h}} w^d_{h\hat{h}}\right)$$

2.1.1 Multidimensional LSTM

Long Short-Term Memory (LSTM) [8, 3] is an RNN architecture designed for data with long-range interdependencies. An LSTM layer consists of recurrently connected 'memory cells', whose activations are controlled by three multiplicative gate units: the input gate, forget gate and output gate. The gates allow the cells to store and retrieve information over time, giving them access to long-range context. The standard formulation of LSTM is explicitly one-dimensional, since each cell contains a single recurrent connection, whose activation is controlled by a single forget gate. However we can extend this to $n$ dimensions by using instead $n$ recurrent connections (one for each of the cell's previous states along every dimension) with $n$ forget gates.
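The forward pass above can be sketched in a few lines of NumPy for the 2D case. This is our own minimal illustration (the function name, shapes, and the choice of tanh are assumptions, not the authors' code); it scans in raster order, so the points one step back along each dimension have always been visited already, as the text requires.

```python
import numpy as np

def mdrnn_forward_2d(x, w_in, w_rec, theta=np.tanh):
    """Forward pass of a single 2D MDRNN hidden layer (cf. Section 2.1).

    x     : input array of shape (D1, D2, I)
    w_in  : feedforward weights w_ih, shape (I, H)
    w_rec : [w^1, w^2], one (H, H) recurrent weight matrix per dimension
    Returns the hidden activations b, shape (D1, D2, H).
    """
    D1, D2, I = x.shape
    H = w_in.shape[1]
    b = np.zeros((D1, D2, H))
    # Raster scan: at (p1, p2), points (p1-1, p2) and (p1, p2-1)
    # have already been processed, matching the sums over p_d > 0.
    for p1 in range(D1):
        for p2 in range(D2):
            a = x[p1, p2] @ w_in
            if p1 > 0:
                a += b[p1 - 1, p2] @ w_rec[0]
            if p2 > 0:
                a += b[p1, p2 - 1] @ w_rec[1]
            b[p1, p2] = theta(a)
    return b
```

A full bidirectional 2D network would run four such scans, one from each corner, and feed all four activation arrays to the output layer.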
Consider an MDLSTM memory cell in a hidden layer of $H$ cells, connected to $I$ input units and $K$ output units. The subscripts $c$, $\iota$, $\phi$ and $\omega$ refer to the cell, input gate, forget gate and output gate respectively. $b^p_h$ is the output of cell $h$ in the hidden layer at point $p$ in the input sequence, and $s^p_c$ is the state of cell $c$ at $p$. $f_1$ is the activation function of the gates, and $f_2$ and $f_3$ are respectively the cell input and output activation functions. The suffix $\phi,d$ denotes the forget gate corresponding to recurrent connection $d$. The input gate $\iota$ is connected to previous cell $c$ along all dimensions with the same weight ($w_{c\iota}$) whereas the forget gates are connected to cell $c$ with a separate weight $w_{c(\phi,d)}$ for each dimension $d$. Then the forward and backward equations are as follows:

Forward Pass

Input Gate:
$$b^p_\iota = f_1\left(\sum_{i=1}^{I} x^p_i w_{i\iota} + \sum_{\substack{d=1\\ p_d > 0}}^{n}\left(w_{c\iota} s^{p^-_d}_c + \sum_{h=1}^{H} b^{p^-_d}_h w^d_{h\iota}\right)\right)$$

Forget Gate:
$$b^p_{\phi,d} = f_1\left(\sum_{i=1}^{I} x^p_i w_{i(\phi,d)} + \sum_{\substack{d'=1\\ p_{d'} > 0}}^{n} \sum_{h=1}^{H} b^{p^-_{d'}}_h w^{d'}_{h(\phi,d)} + \begin{cases} w_{c(\phi,d)} s^{p^-_d}_c & \text{if } p_d > 0 \\ 0 & \text{otherwise} \end{cases}\right)$$

Cell:
$$a^p_c = \sum_{i=1}^{I} x^p_i w_{ic} + \sum_{\substack{d=1\\ p_d > 0}}^{n} \sum_{h=1}^{H} b^{p^-_d}_h w^d_{hc}$$

State:
$$s^p_c = b^p_\iota f_2(a^p_c) + \sum_{\substack{d=1\\ p_d > 0}}^{n} s^{p^-_d}_c b^p_{\phi,d}$$

Output Gate:
$$b^p_\omega = f_1\left(\sum_{i=1}^{I} x^p_i w_{i\omega} + \sum_{\substack{d=1\\ p_d > 0}}^{n} \sum_{h=1}^{H} b^{p^-_d}_h w^d_{h\omega} + w_{c\omega} s^p_c\right)$$

Cell Output:
$$b^p_c = b^p_\omega f_3(s^p_c)$$

Backward Pass

Cell Output:
$$\epsilon^p_c \overset{\text{def}}{=} \frac{\partial O}{\partial b^p_c} = \sum_{k=1}^{K} \delta^p_k w_{ck} + \sum_{\substack{d=1\\ p_d < D_d - 1}}^{n} \sum_{h=1}^{H} \delta^{p^+_d}_h w^d_{ch}$$

Output Gate:
$$\delta^p_\omega = f'_1(a^p_\omega)\, \epsilon^p_c f_3(s^p_c)$$

State:
$$\epsilon^p_s \overset{\text{def}}{=} \frac{\partial O}{\partial s^p_c} = b^p_\omega f'_3(s^p_c)\epsilon^p_c + \delta^p_\omega w_{c\omega} + \sum_{\substack{d=1\\ p_d < D_d - 1}}^{n}\left(\epsilon^{p^+_d}_s b^{p^+_d}_{\phi,d} + \delta^{p^+_d}_\iota w_{c\iota} + \delta^{p^+_d}_{\phi,d} w_{c(\phi,d)}\right)$$

Cell:
$$\delta^p_c = b^p_\iota f'_2(a^p_c)\epsilon^p_s$$

Forget Gate:
$$\delta^p_{\phi,d} = \begin{cases} f'_1(a^p_{\phi,d})\, s^{p^-_d}_c \epsilon^p_s & \text{if } p_d > 0 \\ 0 & \text{otherwise} \end{cases}$$

Input Gate:
$$\delta^p_\iota = f'_1(a^p_\iota) f_2(a^p_c)\epsilon^p_s$$

2.2 Connectionist Temporal Classification

Connectionist temporal classification (CTC) [5] is an output layer designed for sequence labelling with RNNs.
Unlike other neural network output layers it does not require pre-segmented training data, or postprocessing to transform its outputs into transcriptions. Instead, it trains the network to directly estimate the conditional probabilities of the possible labellings given the input sequences. A CTC output layer contains one more unit than there are elements in the alphabet $L$ of labels for the task. The output activations are normalised at each timestep with the softmax activation function [2]. The first $|L|$ outputs estimate the probabilities of observing the corresponding labels at that time, and the extra output estimates the probability of observing a 'blank', or no label. The combined output sequence estimates the joint probability of all possible alignments of the input sequence with all sequences of labels and blanks. The probability of a particular labelling can then be estimated by summing over the probabilities of all the alignments that correspond to it. More precisely, for a length $T$ input sequence $x$, the CTC outputs define a probability distribution over the set $L'^T$ of length $T$ sequences over the alphabet $L' = L \cup \{\text{blank}\}$. To distinguish them from labellings, we refer to the elements of $L'^T$ as paths. Since the probabilities of the labels at each timestep are conditionally independent given $x$, the conditional probability of a path $\pi \in L'^T$ is given by $p(\pi|x) = \prod_{t=1}^{T} y^t_{\pi_t}$, where $y^t_k$ is the activation of output unit $k$ at time $t$. Paths are mapped onto labellings $l \in L^{\leq T}$ by an operator $B$ that removes first the repeated labels, then the blanks. So for example, both $B(a, -, a, b, -)$ and $B(-, a, a, -, -, a, b, b)$ yield the labelling $(a, a, b)$. Since the paths are mutually exclusive, the conditional probability of some labelling $l \in L^{\leq T}$ is the sum of the probabilities of all paths corresponding to it: $p(l|x) = \sum_{\pi \in B^{-1}(l)} p(\pi|x)$.
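The operator $B$ and the path sum it induces can be written down directly. The sketch below (function names are our own, not from the paper) implements $B$ and brute-forces $p(l|x)$ by enumerating every length-$T$ path, which is exactly the sum described above and is feasible only for tiny $T$; the dynamic programme discussed next makes it efficient.

```python
from itertools import product

def collapse(path, blank='-'):
    """The CTC operator B: remove repeated labels first, then blanks."""
    deduped = [p for i, p in enumerate(path) if i == 0 or p != path[i - 1]]
    return tuple(p for p in deduped if p != blank)

def labelling_prob(l, y, alphabet=('a', '-')):
    """Brute-force p(l|x) = sum over paths pi with B(pi) = l of
    prod_t y[t][pi_t], where y[t] maps each symbol to its probability."""
    total = 0.0
    for path in product(alphabet, repeat=len(y)):
        if collapse(path) == tuple(l):
            p = 1.0
            for t, k in enumerate(path):
                p *= y[t][k]
            total += p
    return total
```

For example, with $T = 2$ and uniform outputs over {'a', blank}, three of the four paths collapse to ('a',) and one to the empty labelling, so the probabilities are 0.75 and 0.25 respectively.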
Although a naive calculation of this sum is unfeasible, it can be efficiently evaluated with a dynamic programming algorithm, similar to the forward-backward algorithm for HMMs. To allow for blanks in the output paths, for each labelling $l \in L^{\leq T}$ consider a modified labelling $l' \in L'^{\leq T}$, with blanks added to the beginning and the end and inserted between every pair of labels. The length $|l'|$ of $l'$ is therefore $2|l| + 1$. For a labelling $l$, define the forward variable $\alpha_t(s)$ as the summed probability of all path beginnings reaching index $s$ of $l'$ at time $t$, and the backward variables $\beta_t(s)$ as the summed probability of all path endings that would complete the labelling $l$ if the path beginning had reached $s$ at time $t$. Both the forward and backward variables are calculated recursively [5]. The label sequence probability is given by the sum of the products of the forward and backward variables at any timestep, i.e. $p(l|x) = \sum_{s=1}^{|l'|} \alpha_t(s)\beta_t(s)$. Let $S$ be a training set, consisting of pairs of input and target sequences $(x, z)$, where $|z| \leq |x|$. Then the objective function $O$ for CTC is the negative log probability of the network correctly labelling all of $S$: $O = -\sum_{(x,z) \in S} \ln p(z|x)$. The network can be trained with gradient descent by first differentiating $O$ with respect to the outputs, then using backpropagation through time to find the derivatives with respect to the weights. Note that the same label (or blank) may be repeated several times for a single labelling $l$. We define the set of positions where label $k$ occurs as $lab(l, k) = \{s : l'_s = k\}$, which may be empty. Setting $l = z$ and differentiating $O$ with respect to the network outputs, we obtain:
$$\frac{\partial O}{\partial a^t_k} = -\frac{\partial \ln p(z|x)}{\partial a^t_k} = y^t_k - \frac{1}{p(z|x)} \sum_{s \in lab(z,k)} \alpha_t(s)\beta_t(s),$$
where $a^t_k$ and $y^t_k$ are respectively the input and output of CTC unit $k$ at time $t$ for some $(x, z) \in S$. Once the network is trained, we can label some unknown input sequence $x$ by choosing the labelling $l^*$ with the highest conditional probability, i.e.
$l^* = \operatorname{arg\,max}_l\, p(l|x)$. In cases where a dictionary is used, the labelling can be constrained to yield only sequences of complete words by using the CTC token passing algorithm [6]. For the experiments in this paper, the labellings were further constrained to give single word sequences only, and the ten most probable words were recorded.

2.3 Network Hierarchy

Many computer vision systems use a hierarchical approach to feature extraction, with the features at each level used as input to the next level [15]. This allows complex visual properties to be built up in stages. Typically, such systems use subsampling, with the feature resolution decreased at each stage. They also generally have more features at the higher levels. The basic idea is to progress from a small number of simple local features to a large number of complex global features. We created a hierarchical structure by repeatedly composing MDLSTM layers with feedforward layers. The basic procedure is as follows: (1) the image is divided into small pixel blocks, each of which is presented as a single input to the first set of MDLSTM layers (e.g. a 4x3 block is reduced to a length 12 vector). If the image does not divide exactly into blocks, it is padded with zeros. (2) the four MDLSTM layers scan through the pixel blocks in all directions. (3) the activations of the MDLSTM layers are collected into blocks. (4) these blocks are given as input to a feedforward layer. Note that all the layers have a 2D array of activations: e.g. a 10 unit feedforward layer with input from a 5x5 array of MDLSTM blocks has a total of 250 activations. The above process is repeated as many times as required, with the activations of the feedforward layer taking the place of the original image. The purpose of the blocks is twofold: to collect local contextual information, and to reduce the area of the activation arrays. In particular, we want to reduce the vertical dimension, since the CTC output layer requires a 1D sequence as input.
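The block-gathering in steps (1) and (3), including the zero padding, can be sketched as a single reshape. The helper below is our own illustration (name and shapes assumed, not the authors' code); each non-overlapping block of activations is flattened into one input vector, e.g. a 4x3 block of single-channel pixels becomes a length 12 vector.

```python
import numpy as np

def gather_blocks(a, bh, bw):
    """Collect an activation array of shape (H, W, F) into non-overlapping
    bh x bw blocks, flattening each block into one vector of length bh*bw*F.
    The array is zero-padded if it does not divide exactly into blocks."""
    H, W, F = a.shape
    Hp = -(-H // bh) * bh  # ceil H up to a multiple of the block height
    Wp = -(-W // bw) * bw
    padded = np.zeros((Hp, Wp, F), dtype=a.dtype)
    padded[:H, :W] = a
    out = padded.reshape(Hp // bh, bh, Wp // bw, bw, F)
    return out.transpose(0, 2, 1, 3, 4).reshape(Hp // bh, Wp // bw, bh * bw * F)
```

Applying this between each MDLSTM stage shrinks the activation array, which is what incrementally collapses the 2D image towards the 1D sequence the CTC layer needs.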
Note that the blocks themselves do not reduce the overall amount of data; that is done by the layers that process them, which are therefore analogous to the subsampling steps in other approaches (although with trainable weights rather than a fixed subsampling function). For most tasks we find that a hierarchy of three MDLSTM/feedforward stages gives the best results. We use the standard 'inverted pyramid' structure, with small layers at the bottom and large layers at the top. As well as allowing for more features at higher levels, this leads to efficient networks, since most of the weights are concentrated in the upper layers, which have a smaller input area. In general we cannot assume that the input images are of fixed size. Therefore it is difficult to choose block heights that ensure that the final activation array will always be one-dimensional, as required by CTC. A simple solution is to collapse the final array by summing over all the inputs in each vertical line, i.e. the input at time $t$ to CTC unit $k$ is given by $a^t_k = \sum_x a^{(x,t)}_k$, where $a^{(x,y)}_k$ is the uncollapsed input to unit $k$ at point $(x, y)$ in the final array.

Figure 2: The complete recognition system. First the input image is collected into boxes 3 pixels wide and 4 pixels high which are then scanned by four MDLSTM layers. The activations of the cells in each layer are displayed separately, and the arrows in the corners indicate the scanning direction. Next the MDLSTM activations are gathered into 4 x 3 boxes and fed to a feedforward layer of tanh summation units. This process is repeated two more times, until the final MDLSTM activations are collapsed to a 1D sequence and transcribed by the CTC layer. In this case all characters are correctly labelled except the second last one, and the correct town name is chosen from the dictionary.

3 Experiments

To see how our method compared to the state of the art, we applied it to data from the ICDAR 2007 Arabic handwriting recognition competition [12].
Although we were too late to enter the competition itself, the organisers kindly agreed to evaluate our system according to the competition criteria. We did not receive the test data at any point, and all evaluations were carried out by them. The goal of the competition was to identify the postcodes of Tunisian town and village names. The names are presented individually, so it is an isolated word recognition task. However we would like to point out that our system is equally applicable to unconstrained handwriting, and has been successfully applied to complete lines of English text.

3.1 Data

The competition was based on the IFN/ENIT database of handwritten Arabic words [13]. The publicly available data consists of 32,492 images of handwritten Tunisian town names, of which we used 30,000 for training, and 2,492 for validation. The images were extracted from artificial forms filled in by over 400 Tunisian people. The forms were designed to simulate writing on a letter, and contained no lines or boxes to constrain the writing style.

Table 1: Results on the ICDAR 2007 Arabic handwriting recognition contest. All scores are percentages of correctly identified postcodes. The systems are ordered by the 'top 1' results on test set 'f'. The best score in each column is shown in bold.

                          SET f                    SET s
SYSTEM         top 1   top 5   top 10    top 1   top 5   top 10
CACI-3         14.28   29.88   37.91     10.68   21.74   30.20
CACI-2         15.79   21.34   22.33     14.24   19.39   20.53
CEDAR          59.01   78.76   83.70     41.32   61.98   69.87
MITRE          61.70   81.61   85.69     49.91   70.50   76.48
UOB-ENST-1     79.10   87.69   90.21     64.97   78.39   82.20
PARIS V        80.18   91.09   92.98     64.38   78.12   82.13
ICRA           81.47   90.07   92.15     72.22   82.84   86.27
UOB-ENST-2     81.65   90.81   92.35     69.61   83.79   85.89
UOB-ENST-4     81.81   88.71   90.40     70.57   79.85   83.34
UOB-ENST-3     81.93   91.20   92.76     69.93   84.11   87.03
SIEMENS-1      82.77   92.37   93.92     68.09   81.70   85.19
MIE            83.34   91.67   93.48     68.40   80.93   83.73
SIEMENS-2      87.22   94.05   95.42     73.94   85.44   88.18
Ours           91.43   96.12   96.75     78.83   88.00   91.05
Each image was supplied with a ground truth transcription for the individual characters¹. There were 120 distinct characters in total. A list of 937 town names and postcodes was provided. Many of the town names had transcription variants, giving a total of 1,518 entries in the complete dictionary. The test data (which is not published) was divided into sets 'f' and 's'. The main competition results were based on set 'f'. Set 's' contains data collected in the United Arab Emirates using the same forms; its purpose was to test the robustness of the recognisers to regional writing variations. The systems were allowed to choose up to 10 postcodes for each image, in order of preference. The test set performance using the top 1, top 5, and top 10 answers was recorded by the organisers.

3.2 Network Parameters

The structure shown in Figure 2 was used, with each layer fully connected to the next layer in the hierarchy, all MDLSTM layers connected to themselves, and all units connected to a bias weight. There were 159,369 weights in total. This may sound like a lot, but as mentioned in Section 2.3, the 'inverted pyramid' structure greatly reduces the actual number of weight operations. In effect the higher up networks (where the vast majority of the weights are concentrated) are processing much smaller images than those lower down. The squashing function for the gates was the logistic sigmoid $f_1(x) = 1/(1 + e^{-x})$, while tanh was used for $f_2$ and $f_3$. Each pass through the training set took about an hour on a desktop computer, and the network converged after 85 passes. The complete system was trained with online gradient descent, using a learning rate of $10^{-4}$ and a momentum of 0.9. The character error rate was evaluated on the validation set after every pass through the training set, and training was stopped after 50 evaluations with no improvement.
The weights giving the lowest error rate on the validation set were passed to the competition organisers for assessment on the test sets.

3.3 Results

Table 1 clearly shows that our system outperformed all entries in the 2007 ICDAR Arabic recognition contest. The other systems, most of which are based on hidden Markov models, are identified by the names of the groups that submitted them (see [12] for more information).

¹At first we forgot that Arabic reads right to left and presented the transcriptions backwards. The system performed surprisingly well, with a character error rate of 17.8%, compared to 10.7% for the correct targets.

4 Conclusions and Future Work

We have combined multidimensional LSTM with connectionist temporal classification and a hierarchical layer structure to create a powerful offline handwriting recogniser. The system is very general, and has been successfully applied to English as well as Arabic. Indeed, since the dimensionality of the networks can be changed to match that of the data, it could in principle be used for almost any supervised sequence labelling task.

Acknowledgements

We would like to thank Haikal El Abed for giving us access to the ICDAR competition data, and for persisting in the face of technical despair to install and evaluate our software. This work was supported by the excellence cluster "Cognition for Technical Systems" (CoTeSys) from the German Research Foundation (DFG).

References

[1] P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. J. Mach. Learn. Res., 4:575–602, 2003.
[2] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogelman-Soulié and J. Hérault, editors, Neurocomputing: Algorithms, Architectures and Applications, pages 227–236. Springer-Verlag, 1990.
[3] F. Gers, N. Schraudolph, and J. Schmidhuber.
Learning precise timing with LSTM recurrent networks. Journal of Machine Learning Research, 3:115–143, 2002.
[4] A. Graves. Supervised Sequence Labelling with Recurrent Neural Networks. PhD thesis.
[5] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the International Conference on Machine Learning, ICML 2006, Pittsburgh, USA, 2006.
[6] A. Graves, S. Fernández, M. Liwicki, H. Bunke, and J. Schmidhuber. Unconstrained online handwriting recognition with recurrent neural networks. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[7] A. Graves, S. Fernández, and J. Schmidhuber. Multidimensional recurrent neural networks. In Proceedings of the 2007 International Conference on Artificial Neural Networks, Porto, Portugal, September 2007.
[8] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
[9] J. Hu, S. G. Lim, and M. K. Brown. Writer independent on-line handwriting recognition using an HMM approach. Pattern Recognition, 33:133–147, 2000.
[10] S. Jaeger, S. Manke, J. Reichert, and A. Waibel. On-line handwriting recognition: the NPen++ recognizer. International Journal on Document Analysis and Recognition, 3:169–180, 2001.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[12] V. Märgner and H. E. Abed. Arabic handwriting recognition competition. In ICDAR '07: Proceedings of the Ninth International Conference on Document Analysis and Recognition (ICDAR 2007) Vol 2, pages 1274–1278, Washington, DC, USA, 2007. IEEE Computer Society.
[13] M. Pechwitz, S. S. Maddouri, V. Märgner, N. Ellouze, and H. Amiri. IFN/ENIT-database of handwritten Arabic words.
In 7th Colloque International Francophone sur l'Écrit et le Document (CIFED 2002), Hammamet, Tunis, 2002.
[14] R. Plamondon and S. N. Srihari. On-line and off-line handwriting recognition: a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.
[15] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, 1999.
[16] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45:2673–2681, November 1997.
Robust Near-Isometric Matching via Structured Learning of Graphical Models

Julian J. McAuley, NICTA/ANU, julian.mcauley@nicta.com.au
Tibério S. Caetano, NICTA/ANU, tiberio.caetano@nicta.com.au
Alexander J. Smola, Yahoo! Research*, alex@smola.org

Abstract

Models for near-rigid shape matching are typically based on distance-related features, in order to infer matches that are consistent with the isometric assumption. However, real shapes from image datasets, even when expected to be related by "almost isometric" transformations, are actually subject not only to noise but also, to some limited degree, to variations in appearance and scale. In this paper, we introduce a graphical model that parameterises appearance, distance, and angle features and we learn all of the involved parameters via structured prediction. The outcome is a model for near-rigid shape matching which is robust in the sense that it is able to capture the possibly limited but still important scale and appearance variations. Our experimental results reveal substantial improvements upon recent successful models, while maintaining similar running times.

1 Introduction

Matching shapes in images has many applications, including image retrieval, alignment, and registration [1, 2, 3, 4]. Typically, matching is approached by selecting features for a set of landmark points in both images; a correspondence between the two is then chosen such that some distance measure between these features is minimised. A great deal of attention has been devoted to defining complex features which are robust to changes in rotation, scale etc. [5, 6].¹ An important class of matching problems is that of near-isometric shape matching. In this setting, it is assumed that shapes are defined up to an isometric transformation (allowing for some noise), and therefore distance features are typically used to encode the shape.
Recent work has shown how the isometric constraint can be exploited by a particular type of graphical model whose topology encodes the necessary properties for obtaining optimal matches in polynomial time [11]. Another line of work has focused on structured learning to optimize graph matching scores, however no explicit exploitation of the geometrical constraints involved in shape modeling is made [12]. In this paper, we combine the best of these two approaches into a single model. We produce an exact, efficient model to solve near-isometric shape matching problems using not only isometry-invariant features, but also appearance and scale-invariant features. By doing so we can learn the relative importances of variations in appearance and scale with regard to variations in shape per se. Therefore, even knowing that we are in a near-isometric setting, we will capture the eventual variations in appearance and scale into our matching criterion in order to produce a robust near-isometric matcher. In terms of learning, we introduce a two-stage structured learning approach to address the speed and memory efficiency of this model.

*Alexander J. Smola was with NICTA at the time of this work.
¹We restrict our attention to this type of approach, i.e. that of matching landmarks between images. Some notable approaches deviate from this norm – see (for example) [7, 8, 9, 10].

Figure 1: The graphical model introduced in [11].

2 Background

2.1 Shape Matching

'Shape matching' can mean many different things, depending on the precise type of query one is interested in. Here we study the case of identifying an instance of a template shape ($S \subseteq T$) in a target scene ($U$) [1].² We assume that we know $S$, i.e. the points in the template that we want to query in the scene. Typically both $T$ and $U$ correspond to a set of 'landmark' points, taken from a pair of images (common approaches include [6, 13, 14]).
For each point $t \in T$ and $u \in U$, a certain set of unary features are extracted (here denoted by $\phi(t)$, $\phi(u)$), which contain local information about the image at that point [5, 6]. If $y : S \to U$ is a generic mapping representing a potential match, the goal is then to find a mapping $\hat{y}$ which minimises the aggregate distance between corresponding features, i.e.
$$\hat{y} = f(S, U) = \operatorname*{argmin}_y \sum_{i=1}^{|S|} c_1(s_i, y(s_i)), \quad \text{where } c_1(s_i, y(s_i)) = \|\phi(s_i) - \phi(y(s_i))\|_2^2 \qquad (1)$$
(here $\|\cdot\|_2$ denotes the L2 norm). For injective $y$ eq. (1) is a linear assignment problem, efficiently solvable in cubic time. In addition to unary or first-order features, pairwise or second-order features can be induced from the locations of the unary features. In this case eq. (1) would be generalised to minimise an aggregate distance between pairwise features. This however induces an NP-hard problem (quadratic assignment). Discriminative structured learning has recently been applied to models of both linear and quadratic assignment in [12].

2.2 Graphical Models

In isometric matching settings, one may suspect that it may not be necessary to include all pairwise relations in quadratic assignment. In fact a recent paper [11] has shown that if only the distances as encoded by the graphical model depicted in figure 1 are taken into account (nodes represent points in $S$ and states represent points in $U$), exact probabilistic inference in such a model can solve the isometric problem optimally. That is, an energy function of the following form is minimised:³
$$\sum_{i=1}^{|S|} \left[ c_2(s_i, s_{i+1}, y(s_i), y(s_{i+1})) + c_2(s_i, s_{i+2}, y(s_i), y(s_{i+2})) \right] \qquad (2)$$
In [11], it is shown that loopy belief propagation using this model converges to the optimal assignment, and that the number of iterations required before convergence is small in practice. We will extend this model by adding a unary term, $c_1(s_i, y(s_i))$ (as in eq. (1)), and a third-order term, $c_3(s_i, s_{i+1}, s_{i+2}, y(s_i), y(s_{i+1}), y(s_{i+2}))$. Note that the graph topology remains the same.
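As an aside, the linear assignment problem of eq. (1) can be sanity-checked on small instances by exhaustive search over injective mappings; in practice one would use the Hungarian algorithm for the cubic-time solution. The helper and its toy features below are our own illustration, not from the paper.

```python
import numpy as np
from itertools import permutations

def best_injective_match(phi_S, phi_U):
    """Minimise sum_i c1(s_i, y(s_i)) = ||phi(s_i) - phi(y(s_i))||_2^2
    over all injective mappings y : S -> U, by brute force."""
    # C[i, j] is the unary cost of matching template point i to scene point j.
    C = ((phi_S[:, None, :] - phi_U[None, :, :]) ** 2).sum(-1)
    return min(permutations(range(len(phi_U)), len(phi_S)),
               key=lambda y: sum(C[i, y[i]] for i in range(len(phi_S))))
```

With three template points and three scene points whose features are slight perturbations of each other, the search recovers the intended correspondence.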
²Here $T$ is the set of all points in the template scene, whereas $S$ corresponds to those points in which we are interested. It is also important to note that we treat $S$ as an ordered object in our setting.
³$s_{i+1}$ should be interpreted as $s_{(i+1) \bmod |S|}$ (i.e. the points form a loop).

2.3 Discriminative Structured Learning

In practice, feature vectors may be very high-dimensional, and which components are 'important' will depend on the specific properties of the shapes being matched. Therefore, we introduce a parameter, $\theta$, which controls the relative importances of the various feature components. Note that $\theta$ is parameterising the matching criterion itself. Hence our minimisation problem becomes
$$\hat{y} = f(S, U; \theta) = \operatorname*{argmax}_y \langle h(S, U, y), \theta \rangle \qquad (3)$$
where
$$h(S, U, y) = -\sum_{i=1}^{|S|} \Phi(s_i, s_{i+1}, s_{i+2}, y(s_i), y(s_{i+1}), y(s_{i+2})). \qquad (4)$$
($y$ is a mapping from $S$ to $U$, $\Phi$ is a third-order feature vector – our specific choice is shown in section 3).⁴ In order to measure the performance of a particular weight vector, we use a loss function, $\Delta(\hat{y}, y^i)$, which represents the cost incurred by choosing the assignment $\hat{y}$ when the correct assignment is $y^i$ (our specific choice of loss function is described in section 4). To avoid overfitting, we also desire that $\theta$ is sufficiently 'smooth'. Typically, one uses the squared L2 norm, $\|\theta\|_2^2$, to penalise non-smooth choices of $\theta$ [15]. Learning in this setting now becomes a matter of choosing $\theta$ such that the empirical risk (average loss on all training instances) is minimised, but which is also sufficiently 'smooth' (to prevent overfitting). Specifically, if we have a set of training pairs, $\{S^1 \ldots S^N\}$, $\{U^1 \ldots U^N\}$, with labelled matches $\{y^1 \ldots y^N\}$, then we wish to minimise
$$\underbrace{\frac{1}{N} \sum_{i=1}^{N} \Delta(f(S^i, U^i; \theta), y^i)}_{\text{empirical risk}} + \underbrace{\frac{\lambda}{2} \|\theta\|_2^2}_{\text{regulariser}} \qquad (5)$$
Here $\lambda$ (the regularisation constant) controls the relative importance of minimising the empirical risk against the regulariser.
In our case, we simply choose λ such that the empirical risk on our validation set is minimised. Solving (eq. 5) exactly is an extremely difficult problem and in practice is not feasible, since the loss is piecewise constant on the parameter θ. Here we capitalise on recent advances in large-margin structured estimation [15], which consist of obtaining convex relaxations of this problem. Without going into the details of the solution (see, for example, [15, 16]), it can be shown that a convex relaxation of this problem can be obtained, which is given by min θ 1 N N X i=1 ξi + λ 2 ∥θ∥2 2 (6a) subject to ⟨h(Si, Ui, yi) −h(Si, Ui, y), θ⟩≥∆(y, yi) −ξi for all i and y ∈Y (6b) (where Y is the space of all possible mappings). It can be shown that for the solution of the above problem, we have that ξ∗ i ≥∆(f(Si, Ui; θ), yi). This means that we end up minimising an upper bound on the loss, instead of the loss itself. Solving (6) requires only that we are able, for any value of θ, to find argmax y ⟨h(Si, Ui, y), θ⟩+ ∆(y, yi)  . (7) In other words, for each value of θ, we are able to identify the mapping which is consistent with the model (eq. 3), yet incurs a high loss. This process is known as ‘column generation’ [15, 16]. As we will define our loss as a sum over the nodes, solving (eq. 7) is no more difficult than solving (eq. 3). 4We have expressed (eq. 3) as a maximisation problem as a matter of convention; this is achieved simply by negating the cost function in (eq. 4). 3 Figure 2: Left: the (ordered) set of points in our template shape (S). Centre: connections between immediate neighbours. Right: connections between neighbour’s neighbours (our graphical model). 3 Our Model Although the model of [11] solves isometric matching problems optimally, it provides no guarantees for near-isometric problems, as it only considers those compatibilities which form cliques in our graphical model. 
However, we are often only interested in the boundary of the object: if we look at the instance of the model depicted in figure 2, it seems to capture exactly the important dependencies; adding additional dependencies between distant points (such as the duck's tail and head) would be unlikely to contribute to this model. With this in mind, we introduce three new features (for brevity we use the shorthand yᵢ = y(sᵢ)):

Φ₁(s₁, s₂, y₁, y₂) = (d₁(s₁, s₂) − d₁(y₁, y₂))², where d₁(a, b) is the Euclidean distance between a and b, scaled according to the width of the target scene.

Φ₂(s₁, s₂, s₃, y₁, y₂, y₃) = (d₂(s₁, s₂, s₃) − d₂(y₁, y₂, y₃))², where d₂(a, b, c) is the Euclidean distance between a and b scaled by the average of the distances between a, b, and c.

Φ₃(s₁, s₂, s₃, y₁, y₂, y₃) = (∠(s₁, s₂, s₃) − ∠(y₁, y₂, y₃))², where ∠(a, b, c) is the angle between a and c, w.r.t. b.5

We also include the unary features Φ₀(s₁, y₁) = (φ(s₁) − φ(y₁))² (i.e. the pointwise squared difference between φ(s₁) and φ(y₁)). Φ₁ is exactly the feature used in [11], and is invariant to isometric transformations (rotation, reflection, and translation); Φ₂ and Φ₃ capture triangle similarity, and are thus also invariant to scale. In the context of (eq. 4), we have

Φ(s₁, s₂, s₃, y₁, y₂, y₃) := (Φ₀(s₁, y₁), Φ₁(s₁, s₂, y₁, y₂) + Φ₁(s₁, s₃, y₁, y₃), Φ₂(s₁, s₂, s₃, y₁, y₂, y₃) + Φ₂(s₁, s₃, s₂, y₁, y₃, y₂), Φ₃(s₁, s₂, s₃, y₁, y₂, y₃)). (8)

In practice, landmark detectors often identify several hundred points [6, 17], which is clearly impractical for an O(|S||U|³) method (|U| is the number of landmarks in the target scene). To address this, we adopt a two stage learning approach: in the first stage, we learn only unary compatibilities, exactly as is done in [12]. During the second stage of learning, we collapse the first-order feature vector into a single term, namely

Φ′₀(s₁, y₁) = ⟨θ₀, Φ₀(s₁, y₁)⟩ (9)

(θ₀ is the weight vector learned during the first stage).
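The three pairwise/triplet features can be sketched as below. This is a hypothetical implementation of our own: we omit the target-scene width scaling in d₁, and we read "the average of the distances between a, b, and c" in d₂ as the mean of the three pairwise distances of the triangle.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def phi1(s1, s2, y1, y2):
    """Squared difference of pairwise distances (isometry-invariant only)."""
    return (dist(s1, s2) - dist(y1, y2)) ** 2

def d2(a, b, c):
    """Distance a-b scaled by the mean pairwise distance of triangle (a,b,c)."""
    m = (dist(a, b) + dist(b, c) + dist(a, c)) / 3.0
    return dist(a, b) / m

def phi2(s1, s2, s3, y1, y2, y3):
    return (d2(s1, s2, s3) - d2(y1, y2, y3)) ** 2

def angle(a, b, c):
    """Angle between a and c, with respect to vertex b."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

def phi3(s1, s2, s3, y1, y2, y3):
    return (angle(s1, s2, s3) - angle(y1, y2, y3)) ** 2

# A triangle and a scaled-plus-rotated copy: Phi2 and Phi3 vanish
# (similarity invariance), while Phi1 does not (it is only isometry-invariant).
s = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
y = [(0.0, 0.0), (0.0, 6.0), (-6.0, 0.0)]
```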
We now perform learning for the third-order model, but consider only the p ‘most likely’ matches for each node, where the likelihood is simply determined using Φ′ 0(s1, y1). This reduces the performance and memory requirements to O(|S|p3). A consequence of using this approach is that we must now tune two regularisation constants; this is not an issue in practice, as learning can be performed quickly using this approach.6 5Using features of such different scales can be an issue for regularisation – in practice we adjusted these features to have roughly the same scale. For full details, our implementation is available at (not included for blind review). 6In fact, even in those cases where a single stage approach was tractable (such as the experiment in section 4.1), we found that the two stage approach worked better. Typically, we required much less regularity during the second stage, possibly because the higher order features are heterogeneous. 4 Figure 3: Left: The adjacency structure of the graph (top); the boundary of our ‘shape’ (centre); the topology of our graphical model (bottom). Right: Example matches using linear assignment (top, 6/30 mismatches), quadratic assignment (centre, 4/30 mismatches), and the proposed model (bottom, no mismatches). The images shown are the 12th and 102nd frames in our sequence. Correct matches are shown in green, incorrect matches in red. All matches are reported after learning. 4 Experiments 4.1 House Data In our first experiment, we compare our method to those of [11] and [12]. Both papers report the performance of their methods on the CMU ‘house’ sequence – a sequence of 111 frames of a toy house, with 30 landmarks identified in each frame.7 As in [12], we compute the Shape Context features for each of the 30 points [5]. In addition to the unary model of [12], a model based on quadratic assignment is also presented, in which pairwise features are determined using the adjacency structure of the graphs. 
Specifically, if a pair of points (p1, p2) in the template scene is to be matched to (q1, q2) in the target, there is a feature which is 1 if there is an edge between p1 and p2 in the template, and an edge between q1 and q2 in the target (and 0 otherwise). We also use such a feature for this experiment, however our model only considers matchings for which (p1, p2) forms an edge in our graphical model (see figure 3, bottom left). The adjacency structure of the graphs is determined using the Delaunay triangulation, (figure 3, top left). As in [11], we compare pairs of images with a fixed baseline (separation between frames). For our loss function, ∆(ˆy, yi), we used the normalised Hamming loss, i.e. the proportion of mismatches. Figure 4 shows our performance on this dataset, as the baseline increases. On the left we show the performance without learning, for which our model exhibits the best performance by a substantial margin.8 Our method is also the best performing after learning – in fact, we achieve almost zero error for all but the largest baselines (at which point our model assumptions become increasingly violated, and we have less training data). In figure 5, we see that the running time of our method is similar to the quadratic assignment method of [12]. To improve the running time, we also show our results with p = 10, i.e. for each point in the template scene, we only consider the 10 ‘most likely’ matches, using the weights from the first stage of learning. This reduces the running time by more than an order of 7http://vasc.ri.cmu.edu/idb/html/motion/house/index.html 8Interestingly, the quadratic method of [12] performs worse than their unary method; this is likely because the relative scale of the unary and quadratic features is badly tuned before learning, and is indeed similar to what the authors report. 
Furthermore, the results we present for the method of [12] after learning are much better than what the authors report – in that paper, the unary features are scaled using a pointwise exponent (−exp(−|φa − φb|²)), whereas we found that scaling the features linearly (|φa − φb|²) worked better.

Figure 4: Comparison of our technique against that of [11] (‘point matching’), and [12] (‘linear’, ‘quadratic’). The performance before learning is shown on the left, the performance after learning is shown on the right. Our method exhibits the best performance both before and after learning (note the different scales of the two plots). Error bars indicate standard error.

Figure 5: The running time and performance of our method, compared to those of [12] (note that the method of [11] has running time identical to our method). Our method is run from 1 to 20 iterations of belief propagation, although the method appears to converge in fewer than 5 iterations.

magnitude, bringing it closer to that of linear assignment; even this model achieves approximately zero error up to a baseline of 50. Finally, figure 6 (left) shows the weight vector of our model, for a baseline of 60.
The first 60 weights are for the Shape Context features (determined during the first stage of learning), and the final 5 show the weights from our second stage of learning (the weights correspond to the first-order features, distances, adjacencies, scaled distances, and angles, respectively – see section 3). We can provide some explanation of the learned weights: the Shape Context features are separated into 5 radial and 12 angular bins – the fact that there are peaks around the 16th and 24th features indicates that some particular radial bins are more important than the others; the fact that several consecutive bins have low weight indicates that some radial bins are unimportant (etc.). It is much more difficult to reason about the second stage of learning, as the features have different scales, and cannot be compared directly – however, it appears that all of the higher-order features are important to our model.

4.2 Bikes Data

For our second experiment, we used images of bicycles from the Caltech 256 Dataset [18]. Bicycles are reasonably rigid objects, meaning that matching based on their shape is logical. Although the images in this dataset are fairly well aligned, they are subject to reflections as well as some scaling and shear. For each image in the dataset, we detected landmarks automatically, and six points on the frame were hand-labelled (see figure 7). Only shapes in which these interest points were not occluded were used, and we only included images that had a background; in total, we labelled 44

Figure 6: Left: The weight vector of our method after learning, for the ‘house’ data.
The first 60 weights are for the Shape Context features from the first stage of learning; the final 5 weights are for the second stage of learning. Right: The same plot, for the ‘bikes’ data.

Figure 7: Top: A selection of our training images. Bottom: An example match from our test set. Left: The template image (with the shape outlined in green, and landmark points marked in blue). Centre: The target image, and the match (in red) using unary features with the affine invariant/SIFT model of [17] after learning (endpoint error = 0.27). Right: the match using our model after learning (endpoint error = 0.04).

images. The first image was used as the ‘template’, the other 43 were used as targets. Thus we are learning to match bicycles similar to the chosen template. Initially, we used the SIFT landmarks and features as described in [6]. Since this approach typically identifies several hundred landmarks, we set p = 20 for this experiment (i.e. we consider the 20 most likely points). Since we cannot hope to get exact matches, we use the endpoint error instead of the normalised Hamming loss, i.e. we reward points which are close to the correct match.9 Table 1 reveals that the performance of this method is quite poor, even with the higher-order model, and furthermore reveals no benefit from learning. This may be explained by the fact that although the SIFT features are invariant to scale and rotation, they are not invariant to reflection. In [17], the authors report that the SIFT features can provide good matches in such cases, as long as landmarks are chosen which are locally invariant to affine transformations. They give a method for identifying affine-invariant feature points, whose SIFT features are then computed.10 We achieve much better performance using this method, and also observe a significant improvement after learning. Figure 7 shows an example match using both the unary and higher-order techniques.
Finally, figure 6 (right) shows the weights learned for this model. Interestingly, the first-order term during the second stage of learning has almost zero weight. This must not be misinterpreted: during the second stage, the response of each of the 20 candidate points is so similar that the first-order features are simply unable to convey any new information – yet they are still very useful in determining the 20 candidate points. 9Here the endpoint error is just the average Euclidean distance from the correct label, scaled according to the width of the image. 10We used publicly available implementations of both methods. 7 Table 1: Performance on the ‘bikes’ dataset. The endpoint error is reported, with standard errors in parentheses (note that the second-last column, ‘higher-order’ uses the weights from the first stage of learning, but not the second). Detector/descriptor unary + learning higher-order + learning SIFT [6] Training: 0.335 (0.038) 0.319 (0.034) 0.234 (0.047) 0.182 (0.031) Validation: 0.343 (0.027) 0.329 (0.019) 0.236 (0.031) 0.257 (0.033) Testing: 0.351 (0.024) 0.312 (0.015) 0.302 (0.045) 0.311 (0.039) Affine invariant/SIFT [17] Training: 0.322 (0.018) 0.280 (0.016) 0.233 (0.042) 0.244 (0.042) Validation: 0.337 (0.015) 0.298 (0.019) 0.245 (0.028) 0.229 (0.032) Testing: 0.332 (0.024) 0.339 (0.028) 0.277 (0.035) 0.231 (0.034) 5 Conclusion We have presented a model for near-isometric shape matching which is robust to typical additional variations of the shape. This is achieved by performing structured learning in a graphical model that encodes features with several different types of invariances, so that we can directly learn a “compound invariance” instead of taking for granted the exclusive assumption of isometric invariance. 
Our experiments revealed that structured learning with a principled graphical model that encodes both the rigid shape as well as non-isometric variations gives substantial improvements, while still maintaining competitive performance in terms of running time. Acknowledgements: We thank Marconi Barbosa and James Petterson for proofreading. NICTA is funded by the Australian Government’s Backing Australia’s Ability initiative, and the Australian Research Council’s ICT Centre of Excellence program. References [1] Belongie, S., Malik, J., Puzicha, J.: Shape matching and object recognition using shape contexts. PAMI 24 (2002) 509–522 [2] Mori, G., Belongie, S., Malik, J.: Shape contexts enable efficient retrieval of similar shapes. In: CVPR. (2001) 723–730 [3] Mori, G., Malik, J.: Estimating human body configurations using shape context matching. In: ECCV. (2002) 666–680 [4] Frome, A., Huber, D., Kolluri, R., Bulow, T., Malik, J.: Recognizing objects in range data using regional point descriptors. In: ECCV. (2004) [5] Belongie, S., Malik, J.: Matching with shape contexts. In: CBAIVL00. (2000) 20–26 [6] Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV. (1999) 1150–1157 [7] Felzenszwalb, P.F., Huttenlocher, D.P.: Pictorial structures for object recognition. IJCV 61 (2005) 55–79 [8] Felzenszwalb, P.F., Schwartz, J.D.: Hierarchical matching of deformable shapes. In: CVPR. (2007) [9] LeCun, Y., Huang, F.J., Bottou, L.: Learning methods for generic object recognition with invariance to pose and lighting. CVPR (2004) 97–104 [10] Carmichael, O., Hebert, M.: Shape-based recognition of wiry objects. PAMI 26 (2004) 1537–1552 [11] McAuley, J.J., Caetano, T.S., Barbosa, M.S.: Graph rigidity, cyclic belief propagation and point pattern matching. PAMI 30 (2008) 2047–2054 [12] Caetano, T., Cheng, L., Le, Q., Smola, A.: Learning graph matching. In: ICCV. (2007) 1–8 [13] Canny, J.: A computational approach to edge detection. In: RCV. 
(1987) 184–203 [14] Smith, S.: A new class of corner finder. In: BMVC. (1992) 139–148 [15] Tsochantaridis, I., Hofmann, T., Joachims, T., Altun, Y.: Support vector machine learning for interdependent and structured output spaces. In: ICML. (2004) [16] Teo, C., Le, Q., Smola, A., Vishwanathan, S.: A scalable modular convex solver for regularized risk minimization. In: KDD. (2007) [17] Mikolajczyk, K., Schmid, C.: Scale and affine invariant interest point detectors. IJCV 60 (2004) 63–86 [18] Griffin, G., Holub, A., Perona, P.: Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology (2007)
Fast Prediction on a Tree Mark Herbster, Massimiliano Pontil, Sergio Rojas-Galeano Department of Computer Science University College London Gower Street, London WC1E 6BT, England, UK {m.herbster, m.pontil,s.rojas}@cs.ucl.ac.uk Abstract Given an n-vertex weighted tree with structural diameter S and a subset of m vertices, we present a technique to compute a corresponding m × m Gram matrix of the pseudoinverse of the graph Laplacian in O(n + m2 + mS) time. We discuss the application of this technique to fast label prediction on a generic graph. We approximate the graph with a spanning tree and then we predict with the kernel perceptron. We address the approximation of the graph with either a minimum spanning tree or a shortest path tree. The fast computation of the pseudoinverse enables us to address prediction problems on large graphs. We present experiments on two web-spam classification tasks, one of which includes a graph with 400,000 vertices and more than 10,000,000 edges. The results indicate that the accuracy of our technique is competitive with previous methods using the full graph information. 1 Introduction Classification methods which rely upon the graph Laplacian (see [3, 20, 13] and references therein), have proven to be useful for semi-supervised learning. A key insight of these methods is that unlabeled data can be used to improve the performance of supervised learners. These methods reduce to the problem of labeling a graph whose vertices are associated to the data points and the edges to the similarity between pairs of data points. The labeling of the graph can be achieved either in a batch [3, 20] or in an online manner [13]. These methods can all be interpreted as different kernel methods: ridge regression in the case of [3], minimal semi-norm interpolation in [20] or the perceptron algorithm in [13]. This computation scales in the worst case cubically with the quantity of unlabeled data, which may prevent the use of these methods on large graphs. 
In this paper, we propose a method to improve the computational complexity of Laplacian-based learning algorithms. If an n-vertex tree is given, our method requires an O(n) initialization step and after that any m×m block of the pseudoinverse of the Laplacian may be computed in O(m2 +mS) time, where S is the structural diameter of the tree. The pseudoinverse of the Laplacian may then be used as a kernel for a variety of label prediction methods. If a generic graph is given, we first approximate it with a tree and then run our method on the tree. The use of a minimum spanning tree and shortest path tree is discussed. It is important to note that prediction is also possible using directly the graph Laplacian, without computing its pseudoinverse. For example, this may be achieved by solving a linear system of equations [3, 20] involving the Laplacian, and a solution may be computed in O(|E| logO(1) n) time [18], where E is the edge set. However, computation via the graph kernel allows for multiple prediction problems on the same graph to be computed more efficiently. The advantage is even more striking if the data come sequentially and we need to predict in an online fashion. To illustrate the advantage of our approach consider the case in which we are provided with a small subset of ℓlabeled vertices of a large graph and we wish to predict the label of a different subset of p vertices. Let m = ℓ+ p and assume that m ≪n (typically we will also have ℓ≪p). A practical application is the problem of detecting “spam” hosts in the internet. Although the number of hosts in the internet is in the millions we may only need to detect spam hosts from some limited domain. If the graph is a tree the total time required to predict with the kernel perceptron using our method will be O(n + m2 + mS). The promise of our technique is that, if m + S ≪n and a tree is given, it requires O(n) time versus O(n3) for standard methods. 
To the best of our knowledge this is the first paper which addresses the problem of fast prediction in semi-supervised learning using tree graphs. Previous work has focused on special prediction methods and graphs. The work in [5] presents a non-Laplacian-based method for predicting the labeling of a tree, based on computing the exact probabilities of a Markov random field. The issue of computation time is not addressed there. In the case of unbalanced bipartite graphs [15] presents a method which significantly improves the computation time of the pseudoinverse to Θ(k2(n −k)), where k is the size of a minority partition. Thus, in the case of a binary tree the computation is still Θ(n3) time. The paper is organized as follows. In Section 2 we review the notions which are needed in order to present our technique in Section 3, concerning the fast computation of a tree graph kernel. In Section 4 we address the issue of tree selection, commenting in particular on a potential advantage of shortest path trees. In Section 5 we present the experimental results and draw our conclusions in Section 6. 2 Background In this paper any graph G is assumed to be connected, to have n vertices, and to have edge weights. The set of vertices of G is denoted V = {1, . . . , n}. Let A = (Aij)n i,j=1 be the n × n symmetric weight matrix of the graph, where Aij ≥0, and define the edge set E(G) := {(i, j) : Aij > 0, i < j}. We say that G is a tree if it is connected and has n −1 edges. The graph Laplacian is the n × n matrix defined as G = D −A, where D is the diagonal matrix with i-th diagonal element Dii = Pn j=1 Aij, the weighted degree of vertex i. Where it is not ambiguous, we will use the notation G to denote both the graph G and the graph Laplacian and the notation T to denote both a Laplacian of a tree and the tree itself. The Laplacian is positive semi-definite and induces the semi-norm ∥w∥2 G := w⊤Gw = P (i,j)∈E(G) Aij(wi −wj)2. 
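To make the definitions concrete, here is a minimal sketch (the 3-vertex unit-weight path graph is a toy example of our own) of the Laplacian G = D − A and the semi-norm it induces:

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian G = D - A, where D is the diagonal matrix of
    weighted degrees D_ii = sum_j A_ij."""
    return np.diag(A.sum(axis=1)) - A

# Toy unit-weight path graph on 3 vertices: edges (1,2) and (2,3).
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
G = laplacian(A)

# The semi-norm ||w||_G^2 = w^T G w equals sum over edges of A_ij (w_i - w_j)^2:
# here (0.5 - (-1))^2 + (-1 - 2)^2 = 2.25 + 9 = 11.25.
w = np.array([0.5, -1.0, 2.0])
seminorm = w @ G @ w
```

The null-space property used later also holds here: G applied to the constant vector gives zero, since the graph is connected.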
The kernel associated with the above semi-norm is G+, the pseudoinverse of matrix G, see e.g. [14] for a discussion. As the graph is connected, it follows from the definition of the semi-norm that the null space of G is spanned by the constant vector 1 only. The analogy between graphs and networks of resistors plays an important role in this paper. That is, the weighted graph may be seen as a network of resistors where edge (i, j) is a resistor with resistance πij = A−1 ij . Then the effective resistance rG(i, j) may be defined as the resistance measured between vertex i and j in this network and may be calculated using Kirchoff’s circuit laws or directly from G+ using the formula [16] rG(i, j) = G+ ii + G+ jj −2G+ ij . (2.1) The effective resistance is a metric distance on the graph [16] as well as the geodesic and structural distances. The structural distance between vertices i, j ∈V is defined as sG(i, j) := min {|P(i, j)| : P(i, j) ∈P} where P is the set of all paths in G and P(i, j) is the set of edges in a particular path from i to j. Whereas, the geodesic distance is defined as dG(i, j) := min{P (p,q)∈P (i,j) πpq : P(i, j) ∈P}. The diameter is the maximum distance between any two points on the graph, hence the resistance, structural, and, geodesic diameter are defined as RG = maxi,j∈V rG(i, j) SG = maxi,j∈V sG(i, j), and DG = maxi,j∈V dG(i, j), respectively. Note that, by Kirchoff’s laws, rG(i, j) ≤dG(i, j) and, so, RG ≤DG. 3 Computing the Pseudoinverse of a Tree Laplacian Quickly In this section we describe our method to compute the pseudoinverse of a tree. 3.1 Inverse Connectivity Let us begin by noting that the effective resistance is a better measure of connectivity than the geodesic distance, as for example if there are k edge disjoint paths of geodesic distance d between two vertices, then the effective resistance is no more than d k. Thus, the more paths, the closer the vertices. 
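Equation (2.1) gives a direct, if cubic-time, way to read all effective resistances off the pseudoinverse; a sketch on a toy example of our own (the standard dense pseudoinverse route that the tree method of this paper is designed to avoid):

```python
import numpy as np

def effective_resistance(G):
    """All-pairs effective resistances from the Laplacian pseudoinverse,
    via r_G(i, j) = G+_ii + G+_jj - 2 G+_ij (equation 2.1)."""
    Gp = np.linalg.pinv(G)
    d = np.diag(Gp)
    return d[:, None] + d[None, :] - 2.0 * Gp

# Unit-weight path graph 1-2-3: resistances in series add, so r(1, 3) = 2.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
G = np.diag(A.sum(axis=1)) - A
R = effective_resistance(G)
```

The total-resistance identity of equation (3.2), R_tot = n Σᵢ G⁺ᵢᵢ, can be checked numerically on the same toy graph.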
In the following, we will introduce three more global measures of connectivity built on top of the effective resistance, which are useful for our computation below. The first quantity is the total resistance R_tot = Σ_{i>j} r_G(i, j), which is a measure of the inverse connectivity of the graph: the smaller R_tot the more connected the graph. The second quantity is R(i) = Σ_{j=1}^n r_G(i, j), which is used as a measure of inverse centrality of vertex i [6, Def. 3] (see also [17]). The third quantity is G⁺ᵢᵢ, which provides an alternate notion of inverse centrality. Summing both sides of equation (2.1) over j gives

R(i) = n G⁺ᵢᵢ + Σ_{j=1}^n G⁺ⱼⱼ, (3.1)

where we used the fact that Σ_{j=1}^n G⁺ᵢⱼ = 0, which is true because the null space of G is spanned by the constant vector. Summing again over i yields

R_tot = n Σ_{i=1}^n G⁺ᵢᵢ, (3.2)

where we have used Σ_{i=1}^n R(i) = 2 R_tot. Combining the last two equations we obtain

G⁺ᵢᵢ = R(i)/n − R_tot/n². (3.3)

3.2 Method

Throughout this section we assume that G is a tree with corresponding Laplacian matrix T. The principle of the method to compute T⁺ is that, on a tree, there is a unique path between any two vertices and, so, the effective resistance is simply the sum of resistances along that path, see e.g. [16, 13] (for the same reason, on a tree the geodesic distance is the same as the resistance distance). We assume that the root vertex is indexed as 1. The parent and the children of vertex i are denoted by ↑(i) and ↓(i), respectively. The descendants of vertex i are denoted by ↓*(i) := ↓(i) ∪ ⋃_{j∈↓(i)} ↓*(j) if ↓(i) ≠ ∅, and ↓*(i) := ∅ otherwise. We also let κ(i) be the number of descendants of vertex i plus i itself, that is, κ(i) = 1 + |↓*(i)|. The method is outlined as follows. We initially compute R(1), . . . , R(n) in O(n) time. This in turn gives us R_tot = ½ Σ_{i=1}^n R(i) and G⁺₁₁, . . . , G⁺ₙₙ via equation (3.3), also in O(n) time.
As we shall see, with these precomputed values, we may obtain off-diagonal elements G⁺ᵢⱼ from equation (2.1) by computing individually r_T(i, j) in O(S_T) time, or an m × m block in O(m² + mS_T) time.

Initialization. We split the computation of the inverse centrality R(i) into two terms, namely R(i) = T(i) + S(i), where T(i) and S(i) are the sums of the resistances from vertex i to each descendant and non-descendant, respectively. That is,

T(i) = Σ_{j∈↓*(i)} r_T(i, j), S(i) = Σ_{j∉↓*(i)} r_T(i, j).

We compute κ(i) and T(i), i = 1, . . . , n, with the following leaves-to-root recursions

κ(i) := 1 + Σ_{j∈↓(i)} κ(j) if ↓(i) ≠ ∅, and κ(i) := 1 otherwise;
T(i) := Σ_{j∈↓(i)} (T(j) + πᵢⱼ κ(j)) if ↓(i) ≠ ∅, and T(i) := 0 otherwise,

by computing κ(1) then T(1) and caching the intermediate values. We next descend the tree caching each calculated S(i) with the root-to-leaves recursion

S(i) := S(↑(i)) + T(↑(i)) − T(i) + (n − 2κ(i)) π_{i,↑(i)} if i ≠ 1, and S(1) := 0.

It is clear that the time complexity of the above recursions is O(n).

1. Input: {v1, . . . , vm} ⊆ V
2. Initialization: visited(all) = ∅
3. for i = 1, . . . , m do
4.   p = −1; c = vi; r_T(c, c) = 0
5.   repeat
6.     for w ∈ visited(c) ∩ {p} ∪ ↓*(p) do
7.       r_T(vi, w) = r_T(w, vi) = r_T(vi, c) + r_T(c, w)
8.     end
9.     visited(c) = visited(c) ∪ {vi}
10.    p = c; c = ↑(c)
11.    r_T(vi, c) = r_T(c, vi) = r_T(vi, p) + π_{p,c}
12.  until (“p is the root”)
13. end

Figure 1: Computing an m × m block of a tree Laplacian pseudoinverse.

Computing an m × m block of the Laplacian pseudoinverse. Our algorithm (see Figure 1) computes the effective resistance matrix of an m × m block, which effectively gives the kernel (via equation (2.1)). The motivating idea is that a single effective resistance r_T(i, j) is simply the sum of resistances along the path from i to j. It may be computed by separately ascending the paths from i to the root and from j to the root in O(S_T) time and summing the resistances along each edge that is in either the i–to–root or the j–to–root path but not in both.
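The initialization recursions for κ, T, and S can be sketched iteratively as follows. This is our own 0-indexed encoding, not the paper's implementation: parent[i] gives the parent of vertex i (with parent[root] = -1 and root 0), and pi[i] is the resistance of the edge from i to its parent.

```python
def tree_inverse_centrality(parent, pi):
    """O(n) computation of the inverse centralities R(i) = T(i) + S(i)
    on a tree, using the kappa/T/S recursions."""
    n = len(parent)
    children = [[] for _ in range(n)]
    for i, p in enumerate(parent):
        if p >= 0:
            children[p].append(i)
    # DFS ordering from the root: parents always precede their children.
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    kappa = [1] * n   # subtree sizes
    T = [0.0] * n     # sum of resistances to descendants
    S = [0.0] * n     # sum of resistances to non-descendants
    for v in reversed(order):             # leaves-to-root
        for c in children[v]:
            kappa[v] += kappa[c]
            T[v] += T[c] + pi[c] * kappa[c]
    for v in order:                       # root-to-leaves
        p = parent[v]
        if p >= 0:
            S[v] = S[p] + T[p] - T[v] + (n - 2 * kappa[v]) * pi[v]
    return [T[i] + S[i] for i in range(n)]
```

On the unit-resistance path 0-1-2, for instance, the endpoints have R = 1 + 2 = 3 and the middle vertex R = 1 + 1 = 2, matching a direct pairwise computation.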
However we may amortize the computation of an m × m block to O(m2 + mST) time, saving a factor of min(m, ST). This is realized by additionally caching the cumulative sums of resistances along the path to the root during each ascent from a vertex. We outline in further detail the algorithm as follows: for each vertex vi in the set Vm = {v1, . . . , vm} we perform an ascent to the root (see line 3 in Figure 1). As we ascend, we cache each cumulative resistance (from the starting vertex vi to the current vertex c) along the path on the way to the root (line 11). If, while ascending from vi we enter a vertex c which has previously been visited during the ascent from another vertex w (line 6) then we compute rT(vi, w) as rT(vi, c)+rT(c, w). For example, during the ascent from vertex vk ∈Vm to the root we will compute {rT(v1, vk), . . . , rT(vk, vk)}. The computational complexity is obtained by noting that every ascent to the root requires O(ST) steps and along each ascent we must compute up to max(m, ST) resistances. Thus, the total complexity is O(m2 + mST), assuming that each step of the algorithm is efficiently implemented. For this purpose, we give two implementation notes. First, each of the effective resistances computed by the algorithm should be stored on the tree, preventing creation of an n × n matrix. When the computation is completed the desired m × m Gram matrix may then be directly computed by gathering the cached values via an additional set of ascents. Second, it should be ensured that the “for loop” (line 6) is executed in Θ(|visited(c) ∩{p} ∪↓*(p)|) time by a careful but straightforward implementation of the visited predicate. Finally, this algorithm may be generalized to compute a p × ℓblock in O(pℓ+ (p + ℓ)ST) time or to operate fully “online.” Let us return to the practical scenario described in the introduction, in which we wish to predict p vertices of the tree based on ℓlabeled vertices. Let m = ℓ+p. 
By the above discussion, computation of an m × m block of the kernel matrix T+ requires O(n + m2 + mST) time. In many practical applications m ≪n and SG will typically be no more than logarithmic in n, which leads to an appealing O(n) time complexity. 4 Tree Construction In the previous discussion, we have considered that a tree has already been given. In the following, we assume that a graph G or a similarity function is given and the aim is to construct an approximating tree. We will consider both the minimum spanning tree (MST) as a “best” in norm approximation; and the shortest path tree (SPT) as an approximation which maintains a mistake bound [13] guarantee. Given a graph with a “cost” on each edge, an MST is a connected n-vertex subgraph with n −1 edges such that the total cost is minimized. In our set-up the cost of edge (i, j) is the resistance πij = 1 Aij , therefore, a minimum spanning tree of G solves the problem min    X (i,j)∈E(T) πij : T ∈T (G)   , (4.1) where T (G) denotes the set of spanning trees of G. An MST is also a tree whose Laplacian best approximates the Laplacian of the given graph according to the trace norm, that is, it solves the problem min {tr(G −T) : T ∈T (G)} . (4.2) Indeed, we have tr(G −T) = Pn i,j=1 Aij −P (i,j)∈E(T) −π−1 ij . Then, our claim that the problems (4.1) and (4.2) have the same solution follows by noting that the edges in a minimum spanning tree are invariant with respect to any strictly increasing function of the “costs” on the edges in the original graph [8] and the function −π−1 is increasing in π. The above observation suggests another approximation criterion which we may consider for finding a spanning tree. We may use the trace norm between the pseudoinverse of the Laplacians, rather than the Laplacians themselves as in (4.2). This seems a more natural criterion, since our goal is to approximate well the kernel (it is the kernel which is directly involved in the prediction problem). 
It is interesting to note that the quantity tr(T⁺ − G⁺) is related to the total resistance. Specifically, we have by equation (3.2) that

tr(T⁺ − G⁺) = R_tot(T)/n − R_tot(G)/n.

As noted in [10], the total resistance is a convex function of the graph Laplacian. However, we do not know how to minimize R_tot(T) over the set of spanning trees of G. We thus take a different route, which leads us to the notion of shortest path trees. We choose a vertex i and look for a spanning tree which minimizes the inverse centrality R(i) of vertex i, that is, we solve the problem

min { R(i) : T ∈ 𝒯(G) }.     (4.3)

Note that R(i) is the contribution of vertex i to the total resistance of T and that, by equations (3.1) and (3.2),

R(i) = n T⁺_ii + R_tot/n.

The above problem can then be interpreted as minimizing a tradeoff between the inverse centrality of vertex i and the inverse connectivity of the tree. In other words, (4.3) encourages trees which are centered at i and, at the same time, have a small diameter. It is interesting to observe that the solution of problem (4.3) is a shortest path tree (SPT) centered at vertex i, namely a spanning tree for which the geodesic distance in “costs” from i to every other vertex in the graph is minimized. This is because the geodesic distance is equivalent to the resistance distance on a tree, and any SPT of G is formed from a set of shortest paths connecting the root to every other vertex in G [8, Ch. 24.1]. Let us observe a fundamental difference between the MST and the SPT, which provides a justification for approximating the given graph with an SPT. It relies upon the analysis in [13, Theorem 4.2], where the cumulative number of mistakes of the kernel perceptron with the kernel K = G⁺ + 11⊤ was upper bounded by (‖u‖²_G + 1)(R_G + 1) for consistent labelings u ∈ {−1, 1}ⁿ [13].
To explain our argument, first note that when we approximate the graph with a tree T, the term ‖u‖²_G is always decreasing, while the term R_G is always increasing by Rayleigh’s monotonicity law (see for example [13, Corollary 3.1]). Now, note that the resistance diameter R_T of an SPT of a graph G is bounded by twice the geodesic diameter of the original graph,

R_T ≤ 2D_G.     (4.4)

Indeed, as an SPT is formed from a set of shortest paths between the root and every other vertex in G, for any pair of vertices p, q in the graph there is in the SPT a path from p to the root and then to q, which can be no longer than 2D_G. To further discuss, consider the case that G consists of a few dense clusters, each uniquely labeled and with only a few cross-cluster edges. The above mistake bound and the bound (4.4) imply that a tree built with an SPT would still have a non-vacuous mistake bound. No such bound as (4.4) holds for an MST subgraph. For example, consider a bicycle wheel graph whose edge set is the union of n spoke edges {(0, i) : i = 1, . . . , n} and n rim edges {(i, i + 1 mod n) : i = 1, . . . , n}, with costs of 2 on the spoke edges and 1 on the rim edges. The MST diameter is then n + 1, while any SPT diameter is ≤ 8. Finally, let us comment on the time and space complexity of constructing such trees. The MST and SPT may be constructed with Prim’s and Dijkstra’s algorithms [8], respectively, in O(n log n + |E|) time. Prim’s algorithm may be further sped up to O(n + |E|) time in the case of small integer weights [12]. In the general case of a non-sparse graph or similarity function the time complexity is Θ(n²); however, as both Prim’s and Dijkstra’s are “greedy” algorithms, their space complexity is O(n), which may be a dominant consideration on a large graph.

5 Web-spam Detection Experiments

In this section, we present an experimental study of the feasibility of our method on large graphs (400,000 vertices).
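Before turning to the experiments, the wheel-graph example above can be checked numerically. The following is a small self-contained sketch (not the paper's code) with hand-rolled Prim and Dijkstra constructions; all function names are illustrative. It builds the wheel, extracts the MST and the hub-rooted SPT, and compares their resistance diameters.

```python
# The bicycle-wheel example: the MST has diameter growing with n, while any
# SPT rooted at the hub has constant diameter (here 4, within the 2*D_G <= 8 bound).
import heapq
from collections import defaultdict

def wheel_graph(n):
    """Hub vertex 0, rim vertices 1..n; spokes cost 2, rim edges cost 1."""
    g = defaultdict(dict)
    for i in range(1, n + 1):
        g[0][i] = g[i][0] = 2.0          # spoke edge
        j = i % n + 1                    # next rim vertex
        g[i][j] = g[j][i] = 1.0          # rim edge
    return g

def prim_mst(g, root=0):
    parent, seen, pq = {root: None}, set(), [(0.0, root, None)]
    while pq:
        w, v, p = heapq.heappop(pq)
        if v in seen:
            continue
        seen.add(v)
        parent[v] = p
        for u, c in g[v].items():
            if u not in seen:
                heapq.heappush(pq, (c, u, v))
    return parent

def dijkstra_spt(g, root=0):
    parent, dist, pq = {root: None}, {root: 0.0}, [(0.0, root, None)]
    while pq:
        d, v, p = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        parent[v] = p
        for u, c in g[v].items():
            if d + c < dist.get(u, float("inf")):
                dist[u] = d + c
                heapq.heappush(pq, (d + c, u, v))
    return parent

def tree_diameter(g, parent):
    """Max path cost between any two tree vertices (brute force, small n)."""
    def ancestors(v):
        d, path = 0.0, {}
        while v is not None:
            path[v] = d
            p = parent[v]
            if p is not None:
                d += g[v][p]
            v = p
        return path
    anc = {v: ancestors(v) for v in parent}
    return max(min(anc[v][a] + anc[w][a] for a in anc[v].keys() & anc[w].keys())
               for v in parent for w in parent)
```

For n = 12 the SPT rooted at the hub is the star of spokes with diameter 4, while the MST threads the rim and its diameter grows linearly in n.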
The motivation for our methodology is that even on graphs with 10,000 vertices it is computationally challenging to use standard graph labeling methods such as [3, 20, 13], as they require the computation of the full graph Laplacian kernel. This computational burden makes the use of such methods prohibitive when the number of vertices is in the millions. On the other hand, in the practical scenario described in the introduction, the computational time of our method scales linearly in the number of vertices in the graph, so it can be run comfortably on large graphs (see Figure 2 below); at worst it scales quadratically, if the full graph needs to be labeled. The aims of the experiments are: (i) to see whether there is a significant performance loss when using a tree subgraph rather than the original graph, (ii) to compare tree construction methods, specifically the MST and the SPT, and (iii) to explore the possibility of improving performance through ensembles of trees. The initial results are promising in that the performance of the predictor with a single SPT or MST is competitive with that of the existing methods, some of which use the full graph information. We shall also comment on the computational time of the method.

5.1 Datasets and previous methods

We applied the Fast Prediction on a Tree (FPT) method to the 2007 web-spam challenge developed at the University of Paris VI¹. Two graphs are provided. The first one is formed by 9,072 vertices and 464,959 edges, which represent computer hosts; we call this the host-graph. In this graph, one host is “connected” to another host if there is at least one link from a web-page in the first host to a web-page in the second host. The second graph consists of 400,000 vertices (corresponding to web-pages) and 10,455,545 edges; we call this the web-graph. Again, a web-page is “connected” to another web-page if there is at least one hyperlink from the former to the latter. Note that both graphs are directed.
In our experiments we discarded directional information and assigned a weight of 1 to unidirectional edges and of w ∈ {1, 2} to bidirectional edges. Each vertex is labeled either as spam or as non-spam. In both graphs about 80% of the vertices are non-spam and 20% are spam. Additional tf-idf feature vectors (determined by the web-pages’ html content) are provided for each vertex in the graph, but we have discarded this information for simplicity. Following the web-spam protocol, for both graphs we used 10% of the labeled vertices for training and 90% for testing. We briefly discuss some previous methods which participated in the web-spam challenge. Abernethy et al. [1] used an SVM variant on the tf-idf features with an additional graph-based regularization term, which penalizes predictions with links from non-spam to spam vertices. Tang et al. (see [7]) used a linear and a Gaussian SVM combined with Random Forests on the feature vectors, plus new features obtained from link information. The method of Witschel and Biemann [4] consisted of iteratively selecting vertices and classifying them with the predominant class in their neighborhood (hence it is very similar to the label propagation method of [20]). Benczúr et al. (see [7]) used Naive Bayes, C4.5 and SVMs with a combination of content and/or graph-based features. Finally, Filoche et al. (see [7]) applied html preprocessing to obtain web-page fingerprints, which were used to obtain clusters; these clusters, along with link and content-based features, were then fed to a modified Naive Bayes classifier.

5.2 Results

Experimental results are shown in Table 1. We report the following performance measures: (i) average accuracy when predicting with a single tree, (ii) average accuracy when each predictor is optimized over a threshold in the range [−1, 1], (iii) area under the curve (AUC) and (iv) the aggregate predictive value given by each tree.

¹See http://webspam.lip6.fr/wiki/pmwiki.php for more information.

Method            Agg.   Agg.-Best  AUC    Single       Single-Best  AUC
Host-graph
MST               0.907  0.907      0.950  0.857±0.022  0.865±0.017  0.841±0.045
SPT               0.889  0.890      0.952  0.850±0.026  0.857±0.018  0.804±0.063
MST (bidir)       0.912  0.915      0.944  0.878±0.033  0.887±0.027  0.851±0.100
SPT (bidir)       0.913  0.913      0.960  0.873±0.028  0.877±0.026  0.846±0.065
Abernethy et al.  0.896  0.906      0.952  ...          ...          ...
Tang et al.       0.906  0.907      0.951  ...          ...          ...
Filoche et al.    0.889  0.890      0.927  ...          ...          ...
Benczúr et al.    0.829  0.847      0.877  ...          ...          ...
Web-graph
MST (bidir)       0.991  0.992      1.000  0.976±0.011  0.980±0.009  0.993±0.005
SPT (bidir)       0.994  0.994      0.999  0.985±0.002  0.985±0.002  0.992±0.003
Witschel et al.   0.995  0.996      0.998  ...          ...          ...
Filoche et al.    0.973  0.974      0.991  ...          ...          ...
Benczúr et al.    0.942  0.942      0.973  ...          ...          ...
Tang et al.       0.296  0.965      0.989  ...          ...          ...

Table 1: Results of our FPT method and other competing methods.

Figure 2: AUC and Accuracy vs. number of trees (left and middle) and Runtime vs. number of labeled vertices (right).

In the case of the host-graph, predictions for the aggregate method were made using 81 trees. The MST and SPT were obtained for the weighted graphs with Prim’s and Dijkstra’s algorithms, respectively. For the unweighted graphs, every tree is an MST, so we simply used trees generated by a randomized unweighted depth-first traversal of the graph; SPTs may be generated by using the breadth-first-search algorithm, all in O(|E|) time.
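The randomized spanning-tree generation and score aggregation just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_with_tree` is a hypothetical stand-in for the single-tree FPT predictor, and averaging the real-valued per-tree scores before thresholding is one plausible way to realize the aggregate predictor.

```python
# Sketch of ensemble prediction over randomized depth-first spanning trees.
# All names are illustrative; predict_with_tree(tree, labeled) is assumed to
# return a real-valued score in [-1, 1] for every vertex.
import random

def random_spanning_tree(adj, rng):
    """Randomized depth-first spanning tree of an unweighted connected graph
    given as {vertex: set(neighbours)}; returns parent pointers."""
    root = next(iter(adj))
    parent, stack, seen = {root: None}, [root], {root}
    while stack:
        v = stack.pop()
        nbrs = list(adj[v] - seen)
        rng.shuffle(nbrs)               # randomize the traversal order
        for u in nbrs:
            seen.add(u)
            parent[u] = v
            stack.append(u)
    return parent

def aggregate_predictions(adj, labeled, predict_with_tree, t=11, seed=0):
    rng = random.Random(seed)
    scores = {v: 0.0 for v in adj}
    for _ in range(t):
        tree = random_spanning_tree(adj, rng)
        for v, s in predict_with_tree(tree, labeled).items():
            scores[v] += s / t          # average the per-tree scores
    # final label: sign of the averaged score
    return {v: (1 if s >= 0 else -1) for v, s in scores.items()}
```

Each tree costs O(|E|) to generate, so the ensemble adds only a constant factor t to the overall running time.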
In the table, the tag “Agg.” stands for aggregate, and the “bidir” tag indicates that the original graph was modified by setting w = 2 for bidirectional edges. In the case of the larger web-graph, we used 21 trees and the modified graph with bidirectional weights. In all experiments we used a kernel perceptron which was trained for three epochs (e.g. [13]). It is interesting to note that some of the previous methods [1, 4] take the full graph information into account. Thus, the above results indicate that our method is statistically competitive (in fact better than most of the other methods) even though the full graph structure is discarded. Remarkably, in the case of the large web-graph, using just a single tree gives very good accuracy, particularly in the case of the SPT. On this graph the SPT is also more stable in terms of variance than the MST. In the case of the smaller host-graph, using just one tree leads to a decrease in performance. However, by aggregating a few trees our result improves over the state-of-the-art results. In order to better understand the role of the number of trees in the aggregate prediction, we also ran additional experiments on the host-graph with t = 5, 11, 21, 41, 81 randomly chosen MST or SPT trees. We averaged the accuracy and AUC over 100 trials each. Results are shown in Figure 2. As can be seen, using as few as 11 trees already gives competitive performance. The SPT works better than the MST in terms of AUC (left plot), whereas the result is less clear in the case of accuracy (middle plot). Finally, we report on an experiment evaluating the running time of our method. We chose the web-graph (n = 400,000). We then fixed p = 1000 predictive vertices and let the number of labeled vertices ℓ vary in the set {20, 40, 60, 80, 100, 200, 400}. Initialization time (tree construction plus computation of the diagonal elements of the kernel) and initialization-plus-prediction times were measured in seconds on a dual core 1.8GHz machine with 8Gb memory.
As expected, the solid curve, corresponding to initialization time, is the dominant contribution to the computation time.

6 Conclusions

We have presented a fast method for the labeling of a tree. The method is simple to implement and, in the practical regime of small labeled and testing sets and diameters, scales linearly in the number of vertices in the tree. When we are presented with a generic undirected weighted graph, we first extract a spanning tree from it and then run the method. We have studied minimum spanning trees and shortest path trees, both of which can be computed efficiently with standard algorithms. We have tested the method on a web-spam classification problem involving a graph of 400,000 vertices. Our results indicate that the method is competitive with the state of the art. We have also shown how performance may be improved by averaging the predictors obtained from a few spanning trees. Further improvement may involve learning combinations of different trees; this may be obtained following ideas in [2]. At the same time, it would be valuable to study connections between our work and other approximation methods, such as those in the context of kernel methods [9], Gaussian processes [19] and Bayesian learning [11].

Acknowledgments. We wish to thank A. Argyriou and J.-L. Balcázar for valuable discussions, D. Athanasakis and S. Shankar Raman for useful preliminary experimentation, D. Fernandez-Reyes for both useful discussions and computing facility support, and the anonymous reviewers for useful comments. This work was supported in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778, by EPSRC Grant EP/D071542/1 and by the DHPA Research Councils UK Scheme.

References

[1] J. Abernethy, O. Chapelle and C. Castillo. Webspam Identification Through Content and Hyperlinks. Proc. Adversarial Information Retrieval on the Web, 2008.
[2] A. Argyriou, M. Herbster, and M. Pontil.
Combining graph Laplacians for semi-supervised learning. Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[3] M. Belkin, I. Matveeva, P. Niyogi. Regularization and Semi-supervised Learning on Large Graphs. Proceedings of the 17th Conference on Learning Theory (COLT ’04), pages 624–638, 2004.
[4] C. Biemann. Chinese Whispers – an Efficient Graph Clustering Algorithm and its Application to Natural Language Processing Problems. Proc. HLT-NAACL-06 Workshop on Textgraphs-06, 2006.
[5] A. Blum, J. Lafferty, M. R. Rwebangira, and R. Reddy. Semi-supervised learning using randomized mincuts. Proc. 21st International Conference on Machine Learning, page 13, 2004.
[6] U. Brandes and D. Fleischer. Centrality measures based on current flow. Proc. 22nd Annual Symposium on Theoretical Aspects of Computer Science, pages 533–544, 2005.
[7] C. Castillo, B. D. Davison, L. Denoyer and P. Gallinari. Proc. of the Graph Labelling Workshop and Web-spam Challenge (ECML Workshop), 2007.
[8] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
[9] P. Drineas and M. W. Mahoney. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning. J. Mach. Learn. Res., 6:2153–2175, 2005.
[10] A. Ghosh, S. Boyd and A. Saberi. Minimizing Effective Resistance of a Graph. SIAM Review, problems and techniques section, 50(1):37–66, 2008.
[11] T. Jebara. Bayesian Out-Trees. Proc. Uncertainty in Artificial Intelligence, 2008.
[12] R. E. Haymond, J. Jarvis and D. R. Shier. Algorithm 613: Minimum Spanning Tree for Moderate Integer Weights. ACM Trans. Math. Softw., 10(1):108–111, 1984.
[13] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. Advances in Neural Information Processing Systems 19, pages 577–584. MIT Press, 2007.
[14] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs.
In ICML ’05: Proceedings of the 22nd International Conference on Machine Learning, pages 305–312, 2005.
[15] N.-D. Ho and P. V. Dooren. On the pseudo-inverse of the Laplacian of a bipartite graph. Appl. Math. Lett., 18(8):917–922, 2005.
[16] D. Klein and M. Randić. Resistance distance. J. of Mathematical Chemistry, 12(1):81–95, 1993.
[17] M. E. J. Newman. A measure of betweenness centrality based on random walks. Soc. Networks, 27:39–54, 2005.
[18] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. Proc. 36th Annual ACM Symposium on Theory of Computing, 2004.
[19] C. K. I. Williams and M. Seeger. Using the Nyström Method to Speed Up Kernel Machines. Neural Information Processing Systems 13, pages 682–688, MIT Press, 2001.
[20] X. Zhu, J. Lafferty, and Z. Ghahramani. Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions. Proc. of the 20th International Conference on Machine Learning, pages 912–919, 2003.
Monte Carlo Sampling for Regret Minimization in Extensive Games Marc Lanctot Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 lanctot@ualberta.ca Kevin Waugh School of Computer Science Carnegie Mellon University Pittsburgh PA 15213-3891 waugh@cs.cmu.edu Martin Zinkevich Yahoo! Research Santa Clara, CA, USA 95054 maz@yahoo-inc.com Michael Bowling Department of Computing Science University of Alberta Edmonton, Alberta, Canada T6G 2E8 bowling@cs.ualberta.ca Abstract Sequential decision-making with multiple agents and imperfect information is commonly modeled as an extensive game. One efficient method for computing Nash equilibria in large, zero-sum, imperfect information games is counterfactual regret minimization (CFR). In the domain of poker, CFR has proven effective, particularly when using a domain-specific augmentation involving chance outcome sampling. In this paper, we describe a general family of domain-independent CFR sample-based algorithms called Monte Carlo counterfactual regret minimization (MCCFR) of which the original and poker-specific versions are special cases. We start by showing that MCCFR performs the same regret updates as CFR on expectation. Then, we introduce two sampling schemes: outcome sampling and external sampling, showing that both have bounded overall regret with high probability. Thus, they can compute an approximate equilibrium using self-play. Finally, we prove a new tighter bound on the regret for the original CFR algorithm and relate this new bound to MCCFR’s bounds. We show empirically that, although the sample-based algorithms require more iterations, their lower cost per iteration can lead to dramatically faster convergence in various games. 1 Introduction Extensive games are a powerful model of sequential decision-making with imperfect information, subsuming finite-horizon MDPs, finite-horizon POMDPs, and perfect information games. 
The past few years have seen dramatic algorithmic improvements in solving, i.e., finding an approximate Nash equilibrium of, two-player, zero-sum extensive games. Multiple techniques [1, 2] now exist for solving games with up to 10^12 game states, which is about four orders of magnitude larger than the previous state of the art using sequence-form linear programs [3]. Counterfactual regret minimization (CFR) [1] is one such recent technique that exploits the fact that the time-averaged strategy profile of regret minimizing algorithms converges to a Nash equilibrium. The key insight is that minimizing per-information-set counterfactual regret results in minimizing overall regret. However, the vanilla form presented by Zinkevich and colleagues requires the entire game tree to be traversed on each iteration. It is possible to avoid a full game-tree traversal. In their accompanying technical report, Zinkevich and colleagues discuss a poker-specific CFR variant that samples chance outcomes on each iteration [4]. They claim that the per-iteration cost reduction far exceeds the additional number of iterations required, and all of their empirical studies focus on this variant. The sampling variant and its derived bound are limited to poker-like games where chance plays a prominent role in the size of the games. This limits the practicality of CFR minimization outside of its initial application to poker, or to moderately sized games. An additional disadvantage of CFR is that it requires the opponent’s policy to be known, which makes it unsuitable for online regret minimization in an extensive game. Online regret minimization in extensive games is possible using online convex programming techniques, such as Lagrangian Hedging [5], but these techniques can require costly optimization routines at every time step. In this paper, we present a general framework for sampling in counterfactual regret minimization.
We define a family of Monte Carlo CFR minimizing algorithms (MCCFR) that differ in how they sample the game tree on each iteration. Zinkevich’s vanilla CFR and a generalization of their chance-sampled CFR are both members of this family. We then introduce two additional members of this family: outcome-sampling, where only a single playing of the game is sampled on each iteration; and external-sampling, which samples chance nodes and the opponent’s actions. We show that under a reasonable sampling strategy, any member of this family minimizes overall regret, and so can be used for equilibrium computation. Additionally, external-sampling is proven to require only a constant-factor increase in iterations yet achieves an order reduction in the cost per iteration, thus resulting in an asymptotic improvement in equilibrium computation time. Furthermore, since outcome-sampling does not need knowledge of the opponent’s strategy beyond samples of play from the strategy, we describe how it can be used for online regret minimization. We then evaluate these algorithms empirically by using them to compute approximate equilibria in a variety of games.

2 Background

An extensive game is a general model of sequential decision-making with imperfect information. As with perfect information games (such as Chess or Checkers), extensive games consist primarily of a game tree: each non-terminal node has an associated player (possibly chance) that makes the decision at that node, and each terminal node has associated utilities for the players. Additionally, game states are partitioned into information sets: a player cannot distinguish between two states in the same information set. The players, therefore, must choose actions with the same distribution at each state in the same information set. We now define an extensive game formally, introducing the notation we use throughout the paper. Definition 1 [6, p.
200] A finite extensive game with imperfect information has the following components:

• A finite set N of players.
• A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Define h ⊑ h′ to mean that h is a prefix of h′. Z ⊆ H are the terminal histories (those which are not a prefix of any other sequences). A(h) = {a : ha ∈ H} are the actions available after a non-terminal history h ∈ H \ Z.
• A function P that assigns to each non-terminal history a member of N ∪ {c}. P is the player function; P(h) is the player who takes an action after the history h. If P(h) = c then chance determines the action taken after history h.
• For each player i ∈ N ∪ {c}, a partition ℐ_i of {h ∈ H : P(h) = i} with the property that A(h) = A(h′) whenever h and h′ are in the same member of the partition. For I_i ∈ ℐ_i we denote by A(I_i) the set A(h) and by P(I_i) the player P(h) for any h ∈ I_i. ℐ_i is the information partition of player i; a set I_i ∈ ℐ_i is an information set of player i.
• A function f_c that associates with every information set I where P(I) = c a probability measure f_c(·|I) on A(I) (f_c(a|I) is the probability that a occurs given some h ∈ I), where each such probability measure is independent of every other such measure.¹
• For each player i ∈ N, a utility function u_i from the terminal states Z to the reals R. If N = {1, 2} and u_1 = −u_2, it is a zero-sum extensive game. Define Δ_{u,i} = max_z u_i(z) − min_z u_i(z) to be the range of utilities to player i.

¹Traditionally, an information partition is not specified for chance. In fact, as long as the same chance information set cannot be revisited, it has no strategic effect on the game itself. However, this extension allows us to consider using the same sampled chance outcome for an entire set of histories, which is an important part of Zinkevich and colleagues’ chance-sampling CFR variant.
In this paper, we will only concern ourselves with two-player, zero-sum extensive games. Furthermore, we will assume perfect recall, a restriction on the information partitions such that a player can always distinguish between game states where they previously took a different action or were previously in a different information set.

2.1 Strategies and Equilibria

A strategy of player i, σ_i, in an extensive game is a function that assigns a distribution over A(I_i) to each I_i ∈ ℐ_i. We denote by Σ_i the set of all strategies for player i. A strategy profile, σ, consists of a strategy for each player, σ_1, . . . , σ_n. We let σ_{−i} refer to the strategies in σ excluding σ_i. Let π^σ(h) be the probability of history h occurring if all players choose actions according to σ. We can decompose π^σ(h) = Π_{i∈N∪{c}} π^σ_i(h) into each player’s contribution to this probability. Here, π^σ_i(h) is the contribution to this probability from player i when playing according to σ. Let π^σ_{−i}(h) be the product of all players’ contributions (including chance) except that of player i. For I ⊆ H, define π^σ(I) = Σ_{h∈I} π^σ(h) as the probability of reaching a particular information set given that all players play according to σ, with π^σ_i(I) and π^σ_{−i}(I) defined similarly. Finally, let π^σ(h, z) = π^σ(z)/π^σ(h) if h ⊑ z, and zero otherwise. Let π^σ_i(h, z) and π^σ_{−i}(h, z) be defined similarly. Using this notation, we can define the expected payoff for player i as u_i(σ) = Σ_{h∈Z} u_i(h) π^σ(h). Given a strategy profile σ, we define a player’s best response as a strategy that maximizes their expected payoff assuming all other players play according to σ. The best-response value for player i is the value of that strategy, b_i(σ_{−i}) = max_{σ′_i∈Σ_i} u_i(σ′_i, σ_{−i}).
An ϵ-Nash equilibrium is an approximation of a Nash equilibrium; it is a strategy profile σ that satisfies

∀i ∈ N:  u_i(σ) + ϵ ≥ max_{σ′_i∈Σ_i} u_i(σ′_i, σ_{−i}).     (1)

If ϵ = 0 then σ is a Nash equilibrium: no player has any incentive to deviate, as they are all playing best responses. If a game is two-player and zero-sum, we can use exploitability as a metric for determining how close σ is to an equilibrium, ϵ_σ = b_1(σ_2) + b_2(σ_1).

2.2 Counterfactual Regret Minimization

Regret is an online learning concept that has triggered a family of powerful learning algorithms. To define this concept, first consider repeatedly playing an extensive game. Let σ^t_i be the strategy used by player i on round t. The average overall regret of player i at time T is:

R^T_i = (1/T) max_{σ*_i∈Σ_i} Σ_{t=1}^T ( u_i(σ*_i, σ^t_{−i}) − u_i(σ^t) ).     (2)

Moreover, define σ̄^t_i to be the average strategy for player i from time 1 to T. In particular, for each information set I ∈ ℐ_i and each a ∈ A(I), define:

σ̄^t_i(a|I) = Σ_{t=1}^T π^{σ^t}_i(I) σ^t(a|I) / Σ_{t=1}^T π^{σ^t}_i(I).     (3)

There is a well-known connection between regret, average strategies, and Nash equilibria.

Theorem 1 In a zero-sum game, if R^T_i ≤ ϵ for i ∈ {1, 2}, then σ̄^T is a 2ϵ-Nash equilibrium.

An algorithm for selecting σ^t_i for player i is regret minimizing if player i’s average overall regret (regardless of the sequence σ^t_{−i}) goes to zero as t goes to infinity. Regret minimizing algorithms in self-play can be used as a technique for computing an approximate Nash equilibrium. Moreover, an algorithm’s bound on the average overall regret bounds the convergence rate of the approximation. Zinkevich and colleagues [1] used the above approach in their counterfactual regret algorithm (CFR). The basic idea of CFR is that overall regret can be bounded by the sum of positive per-information-set immediate counterfactual regrets. Let I be an information set of player i. Define σ_(I→a) to be a strategy profile identical to σ except that player i always chooses action a from information set I.
Let Z_I be the subset of all terminal histories where a prefix of the history is in the set I; for z ∈ Z_I let z[I] be that prefix. Since we are restricting ourselves to perfect recall games, z[I] is unique. Define the counterfactual value v_i(σ, I) as

v_i(σ, I) = Σ_{z∈Z_I} π^σ_{−i}(z[I]) π^σ(z[I], z) u_i(z).     (4)

The immediate counterfactual regret is then R^T_{i,imm}(I) = max_{a∈A(I)} R^T_{i,imm}(I, a), where

R^T_{i,imm}(I, a) = (1/T) Σ_{t=1}^T ( v_i(σ^t_(I→a), I) − v_i(σ^t, I) ).     (5)

Let x⁺ = max(x, 0). The key insight of CFR is the following result.

Theorem 2 [1, Theorem 3] R^T_i ≤ Σ_{I∈ℐ_i} R^{T,+}_{i,imm}(I).

Using regret-matching² the positive per-information-set immediate counterfactual regrets can be driven to zero, thus driving the average overall regret to zero. This results in an average overall regret bound [1, Theorem 4]: R^T_i ≤ Δ_{u,i} |ℐ_i| √|A_i| / √T, where |A_i| = max_{h:P(h)=i} |A(h)|. We return to this bound, tightening it further, in Section 4. This result suggests an algorithm for computing equilibria via self-play, which we will refer to as vanilla CFR. The idea is to traverse the game tree computing counterfactual values using Equation 4. Given a strategy, these values define regret terms for each player for each of their information sets using Equation 5. These regret values accumulate and determine the strategies at the next iteration using the regret-matching formula. Since both players are regret minimizing, Theorem 1 applies, and so computing the strategy profile σ̄^t gives us an approximate Nash equilibrium. Since CFR only needs to store values at each information set, its space requirement is O(|ℐ|). However, as previously mentioned, vanilla CFR requires a complete traversal of the game tree on each iteration, which prohibits its use in many large games. Zinkevich and colleagues [4] made steps to alleviate this concern with a chance-sampled variant of CFR for poker-like games.
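The regret-to-equilibrium connection of Theorem 1 can be illustrated in the simplest possible setting, a zero-sum matrix game, before any tree structure is involved. The following is a minimal sketch (not the paper's code): two regret-matching learners play Rock-Paper-Scissors in self-play with exact expected updates, and their time-averaged strategies approach the uniform equilibrium. The small initial regret perturbation is only there to leave the symmetric fixed point; all names are illustrative.

```python
# Regret matching in zero-sum self-play: average strategies converge to
# an approximate Nash equilibrium (Theorem 1), here for Rock-Paper-Scissors.

def regret_matching(pos_regret):
    """Play actions with probability proportional to positive regret."""
    total = sum(pos_regret)
    n = len(pos_regret)
    return [r / total for r in pos_regret] if total > 0 else [1.0 / n] * n

def self_play_rps(T=20000):
    # payoff[a][b]: utility of player 1 when p1 plays a, p2 plays b
    payoff = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
    regret = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]   # tiny asymmetric seed
    avg = [[0.0] * 3, [0.0] * 3]
    for _ in range(T):
        strats = [regret_matching([max(r, 0.0) for r in regret[p]]) for p in (0, 1)]
        for p in (0, 1):
            for a in range(3):
                avg[p][a] += strats[p][a] / T      # running average strategy
        # expected utility of player 1 under the current profile
        ev1 = sum(strats[0][x] * payoff[x][y] * strats[1][y]
                  for x in range(3) for y in range(3))
        for a in range(3):
            u1 = sum(payoff[a][y] * strats[1][y] for y in range(3))
            u2 = sum(-payoff[x][a] * strats[0][x] for x in range(3))
            regret[0][a] += u1 - ev1               # regret of pure action a
            regret[1][a] += u2 + ev1               # p2's utility is -ev1
    return avg
```

The exploitability b_1(σ̄_2) + b_2(σ̄_1) of the averaged strategies shrinks roughly as O(1/√T), matching the regret bound; CFR applies this same regret-matching update independently at every information set of the game tree.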
3 Monte Carlo CFR

The key to our approach is to avoid traversing the entire game tree on each iteration while still having the immediate counterfactual regrets be unchanged in expectation. In general, we want to restrict the terminal histories we consider on each iteration. Let 𝒬 = {Q_1, . . . , Q_r} be a set of subsets of Z such that their union spans the set Z. We will call one of these subsets a block. On each iteration we will sample one of these blocks and only consider the terminal histories in that block. Let q_j > 0 be the probability of considering block Q_j on the current iteration (where Σ_{j=1}^r q_j = 1). Let q(z) = Σ_{j:z∈Q_j} q_j, i.e., q(z) is the probability of considering terminal history z on the current iteration. The sampled counterfactual value when updating block j is:

ṽ_i(σ, I|j) = Σ_{z∈Q_j∩Z_I} (1/q(z)) u_i(z) π^σ_{−i}(z[I]) π^σ(z[I], z).     (6)

Selecting a set 𝒬 along with the sampling probabilities defines a complete sample-based CFR algorithm. Rather than doing full game tree traversals, the algorithm samples one of these blocks and then examines only the terminal histories in that block. Suppose we choose 𝒬 = {Z}, i.e., one block containing all terminal histories, and q_1 = 1. In this case, the sampled counterfactual value is equal to the counterfactual value, and we have vanilla CFR. Suppose instead we choose each block to include all terminal histories with the same sequence of chance outcomes (where the probability of a chance outcome is independent of players’ actions, as in poker-like games). Then q_j is the product of the probabilities in the sampled sequence of chance outcomes (which cancels with these same probabilities in the definition of counterfactual value) and we have Zinkevich and colleagues’ chance-sampled CFR.

²Regret-matching selects actions with probability proportional to their positive regret, i.e., σ^t_i(a|I) = R^{T,+}_{i,imm}(I, a) / Σ_{a′∈A(I)} R^{T,+}_{i,imm}(I, a′). Regret-matching satisfies Blackwell’s approachability criteria [7, 8].
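The 1/q(z) importance weighting in Equation 6 is what makes block sampling work: for any blocks spanning Z and any positive block probabilities, the sampled sum is an unbiased estimate of the unsampled one. This can be checked numerically on a toy example. The following sketch is purely illustrative; the weights w(z) stand in for the u_i(z) π^σ_{−i}(z[I]) π^σ(z[I], z) terms, and all names are invented.

```python
# Numerical check of the unbiasedness underlying Equation 6:
# E_{j ~ q}[ sum_{z in Q_j} w(z)/q(z) ] = sum_z w(z),
# for overlapping blocks spanning Z and arbitrary block probabilities q_j > 0.
import random

def sampled_estimate(blocks, q_block, w, rng):
    """Sample one block j ~ q_block and return sum_{z in Q_j} w(z)/q(z)."""
    j = rng.choices(range(len(blocks)), weights=q_block)[0]
    # q(z) = total probability of all blocks containing z
    q = lambda z: sum(qj for Qj, qj in zip(blocks, q_block) if z in Qj)
    return sum(w[z] / q(z) for z in blocks[j])

w = {"z1": 1.0, "z2": -2.0, "z3": 0.5, "z4": 3.0}   # toy terminal-history weights
blocks = [{"z1", "z2"}, {"z2", "z3"}, {"z4"}]       # overlapping blocks spanning Z
q_block = [0.5, 0.3, 0.2]

# The exact expectation over blocks equals the unsampled sum:
exact = sum(qj * sum(w[z] / sum(qk for Qk, qk in zip(blocks, q_block) if z in Qk)
                     for z in Qj)
            for Qj, qj in zip(blocks, q_block))
assert abs(exact - sum(w.values())) < 1e-12
```

Averaging many calls to `sampled_estimate` approaches sum(w.values()) = 2.5; the same cancellation argument, applied per information set, is exactly Lemma 1 below.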
Sampled counterfactual value was designed to match counterfactual value in expectation. We show this here, and then use this fact to prove a probabilistic bound on the algorithm’s average overall regret in the next section.

Lemma 1 E_{j∼q_j}[ṽ_i(σ, I|j)] = v_i(σ, I)

Proof:

E_{j∼q_j}[ṽ_i(σ, I|j)] = Σ_j q_j ṽ_i(σ, I|j) = Σ_j Σ_{z∈Q_j∩Z_I} (q_j/q(z)) π^σ_{−i}(z[I]) π^σ(z[I], z) u_i(z)     (7)
= Σ_{z∈Z_I} ((Σ_{j:z∈Q_j} q_j)/q(z)) π^σ_{−i}(z[I]) π^σ(z[I], z) u_i(z)     (8)
= Σ_{z∈Z_I} π^σ_{−i}(z[I]) π^σ(z[I], z) u_i(z) = v_i(σ, I)     (9)

Equation 8 follows from the fact that 𝒬 spans Z. Equation 9 follows from the definition of q(z). This results in the following MCCFR algorithm. We sample a block and, for each information set that contains a prefix of a terminal history in the block, we compute the sampled immediate counterfactual regrets of each action, r̃(I, a) = ṽ_i(σ^t_(I→a), I) − ṽ_i(σ^t, I). We accumulate these regrets, and the player’s strategy on the next iteration applies the regret-matching algorithm to the accumulated regrets. We now present two specific members of this family, giving details on how the regrets can be updated efficiently.

Outcome-Sampling MCCFR. In outcome-sampling MCCFR we choose 𝒬 so that each block contains a single terminal history, i.e., ∀Q ∈ 𝒬, |Q| = 1. On each iteration we sample one terminal history and only update each information set along that history. The sampling probabilities q_j must specify a distribution over terminal histories. We will specify this distribution using a sampling profile σ′, so that q(z) = π^{σ′}(z). Note that any choice of sampling policy will induce a particular distribution over the block probabilities q(z). As long as σ′_i(a|I) > ϵ, there exists a δ > 0 such that q(z) > δ, thus ensuring Equation 6 is well-defined. The algorithm works by sampling z using policy σ′, storing π^{σ′}(z).
The single history is then traversed forward (to compute each player’s probability of playing to reach each prefix of the history, πσ i (h)) and backward (to compute each player’s probability of playing the remaining actions of the history, πσ i (h, z)). During the backward traversal, the sampled counterfactual regrets at each visited information set are computed (and added to the total regret). ˜r(I, a) =  wI · 1 −σ(a|z[I])  if (z[I]a) ⊑z −wI · σ(a|z[I]) otherwise , where wI = ui(z)πσ −i(z)πσ i (z[I]a, z) πσ′(z) (10) One advantage of outcome-sampling MCCFR is that if our terminal history is sampled according to the opponent’s policy, so σ′ −i = σ−i, then the update no longer requires explicit knowledge of σ−i as it cancels with the σ′ −i. So, wI becomes ui(z)πσ i (z[I], z)/πσ′ i (z). Therefore, we can use outcomesampling MCCFR for online regret minimization. We would have to choose our own actions so that σ′ i ≈σt i, but with some exploration to guarantee qj ≥δ > 0. By balancing the regret caused by exploration with the regret caused by a small δ (see Section 4 for how MCCFR’s bound depends upon δ), we can bound the average overall regret as long as the number of playings T is known in advance. This effectively mimics the approach taking by Exp3 for regret minimization in normalform games [9]. An alternative form for Equation 10 is recommended for implementation. This and other implementation details can be found in the paper’s supplemental material or the appendix of the associated technical report [10]. 5 External-Sampling MCCFR. In external-sampling MCCFR we sample only the actions of the opponent and chance (those choices external to the player). We have a block Qτ ∈Q for each pure strategy of the opponent and chance, i.e.,, for each deterministic mapping τ from I ∈Ic ∪ IN\{i} to A(I). The block probabilities are assigned based on the distributions fc and σ−i, so qτ = Q I∈Ic fc(τ(I)|I) Q I∈IN\{i} σ−i(τ(I)|I). 
The block Qτ then contains all terminal histories z consistent with τ: if ha is a prefix of z with h ∈ I for some I ∈ I−i, then τ(I) = a. In practice, we will not actually sample τ but rather sample the individual actions that make up τ only as needed. The key insight is that these block probabilities result in $q(z) = \pi^\sigma_{-i}(z)$. The algorithm iterates over i ∈ N, for each doing a post-order depth-first traversal of the game tree, sampling actions at each history h where P(h) ≠ i (storing these choices so the same actions are sampled at all h in the same information set). Due to perfect recall it can never visit more than one history from the same information set during this traversal. For each such visited information set the sampled counterfactual regrets are computed (and added to the total regrets):

$\tilde r(I, a) = (1 - \sigma(a|I)) \sum_{z \in Q \cap Z_I} u_i(z)\, \pi^\sigma_i(z[I]a, z)$   (11)

Note that the summation can be easily computed during the traversal by always maintaining a weighted sum of the utilities of all terminal histories rooted at the current history.

4 Theoretical Analysis

We now present regret bounds for members of the MCCFR family, starting with an improved bound for vanilla CFR that depends more explicitly on the exact structure of the extensive game. Let $\vec a_i$ be a subsequence of a history that contains only player i's actions in that history, and let $\vec A_i$ be the set of all such player i action subsequences. Let $I_i(\vec a_i)$ be the set of all information sets where player i's action sequence up to that information set is $\vec a_i$. Define the M-value for player i of the game to be $M_i = \sum_{\vec a_i \in \vec A_i} \sqrt{|I_i(\vec a_i)|}$. Note that $\sqrt{|I_i|} \le M_i \le |I_i|$, with both sides of this bound being realized by some game. We can strengthen vanilla CFR's regret bound using this constant, which also appears in the bounds for the MCCFR variants.

Theorem 3 When using vanilla CFR for player i, $R^T_i \le \Delta_{u,i} M_i \sqrt{|A_i|} / \sqrt{T}$.
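To make the M-value concrete, here is a small sketch with hypothetical counts (not from any of the games in the paper) that computes M_i from the number of information sets sharing each of player i's action subsequences, and checks the sandwich bound stated in the text.

```python
import math

# Sketch of the M-value: M_i is the sum, over player-i action subsequences a,
# of sqrt(|I_i(a)|), where |I_i(a)| counts the information sets whose
# player-i action sequence is a. The counts below are invented.
infosets_per_subsequence = {(): 1, ("a",): 4, ("b",): 9}

m_value = sum(math.sqrt(n) for n in infosets_per_subsequence.values())
total_infosets = sum(infosets_per_subsequence.values())

print(m_value)  # 1 + 2 + 3 = 6.0
# Sandwich bound from the text: sqrt(|I_i|) <= M_i <= |I_i|.
assert math.sqrt(total_infosets) <= m_value <= total_infosets
```

The two extremes of the bound correspond to all information sets sharing one action subsequence (M_i = √|I_i|) versus each having its own (M_i = |I_i|).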
We now turn our attention to the MCCFR family of algorithms, for which we can provide probabilistic regret bounds. We begin with the most exciting result: showing that external-sampling requires only a constant factor more iterations than vanilla CFR (where the constant depends on the desired confidence in the bound).

Theorem 4 For any p ∈ (0, 1], when using external-sampling MCCFR, with probability at least 1 − p, average overall regret is bounded by $R^T_i \le \left(1 + \frac{\sqrt{2}}{\sqrt{p}}\right) \Delta_{u,i} M_i \sqrt{|A_i|} / \sqrt{T}$.

Although requiring the same order of iterations, note that external-sampling need only traverse a fraction of the tree on each iteration. For balanced games where players make roughly equal numbers of decisions, the iteration cost of external-sampling is $O(\sqrt{|H|})$, while vanilla CFR is $O(|H|)$, meaning external-sampling MCCFR requires asymptotically less time to compute an approximate equilibrium than vanilla CFR (and consequently chance-sampling CFR, which is identical to vanilla CFR in the absence of chance nodes).

Theorem 5 For any p ∈ (0, 1], when using outcome-sampling MCCFR where for all z ∈ Z either $\pi^\sigma_{-i}(z) = 0$ or $q(z) \ge \delta > 0$ at every timestep, with probability 1 − p, average overall regret is bounded by $R^T_i \le \left(1 + \frac{\sqrt{2}}{\sqrt{p}}\right) \frac{1}{\delta} \Delta_{u,i} M_i \sqrt{|A_i|} / \sqrt{T}$.

The proofs for the theorems in this section can be found in the paper's supplemental material and as an appendix of the associated technical report [10]. The supplemental material also presents a slightly more complicated, but general, result for any member of the MCCFR family, from which the two theorems presented above are derived.

Game   |H| (10^6)   |I| (10^3)   l    M1        M2        tvc    tos     tes
OCP    22.4         2            5    45        32        28s    46µs    99µs
Goof   98.3         3294         14   89884     89884     110s   150µs   150ms
LTTT   70.4         16039        18   1333630   1236660   38s    62µs    70ms
PAM    91.8         20           13   9541      2930      120s   85µs    28ms

Table 1: Game properties. The value of |H| is in millions and |I| in thousands, and l = max_{h∈H} |h|.
tvc, tos, and tes are the average wall-clock time per iteration (footnote 4) for vanilla CFR, outcome-sampling MCCFR, and external-sampling MCCFR.

5 Experimental Results

We evaluate the performance of MCCFR compared to vanilla CFR on four different games. Goofspiel [11] is a bidding card game where players have a hand of cards numbered 1 to N, and take turns secretly bidding on the top point-valued card in a point card stack using cards in their hands. Our version is less informational: players only find out the result of each bid and not which cards were used to bid, and the player with the highest total points wins. We use N = 7 in our experiments. One-Card Poker [12] is a generalization of Kuhn Poker [13]; we use a deck of size 500. Princess and Monster [14, Research Problem 12.4.1] is a pursuit-evasion game on a graph in which neither player ever knows the location of the other. In our experiments we use random starting positions, a 4-connected 3 by 3 grid graph, and a horizon of 13 steps. The payoff to the evader is the number of steps it remains uncaptured. Latent Tic-Tac-Toe is a twist on the classic game in which moves are not disclosed until after the opponent's next move, and are lost if invalid at the time they are revealed. While all of these games have imperfect information and are roughly of similar size, they are a diverse set of games, varying both in the degree (the ratio of the number of information sets to the number of histories) and the nature (whether due to chance or opponent actions) of imperfect information. The left columns of Table 1 show various constants, including the number of histories, information sets, game length, and M-values, for each of these domains. We used outcome-sampling MCCFR, external-sampling MCCFR, and vanilla CFR to compute an approximate equilibrium in each of the four games. For outcome-sampling MCCFR we used an epsilon-greedy sampling profile σ′.
At each information set, we sample an action uniformly at random with probability ϵ, and according to the player's current strategy σt otherwise. Through experimentation we found that ϵ = 0.6 worked well across all games; this is interesting because the regret bound suggests δ should be as large as possible. This implies that putting some bias on the most likely outcome to occur is helpful. With vanilla CFR we used an implementation trick called pruning to dramatically reduce the work done per iteration. When updating one player's regrets, if the other player has no probability of reaching the current history, the entire subtree at that history can be pruned for the current iteration, with no effect on the resulting computation. We also ran vanilla CFR without pruning to see the effects of pruning in our domains. Figure 1 shows the results of all four algorithms on all four domains, plotting approximation quality as a function of the number of nodes of the game tree the algorithm touched during computation. Nodes touched is an implementation-independent measure of computation; however, the results are nearly identical if total wall-clock time is used instead. Since the algorithms take radically different amounts of time per iteration, this comparison directly answers whether the sampling variants' lower cost per iteration outweighs the required increase in the number of iterations. Furthermore, for any fixed game (and degree of confidence that the bound holds), the algorithms' average overall regret falls at the same rate, O(1/√T), meaning that only their short-term rather than asymptotic performance will differ. The graphs show that the MCCFR variants often dramatically outperform vanilla CFR. For example, in Goofspiel, both MCCFR variants require only a few million nodes to reach ϵσ < 0.5 where CFR takes 2.5 billion nodes, three orders of magnitude more.
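The ε-greedy sampling profile just described is a mixture of the uniform distribution and the current strategy. A minimal sketch (function name and example strategy are illustrative, not from the paper's code):

```python
# Sketch of the epsilon-greedy sampling profile sigma' used for
# outcome-sampling MCCFR: with probability eps sample uniformly over the
# actions at the information set, otherwise sample from the current
# strategy sigma_t. Equivalently, mix the two distributions pointwise.
def sampling_profile(sigma_t, eps=0.6):
    n = len(sigma_t)
    return [eps / n + (1.0 - eps) * p for p in sigma_t]

probs = sampling_profile([0.9, 0.1, 0.0], eps=0.6)
print(probs)  # approximately [0.56, 0.24, 0.2]
# Every action keeps probability at least eps/|A(I)|, which keeps q(z)
# bounded away from zero as Theorem 5 requires.
assert min(probs) >= 0.6 / 3 - 1e-12
```

Note that even the zero-probability action under σt retains ε/|A(I)| mass under σ′, which is what guarantees q(z) ≥ δ > 0.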
In fact, external-sampling, which has the tightest theoretical computation-time bound, outperformed CFR by considerable margins (excepting LTTT) in all of the games. Note that pruning is key to vanilla CFR being at all practical in these games. For example, in Latent Tic-Tac-Toe the first iteration of CFR touches 142 million nodes, but later iterations touch as few as 5 million nodes. This is because pruning is not possible in the first iteration. We believe this is due to dominated actions in the game. After one or two traversals, the players identify and eliminate dominated actions from their policies, allowing these subtrees to be pruned. Finally, it is interesting to note that external-sampling was not uniformly the best choice, with outcome-sampling performing better in Goofspiel. With outcome-sampling performing worse than vanilla CFR in LTTT, this raises the question of what specific game properties might favor one algorithm over another and whether it might be possible to incorporate additional game specific constants into the bounds.

Footnote 4: As measured on an 8-core Intel Xeon 2.5 GHz machine running Linux x86_64 kernel 2.6.27.

[Figure 1: Convergence rates of Vanilla CFR, outcome-sampled MCCFR, and external-sampled MCCFR for Goofspiel, Latent Tic-Tac-Toe, One-Card Poker, and Princess and Monster. The x axis in each graph is the number of nodes touched; the y axis represents the exploitability of the strategies for the two players, ϵσ (see Section 2.1).]
6 Conclusion In this paper we defined a family of sample-based CFR algorithms for computing approximate equilibria in extensive games, subsuming all previous CFR variants. We also introduced two sampling schemes: outcome-sampling, which samples only a single history for each iteration, and externalsampling, which samples a deterministic strategy for the opponent and chance. In addition to presenting a tighter bound for vanilla CFR, we presented regret bounds for both sampling variants, which showed that external sampling with high probability gives an asymptotic computational time improvement over vanilla CFR. We then showed empirically in very different domains that the reduction in iteration time outweighs the increase in required iterations leading to faster convergence. There are three interesting directions for future work. First, we would like to examine how the properties of the game effect the algorithms’ convergence. Such an analysis could offer further algorithmic or theoretical improvements, as well as practical suggestions, such as how to choose a sampling policy in outcome-sampled MCCFR. Second, using outcome-sampled MCCFR as a general online regret minimizing technique in extensive games (when the opponents’ strategy is not known or controlled) appears promising. It would be interesting to compare the approach, in terms of bounds, computation, and practical convergence, to Gordon’s Lagrangian hedging [5]. Lastly, it seems like this work could be naturally extended to cases where we don’t assume perfect recall. Imperfect recall could be used as a mechanism for abstraction over actions, where information sets are grouped by important partial sequences rather than their full sequences. 8 References [1] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems 20 (NIPS), 2008. 
[2] Andrew Gilpin, Samid Hoda, Javier Pe˜na, and Tuomas Sandholm. Gradient-based algorithms for finding Nash equilibria in extensive form games. In 3rd International Workshop on Internet and Network Economics (WINE’07), 2007. [3] D. Koller, N. Megiddo, and B. von Stengel. Fast algorithms for finding randomized strategies in game trees. In Proceedings of the 26th ACM Symposium on Theory of Computing (STOC ’94), pages 750–759, 1994. [4] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in game with incomplete information. Technical Report TR07-14, University of Alberta, 2007. http://www.cs.ualberta.ca/research/techreports/2007/ TR07-14.php. [5] Geoffrey J. Gordon. No-regret algorithms for online convex programs. In In Neural Information Processing Systems 19, 2007. [6] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994. [7] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127–1150, September 2000. [8] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6:1–8, 1956. [9] Peter Auer, Nicol`o Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-arm bandit problem. In 36th Annual Symposium on Foundations of Computer Science, pages 322–331, 1995. [10] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte carlo sampling for regret minimization in extensive games. Technical Report TR09-15, University of Alberta, 2009. http://www.cs.ualberta.ca/research/techreports/2009/ TR09-15.php. [11] S. M. Ross. Goofspiel — the game of pure strategy. Journal of Applied Probability, 8(3):621– 625, 1971. [12] Geoffrey J. Gordon. No-regret algorithms for structured prediction problems. Technical Report CMU-CALD-05-112, Carnegie Mellon University, 2005. [13] H. W. Kuhn. Simplified two-person poker. 
Contributions to the Theory of Games, 1:97–103, 1950. [14] Rufus Isaacs. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization. John Wiley & Sons, 1965.
Spatial Normalized Gamma Processes Vinayak Rao Gatsby Computational Neuroscience Unit University College London vrao@gatsby.ucl.ac.uk Yee Whye Teh Gatsby Computational Neuroscience Unit University College London ywteh@gatsby.ucl.ac.uk Abstract Dependent Dirichlet processes (DPs) are dependent sets of random measures, each being marginally DP distributed. They are used in Bayesian nonparametric models when the usual exchangeability assumption does not hold. We propose a simple and general framework to construct dependent DPs by marginalizing and normalizing a single gamma process over an extended space. The result is a set of DPs, each associated with a point in a space such that neighbouring DPs are more dependent. We describe Markov chain Monte Carlo inference involving Gibbs sampling and three different Metropolis-Hastings proposals to speed up convergence. We report an empirical study of convergence on a synthetic dataset and demonstrate an application of the model to topic modeling through time. 1 Introduction Bayesian nonparametrics have recently garnered much attention in the machine learning and statistics communities, due to their elegant treatment of infinite dimensional objects like functions and densities, as well as their ability to sidestep the need for model selection. The Dirichlet process (DP) [1] is a cornerstone of Bayesian nonparametrics, and forms a basic building block for a wide variety of extensions and generalizations, including the infinite hidden Markov model [2], the hierarchical DP [3], the infinite relational model [4], adaptor grammars [5], to name just a few. By itself, the DP is a model that assumes that data are infinitely exchangeable, i.e. the ordering of data items does not matter. This assumption is false in many situations and there has been a concerted effort to extend the DP to more structured data. Much of this effort has focussed on defining priors on collections of dependent random probability measures. 
[6] expounded on the notion of dependent DPs, that is, a dependent set of random measures that are all marginally DPs. The property of being marginally DP here is both due to a desire to construct mathematically elegant solutions, and also due to the fact that the DP and its implications as a statistical model, e.g. on the behaviour of induced clusterings of data or asymptotic consistency, are well-understood. In this paper, we propose a simple and general framework for the construction of dependent DPs on arbitrary spaces. The idea is based on the fact that just as Dirichlet distributions can be generated by drawing a set of independent gamma variables and normalizing, the DP can be constructed by drawing a sample from a gamma process (ΓP) and normalizing (i.e. it is an example of a normalized random measure [7, 8]). A ΓP is an example of a completely random measure [9]: it has the property that the random masses it assigns to disjoint subsets are independent. Furthermore, the restriction of a ΓP to a subset is itself a ΓP. This implies the following easy construction of a set of dependent DPs: define a ΓP over an extended space, associate each DP with a different region of the space, and define each DP by normalizing the restriction of the ΓP on the associated region. This produces a set of dependent DPs, with the amount of overlap among the regions controlling the amount of dependence. We call this model a spatial normalized gamma process (SNΓP). More generally, our construction can be extended to normalizing restrictions of any completely random measure, and we call the resulting dependent random measures spatial normalized random measures (SNRMs). 1 In Section 2 we briefly describe the ΓP. Then we describe our construction of the SNΓP in Section 3. We describe inference procedures based on Gibbs and Metropolis-Hastings sampling in Section 4 and report experimental results in Section 5. 
We conclude by discussing limitations and possible extensions of the model as well as related work in Section 6. 2 Gamma Processes We briefly describe the gamma process (ΓP) here. A good high-level introduction can be found in [10]. Let (Θ, Ω) be a measure space on which we would like to define a ΓP. Like the DP, realizations of the ΓP are atomic measures with random weighted point masses. We can visualize the point masses θ ∈Θ and their corresponding weights w > 0 as points in a product space Θ ⊗[0, ∞). Consider a Poisson process over this product space with mean measure µ(dθdw) = α(dθ)w−1e−wdw. (1) Here α is a measure on the space (Θ, Ω) and is called the base measure of the ΓP. A sample from this Poisson process will yield an infinite set of atoms {θi, wi}∞ i=1 since R Θ⊗[0,∞) µ(dθdw) = ∞. A sample from the ΓP is then defined as G = ∞ X i=1 wiδθi ∼ΓP(α). (2) It can be shown that the total mass G(S) = P∞ i=1 wi1(θi ∈S) of any measurable subset S ⊂Θ is simply gamma distributed with shape parameter α(S), thus the natural name gamma process. Dividing G by G(Θ), we get a normalized random measure—a random probability measure. Specifically, we get a sample from the Dirichlet process DP(α): D = G/G(Θ) ∼DP(α). (3) Here we used an atypical parameterization of the DP in terms of the base measure α. The usual (equivalent) parameters of the DP are: strength parameter α(Θ) and base distribution α/α(Θ). Further, the DP is independent of the normalization: D⊥⊥G(Θ). The gamma process is an example of a completely random measure [9]. This means that for mutually disjoint measurable subsets S1, . . . , Sn ∈Ωthe random numbers {G(S1), . . . , G(Sn)} are mutually independent. Two straightforward consequences will be of importance in the rest of this paper. Firstly, if S ∈Ωthen the restriction G′(dθ) = G(dθ ∩S) onto S is a ΓP with base measure α′(dθ) = α(dθ ∩S). 
Secondly, if Θ = Θ1 ⊗Θ2 is a two dimensional space, then the projection G′′(dθ1) = R Θ2 G(dθ1dθ2) onto Θ1 is also a ΓP with base measure α′′(dθ1) = R Θ2 α(dθ1dθ2). 3 Spatial Normalized Gamma Processes In this section we describe our proposal for constructing dependent DPs. Let (Θ, Ω) be a probability space and T an index space. We wish to construct a set of dependent random measures over (Θ, Ω), one Dt for each t ∈T such that each Dt is marginally DP. Our approach is to define a gamma process G over an extended space and let each Dt be a normalized restriction/projection of G. Because restrictions and projections of gamma processes are also gamma processes, each Dt will be DP distributed. To this end, let Y be an auxiliary space and for each t ∈T, let Yt ⊂Y be a measurable set. For any measure µ over Θ ⊗Y define the restricted projection µt by µt(dθ) = Z Yt µ(dθdy) = µ(dθ ⊗Yt). (4) Note that µt is a measure over Θ for each t ∈T. Now let α be a base measure over the product space Θ ⊗Y and consider a gamma process G ∼ΓP(α) (5) 2 over Θ ⊗Y. Since restrictions and projections of ΓPs are ΓPs as well, Gt will be a ΓP over Θ with base measure αt: Gt(dθ) = Z Yt G(dθdy) ∼ΓP(αt) (6) Now normalizing, Dt = Gt/Gt(Θ) ∼DP(αt) (7) We call the resulting set of dependent DPs {Dt}t∈T spatial normalized gamma processes (SNΓPs). If the index space is continuous, {Dt}t∈T can equivalently be thought of as a measure-valued stochastic process. The amount of dependence between Ds and Dt for s, t ∈T is related to the amount of overlap between Ys and Yt. Generally, the subsets Yt are defined so that the closer s and t are in T, the more overlap Ys and Yt have and as a result Ds and Dt are more dependent. 3.1 Examples We give two examples of SNΓPs, both with index set T = R interpreted as the time line. Generalizations to higher dimensional Euclidean spaces Rn are straightforward. Let H be a base distribution over Θ and γ > 0 be a concentration parameter. 
The first example uses Y = R as well, with the subsets being Yt = [t −L, t + L] for some fixed window length L > 0. The base measure is α(dθdy) = γH(dθ)dy/2L. In this case the measurevalued stochastic process {Dt}t∈R is stationary. The base measure αt works out to be: αt(dθ) = Z t+L t−L γH(dθ)dy/2L = γH(dθ), (8) so that each Dt ∼DP(γH) with concentration parameter γ and base distribution H. We can interpret this SNΓP as follows. Each atom in the overall ΓP G has a time-stamp y and a time-span of [y −L, y + L], so that it will only appear in the DPs Dt within the window t ∈[y −L, y + L]. As a result, two DPs Ds and Dt will share more atoms the closer s and t are to each other, and no atoms if |s −t| > 2L. Further, the dependence between Ds and Dt depends on |s −t| only, decreasing with increasing |s −t| and independent if |s −t| > 2L. The second example generalizes the first one by allowing different atoms to have different window lengths. Each atom now has a time-stamp y and a window length l, so that it appears in DPs in the window [y −l, y + l]. Our auxiliary space is thus Y = R ⊗[0, ∞), with Yt = {(y, l) : |y −t| ≤l} (see Figure 1). Let β(dl) be a distribution over window lengths in [0, ∞). We use the base measure α(dθdydl) = γH(dθ)dyβ(dl)/2l. The restricted projection is then αt(dθ) = Z |y−t|≤l γH(dθ)dyβ(dl)/2l = γH(dθ) Z ∞ 0 β(dl) Z t+l t−l dy/2l = γH(dθ) (9) so that each Dt is again simply DP(γH). Now Ds and Dt will always be dependent with the amount of dependence decreasing as |s −t| increases. 3.2 Interpretation as Mixtures of DPs Even though the SNΓP as described above defines an uncountably infinite number of DPs, in practice we will only have observations at a finite number of times, say t1, . . . , tm. We define R as the smallest collection of disjoint regions of Y such that each Ytj is a union of subsets in R. Thus R = {∩m j=1Sj : Sj = Ytj or Sj = Y\Ytj, with at least one Sj = Ytj and ∩m j=1 Sj ̸= ∅}. 
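The first example can be illustrated with a handful of atoms: each atom has a weight and a time-stamp y, appears in every D_t with |y − t| ≤ L, and each D_t is obtained by normalizing the weights of its visible atoms. The atoms, weights, and time-stamps below are invented for illustration.

```python
# Sketch of the first SNGP example: atoms (weight w, time-stamp y) from the
# overall gamma process appear in D_t iff |y - t| <= L, and D_t normalizes
# the weights of the atoms it sees. Atom values are invented.
L = 1.0
atoms = [(2.0, 0.0), (1.0, 0.5), (3.0, 2.0), (0.5, 4.0)]  # (w_i, y_i)

def D_t(t):
    """Weights of D_t over the atoms visible at time t, keyed by y."""
    visible = [(w, y) for (w, y) in atoms if abs(y - t) <= L]
    total = sum(w for w, _ in visible)
    return {y: w / total for w, y in visible}

d0, d1, d4 = D_t(0.0), D_t(1.0), D_t(4.0)
# Nearby DPs share atoms (the atom at y = 0.5 appears in both D_0 and D_1)...
print(sorted(set(d0) & set(d1)))
# ...while DPs more than 2L apart share none.
print(set(d0) & set(d4))
```

This mirrors the text: the overlap between D_s and D_t shrinks as |s − t| grows and vanishes once |s − t| > 2L.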
For 1 ≤j ≤m let Rj be the collection of regions in R contained in Ytj, so that ∪R∈Rj = Ytj. For each R ∈R define GR(dθ) = G(dθ ⊗R) (10) We see that each GR is a ΓP with base measure αR(dθ) = α(dθ ⊗R). Normalizing, DR = GR/GR(Θ) ∼DP(αR), with DR⊥⊥DR′ for distinct R, R′ ∈R. Now Dtj(dθ) = P R∈Rj GR(Θ) P R′∈Rj GR′(Θ)DR(dθ) (11) 3 t1 L SCALE = L t2 t3 Y Figure 1: The extended space Y⊗L over which the overall ΓP is defined in the second example. Not shown is the Θ-space over which the DPs are defined. Also not shown is the fourth dimension W needed to define the Poisson process used to construct the ΓP. t1, t2, t3 ∈Y are three times at which observations are present. The subset Ytj corresponding to each tj is the triangular area touching tj. The regions in R are the six areas formed by various intersections of the triangular areas. so each Dtj is a mixture where each component DR is drawn independently from a DP. Further, the mixing proportions are Dirichlet distributed and independent from the components by virtue of each GR(Θ) being gamma distributed and independent from DR. Thus we have the following equivalent construction for a SNΓP: DR ∼DP(αR) gR ∼Gamma(αR(Θ)) for R ∈R Dtj = X R∈Rj πjRDR πjR = gR P R′∈Rj gR for R ∈Rj (12) Note that the DPs in this construction are all defined only over Θ, and references to the auxiliary space Y and the base measure α are only used to define the individual base measures αR and the shape parameters of the gR’s. Figure 1 shows the regions for the second example corresponding to observations at three times. The mixture of DPs construction is related to the hierarchical Dirichlet process defined in [11] (not the one defined by Teh et al [3]). The difference is that the parameters of the prior over the mixing proportions exactly matches the concentration parameters of the individual DPs. A consequence of this is that each mixture Dtj is now conveniently also a DP. 
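Construction (12) reduces to simple arithmetic once finitely many regions are fixed: draw an independent gamma mass g_R for each region, then mix each time's region DPs with proportions g_R / Σ g_{R'}. A seeded sketch, with a region layout and shape parameters invented for illustration:

```python
import random

# Sketch of the mixture-of-DPs view (12): each region R gets an independent
# Gamma(alpha_R(Theta)) mass g_R, and D_{t_j} mixes the region DPs D_R with
# proportions g_R / sum of g_{R'} over the regions making up Y_{t_j}.
# The region layout and shape parameters below are invented.
random.seed(0)

alpha = {"R1": 1.0, "R2": 2.0, "R3": 1.5}              # alpha_R(Theta)
regions_of = {"t1": ["R1", "R2"], "t2": ["R2", "R3"]}  # R_j for each time

# g_R ~ Gamma(alpha_R(Theta), 1); gammavariate takes (shape, scale).
g = {R: random.gammavariate(a, 1.0) for R, a in alpha.items()}

def mixing_proportions(t):
    Rs = regions_of[t]
    total = sum(g[R] for R in Rs)
    return {R: g[R] / total for R in Rs}

pi_t1 = mixing_proportions("t1")
assert abs(sum(pi_t1.values()) - 1.0) < 1e-12  # Dirichlet-distributed weights
```

Since the g_R are independent gammas, normalizing them over R_j yields Dirichlet mixing proportions independent of the component DPs, which is exactly why each D_{t_j} is itself a DP.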
4 Inference in the SNΓP The mixture of DPs interpretation of the SNΓP makes sampling from the model, and consequently inference via Markov chain Monte Carlo sampling, easy. In what follows, we describe both Gibbs sampling and Metropolis-Hastings based updates for a hierarchical model in which the dependent DPs act as prior distributions over a collection of infinite mixture models. Formally, our observations now lie in a measurable space (X, Σ) equipped with a set of probability measures Fθ parametrized by θ ∈Θ. Observation i at time tj is denoted xji, lies in region rji and is drawn from mixture component parametrized by θji. Thus to augment (12), we have rji ∼Mult({πjR : R ∈Rj}) θji ∼Drji xji ∼Fθji (13) where rji = R with probability πjR for each R ∈Rj. In words, we first pick a region rji from the set Rj, then a mixture component θji, followed by drawing xji from the mixture distribution. 4.1 Gibb Sampling We derive a Gibbs sampler for the SNΓP where the region DPs DR are integrated out and replaced by Chinese restaurants. Let cji denote the index of the cluster in Drji to which observation xji is assigned. We also assume that the base distribution H is conjugate to the mixture distributions Fθ so that the cluster parameters are integrated out as well. The Gibbs sampler iteratively resamples the 4 latent variables left: rji’s, cji’s and gR’s. In the following, let mjRc be the number of observations from time tj assigned to cluster c in the DP DR in region R, and let f ¬ji Rc (xji) be the density of observation xji conditioned on the other variables currently assigned to cluster c in DR, with its cluster parameters integrated out. We denote marginal counts with dots, for example m·Rc is the number of observations (over all times) assigned to cluster c in region R. Superscripts ¬ji means observation xji is excluded. 
rji and cji are resampled together; their conditional joint probability given the other variables is: p(rji = R, cji = c|others) ∝  gR P r∈Rj gr  m¬ji ·Rc m¬ji ·R· +αR(Θ)  f ¬ji Rc (xji) (14) The probability of xji joining a new cluster in region R is p(rji = R, cji = cnew|others) ∝  gR P r∈Rj gr  αR(Θ) m¬ji ·R· +αR(Θ)  fRcnew(xji) (15) where R ∈Rj and c denotes the index of an existing cluster in region R. The updates of the gR’s are more complicated as they are coupled and not of standard form: p({gR}R∈R|others) =  Q R∈R gαR(Θ)+m·R·−1 R e−gR  Q j  P R∈Rj gR −mj·· (16) To sample the gR’s we introduce auxiliary variables {Aj} to simplify the rightmost term above. In particular, using the Gamma identity Γ(mj··)  P R∈Rj gR −mj·· = Z ∞ 0 Amj··−1 j e −P R∈Rj gRAjdAj (17) we have that (16) is the marginal of {gR}R∈R of the distribution: q({gR}R∈R, {Aj}) ∝ Y R∈R gαR(Θ)+m·R·−1 R e−gR Y j Amj··−1 j e −P R∈Rj gRAj (18) Now we can Gibbs sample the gR’s and Aj’s: gR|others ∼Gamma(αR(Θ) + m·R·, 1 + P j∈JR Aj) (19) Aj|others ∼Gamma(mj··, P R∈Rj gR) (20) Here JR is the collection of indices j such that R ∈Rj. 4.2 Metropolis-Hastings Proposals To improve convergence and mixing of the Markov chain, we introduce three Metropolis-Hastings (MH) proposals in addition to the Gibbs sampling updates described above. These propose nonincremental changes in the assignment of observations to clusters and regions, allowing the Markov chain to traverse to different modes that are hard to reach using Gibbs sampling. The first proposal (Algorithm 1) proceeds like the split-merge proposal of [12]. It either splits an existing cluster in a region into two new clusters in the same region, or merges two existing clusters in a region into a single cluster. To improve the acceptance probability, we use 5 rounds of restricted Gibbs sampling [12]. The second proposal (Algorithm 2) seeks to move a picked cluster from one region to another. 
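One sweep of updates (19)-(20) is just a pair of conditional gamma draws. Here is a sketch with invented counts and region layout, using the shape/scale parameterization of Python's random.gammavariate (so a rate b becomes scale 1/b):

```python
import random

# Sketch of the auxiliary-variable Gibbs step for the region masses:
#   g_R | rest ~ Gamma(alpha_R(Theta) + m_{.R.}, rate 1 + sum_{j in J_R} A_j)
#   A_j | rest ~ Gamma(m_{j..},                  rate sum_{R in R_j} g_R)
# Counts, shape parameters, and the region layout are invented.
random.seed(1)

alpha = {"R1": 1.0, "R2": 2.0}                 # alpha_R(Theta)
m_R = {"R1": 5, "R2": 7}                       # observations per region
m_j = {"t1": 4, "t2": 8}                       # observations per time
regions_of = {"t1": ["R1"], "t2": ["R1", "R2"]}
times_of = {"R1": ["t1", "t2"], "R2": ["t2"]}  # J_R

g = {R: 1.0 for R in alpha}                    # initial region masses
A = {j: 1.0 for j in m_j}                      # auxiliary variables

for _ in range(10):                            # a few Gibbs sweeps
    for R in g:
        rate = 1.0 + sum(A[j] for j in times_of[R])
        g[R] = random.gammavariate(alpha[R] + m_R[R], 1.0 / rate)
    for j in A:
        rate = sum(g[R] for R in regions_of[j])
        A[j] = random.gammavariate(m_j[j], 1.0 / rate)

assert all(v > 0 for v in list(g.values()) + list(A.values()))
```

The A_j draws decouple the otherwise coupled g_R updates, turning the non-standard joint conditional (16) into alternating standard gamma draws.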
The new region is chosen from a region neighbouring the current one (for example in Figure 1 the neigbors are the four regions diagonally neighbouring the current one). To improve acceptance probability we also resample the gR’s associated with the current and proposed regions. The move can be invalid if the cluster contains an observation from a time point not associated with the new region; in this case the move is simply rejected. The third proposal (Algorithm 3) we considered seeks to combine into one step what would take two steps under the previous two proposals: splitting a cluster and moving it to a new region (or the reverse: moving a cluster into a new region and merging it with a cluster therein). 5 Algorithm 1 Split and Merge in the Same Region (MH1) 1: Let S0 be the current state of the Markov chain. 2: Pick a region R with probability proportional to m·R· and two distinct observations in R 3: Construct a launch state S′ by creating two new clusters, each containing one of the two observations, and running restricted Gibbs sampling 4: if the two observations belong to the same cluster in S0 then 5: Propose split: run one last round of restricted Gibbs sampling to reach the proposed state S1 6: else 7: Propose merge: the proposed state S1 is the (unique) state merging the two clusters 8: end if 9: Accept proposed state S1 according to acceptance probability min  1, p(S1)q(S′→S0) p(S0)q(S′→S1)  where p(S) is the posterior probability of state S and q(S′ →S) is the probability of proposing state S from the launch state S′. 
Algorithm 2 Move (MH2)
1: Pick a cluster c in region R0 with probability proportional to m_{·R0 c}.
2: Pick a region R1 neighbouring R0 and propose moving c to R1.
3: Propose new weights gR0, gR1 by sampling both from (19).
4: Accept or reject the move.

Algorithm 3 Split/Merge Move (MH3)
1: Pick a region R0, a cluster c contained in R0, and a neighbouring region R1, with probability proportional to the number of observations in c that cannot be assigned to a cluster in R1.
2: if c contains observations that can be moved to R1 then
3:   Propose assigning these observations to a new cluster in R1.
4: else
5:   Pick a cluster from those in R1 and propose merging it into c.
6: end if
7: Propose new weights gR0, gR1 by sampling from (19).
8: Accept or reject the proposal.

5 Experiments

Synthetic data. In the first of our experiments, we artificially generated 60 data points at each of 5 times by sampling from a mixture of 10 Gaussians. Each component was assigned a timespan, ranging from a single time to the entire range of five times. We modelled this data as a collection of five DP mixtures of Gaussians, with a SNΓP prior over the five dependent DPs. We used the set-up described in the second example. To encourage clusters to be shared across times (i.e. to avoid similar clusters with non-overlapping timespans), we chose the distribution over window lengths β(w) to give larger probabilities to larger timespans. Even in this simple model, Gibbs sampling alone usually did not converge to a good optimum, remaining stuck around local maxima. Figure 2 shows the evolution of the log-likelihood for 5 different samplers: plain Gibbs sampling, Gibbs sampling augmented with each of MH proposals 1, 2 and 3, and finally a sampler that interleaved all three MH proposals with Gibbs sampling. Not surprisingly, the complete sampler converged fastest, with Gibbs sampling with MH proposal 2 (Gibbs+MH2) performing nearly as well.
Gibbs+MH1 seemed to converge no faster than plain Gibbs sampling, with Gibbs+MH3 giving performance somewhere in between. The fact that Gibbs+MH2 performs so well can be explained by the easy clustering structure of the problem, so that exploring region assignments of clusters rather than cluster assignments of observations was the challenge faced by the sampler (note its high acceptance rate in Figure 4). To demonstrate how the additional MH proposals help mixing, we examined how the cluster assignment of observations varied over iterations. At each iteration, we construct a 600 by 600 binary matrix, with element (i, j) being 1 if observations i and j are assigned to the same cluster. In Figure 3, we plot the average L1 difference between matrices at different iteration lags. Somewhat counterintuitively, Gibbs+MH1 does much better than Gibbs sampling with all MH proposals. This is because the latter is simultaneously exploring the region assignment of clusters as well.

Figure 2: Log-likelihoods over iterations (the coloured lines are ordered at iteration 80 like the legend: Gibbs+MH1+MH2+MH3, Gibbs+MH2, Gibbs+MH3, Gibbs+MH1, Gibbs).

Figure 3: Dissimilarity in clustering structure vs lag (the coloured lines are ordered like the legend: Gibbs+MH1, Gibbs+MH1+MH2+MH3, Gibbs+MH3, Gibbs+MH2, Gibbs).

Figure 4: Acceptance rates of the MH proposals for Gibbs+MH1+MH2+MH3 after burn-in (percentages).

Proposal        Synthetic   NIPS
MH-Proposal 1   0.51        0.6621
MH-Proposal 2   11.7        0.6548
MH-Proposal 3   0.22        0.0249

Figure 5: Evolution of the timespan of a cluster. From top to bottom: Gibbs+MH1+MH2+MH3, Gibbs+MH2 and Gibbs+MH1 (pink), Gibbs+MH3 (black) and Gibbs (magenta).
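The mixing diagnostic behind Figure 3 — co-clustering matrices compared at a lag — is easy to sketch directly. The toy assignment chain below is ours, not the 600-observation chain from the experiment:

```python
import numpy as np

def coclustering_matrix(assign):
    """Binary matrix with entry (i, j) = 1 iff observations i and j share a cluster."""
    a = np.asarray(assign)
    return (a[:, None] == a[None, :]).astype(np.int64)

def lag_dissimilarity(assignments, lag):
    """Average L1 difference between co-clustering matrices `lag` iterations apart."""
    mats = [coclustering_matrix(a) for a in assignments]
    diffs = [np.abs(mats[t + lag] - mats[t]).sum() for t in range(len(mats) - lag)]
    return float(np.mean(diffs))

# Toy chain: cluster assignments of 4 observations over 3 sampler iterations.
chain = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 1, 0]]
```

Here `lag_dissimilarity(chain, 1)` averages a zero difference (iterations 1 vs 2) with eight flipped entries (iterations 2 vs 3), giving 4.0; a slowly mixing chain keeps this number small even at large lags.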
In Gibbs+MH1, clusters split and merge frequently since they stay in the same regions, causing the cluster matrix to vary rapidly. In Gibbs+MH1+MH2+MH3, after a split the new clusters often move into separate regions, so it takes longer before they can merge again. Nonetheless, this demonstrates the importance of split/merge proposals like MH1 and MH3; [12] studied this in greater detail. We next examined how well the proposals explore the region assignment of clusters. In particular, at each step of the Markov chain, we picked the cluster with mean closest to the mean of one of the true Gaussian mixture components, and tracked how its timespan evolved. Figure 5 shows that without MH proposal 2, the clusters remain essentially frozen in their initial regions.

NIPS dataset. For our next experiment we modelled the proceedings of the first 13 years of NIPS. The number of word tokens was about 2 million, spread over 1740 documents with about 13000 unique words. We used a model that involves both the SNΓP (to capture changes in topic distributions across the years) and the hierarchical Dirichlet process (HDP) [3] (to capture differences among documents). Each document is modelled using a different DP, with the DPs in year i sharing the same base distribution D_i. On top of this, we place a SNΓP (with structure given by the second example in Section 3.1) prior on {D_i}_{i=1}^{13}. Consequently, each topic is associated with a distribution over words, and has a particular timespan. Each document in year i is a mixture over the topics whose timespans include year i. Our model allows statistical strength to be shared in a more refined manner than the HDP: instead of all DPs having the same base distribution, we have 13 dependent base distributions drawn from the SNΓP. The concentration parameters of our DPs were chosen to encourage shared topics, their magnitude chosen to produce about 100 topics over the whole corpus on average.
Figure 6 shows some of the topics identified by the model and their timespans. For inference, we used Gibbs sampling, interleaved with all three MH proposals to update the SNΓP. The Markov chain was initialized randomly, except that all clusters were assigned to the top-most region (spanning the 13 years). We calculated per-word perplexity [3] on test documents (about half of all documents, withheld during training). We obtained an average perplexity of 3023.4, as opposed to about 3046.5 for the HDP.

Figure 6: Inferred topics with their timespans (the horizontal lines). In parentheses are the number of words assigned to each topic. On the right are the most probable words in the topics:
Topic A: function, model, data, error, learning, probability, distribution
Topic B: model, visual, figure, image, motion, object, field
Topic C: network, memory, neural, state, input, matrix, hopfield
Topic D: rules, rule, language, tree, representations, stress, grammar
Topic E: classifier, genetic, memory, classification, tree, algorithm, data
Topic F: map, brain, fish, electric, retinal, eye, tectal
Topic G: recurrent, time, context, sequence, gamma, tdnn, sequences
Topic H: chain, protein, region, mouse, human, markov, sequence
Topic I: routing, load, projection, forecasting, shortest, demand, packet

Computationally, the 3 MH steps are much cheaper than a round of Gibbs sampling. When trying to split a large cluster (or merge 2 large clusters), MH proposal 1 can still be fairly expensive because of the rounds of restricted Gibbs sampling. MH proposal 3 does not face this problem. However, we find that after the burn-in period it tends to have a low acceptance rate.
We believe we need to redesign MH proposal 3 to produce more intelligent splits that increase the acceptance rate. Finally, MH proposal 2 is the cheapest, both in terms of computation and book-keeping, and has a reasonably high acceptance rate. We ran MH proposal 2 a hundred times between successive Gibbs sampling updates. The acceptance rates of the MH proposals (given in Figure 4) are slightly lower than those reported by [12], where a plain DP mixture model was applied to a simple synthetic data set, and where split/merge acceptance rates were on the order of 1 to 5 percent.

6 Discussion

We described a conceptually simple and elegant framework for the construction of dependent DPs based on normalized gamma processes. The resulting collection of random probability measures has a number of useful properties: the marginal distributions are DPs, and the weights of shared atoms can vary across DPs. We developed auxiliary-variable Gibbs and Metropolis-Hastings samplers for the model and applied it to time-varying topic modelling where each topic has its own time-span. Since [6] there has been strong interest in building dependent sets of random measures. Interestingly, the property of each random measure being marginally DP, as originally proposed by [6], is often not met in the literature, where dependent stochastic processes are defined through shared and random parameters [3, 14, 15, 11]. Useful dependent DPs had not been found [16] until recently, when a flurry of models was proposed [17, 18, 19, 20, 21, 22, 23]. However, most of these proposals have been defined only for the real line (interpreted as the time line) and not for arbitrary spaces. [24, 25, 26, 13] proposed a variety of spatial DPs where the atoms and weights of the DPs are dependent through Gaussian processes. A model similar to ours was proposed recently in [23], using the same basic idea of introducing dependencies between DPs through spatially overlapping regions.
This model differs from ours in the content of these shared regions (breaks of a stick in that case vs a (restricted) Gamma process in ours) and in the construction of the DPs (they use the stick-breaking construction of the DP, while we normalize the restricted Gamma process). Consequently, the nature of the dependencies between the DPs differs; for instance, their model cannot be interpreted as a mixture of DPs like ours. There are a number of interesting future directions. First, we can allow, at additional complexity, the locations of atoms to vary using the spatial DP approach [13]. Second, more work still needs to be done to improve inference in the model, e.g. using a more intelligent MH proposal 3. Third, although we have only described spatial normalized gamma processes, it should be straightforward to extend the approach to spatial normalized random measures [7, 8]. Finally, further investigations into the properties of the SNΓP and its generalizations, including the nature of the dependency between DPs and asymptotic behavior, are necessary for a complete understanding of these processes.

References

[1] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230, 1973.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, 2002.
[3] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[4] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 21, 2006.
[5] M. Johnson, T. L. Griffiths, and S. Goldwater. Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models. In Advances in Neural Information Processing Systems, volume 19, 2007.
[6] S.
MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science. American Statistical Association, 1999.
[7] L. E. Nieto-Barajas, I. Pruenster, and S. G. Walker. Normalized random measures driven by increasing additive processes. Annals of Statistics, 32(6):2343–2360, 2004.
[8] L. F. James, A. Lijoi, and I. Pruenster. Bayesian inference via classes of normalized random measures. ICER Working Papers - Applied Mathematics Series 5-2005, ICER - International Centre for Economic Research, April 2005.
[9] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[10] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[11] P. Müller, F. A. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society, 66:735–749, 2004.
[12] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Technical report, Department of Statistics, University of Toronto, 2004.
[13] J. A. Duan, M. Guindani, and A. E. Gelfand. Generalized spatial Dirichlet process models. Biometrika, 94(4):809–825, 2007.
[14] A. Rodríguez, D. B. Dunson, and A. E. Gelfand. The nested Dirichlet process. Technical Report 2006-19, Institute of Statistics and Decision Sciences, Duke University, 2006.
[15] D. B. Dunson, Y. Xue, and L. Carin. The matrix stick-breaking process: Flexible Bayes meta analysis. Technical Report 07-03, Institute of Statistics and Decision Sciences, Duke University, 2007. http://ftp.isds.duke.edu/WorkingPapers/07-03.html.
[16] N. Srebro and S. Roweis. Time-varying topic models using dependent Dirichlet processes. Technical Report UTML-TR-2005-003, Department of Computer Science, University of Toronto, 2005.
[17] J. E. Griffin and M. F. J. Steel. Order-based dependent Dirichlet processes.
Journal of the American Statistical Association, Theory and Methods, 101:179–194, 2006.
[18] J. E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for Bayesian nonparametric inference. Technical report, Department of Statistics, University of Warwick, 2007.
[19] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 23, 2007.
[20] A. Ahmed and E. P. Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process. In Proceedings of The Eighth SIAM International Conference on Data Mining, 2008.
[21] J. E. Griffin and M. F. J. Steel. Bayesian nonparametric modelling with the Dirichlet process regression smoother. Technical report, University of Kent and University of Warwick, 2008.
[22] J. E. Griffin and M. F. J. Steel. Generalized spatial Dirichlet process models. Technical report, University of Kent and University of Warwick, 2009.
[23] Y. Chung and D. B. Dunson. The local Dirichlet process. Annals of the Institute of Statistical Mathematics, 2009. To appear.
[24] S. N. MacEachern, A. Kottas, and A. E. Gelfand. Spatial nonparametric Bayesian models. In Proceedings of the 2001 Joint Statistical Meetings, 2001.
[25] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems, volume 14, 2002.
[26] A. E. Gelfand, A. Kottas, and S. N. MacEachern. Bayesian nonparametric spatial modeling with Dirichlet process mixing. Journal of the American Statistical Association, 100(471):1021–1035, 2005.
2009
Learning to Hash with Binary Reconstructive Embeddings

Brian Kulis and Trevor Darrell
UC Berkeley EECS and ICSI
Berkeley, CA
{kulis,trevor}@eecs.berkeley.edu

Abstract

Fast retrieval methods are increasingly critical for many large-scale analysis tasks, and there have been several recent methods that attempt to learn hash functions for fast and accurate nearest neighbor searches. In this paper, we develop an algorithm for learning hash functions based on explicitly minimizing the reconstruction error between the original distances and the Hamming distances of the corresponding binary embeddings. We develop a scalable coordinate-descent algorithm for our proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art techniques.

1 Introduction

Algorithms for fast indexing and search have become important for a variety of problems, particularly in the domains of computer vision, text mining, and web databases. In cases where the amount of data is huge—large image repositories, video sequences, and others—having fast techniques for finding nearest neighbors to a query is essential. At an abstract level, we may view hashing methods for similarity search as mapping input data (which may be arbitrarily high-dimensional) to a low-dimensional binary (Hamming) space.
Unlike standard dimensionality-reduction techniques from machine learning, the fact that the embeddings are binary is critical to ensure fast retrieval times—one can perform efficient linear scans of the binary data to find the exact nearest neighbors in the Hamming space, or one can use data structures for finding approximate nearest neighbors in the Hamming space which have running times that are sublinear in the number of total objects [1, 2]. Since the Hamming distance between two objects can be computed via an xor operation and a bit count, even a linear scan in the Hamming space for a nearest neighbor to a query in a database of 100 million objects can currently be performed within a few seconds on a typical workstation. If the input dimensionality is very high, hashing methods lead to enormous computational savings. In order to be successful, hashing techniques must appropriately preserve distances when mapping to the Hamming space. One of the basic but most widely-employed methods, locality-sensitive hashing (LSH) [1, 2], generates embeddings via random projections and has been used for many large-scale search tasks. An advantage to this technique is that the random projections provably maintain the input distances in the limit as the number of hash bits increases; at the same time, it has been observed that the number of hash bits required may be large in some cases to faithfully maintain the distances. On the other hand, several recent techniques—most notably semantic hashing [3] and spectral hashing [4]—attempt to overcome this problem by designing hashing techniques that leverage machine learning to find appropriate hash functions to optimize an underlying hashing objective. Both methods have shown advantages over LSH in terms of the number of bits required to find good approximate nearest neighbors. However, these methods cannot be directly applied in kernel space and have assumptions about the underlying distributions of the data.
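Both ideas above — random-projection hashing and xor-plus-bit-count Hamming distance — fit in a few lines. The dimensions below are toy values of our own choosing:

```python
import numpy as np

def lsh_codes(X, num_bits, rng):
    """Random-hyperplane LSH: bit p of x is sign(r_p^T x) for a Gaussian
    random vector r_p; bits are packed into one Python int per point."""
    R = rng.standard_normal((X.shape[1], num_bits))
    bits = (X @ R > 0).astype(np.uint8)
    return [int("".join(map(str, row)), 2) for row in bits]

def hamming(a, b):
    """Hamming distance via xor and a bit count."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))        # 5 toy points in 16 dimensions
codes = lsh_codes(X, num_bits=32, rng=rng)
d01 = hamming(codes[0], codes[1])
```

Packing the bits into machine words is what makes the linear scan mentioned above so fast: each comparison is a handful of xor and popcount instructions regardless of the input dimensionality.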
In particular, as noted by the authors, spectral hashing assumes a uniform distribution over the data, a potentially restrictive assumption in some cases. In this paper, we introduce and analyze a simple objective for learning hash functions, develop an efficient coordinate-descent algorithm, and demonstrate that the proposed approach leads to improved results as compared to existing hashing techniques. The main idea is to construct hash functions that explicitly preserve the input distances when mapping to the Hamming space. To achieve this, we minimize a squared loss over the error between the input distances and the reconstructed Hamming distances. By analyzing the reconstruction objective, we show how to efficiently and exactly minimize the objective function with respect to a single variable. If there are n training points, k nearest neighbors per point in the training data, and b bits in our desired hash table, our method ends up costing O(nb(k + log n)) time per iteration to update all hash functions, and provably reaches a local optimum of the reconstruction objective. In experiments, we compare against relevant existing hashing techniques on a variety of important vision data sets, and show that our method is able to compete with or outperform state-of-the-art hashing algorithms on these data sets. We also apply our method on the very large Tiny Image data set of 80 million images [5], to qualitatively show some example retrieval results obtained by our proposed method.

1.1 Related Work

Methods for fast nearest neighbor retrieval are generally broken down into two families. One group partitions the data space recursively, and includes algorithms such as k-d trees [6], M-trees [7], cover trees [8], metric trees [9], and other related techniques. These methods attempt to speed up nearest neighbor computation, but can degenerate to a linear scan in the worst case.
Our focus in this paper is on hashing-based methods, which map the data to a low-dimensional Hamming space. Locality-sensitive hashing [1, 2] is the most popular method, and extensions have been explored for accommodating distances such as ℓp norms [10], learned metrics [11], and image kernels [12]. Algorithms based on LSH typically come with guarantees that the approximate nearest neighbors (neighbors within (1 + ε) times the true nearest neighbor distance) may be found in time that is sublinear in the total number of database objects (but as a function of ε). Unlike standard dimensionality-reduction techniques, the binary embeddings allow for extremely fast similarity search operations. Several recent methods have explored ways to improve upon the random projection techniques used in LSH. These include semantic hashing [3], spectral hashing [4], parameter-sensitive hashing [13], and boosting-based hashing methods [14].

2 Hashing Formulation

In the following section, we describe our proposed method, starting with the choice of parameterization for the hash functions and the objective function to minimize. We then develop a coordinate-descent algorithm used to minimize the objective function, and discuss extensions of the proposed approach.

2.1 Setup

Let our data set be represented by a set of n vectors, given by X = [x_1 x_2 ... x_n]. We will assume that these vectors are normalized to have unit ℓ2 norm—this will make it easier to maintain the proper scale for comparing distances in the input space to distances in the Hamming space.¹ Let a kernel function over the data be denoted as κ(x_i, x_j). We use a kernel function as opposed to the standard inner product to emphasize that the algorithm can be expressed purely in kernel form. We would like to project each data point to a low-dimensional binary space to take advantage of fast nearest neighbor routines.
Suppose that the desired number of dimensions of the binary space is b; we will compute the b-dimensional binary embedding by projecting our data using a set of b hash functions h_1, ..., h_b. Each hash function h_i is a binary-valued function, and our low-dimensional binary reconstruction can be represented as x̃_i = [h_1(x_i); h_2(x_i); ...; h_b(x_i)]. Finally, denote d(x_i, x_j) = \frac{1}{2}\|x_i - x_j\|^2 and \tilde{d}(x_i, x_j) = \frac{1}{b}\|\tilde{x}_i - \tilde{x}_j\|^2. Notice that d and d̃ are always between 0 and 1.

Footnote 1: Alternatively, we may scale the data appropriately by a constant so that the squared Euclidean distances \frac{1}{2}\|x_i - x_j\|^2 are in [0, 1].

2.2 Parameterization and Objective

In standard random hyperplane locality-sensitive hashing (e.g. [1]), each hash function h_p is generated independently by selecting a random vector r_p from a multivariate Gaussian with zero mean and identity covariance. Then the hash function is given as h_p(x) = sign(r_p^T x). In contrast, we propose to generate a sequence of hash functions that are dependent on one another, in the same spirit as in spectral hashing (though with a different parameterization). We introduce a matrix W of size b × s, and we parameterize the hash functions h_1, ..., h_p, ..., h_b as follows:

h_p(x) = \mathrm{sign}\left( \sum_{q=1}^{s} W_{pq} \, \kappa(x_{pq}, x) \right).

Note that the data points x_{pq} for each hash function need not be the same (that is, each hash function may utilize different sets of points). Similarly, the number of points s used for each hash function may change, though for simplicity we will present the case when s is the same for each function (and so we can represent all weights via the b × s matrix W). Though we are not aware of any existing methods that parameterize the hash functions in this way, this parameterization is natural for several reasons. It does not explicitly assume anything about the distribution of the data. It is expressed in kernelized form, meaning we can easily work over a variety of input data.
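A minimal numpy sketch of this parameterization — bit p of x is sign(Σ_q W[p, q] · κ(x_pq, x)) — using an RBF kernel and anchor points shared across all bits, both of which are our own illustrative choices:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """kappa(a, b) = exp(-gamma * ||a - b||^2) for all pairs of rows."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def hash_codes(X, anchors, W):
    """b-bit codes from h_p(x) = sign(sum_q W[p, q] * kappa(anchor_q, x)).
    W is b x s; anchors is s x d; bits are encoded as {0, 1}."""
    K = rbf_kernel(X, anchors)           # n x s kernel values
    return (K @ W.T > 0).astype(np.uint8)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 4))
anchors = X[:3]                          # s = 3 anchors, shared by all bits here
W = rng.standard_normal((8, 3))          # b = 8 hash functions
B = hash_codes(X, anchors, W)
```

Because everything goes through the kernel matrix K, swapping in a different kernel changes nothing else in the pipeline, which is the point of the kernelized form.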
Furthermore, the form of each hash function—the sign of a linear combination of kernel function values—is the same as in several kernel-based learning algorithms such as support vector machines. Rather than simply choosing the matrix W based on random hyperplanes, we will specifically construct this matrix to achieve good reconstructions. In particular, we will look at the squared error between the original distances (using d) and the reconstructed distances (using d̃). We minimize the following objective with respect to the weight matrix W:

O(\{x_i\}_{i=1}^{n}, W) = \sum_{(i,j) \in N} \left( d(x_i, x_j) - \tilde{d}(x_i, x_j) \right)^2.    (1)

The set N is a selection of pairs of points, and can be chosen based on the application. Typically, we will choose this to be a set of pairs which includes both the nearest neighbors as well as other pairs from the database (see Section 3 for details). If we choose k pairs for each point, then the total size of N will be nk.

2.3 Coordinate-Descent Algorithm

The objective O given in (1) is highly non-convex in W, making optimization the main challenge in using the proposed objective for hashing. One of the most difficult issues is due to the fact that the reconstructions are binary; the objective is not continuous or differentiable, so it is not immediately clear how an effective algorithm would proceed. One approach is to replace the sign function by the sigmoid function, as is done with neural networks and logistic regression.² Then the objective O and gradient ∇O can both be computed in O(nkb) time. However, our experience with minimizing O with such an approach using a quasi-Newton L-BFGS algorithm typically resulted in poor local optima; we need an alternative method. Instead of the continuous relaxation, we will consider fixing all but one weight W_pq, and optimizing the original objective O with respect to W_pq. Surprisingly, we will show below that an exact, optimal update to this weight can be achieved in time O(n log n + nk).
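Objective (1) is straightforward to evaluate directly. For reference, a minimal numpy sketch using unit-norm toy data, random bits, and a hand-picked pair set N, all of our own choosing:

```python
import numpy as np

def reconstruction_objective(X, B, pairs):
    """Sum over (i, j) in N of (d(x_i, x_j) - d~(x_i, x_j))^2, where
    d = 0.5 * ||x_i - x_j||^2 (unit-norm data) and d~ is Hamming distance / b."""
    b = B.shape[1]
    total = 0.0
    for i, j in pairs:
        d = 0.5 * float(np.sum((X[i] - X[j]) ** 2))
        d_tilde = float(np.sum(B[i] != B[j])) / b
        total += (d - d_tilde) ** 2
    return total

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit l2 norm, as assumed above
B = rng.integers(0, 2, size=(6, 8)).astype(np.uint8)
pairs = [(0, 1), (2, 3), (4, 5)]
obj = reconstruction_objective(X, B, pairs)
```

With unit-norm data both d and d̃ lie in [0, 1], so the two distance scales are directly comparable, which is exactly why the normalization is imposed.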
Such an approach will update a single hash function h_p; then, by choosing a single weight to update for each hash function, we can update all hash functions in O(nb(k + log n)) time. In particular, if k = Ω(log n), then we can update all hash functions on the order of the time it takes to compute the objective function itself, making the updates particularly efficient. We will also show that this method provably converges to a local optimum of the objective function O.

Footnote 2: The sigmoid function is defined as s(x) = 1/(1 + e^{−x}), and its derivative is s′(x) = s(x)(1 − s(x)).

We sketch out the details of our coordinate-descent scheme below. We begin with a simple lemma characterizing how the objective function changes when we update a single hash function.

Lemma 1. Let D̄_{ij} = d(x_i, x_j) − d̃(x_i, x_j). Consider updating some hash function h_old to h_new (where d̃ uses h_old), and let h_o and h_n be the n × 1 vectors obtained by applying the old and new hash functions to each data point, respectively. Then the objective function O from (1) after updating the hash function can be expressed as

O = \sum_{(i,j) \in N} \left( \bar{D}_{ij} + \frac{1}{b}(h_o(i) - h_o(j))^2 - \frac{1}{b}(h_n(i) - h_n(j))^2 \right)^2.

Proof. For notational convenience in this proof, let D̃_old and D̃_new be the matrices of reconstructed distances using h_old and h_new, respectively, and let H_old and H_new be the n × b matrices of old and new hash bits, respectively. Also, let e_t be the t-th standard basis vector and e be a vector of all ones. Note that H_new = H_old + (h_n − h_o)e_t^T, where t is the index of the hash function being updated. We can express D̃_old as

\tilde{D}_{old} = \frac{1}{b}\left( \ell_{old} e^T + e \ell_{old}^T - 2 H_{old} H_{old}^T \right),

where ℓ_old is the vector of squared norms of the rows of H_old. Note that the corresponding vector of squared norms of the rows of H_new may be expressed as ℓ_new = ℓ_old − h_o + h_n, since the hash vectors are binary-valued.
Therefore we may write

\tilde{D}_{new} = \frac{1}{b}\Big( (\ell_{old} + h_n - h_o)e^T + e(\ell_{old} + h_n - h_o)^T - 2\big(H_{old} + (h_n - h_o)e_t^T\big)\big(H_{old} + (h_n - h_o)e_t^T\big)^T \Big)
             = \tilde{D}_{old} + \frac{1}{b}\Big( (h_n - h_o)e^T + e(h_n - h_o)^T - 2(h_n h_n^T - h_o h_o^T) \Big)
             = \tilde{D}_{old} - \frac{1}{b}\Big( (h_o e^T + e h_o^T - 2 h_o h_o^T) - (h_n e^T + e h_n^T - 2 h_n h_n^T) \Big),

where we have used the fact that H_{old} e_t = h_o. We can then write the objective using D̃_new to obtain

O = \sum_{(i,j) \in N} \left( \bar{D}_{ij} + \frac{1}{b}\big(h_o(i) + h_o(j) - 2h_o(i)h_o(j)\big) - \frac{1}{b}\big(h_n(i) + h_n(j) - 2h_n(i)h_n(j)\big) \right)^2
  = \sum_{(i,j) \in N} \left( \bar{D}_{ij} + \frac{1}{b}(h_o(i) - h_o(j))^2 - \frac{1}{b}(h_n(i) - h_n(j))^2 \right)^2,

since h_o(i)^2 = h_o(i) and h_n(i)^2 = h_n(i). This completes the proof.

The lemma above demonstrates that, when updating a hash function, the new objective function can be computed in O(nk) time, assuming that we have computed and stored the values of D̄_{ij}. Next we show that we can compute an optimal weight update in time O(nk + n log n). Consider choosing some hash function h_p, and choose one weight index q; i.e., fix all entries of W except W_pq, which corresponds to the one weight updated during this iteration of coordinate descent. Modifying the value of W_pq results in updating h_p to a new hashing function h_new. Now, for every point x, there is a hashing threshold: a new value of W_pq, which we will call Ŵ_pq, such that

\sum_{q=1}^{s} \hat{W}_{pq} \, \kappa(x_{pq}, x) = 0.

Observe that, if c_x = \sum_{q=1}^{s} W_{pq} \kappa(x_{pq}, x), then the threshold t_x is given by

t_x = W_{pq} - \frac{c_x}{\kappa(x_{pq}, x)}.

We first compute the thresholds for all n data points: once we have the values of c_x for all x, computing t_x for all points requires O(n) time. Since we are updating a single W_pq per iteration, we can update the values of c_x in O(n) time after updating W_pq, so the total time to compute all thresholds t_x is O(n). Next, we sort the thresholds in increasing order, which defines a set of n + 1 intervals (interval 0 is the interval of values smaller than the first threshold, interval 1 is the interval of points between the first and the second threshold, and so on).
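A naive version of this threshold-and-scan update can be sketched as follows. Unlike the O(nk + n log n) scheme developed here, it re-evaluates a caller-supplied objective from scratch in every interval, but it makes the mechanics concrete (all names and the toy bit-matching objective are ours; kernel values are assumed nonzero):

```python
import numpy as np

def best_weight_update(K_q, c, W_pq, objective_of_bits):
    """Exact 1-D update of a single weight W_pq via a naive interval scan.
    K_q: kernel values kappa(x_pq, x) for all n points (assumed nonzero).
    c:   current activations sum_q' W_pq' * kappa(x_pq', x) for all n points.
    objective_of_bits: evaluates the objective for a candidate bit vector."""
    thresholds = W_pq - c / K_q                       # t_x: where each bit flips
    order = np.sort(thresholds)
    # One candidate weight per interval: below the smallest threshold,
    # between consecutive thresholds, and above the largest (n + 1 in total).
    candidates = np.concatenate(([order[0] - 1.0],
                                 (order[:-1] + order[1:]) / 2.0,
                                 [order[-1] + 1.0]))
    best_val, best_w = None, None
    for w in candidates:
        bits = ((c + (w - W_pq) * K_q) > 0).astype(np.uint8)
        val = objective_of_bits(bits)
        if best_val is None or val < best_val:
            best_val, best_w = val, float(w)
    return best_w

rng = np.random.default_rng(0)
K_q = rng.uniform(0.5, 1.5, 6)
c = rng.standard_normal(6)
target = rng.integers(0, 2, 6).astype(np.uint8)
objective = lambda bits: int(np.sum(bits != target))
w_star = best_weight_update(K_q, c, 0.0, objective)
```

Because the candidate set covers every interval, the chosen w_star is at least as good as keeping W_pq unchanged.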
Observe that, for any fixed interval, the new computed hash function h_new does not change over the entire interval. Furthermore, observe that as we cross from one threshold to the next, a single bit of the corresponding hash vector flips. As a result, we need only compute the objective function at each of the n + 1 intervals, and choose the interval that minimizes the objective function. We choose a value W_pq within that interval (which will be optimal) and update the hash function using this new choice of weight. The following result shows that we can choose the appropriate interval in time O(nk). When we add the cost of sorting the thresholds, the total cost of an update to a single weight W_pq is O(nk + n log n).

Lemma 2. Consider updating a single hash function. Suppose we have a sequence of hash vectors h_{t_0}, ..., h_{t_n} such that h_{t_{j−1}} and h_{t_j} differ by a single bit for 1 ≤ j ≤ n. Then the objective functions for all n + 1 hash functions can be computed in O(nk) time.

Proof. The objective function may be computed in O(nk) time for the hash function h_{t_0} corresponding to the smallest interval. Consider the case when going from h_o = h_{t_{j−1}} to h_n = h_{t_j} for some 1 ≤ j ≤ n. Let the index of the bit that changes in h_n be a. The only terms of the sum in the objective that change are ones of the form (a, j) ∈ N and (i, a) ∈ N. Let f_a = 1 if h_o(a) = 0 and h_n(a) = 1, and f_a = −1 otherwise. Then we can simplify (h_n(i) − h_n(j))^2 − (h_o(i) − h_o(j))^2 to f_a(1 − 2h_n(j)) when a = i, and to f_a(1 − 2h_n(i)) when a = j (the expression is zero when i = j and will not contribute to the objective). Therefore the relevant terms in the objective function as given in Lemma 1 may be written as

\sum_{(a,j) \in N} \left( \bar{D}_{aj} - \frac{f_a}{b}(1 - 2h_n(j)) \right)^2 + \sum_{(i,a) \in N} \left( \bar{D}_{ia} - \frac{f_a}{b}(1 - 2h_n(i)) \right)^2.

As there are k nearest neighbors, the first sum will have k elements and can be computed in O(k) time. The second summation may have more or fewer than k terms, but across all data points there will be k terms on average.
Furthermore, we must update $\bar{D}$ as we progress through the hash functions, which can also be done straightforwardly in $O(k)$ time on average. Completing this process over all $n+1$ hash functions takes $O(nk)$ time in total.
Putting everything together, we have shown the following result:
Theorem 3. Fix all but one entry $W_{pq}$ of the hashing weight matrix $W$. An optimal update to $W_{pq}$ minimizing (1) may be computed in $O(nk + n\log n)$ time.
Our overall strategy cycles through the hash functions one by one, randomly selects a weight to update for each hash function, and computes the optimal update for that weight, repeating this process until local convergence. One full iteration over all hash functions requires $O(nb(k + \log n))$ time. Note that local convergence is guaranteed in a finite number of updates, since no update ever increases the objective function value and there are only finitely many possible hash configurations.
2.4 Extensions
The method described in the previous section may be enhanced in various ways. For instance, the algorithm we developed is completely unsupervised; one could easily extend it to a supervised setting, which would be useful, for example, in large-scale k-NN classification tasks. In this scenario, one would additionally receive a set of similar and dissimilar pairs of points based on class labels or other background knowledge. For all similar pairs, one could set the target original distance to zero, and for all dissimilar pairs, one could set the target original distance to some large value (say, 1). One may also consider loss functions other than the quadratic loss considered in this paper. Another option would be an ℓ1-type loss, which would not penalize outliers as severely. Additionally, one may want to introduce regularization, especially in the supervised case.
For example, the addition of ℓ1 regularization over the entries of $W$ could lead to sparse hash functions, and may be worth additional study.
3 Experiments
We now present results comparing our proposed approach to the relevant existing methods: locality-sensitive hashing, semantic hashing (RBM), and spectral hashing. We also compared against the Boosting SSC algorithm [14], but were unable to find parameters that yielded competitive performance, so we do not present those results here. We implemented our binary reconstructive embedding method (BRE) and LSH, and used the same code for spectral hashing and RBM that was employed in [4]. We also present some qualitative results over the Tiny Image data set to show example retrieval results obtained by our method.
3.1 Data Sets and Methodology
We applied the hashing algorithms to a number of important large-scale data sets from the computer vision community. Our vision data sets include: the Photo Tourism data [15], a collection of approximately 300,000 image patches, processed using SIFT to form 128-dimensional vectors; Caltech-101 [16], a standard benchmark for object recognition in the vision community; and LabelMe and Peekaboom [17], two image data sets on top of which global Gist descriptors have been extracted. We also applied our method to MNIST, the standard handwritten digits data set, and Nursery, one of the larger UCI data sets. We mean-centered the data and normalized the feature vectors to unit norm. Following the suggestion in [4], we apply PCA (or kernel PCA in the case of kernelized data) to the input data before applying spectral hashing or BRE; the results of RBM and LSH were better without PCA, so PCA is not applied for those algorithms. For all data sets, we trained the methods using 1000 randomly selected data points.
For training the BRE method, we select nearest neighbors using the top 5th percentile of the training distances and set the target distances to 0; we found that this ensures that nearest neighbors in the embedded space will have Hamming distance very close to 0. We also choose farthest neighbors using the 98th percentile of the training distances and maintain their original distances as target distances. Having both near and far neighbors improves performance for BRE, as it prevents a trivial solution in which all database objects are given the same hash key. The spectral hashing and RBM parameters are set as in [4, 17]. After constructing the hash functions for each method, we randomly generate 3000 hashing queries (except for Caltech-101, which has fewer than 4000 data points; in this case we use the remainder of the data as queries). We follow the evaluation scheme developed in [4]. We collect training/test pairs whose unnormalized Hamming distance under the constructed hash functions is less than or equal to three. We then compute the percentage of these pairs that are nearest neighbors in the original data space, defined as pairs of points from the training set whose distances are in the top 5th percentile. This percentage is plotted as the number of bits increases. Once the number of bits is sufficiently high (e.g. 50), one would expect that pairs with Hamming distance at most three correspond to nearest neighbors in the original data embedding.
3.2 Quantitative Results
In Figure 1, we plot hashing retrieval results over each of the data sets. We can see that the BRE method performs comparably to or outperforms the other methods on all data sets. Observe that both RBM and spectral hashing underperform all other methods on at least one data set.
Figure 1: Results over Photo Tourism, Caltech-101, LabelMe, Peekaboom, MNIST, and Nursery. Each plot shows, as the number of bits increases, how well the nearest neighbors in the Hamming space (pairs of data points with unnormalized Hamming distance less than or equal to 3) correspond to the nearest neighbors (top 5th percentile of distances) in the original data set. Overall, our method outperforms, or performs comparably to, existing methods. See text for further details.
On some data sets, RBM appears to require significantly more than 1000 training images to achieve good performance, and in those cases its training time is substantially higher than that of the other methods. One surprising outcome of these results is that LSH performs well in comparison to the other existing methods (and outperforms some of them on some data sets); this stands in contrast to the results of [4], where LSH showed significantly poorer performance (we also evaluated our LSH implementation using the same training/test split as in [4] and found similar results). The better performance in our tests may be due to our implementation of LSH: we use Charikar's random projection method [1] to construct hash tables.
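The evaluation protocol above (the fraction of Hamming-distance-at-most-3 pairs that fall in the top 5th percentile of true distances) can be sketched as follows. The data and codes here are synthetic random-projection stand-ins of our own, not learned BRE codes:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, b = 200, 16, 8
X = rng.normal(size=(n, d))
codes = (X @ rng.normal(size=(d, b)) > 0).astype(int)  # stand-in binary codes

D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # true distances
H = (codes[:, None, :] != codes[None, :, :]).sum(axis=2)   # Hamming distances

iu = np.triu_indices(n, k=1)                  # each unordered pair once
good = D[iu] <= np.percentile(D[iu], 5)       # true top-5th-percentile neighbors
retrieved = H[iu] <= 3                        # unnormalized Hamming distance <= 3
precision = float(good[retrieved].mean())     # the quantity plotted in Figure 1
```

Even these simple projection codes push the precision above the 5% base rate of a random pairing; the learned codes in the paper do far better.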
In terms of training time, the BRE method typically converges in 50–100 iterations of updating all hash functions, and takes 1–5 minutes to train per data set on our machines (depending on the number of bits requested). Relatively speaking, training is typically faster than RBM but slower than spectral hashing and LSH. Search times in the binary space are uniform across the methods, and our timing results are similar to those reported previously (see, e.g., [17]).
3.3 Qualitative Results
Finally, we present qualitative results on the large Tiny Image data set [5] to demonstrate our method applied to a very large database. This data set contains 80 million images and is one of the largest readily available data sets for content-based image retrieval. Each image is stored as 32 × 32 pixels, and we employ the global Gist descriptors that have been extracted for each image. We ran our reconstructive hashing algorithm on the Gist descriptors for the Tiny Image data set using 50 bits, with 1000 training images used to construct the hash functions as before. We selected a random set of queries from the database and compared the results of a linear scan over the Gist features with the hashing results over the Gist features. When obtaining hashing results, we collected the nearest neighbors in the Hamming space to the query (the top 0.01% of the Hamming distances), and then sorted these by their distance in the original Gist space. Some example results are displayed in Figure 2; we see that, with 50 bits, we can obtain very good results that are qualitatively similar to the results of the linear scan.
Figure 2: Qualitative results over the 80 million images in the Tiny Image database [5]. For each group of images, the top left image is the query, the top row corresponds to a linear scan, and the second row corresponds to the hashing retrieval results using 50 hash bits.
The hashing results are similar to the linear scan results but are significantly faster to obtain.
4 Conclusion and Future Work
In this paper, we presented a method for learning hash functions, developed an efficient coordinate-descent algorithm for finding a local optimum, and demonstrated improved performance on several benchmark vision data sets as compared to existing state-of-the-art hashing algorithms. One avenue for future work is to explore alternate methods of optimization; our approach, while simple and fast, may fall into poor local optima in some cases. Second, we would like to explore the use of our algorithm in the supervised setting for large-scale k-NN tasks.
Acknowledgments
This work was supported in part by DARPA, Google, and NSF grants IIS-0905647 and IIS-0819984. We thank Rob Fergus for the spectral hashing and RBM code, and Greg Shakhnarovich for the Boosting SSC code.
References
[1] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In STOC, 2002.
[2] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In STOC, 1998.
[3] R. R. Salakhutdinov and G. E. Hinton. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In AISTATS, 2007.
[4] Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. In NIPS, 2008.
[5] A. Torralba, R. Fergus, and W. T. Freeman. 80 Million Tiny Images: A Large Dataset for Non-parametric Object and Scene Recognition. TPAMI, 30(11):1958–1970, 2008.
[6] J. Friedman, J. Bentley, and A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.
[7] P. Ciaccia, M. Patella, and P. Zezula. M-tree: An Efficient Access Method for Similarity Search in Metric Spaces. In VLDB, 1997.
[8] A. Beygelzimer, S. Kakade, and J. Langford. Cover Trees for Nearest Neighbor. In ICML, 2006.
[9] J. Uhlmann.
Satisfying General Proximity/Similarity Queries with Metric Trees. Information Processing Letters, 40:175–179, 1991.
[10] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable Distributions. In SOCG, 2004.
[11] P. Jain, B. Kulis, and K. Grauman. Fast Image Search for Learned Metrics. In CVPR, 2008.
[12] K. Grauman and T. Darrell. Pyramid Match Hashing: Sub-Linear Time Indexing Over Partial Correspondences. In CVPR, 2007.
[13] G. Shakhnarovich, P. Viola, and T. Darrell. Fast Pose Estimation with Parameter-Sensitive Hashing. In ICCV, 2003.
[14] G. Shakhnarovich. Learning Task-specific Similarity. PhD thesis, MIT, 2006.
[15] N. Snavely, S. Seitz, and R. Szeliski. Photo Tourism: Exploring Photo Collections in 3D. In SIGGRAPH Conference Proceedings, pages 835–846, New York, NY, USA, 2006. ACM Press.
[16] L. Fei-Fei, R. Fergus, and P. Perona. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories. In Workshop on Generative Model Based Vision, Washington, D.C., June 2004.
[17] A. Torralba, R. Fergus, and Y. Weiss. Small Codes and Large Databases for Recognition. In CVPR, 2008.
Multi-Label Prediction via Compressed Sensing
Daniel Hsu (UC San Diego, djhsu@cs.ucsd.edu), Sham M. Kakade (TTI-Chicago, sham@tti-c.org), John Langford (Yahoo! Research, jl@hunch.net), Tong Zhang (Rutgers University, tongz@rci.rutgers.edu)
Abstract
We consider multi-label prediction problems with large output spaces under the assumption of output sparsity: that the target (label) vectors have small support. We develop a general theory for a variant of the popular error-correcting output code scheme, using ideas from compressed sensing to exploit this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and also provide a more detailed analysis for the linear prediction setting.
1 Introduction
Suppose we have a large database of images, and we want to learn to predict who or what is in any given one. A standard approach to this task is to collect a sample of these images $x$ along with corresponding labels $y = (y_1, \dots, y_d) \in \{0,1\}^d$, where $y_i = 1$ if and only if person or object $i$ is depicted in image $x$, and then feed the labeled sample to a multi-label learning algorithm. Here, $d$ is the total number of entities depicted in the entire database. When $d$ is very large (e.g. $10^3$, $10^4$), the simple one-against-all approach of learning a single predictor per entity can become prohibitively expensive, both at training and at testing time. Our motivation for the present work comes from the observation that although the output (label) space may be very high-dimensional, the actual labels are often sparse.
That is, in each image, only a small number of entities may be present, and there may be only a small amount of ambiguity in who or what they are. In this work, we consider how this sparsity in the output space, or output sparsity, eases the burden of large-scale multi-label learning.
Exploiting output sparsity. A subtle but critical point that distinguishes output sparsity from more common notions of sparsity (say, in feature or weight vectors) is that we are interested in the sparsity of $\mathbb{E}[y|x]$ rather than of $y$. In general, $\mathbb{E}[y|x]$ may be sparse while the actual outcome $y$ is not (e.g. if there is much unbiased noise); and, vice versa, $y$ may be sparse with probability one while $\mathbb{E}[y|x]$ has large support (e.g. if there is little distinction between several labels). Conventional linear algebra suggests that we must predict $d$ parameters in order to find the value of the $d$-dimensional vector $\mathbb{E}[y|x]$ for each $x$. A crucial observation, central to the area of compressed sensing [1], is that methods exist to recover $\mathbb{E}[y|x]$ from just $O(k \log d)$ measurements when $\mathbb{E}[y|x]$ is $k$-sparse. This is the basis of our approach.
Our contributions. We show how to apply algorithms for compressed sensing to the output coding approach [2]. At a high level, the output coding approach creates a collection of subproblems of the form "Is the label in this subset or its complement?", solves these problems, and then uses their solutions to predict the final label. The role of compressed sensing in our application is distinct from its more conventional uses in data compression. Although we do employ a sensing matrix to compress training data, we are ultimately not interested in explicitly recovering data compressed this way. Rather, we learn to predict compressed label vectors, and then use sparse reconstruction algorithms to recover uncompressed labels from these predictions. Thus we are interested in the reconstruction accuracy of predictions, averaged over the data distribution.
The main contributions of this work are:
1. A formal application of compressed sensing to prediction problems with output sparsity.
2. An efficient output coding method, in which the number of required predictions is only logarithmic in the number of labels $d$, making it applicable to very large-scale problems.
3. Robustness guarantees, in the form of regret transform bounds (in general) and a further detailed analysis for the linear prediction setting.
Prior work. The ubiquity of multi-label prediction problems in domains ranging from multiple object recognition in computer vision to automatic keyword tagging for content databases has spurred the development of numerous general methods for the task. Perhaps the most straightforward approach is the well-known one-against-all reduction [3], but this can be too expensive when the number of possible labels is large (especially if applied to the power set of the label space [4]). When structure can be imposed on the label space (e.g. a class hierarchy), efficient learning and prediction methods are often possible [5, 6, 7, 8, 9]. Here, we focus on a different type of structure, namely output sparsity, which is not addressed in previous work. Moreover, our method is general enough to take advantage of structured notions of sparsity (e.g. group sparsity) when available [10]. Recently, heuristics have been proposed for discovering structure in large output spaces that empirically offer some degree of efficiency [11]. As previously mentioned, our work is most closely related to the class of output coding methods for multi-class prediction, which was first introduced and shown to be useful experimentally in [2]. Relative to this work, we expand the scope of the approach to multi-label prediction and provide bounds on regret and error which guide the design of codes. The loss-based decoding approach [12] suggests decoding so as to minimize loss.
However, it does not provide significant guidance in the choice of encoding method, or the feedback between encoding and decoding, which we analyze here. The output coding approach is inconsistent when classifiers are used and the underlying problems being encoded are noisy. This is proved and analyzed in [13], where it is also shown that using a Hadamard code creates a robust consistent predictor when reduced to binary regression. Compared to this method, our approach achieves the same robustness guarantees up to a constant factor, but requires training and evaluating exponentially (in $d$) fewer predictors. Our algorithms rely on several methods from compressed sensing, which we detail where used.
2 Preliminaries
Let $\mathcal{X}$ be an arbitrary input space and $\mathcal{Y} \subset \mathbb{R}^d$ a $d$-dimensional output (label) space. We assume the data source is defined by a fixed but unknown distribution over $\mathcal{X} \times \mathcal{Y}$. Our goal is to learn a predictor $F : \mathcal{X} \to \mathcal{Y}$ with low expected $\ell_2^2$-error $\mathbb{E}_x \|F(x) - \mathbb{E}[y|x]\|_2^2$ (the sum of mean-squared errors over all labels) using a set of $n$ training data $\{(x_i, y_i)\}_{i=1}^n$. We focus on the regime in which the output space is very high-dimensional ($d$ very large), but for any given $x \in \mathcal{X}$, the expected value $\mathbb{E}[y|x]$ of the corresponding label $y \in \mathcal{Y}$ has only a few non-zero entries. A vector is $k$-sparse if it has at most $k$ non-zero entries.
3 Learning and Prediction
3.1 Learning to Predict Compressed Labels
Let $A : \mathbb{R}^d \to \mathbb{R}^m$ be a linear compression function, where $m \le d$ (but hopefully $m \ll d$). We use $A$ to compress (i.e. reduce the dimension of) the labels $\mathcal{Y}$, and learn a predictor $H : \mathcal{X} \to A(\mathcal{Y})$ of these compressed labels. Since $A$ is linear, we simply represent it as a matrix $A \in \mathbb{R}^{m \times d}$. Specifically, given a sample $\{(x_i, y_i)\}_{i=1}^n$, we form a compressed sample $\{(x_i, Ay_i)\}_{i=1}^n$ and then learn a predictor $H$ of $\mathbb{E}[Ay|x]$ with the objective of minimizing the $\ell_2^2$-error $\mathbb{E}_x\|H(x) - \mathbb{E}[Ay|x]\|_2^2$.
3.2 Predicting Sparse Labels
To obtain a predictor $F$ of $\mathbb{E}[y|x]$, we compose the predictor $H$ of $\mathbb{E}[Ay|x]$ (learned from the compressed sample) with a reconstruction algorithm $R : \mathbb{R}^m \to \mathbb{R}^d$. The algorithm $R$ maps predictions of compressed labels $h \in \mathbb{R}^m$ to predictions of labels $y \in \mathcal{Y}$ in the original output space. These algorithms typically aim to find a sparse vector $y$ such that $Ay$ closely approximates $h$. Recent developments in the area of compressed sensing have produced a spate of reconstruction algorithms with strong performance guarantees when the compression function $A$ satisfies certain properties. We abstract out the relevant aspects of these guarantees in the following definition.
Definition. An algorithm $R$ is a valid reconstruction algorithm for a family of compression functions $(\mathcal{A}_k \subset \bigcup_{m \ge 1} \mathbb{R}^{m \times d} : k \in \mathbb{N})$ and sparsity error $\mathrm{sperr} : \mathbb{N} \times \mathbb{R}^d \to \mathbb{R}$, if there exists a function $f : \mathbb{N} \to \mathbb{N}$ and constants $C_1, C_2 \in \mathbb{R}$ such that: on input $k \in \mathbb{N}$, $A \in \mathcal{A}_k$ with $m$ rows, and $h \in \mathbb{R}^m$, the algorithm $R(k, A, h)$ returns an $f(k)$-sparse vector $\hat{y}$ satisfying
\[
\|\hat{y} - y\|_2^2 \le C_1 \cdot \|h - Ay\|_2^2 + C_2 \cdot \mathrm{sperr}(k, y) \quad \text{for all } y \in \mathbb{R}^d.
\]
The function $f$ is the output sparsity of $R$ and the constants $C_1$ and $C_2$ are the regret factors.
Informally, if the predicted compressed label $H(x)$ is close to $\mathbb{E}[Ay|x] = A\,\mathbb{E}[y|x]$, then the sparse vector $\hat{y}$ returned by the reconstruction algorithm should be close to $\mathbb{E}[y|x]$; this latter distance $\|\hat{y} - \mathbb{E}[y|x]\|_2^2$ should degrade gracefully in terms of the accuracy of $H(x)$ and the sparsity of $\mathbb{E}[y|x]$. Moreover, the algorithm should be agnostic about the sparsity of $\mathbb{E}[y|x]$ (and thus the sparsity error $\mathrm{sperr}(k, \mathbb{E}[y|x])$), as well as the "measurement noise" (the prediction error $\|H(x) - \mathbb{E}[Ay|x]\|_2$). This is a subtle condition, and it precludes certain reconstruction algorithms (e.g. Basis Pursuit [14]) that require the user to supply a bound on the measurement noise. However, the condition is needed in our application, as such bounds on the prediction error (for each $x$) are not generally known beforehand.
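The reduction of Sections 3.1 and 3.2 (compress labels, learn one regressor per compressed coordinate, then decode) can be sketched end-to-end in numpy. As a stand-in for a full valid reconstruction algorithm $R$ we use simple correlation decoding, the baseline of Section 6.3; the data, dimensions, and least-squares base learner are all illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, d, k = 500, 20, 256, 3
m = 4 * k * int(np.log(d / k))      # m = O(k log(d/k)) induced subproblems

# Synthetic k-sparse labels: each point's k labels are the top-k scores of a
# hidden linear model (so a linear base learner has a chance).
B = rng.normal(size=(p, d))
X = rng.normal(size=(n, p))
Y = np.zeros((n, d))
np.put_along_axis(Y, np.argsort(-(X @ B), axis=1)[:, :k], 1.0, axis=1)

# Compress labels with a random Bernoulli matrix, then learn one least-squares
# regressor per compressed coordinate (all m fits done in one lstsq call).
A = rng.choice([-1.0, 1.0], size=(m, d)) / np.sqrt(m)
W, *_ = np.linalg.lstsq(X, Y @ A.T, rcond=None)

# Prediction: h = H(x), then correlation decoding keeps the k largest
# coordinates of A^T h.
h = X @ W
Yhat = np.zeros_like(Y)
np.put_along_axis(Yhat, np.argsort(-(h @ A), axis=1)[:, :k], 1.0, axis=1)

precision = float((Yhat * Y).sum() / (n * k))   # fraction of true labels recovered
```

Note that only $m \approx 48$ regression problems are induced here, versus $d = 256$ for one-against-all; even the crude decoder recovers labels far above the $k/d$ chance rate.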
We make a few additional remarks on the definition.
1. The minimum number of rows of matrices $A \in \mathcal{A}_k$ may in general depend on $k$ (as well as the ambient dimension $d$). In the next section, we show how to construct such $A$ with close to the optimal number of rows.
2. The sparsity error $\mathrm{sperr}(k, y)$ should measure how poorly $y \in \mathbb{R}^d$ is approximated by a $k$-sparse vector.
3. A reasonable output sparsity $f(k)$ for sparsity level $k$ should not be much more than $k$, e.g. $f(k) = O(k)$.
Concrete examples of valid reconstruction algorithms (along with the associated $\mathcal{A}_k$, $\mathrm{sperr}$, etc.) are given in the next section.
4 Algorithms
Our prescribed recipe is summarized in Algorithms 1 and 2. We give some examples of compression functions and reconstruction algorithms in the following subsections.
Algorithm 1 (Training algorithm)
  parameters: sparsity level $k$, compression function $A \in \mathcal{A}_k$ with $m$ rows, regression learning algorithm $L$
  input: training data $S \subset \mathcal{X} \times \mathbb{R}^d$
  for $i = 1, \dots, m$ do
    $h_i \leftarrow L(\{(x, (Ay)_i) : (x, y) \in S\})$
  end for
  output: regressors $H = [h_1, \dots, h_m]$
Algorithm 2 (Prediction algorithm)
  parameters: sparsity level $k$, compression function $A \in \mathcal{A}_k$ with $m$ rows, valid reconstruction algorithm $R$ for $\mathcal{A}_k$
  input: regressors $H = [h_1, \dots, h_m]$, test point $x \in \mathcal{X}$
  output: $\hat{y} = R(k, A, [h_1(x), \dots, h_m(x)])$
Figure 1: Training and prediction algorithms.
4.1 Compression Functions
Several valid reconstruction algorithms are known for compression matrices that satisfy a restricted isometry property.
Definition. A matrix $A \in \mathbb{R}^{m \times d}$ satisfies the $(k, \delta)$-restricted isometry property ($(k, \delta)$-RIP), $\delta \in (0, 1)$, if
\[
(1 - \delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta)\|x\|_2^2
\]
for all $k$-sparse $x \in \mathbb{R}^d$.
While some explicit constructions of $(k, \delta)$-RIP matrices are known (e.g. [15]), the best guarantees are obtained when the matrix is chosen randomly from an appropriate distribution, such as one of the following [16, 17].
- All entries i.i.d. Gaussian $N(0, 1/m)$, with $m = O(k \log(d/k))$.
- All entries i.i.d.
Bernoulli $B(1/2)$ over $\{\pm 1/\sqrt{m}\}$, with $m = O(k \log(d/k))$.
- $m$ randomly chosen rows of the $d \times d$ Hadamard matrix over $\{\pm 1/\sqrt{m}\}$, with $m = O(k \log^5 d)$.
The hidden constants in the big-$O$ notation depend inversely on $\delta$ and the probability of failure. A striking feature of these constructions is the very mild dependence of $m$ on the ambient dimension $d$. This translates to a significant savings in the number of learning problems one has to solve after employing our reduction.
Some reconstruction algorithms require a stronger guarantee of bounded coherence $\mu(A) \le O(1/k)$, where $\mu(A)$ is defined as
\[
\mu(A) = \max_{1 \le i < j \le d} \; \frac{|(A^\top A)_{i,j}|}{\sqrt{|(A^\top A)_{i,i}|\,|(A^\top A)_{j,j}|}}.
\]
It is easy to check that the Gaussian, Bernoulli, and Hadamard-based random matrices given above have coherence bounded by $O(\sqrt{(\log d)/m})$ with high probability. Thus, one can take $m = O(k^2 \log d)$ to guarantee $1/k$ coherence. This is a factor $k$ worse than what was needed for $(k, \delta)$-RIP, but the dependence on $d$ is still small.
4.2 Reconstruction Algorithms
In this section, we give some examples of valid reconstruction algorithms. Each of these algorithms is valid with respect to the sparsity error given by
\[
\mathrm{sperr}(k, y) = \|y - y_{(1:k)}\|_2^2 + \frac{1}{k}\,\|y - y_{(1:k)}\|_1^2,
\]
where $y_{(1:k)}$ is the best $k$-sparse approximation of $y$ (i.e. the vector with just the $k$ largest-magnitude coefficients of $y$). The following theorem relates reconstruction quality to approximate sparse regression, giving a sufficient condition for any algorithm to be valid for RIP matrices.
Algorithm 3 (Prediction algorithm with $R$ = OMP)
  parameters: sparsity level $k$, compression function $A = [a_1 | \dots | a_d] \in \mathcal{A}_k$ with $m$ rows
  input: regressors $H = [h_1, \dots, h_m]$, test point $x \in \mathcal{X}$
  $h \leftarrow [h_1(x), \dots, h_m(x)]^\top$ (predict compressed label vector)
  $\hat{y} \leftarrow \vec{0}$, $J \leftarrow \emptyset$, $r \leftarrow h$
  for $i = 1, \dots, 2k$ do
    $j^* \leftarrow \arg\max_j |r^\top a_j| / \|a_j\|_2$ (column of $A$ most correlated with residual $r$)
    $J \leftarrow J \cup \{j^*\}$ (add $j^*$ to the set of selected columns)
    $\hat{y}_J \leftarrow (A_J)^\dagger h$, $\hat{y}_{J^c} \leftarrow \vec{0}$ (least squares restricted to the columns in $J$)
    $r \leftarrow h - A\hat{y}$ (update residual)
  end for
  output: $\hat{y}$
Figure 2: Prediction algorithm specialized with Orthogonal Matching Pursuit.
Theorem 1. Let $\mathcal{A}_k = \{(k + f(k), \delta)\text{-RIP matrices}\}$ for some function $f : \mathbb{N} \to \mathbb{N}$, and let $A \in \mathcal{A}_k$ have $m$ rows. If, for any $h \in \mathbb{R}^m$, a reconstruction algorithm $R$ returns an $f(k)$-sparse solution $\hat{y} = R(k, A, h)$ satisfying
\[
\|A\hat{y} - h\|_2^2 \le \inf_{y \in \mathbb{R}^d} C\,\|Ay_{(1:k)} - h\|_2^2,
\]
then it is a valid reconstruction algorithm for $\mathcal{A}_k$ and the $\mathrm{sperr}$ given above, with output sparsity $f$ and regret factors $C_1 = 2(1 + \sqrt{C})^2/(1 - \delta)$ and $C_2 = 4\big(1 + (1 + \sqrt{C})/(1 - \delta)\big)^2$.
Proofs are deferred to Appendix B.
Iterative and greedy algorithms. Orthogonal Matching Pursuit (OMP) [18], FoBa [19], and CoSaMP [20] are examples of iterative or greedy reconstruction algorithms. OMP is a greedy forward-selection method that repeatedly selects a new column of $A$ to use in fitting $h$ (see Algorithm 3). FoBa is similar, except it also incorporates backward steps to un-select columns that are later discovered to be unnecessary. CoSaMP is also similar to OMP, but instead selects larger sets of columns in each iteration. FoBa and CoSaMP are valid reconstruction algorithms for RIP matrices ($(8k, 0.1)$-RIP and $(4k, 0.1)$-RIP, respectively) and have linear output sparsity ($8k$ and $2k$). These guarantees are apparent from the cited references. For OMP, we give the following guarantee.
Theorem 2. If $\mu(A) \le 0.1/k$, then after $f(k) = 2k$ steps of OMP, the algorithm returns $\hat{y}$ satisfying
\[
\|A\hat{y} - h\|_2^2 \le 23\,\|Ay_{(1:k)} - h\|_2^2 \quad \forall y \in \mathbb{R}^d.
\]
This theorem, combined with Theorem 1, implies that OMP is valid for matrices $A$ with $\mu(A) \le 0.1/k$, and has output sparsity $f(k) = 2k$.
ℓ1 algorithms. Basis Pursuit (BP) [14] and its variants are based on finding the minimum ℓ1-norm solution to a linear system.
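The reconstruction step of Algorithm 3 translates almost line-for-line into numpy. The following is our own sketch (with an added early exit when the residual vanishes), checked on a noiseless synthetic problem with illustrative dimensions; it is not the authors' code:

```python
import numpy as np

def omp(A, h, k):
    """Algorithm 3's reconstruction step: 2k greedy rounds, each adding the
    column most correlated with the residual, then refitting by least squares
    on the selected columns."""
    m, d = A.shape
    y_hat = np.zeros(d)
    J, r = [], h.astype(float).copy()
    col_norms = np.linalg.norm(A, axis=0)
    for _ in range(2 * k):
        if np.linalg.norm(r) < 1e-12:       # residual already fully explained
            break
        j = int(np.argmax(np.abs(r @ A) / col_norms))
        if j not in J:
            J.append(j)
        coef, *_ = np.linalg.lstsq(A[:, J], h, rcond=None)
        y_hat[:] = 0.0
        y_hat[J] = coef
        r = h - A @ y_hat
    return y_hat

# Noiseless sanity check with a random Gaussian sensing matrix: m rows
# comfortably above O(k log(d/k)) allow exact recovery of a k-sparse vector.
rng = np.random.default_rng(3)
d, k, m = 128, 3, 50
A = rng.normal(size=(m, d)) / np.sqrt(m)
y = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
y[support] = np.sign(rng.normal(size=k)) * (1.0 + rng.uniform(size=k))
y_hat = omp(A, A @ y, k)
```

After refitting, the residual is orthogonal to every selected column, so already-selected columns are never re-chosen while the residual is nonzero.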
While the basic form of BP is ill-suited for our application (it requires the user to supply the amount of measurement error $\|Ay - h\|_2$), its more advanced path-following or multi-stage variants may be valid [21].
5 Analysis
5.1 General Robustness Guarantees
We now state our main regret transform bound, which follows immediately from the definition of a valid reconstruction algorithm and the linearity of expectation.
Theorem 3 (Regret Transform). Let $R$ be a valid reconstruction algorithm for $\{\mathcal{A}_k : k \in \mathbb{N}\}$ and $\mathrm{sperr} : \mathbb{N} \times \mathbb{R}^d \to \mathbb{R}$. Then there exist constants $C_1$ and $C_2$ such that the following holds. Pick any $k \in \mathbb{N}$, $A \in \mathcal{A}_k$ with $m$ rows, and $H : \mathcal{X} \to \mathbb{R}^m$. Let $F : \mathcal{X} \to \mathbb{R}^d$ be the composition of $R(k, A, \cdot)$ and $H$, i.e. $F(x) = R(k, A, H(x))$. Then
\[
\mathbb{E}_x\|F(x) - \mathbb{E}[y|x]\|_2^2 \;\le\; C_1 \cdot \mathbb{E}_x\|H(x) - \mathbb{E}[Ay|x]\|_2^2 + C_2 \cdot \mathrm{sperr}(k, \mathbb{E}[y|x]).
\]
The simplicity of this theorem is a consequence of the careful composition of the learned predictors with a reconstruction algorithm meeting the formal specifications described above. In order to compare this regret bound with the bounds afforded by Sensitive Error Correcting Output Codes (SECOC) [13], we need to relate $\mathbb{E}_x\|H(x) - \mathbb{E}[Ay|x]\|_2^2$ to the average scaled mean-squared error over all induced regression problems; the error is scaled by the maximum difference $L_i = \max_{y \in \mathcal{Y}} (Ay)_i - \min_{y \in \mathcal{Y}} (Ay)_i$ between induced labels:
\[
\bar{r} = \frac{1}{m} \sum_{i=1}^m \mathbb{E}_x \left( \frac{H(x)_i - \mathbb{E}[(Ay)_i\,|\,x]}{L_i} \right)^2.
\]
In $k$-sparse multi-label problems, we have $\mathcal{Y} = \{y \in \{0,1\}^d : \|y\|_0 \le k\}$. In these terms, SECOC can be tuned to yield $\mathbb{E}_x\|F(x) - \mathbb{E}[y|x]\|_2^2 \le 4k^2 \cdot \bar{r}$ for general $k$. For now, ignore the sparsity error. For simplicity, let $A \in \mathbb{R}^{m \times d}$ have entries chosen i.i.d. from the Bernoulli $B(1/2)$ distribution over $\{\pm 1/\sqrt{m}\}$, where $m = O(k \log d)$. Then for any $k$-sparse $y$, we have $\|Ay\|_\infty \le k/\sqrt{m}$, and thus $L_i \le 2k/\sqrt{m}$ for each $i$. This gives the bound $C_1 \cdot \mathbb{E}_x\|H(x) - \mathbb{E}[Ay|x]\|_2^2 \le 4C_1 \cdot k^2 \cdot \bar{r}$, which is within a constant factor of the guarantee afforded by SECOC. Note that our reduction induces exponentially (in $d$) fewer subproblems than SECOC.
Now we consider the sparsity error. In the extreme case $m = d$, $\mathbb{E}[y|x]$ is allowed to be fully dense ($k = d$) and $\mathrm{sperr}(k, \mathbb{E}[y|x]) = 0$. When $m = O(k \log d) < d$, we potentially incur an extra penalty in $\mathrm{sperr}(k, \mathbb{E}[y|x])$, which measures how far $\mathbb{E}[y|x]$ is from being $k$-sparse. For example, suppose $\mathbb{E}[y|x]$ has small $\ell_p$ norm for $0 \le p < 2$. Then even if $\mathbb{E}[y|x]$ has full support, the penalty will decrease polynomially in $k \approx m/\log d$.
5.2 Linear Prediction
A danger of using generic reductions is that one might create a problem instance that is even harder to solve than the original problem. This is an oft-cited issue with using output codes for multi-class problems. In the case of linear prediction, however, the danger is mitigated, as we now show. Suppose, for instance, there is a perfect linear predictor of $\mathbb{E}[y|x]$, i.e. $\mathbb{E}[y|x] = B^\top x$ for some $B \in \mathbb{R}^{p \times d}$ (here $\mathcal{X} = \mathbb{R}^p$). Then it is easy to see that $H = BA^\top$ is a perfect linear predictor of $\mathbb{E}[Ay|x]$: $H^\top x = AB^\top x = A\,\mathbb{E}[y|x] = \mathbb{E}[Ay|x]$. The following theorem generalizes this observation to imperfect linear predictors, for certain well-behaved $A$.
Theorem 4. Suppose $\mathcal{X} \subset \mathbb{R}^p$. Let $B \in \mathbb{R}^{p \times d}$ be a linear function with $\mathbb{E}_x\|B^\top x - \mathbb{E}[y|x]\|_2^2 = \epsilon$. Let $A \in \mathbb{R}^{m \times d}$ have entries drawn i.i.d. from $N(0, 1/m)$, and let $H = BA^\top$. Then, with high probability over the choice of $A$,
\[
\mathbb{E}_x\|H^\top x - A\,\mathbb{E}[y|x]\|_2^2 \;\le\; \big(1 + O(1/\sqrt{m})\big)\,\epsilon.
\]
Remark 5. Similar guarantees can be proven for the Bernoulli-based matrices.
Note that $d$ does not appear in the bound, which is in contrast to the expected spectral norm of $A$: roughly $1 + O(\sqrt{d/m})$. Theorem 4 implies that the errors of any linear predictor are not magnified much by the compression function. So a good linear predictor for the original problem implies an almost-as-good linear predictor for the induced problem. Using this theorem together with known results about linear prediction [22], it is straightforward to derive sample complexity bounds for achieving a given error relative to that of the best linear predictor in some class.
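The phenomenon behind Theorem 4 is easy to check numerically: for a fixed vector, a Gaussian matrix with $N(0, 1/m)$ entries preserves the squared norm up to a $1 + O(1/\sqrt{m})$ factor, even though its spectral norm grows like $1 + \sqrt{d/m}$. A small Monte Carlo sketch of our own, with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
d, trials = 500, 100
v = rng.normal(size=d)
v /= np.linalg.norm(v)   # fixed unit vector (stands in for B^T x - E[y|x])

mean_err = {}
for m in (50, 200, 800):
    # entries N(0,1) scaled by 1/sqrt(m) give entries N(0, 1/m);
    # ||Av||^2 then follows (1/m) * chi-squared with m degrees of freedom
    errs = [abs(np.linalg.norm(rng.normal(size=(m, d)) @ v) ** 2 / m - 1.0)
            for _ in range(trials)]
    mean_err[m] = float(np.mean(errs))
# mean_err[m] shrinks roughly like sqrt(2/m) as m grows, independent of d
```

The mean deviation from 1 drops by about a factor of 2 each time $m$ is quadrupled, matching the $1/\sqrt{m}$ rate and showing no dependence on $d$.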
The bound will depend polynomially on $k$ but only logarithmically on $d$. This is cosmetically similar to learning bounds for feature-efficient algorithms (e.g. [23, 22]), which are concerned with sparsity in the weight vector rather than in the output.
6 Experimental Validation
We conducted an empirical assessment of our proposed reduction on two labeled data sets with large label spaces. These experiments demonstrate the feasibility of our method (a sanity check that the reduction does in fact preserve learnability) and compare different compression and reconstruction options.
6.1 Data
Image data.¹ The first data set was collected by the ESP Game [24], an online game in which players ultimately provide word tags for a diverse set of web images. The set contains nearly 68000 images, with about 22000 unique labels. We retained just the 1000 most frequent labels: the least frequent of these occurs 39 times in the data, and the most frequent occurs about 12000 times. Each image contains about four labels on average. We used half of the data for training and half for testing. We represented each image as a bag-of-features vector in a manner similar to [25]. Specifically, we identified 1024 representative SURF feature points [26] from 10 × 10 gray-scale patches chosen randomly from the training images; this partitions the space of image patches (represented with SURF features) into Voronoi cells. We then built a histogram for each image, counting the number of patches that fall in each cell.
Text data.² The second data set was collected by Tsoumakas et al. [11] from del.icio.us, a social bookmarking service in which users assign descriptive textual tags to web pages. The set contains about 16000 labeled web pages and 983 unique labels. The least frequent label occurs 21 times and the most frequent occurs almost 6500 times. Each web page is assigned 19 labels on average. Again, we used half the data for training and half for testing.
Each web page is represented as a boolean bag-of-words vector, with the vocabulary chosen using a combination of frequency thresholding and χ² feature ranking. See [11] for details. Each binary label vector (in both data sets) indicates the labels of the corresponding data point.

6.2 Output Sparsity

We first performed a bit of exploratory data analysis to get a sense of how sparse the target in our data is. We computed the least-squares linear regressor B̂ ∈ R^{p×d} on the training data (without any output coding) and predicted the label probabilities p̂(x) = B̂^⊤x on the test data (clipping values to the range [0, 1]). Using p̂(x) as a surrogate for the actual target E[y|x], we examined the relative squared ℓ₂ error of p̂ and its best k-sparse approximation,

ε(k, p̂(x)) = Σ_{i=k+1}^d p̂_{(i)}(x)² / ∥p̂(x)∥₂²,  where p̂_{(1)}(x) ≥ … ≥ p̂_{(d)}(x).

Examining E_x ε(k, p̂(x)) as a function of k, we saw that in both the image and text data, the falloff with k is eventually super-polynomial, but we are interested in the behavior for small k, where it appears polynomial, k^{−r} for some r. Around k = 10, we estimated an exponent of 0.50 for the image data and 0.55 for the text data. This is somewhat below the standard of what is considered sparse (e.g. vectors with small ℓ₁-norm show k^{−1} decay). Thus, we expect the reconstruction algorithms will have to contend with the sparsity error of the target.

6.3 Procedure

We used least-squares linear regression as our base learning algorithm, with no regularization on the image data and with ℓ₂-regularization on the text data (λ = 0.01) for numerical stability. We did not attempt any parameter tuning.

¹ http://hunch.net/~learning/ESP-ImageSet.tar.gz
² http://mlkd.csd.auth.gr/multilabel.html

The compression functions we used were generated by selecting m random rows of the 1024 × 1024 Hadamard matrix, for m ∈ {100, 200, 300, 400}. We also experimented with Gaussian matrices; these yielded similar but uniformly worse results.
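The sparsity error ε(k, p̂(x)) examined in the exploratory analysis of Section 6.2 is straightforward to compute; a small sketch (the polynomially decaying test vector is illustrative):

```python
import numpy as np

def sparsity_error(p_hat, k):
    """Relative squared l2 error of the best k-sparse approximation:
    eps(k, p) = sum_{i > k} p_(i)^2 / ||p||_2^2,
    where p_(1), ..., p_(d) are the coordinates sorted by magnitude."""
    sq = np.sort(p_hat ** 2)[::-1]      # p_(1)^2 >= ... >= p_(d)^2
    return sq[k:].sum() / sq.sum()

# A vector with polynomially decaying coefficients, p_(i) proportional to 1/i
p_hat = 1.0 / np.arange(1, 1001)
errs = [sparsity_error(p_hat, k) for k in (1, 10, 100)]
```

For an exactly k-sparse vector the error is zero, and for decaying coefficients it shrinks as k grows, mirroring the k^{−r} behavior observed in the data.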
We tested the greedy and iterative reconstruction algorithms described earlier (OMP, FoBa, and CoSaMP) as well as a path-following version of Lasso based on LARS [21]. Each algorithm was used to recover a k-sparse label vector ŷ_k from the predicted compressed label H(x), for k = 1, …, 10. We measured the squared ℓ₂ distance ∥ŷ_k − y∥₂² of the prediction to the true test label y. In addition, we measured the precision of the predicted support at various values of k using the 10-sparse label prediction. That is, we ordered the coefficients of each 10-sparse label prediction ŷ₁₀ by magnitude, and measured the precision of predicting the first k coordinates, |supp(ŷ₁₀^{(1:k)}) ∩ supp(y)|/k, where ŷ₁₀^{(1:k)} denotes ŷ₁₀ restricted to its k largest-magnitude coordinates. Actually, for k ≥ 6, we used ŷ_{2k} instead of ŷ₁₀.

We used correlation decoding (CD) as a baseline method, as it is a standard decoding method for ECOC approaches. CD predicts using the top k coordinates in A^⊤H(x), ordered by magnitude. For mean-squared-error comparisons, we used the least-squares approximation of H(x) using these k columns of A. Note that CD is not a valid reconstruction algorithm when m < d.

6.4 Results

As expected, the performance of the reduction, using any reconstruction algorithm, improves as the number of induced subproblems m is increased (see figures in Appendix A). When m is small and A ∉ A_K, the reconstruction algorithm cannot reliably choose k ≥ K coordinates, so its performance may degrade beyond this point by over-fitting. But when the compression function A is in A_K for a sufficiently large K, the squared error decreases as the output sparsity k increases up to K. Note that the fact that precision-at-k decreases as k increases is expected, as fewer data will have at least k correct labels. All of the reconstruction algorithms matched or out-performed the baseline on the mean-squared-error criterion, except when m = 100. When A has few rows, (1) A ∈ A_K only for very small K, and (2) many of its columns will have significant correlation.
In this case, when choosing k > K columns, it is better to choose correlated columns to avoid over-fitting. Both OMP and FoBa explicitly avoid this and thus do not fare well; but CoSaMP, Lasso, and CD do allow selecting correlated columns and thus perform better in this regime. The results for precision-at-k are similar to those for mean-squared error, except that choosing correlated columns does not necessarily help in the small-m regime. This is because the extra correlated columns need not correspond to accurate label coordinates.

In summary, the experiments demonstrate the feasibility and robustness of our reduction method for two natural multi-label prediction tasks. They show that predictions of relatively few compressed labels are sufficient to recover an accurate sparse label vector, and, as our theory suggests, the robustness of the reconstruction algorithms is a key factor in their success.

Acknowledgments

We thank Andy Cotter for help processing the image features for the ESP Game data. This work was completed while the first author was an intern at TTI-C in 2008.

References

[1] David Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, 2006.
[2] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research, 2:263–286, 1995.
[3] R. Rifkin and A. Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141, 2004.
[4] M. Boutell, J. Luo, X. Shen, and C. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
[5] A. Clare and R. D. King. Knowledge discovery in multi-label phenotype data. In European Conference on Principles of Data Mining and Knowledge Discovery, 2001.
[6] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[7] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Incremental algorithms for hierarchical classification.
Journal of Machine Learning Research, 7:31–54, 2006.
[8] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[9] J. Rousu, C. Saunders, S. Szedmak, and J. Shawe-Taylor. Kernel-based learning of hierarchical multilabel classification models. Journal of Machine Learning Research, 7:1601–1626, 2006.
[10] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. In ICML, 2009.
[11] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. In Proc. ECML/PKDD 2008 Workshop on Mining Multidimensional Data, 2008.
[12] Erin Allwein, Robert Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1:113–141, 2000.
[13] J. Langford and A. Beygelzimer. Sensitive error correcting output codes. In Proc. Conference on Learning Theory, 2005.
[14] Emmanuel Candès, Justin Romberg, and Terence Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59:1207–1223, 2006.
[15] R. DeVore. Deterministic constructions of compressed sensing matrices. J. of Complexity, 23:918–925, 2007.
[16] Shahar Mendelson, Alain Pajor, and Nicole Tomczak-Jaegermann. Uniform uncertainty principle for Bernoulli and subgaussian ensembles. Constructive Approximation, 28(3):277–289, 2008.
[17] M. Rudelson and R. Vershynin. Sparse reconstruction by convex relaxation: Fourier and Gaussian measurements. In Proc. Conference on Information Sciences and Systems, 2006.
[18] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[19] Tong Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Proc. Neural Information Processing Systems, 2008.
[20] D. Needell and J. A. Tropp.
CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 2007.
[21] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[22] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Proc. Neural Information Processing Systems, 2008.
[23] Andrew Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[24] Luis von Ahn and Laura Dabbish. Labeling images with a computer game. In Proc. ACM Conference on Human Factors in Computing Systems, 2004.
[25] Marcin Marszałek, Cordelia Schmid, Hedi Harzallah, and Joost van de Weijer. Learning object representations for visual object class recognition. In Visual Recognition Challenge Workshop, in conjunction with ICCV, 2007.
[26] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. SURF: Speeded up robust features. Computer Vision and Image Understanding, 110(3):346–359, 2008.
[27] David Donoho, Michael Elad, and Vladimir Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Info. Theory, 52(1):6–18, 2006.
[28] Sanjoy Dasgupta. Learning Probability Distributions. PhD thesis, University of California, 2000.
Kernel Choice and Classifiability for RKHS Embeddings of Probability Distributions

Bharath K. Sriperumbudur, Department of ECE, UC San Diego, La Jolla, USA. bharathsv@ucsd.edu
Kenji Fukumizu, The Institute of Statistical Mathematics, Tokyo, Japan. fukumizu@ism.ac.jp
Arthur Gretton, Carnegie Mellon University and MPI for Biological Cybernetics. arthur.gretton@gmail.com
Gert R. G. Lanckriet, Department of ECE, UC San Diego, La Jolla, USA. gert@ece.ucsd.edu
Bernhard Schölkopf, MPI for Biological Cybernetics, Tübingen, Germany. bs@tuebingen.mpg.de

Abstract

Embeddings of probability measures into reproducing kernel Hilbert spaces have been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over many classical metrics on distributions, namely easy computability, fast convergence and low bias of finite sample estimates. An important requirement of the embedding RKHS is that it be characteristic: in this case, the MMD between two distributions is zero if and only if the distributions coincide. Three new results on the MMD are introduced in the present study. First, it is established that MMD corresponds to the optimal risk of a kernel classifier, thus forming a natural link between the distance between distributions and their ease of classification. An important consequence is that a kernel must be characteristic to guarantee classifiability between distributions in the RKHS. Second, the class of characteristic kernels is broadened to incorporate all strictly positive definite kernels: these include non-translation invariant kernels and kernels on non-compact domains. Third, a generalization of the MMD is proposed for families of kernels, as the supremum over MMDs on a class of kernels (for instance the Gaussian kernels with different bandwidths).
This extension is necessary to obtain a single distance measure if a large selection or class of characteristic kernels is potentially appropriate. This generalization is reasonable, given that it corresponds to the problem of learning the kernel by minimizing the risk of the corresponding kernel classifier. The generalized MMD is shown to have consistent finite sample estimates, and its performance is demonstrated on a homogeneity testing example.

1 Introduction

Kernel methods are broadly established as a useful way of constructing nonlinear algorithms from linear ones, by embedding points into higher dimensional reproducing kernel Hilbert spaces (RKHSs) [9]. A generalization of this idea is to embed probability distributions into RKHSs, giving us a linear method for dealing with higher order statistics [6, 12, 14]. More specifically, suppose we are given the set P of all Borel probability measures defined on the topological space M, and the RKHS (H, k) of functions on M with k as its reproducing kernel (r.k.). For P ∈ P, denote Pk := ∫_M k(·, x) dP(x). If k is measurable and bounded, then we may define the embedding of P in H as Pk ∈ H. The RKHS distance between two such mappings associated with P, Q ∈ P is called the maximum mean discrepancy (MMD) [6, 14], and is written

γ_k(P, Q) = ∥Pk − Qk∥_H.    (1)

We say that k is characteristic [4, 14] if the mapping P ↦ Pk is injective, in which case (1) is zero if and only if P = Q, i.e., γ_k is a metric on P. An immediate application of the MMD is to problems of comparing distributions based on finite samples: examples include tests of homogeneity [6], independence [7], and conditional independence [4]. In this application domain, the question of whether k is characteristic is key: without this property, the algorithms can fail through inability to distinguish between particular distributions.
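A biased empirical estimate of (1) follows directly from expanding the squared RKHS norm into Gram-matrix means. A sketch for the Gaussian kernel (the kernel choice, bandwidth, and sample sizes are illustrative assumptions):

```python
import numpy as np

def mmd(X, Y, sigma=1.0):
    """Biased empirical estimate of the MMD in (1) for the Gaussian kernel
    k(x, y) = exp(-sigma * ||x - y||^2): the RKHS distance between the
    mean embeddings of the two empirical measures."""
    def gram(U, V):
        d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-sigma * d2)
    # ||P_m k - Q_n k||_H^2 expands into three Gram-matrix means
    mmd2 = gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()
    return np.sqrt(max(mmd2, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(2.0, 1.0, size=(200, 1))   # a shifted distribution
```

Identical samples give an MMD of zero, while samples from distinguishable distributions give a strictly positive value (for a characteristic kernel).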
Characteristic kernels are important in binary classification: The problem of distinguishing distributions is strongly related to binary classification: indeed, one would expect easily distinguishable distributions to be easily classifiable.1 The link between these two problems is especially direct in the case of the MMD: in Section 2, we show that γk is the negative of the optimal risk (corresponding to a linear loss function) associated with the Parzen window classifier [9, 11] (also called kernel classification rule [3, Chapter 10]), where the Parzen window turns out to be k. We also show that γk is an upper bound on the margin of a hard-margin support vector machine (SVM). The importance of using characteristic RKHSs is further underlined by this link: if the property does not hold, then there exist distributions that are unclassifiable in the RKHS H. We further strengthen this by showing that characteristic kernels are necessary (and sufficient under certain conditions) to achieve Bayes risk in the kernel-based classification algorithms. Characterization of characteristic kernels: Given the centrality of the characteristic property to both RKHS classification and RKHS distribution testing, we should take particular care in establishing which kernels satisfy this requirement. Early results in this direction include [6], where k is shown to be characteristic on compact M if it is universal in the sense of Steinwart [15, Definition 4]; and [4, 5], which address the case of non-compact M, and show that k is characteristic if and only if H + R is dense in the Banach space of p-power (p ≥1) integrable functions. The conditions in both these studies can be difficult to check and interpret, however, and the restriction of the first to compact M is limiting. In the case of translation invariant kernels, [14] proved the kernel to be characteristic if and only if the support of the Fourier transform of k is the entire Rd, which is a much easier condition to verify. 
Similar sufficient conditions are obtained by [5] for translation invariant kernels on groups and semi-groups. In Section 3, we expand the class of characteristic kernels to include kernels that may or may not be translation invariant, with the introduction of a novel criterion: strictly positive definite kernels (see Definition 3) on M are characteristic. Choice of characteristic kernels: In expanding the families of allowable characteristic kernels, we have so far neglected the question of which characteristic kernel to choose. A practitioner asking by how much two samples differ does not want to receive a blizzard of answers for every conceivable kernel and bandwidth setting, but a single measure that satisfies some “reasonable” notion of distance across the family of kernels considered. Thus, in Section 4, we propose a generalization of the MMD, yielding a new distance measure between P and Q defined as γ(P, Q) = sup{γk(P, Q) : k ∈K} = sup{∥Pk −Qk∥H : k ∈K}, (2) which is the maximal RKHS distance between P and Q over a family, K of positive definite kernels. For example, K can be the family of Gaussian kernels on Rd indexed by the bandwidth parameter. This distance measure is very natural in the light of our results on binary classification (in Section 2): most directly, this corresponds to the problem of learning the kernel by minimizing the risk of the associated Parzen-based classifier. As a less direct justification, we also increase the upper bound on the margin allowed for a hard margin SVM between the samples. To apply the generalized MMD in practice, we must ensure its empirical estimator is consistent. 
In our main result of Section 4, we provide an empirical estimate of γ(P, Q) based on finite samples, and show that many popular kernels like the Gaussian, Laplacian, and the entire Matérn class on R^d yield consistent estimates of γ(P, Q). The proof is based on bounding the Rademacher chaos complexity of K, which can be understood as the U-process equivalent of Rademacher complexity [2]. Finally, in Section 5, we provide a simple experimental demonstration that the generalized MMD can be applied in practice to the problem of homogeneity testing. Specifically, we show that when two distributions differ on particular length scales, the kernel selected by the generalized MMD is appropriate to this difference, and the resulting hypothesis test outperforms the heuristic kernel choice employed in earlier studies [6]. The proofs of the results in Sections 2-4 are provided in the supplementary material.

(Footnote 1: There is a subtlety here, since unlike the problem of testing for differences in distributions, classification suffers from slow learning rates. See [3, Chapter 7] for details.)

2 Characteristic Kernels and Binary Classification

One of the most important applications of the maximum mean discrepancy is in nonparametric hypothesis testing [6, 7, 4], where the characteristic property of k is required to distinguish between probability measures. In the following, we show how MMD naturally appears in binary classification, with reference to the Parzen window classifier and hard-margin SVM. This motivates the need for characteristic k to guarantee that classes arising from different distributions can be classified by kernel-based algorithms. To this end, let us consider the binary classification problem with X being an M-valued random variable, Y being a {−1, +1}-valued random variable and the product space, M × {−1, +1}, being endowed with an induced Borel probability measure µ.
A discriminant function f is a real-valued measurable function on M whose sign is used to make a classification decision. Given a loss function L : {−1, +1} × R → R, the goal is to choose an f that minimizes the risk associated with L, with the optimal L-risk being defined as

R^L_{F⋆} = inf_{f∈F⋆} ∫_M L(y, f(x)) dµ(x, y) = inf_{f∈F⋆} { ε ∫_M L₁(f) dP + (1 − ε) ∫_M L₋₁(f) dQ },    (3)

where F⋆ is the set of all measurable functions on M, L₁(α) := L(1, α), L₋₁(α) := L(−1, α), P(X) := µ(X|Y = +1), Q(X) := µ(X|Y = −1), and ε := µ(M, Y = +1). Here, P and Q represent the class-conditional distributions and ε is the prior probability of class +1. Now, we present the result that relates γ_k to the optimal risk associated with the Parzen window classifier.

Theorem 1 (γ_k and Parzen classification). Let L₁(α) = −α/ε and L₋₁(α) = α/(1 − ε). Then γ_k(P, Q) = −R^L_{F_k}, where F_k = {f : ∥f∥_H ≤ 1} and H is an RKHS with a measurable and bounded k. Suppose {(X_i, Y_i)}_{i=1}^N, X_i ∈ M, Y_i ∈ {−1, +1}, ∀i, is a training sample drawn i.i.d. from µ and m = |{i : Y_i = 1}|. If f̃ ∈ F_k is an empirical minimizer of (3) (where F⋆ is replaced by F_k in (3)), then

sign(f̃(x)) = +1 if (1/m) Σ_{Y_i=1} k(x, X_i) > (1/(N − m)) Σ_{Y_i=−1} k(x, X_i), and −1 otherwise,    (4)

which is the Parzen window classifier.

Theorem 1 shows that γ_k is the negative of the optimal L-risk (where L is the linear loss defined in Theorem 1) associated with the Parzen window classifier. Therefore, if k is not characteristic, which means γ_k(P, Q) = 0 for some P ≠ Q, then R^L_{F_k} = 0, i.e., the risk is maximal (note that since 0 ≤ γ_k(P, Q) = −R^L_{F_k}, the maximal risk is zero). In other words, if k is characteristic, then the maximal risk is attained only when P = Q. This motivates the importance of characteristic kernels in binary classification.
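The classification rule (4) is simple to implement; a sketch (the Gaussian kernel and the synthetic two-cluster data are illustrative assumptions):

```python
import numpy as np

def parzen_classify(x, X, Y, sigma=1.0):
    """Parzen window classifier of Eq. (4): predict +1 iff the average kernel
    similarity to the positive training points exceeds that to the negatives."""
    k = lambda a, b: np.exp(-sigma * np.sum((a - b) ** 2, axis=-1))
    pos = k(x, X[Y == 1]).mean()    # (1/m) sum over positive points
    neg = k(x, X[Y == -1]).mean()   # (1/(N-m)) sum over negative points
    return 1 if pos > neg else -1

rng = np.random.default_rng(0)
Xp = rng.normal(+2.0, 0.5, size=(50, 1))
Xn = rng.normal(-2.0, 0.5, size=(50, 1))
X = np.vstack([Xp, Xn])
Y = np.array([1] * 50 + [-1] * 50)
```

The rule compares the two class-conditional mean embeddings evaluated at x, which is exactly why a non-characteristic kernel (coinciding mean embeddings for P ≠ Q) renders the classes inseparable.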
In the following, we provide another result giving a similar motivation for the importance of characteristic kernels in binary classification, wherein we relate γ_k to the margin of a hard-margin SVM.

Theorem 2 (γ_k and hard-margin SVM). Suppose {(X_i, Y_i)}_{i=1}^N, X_i ∈ M, Y_i ∈ {−1, +1}, ∀i, is a training sample drawn i.i.d. from µ. Assuming the training sample is separable, let f_svm be the solution to the program inf{∥f∥_H : Y_i f(X_i) ≥ 1, ∀i}, where H is an RKHS with measurable and bounded k. If k is characteristic, then

1/∥f_svm∥_H ≤ γ_k(P_m, Q_n)/2,    (5)

where P_m := (1/m) Σ_{Y_i=1} δ_{X_i}, Q_n := (1/n) Σ_{Y_i=−1} δ_{X_i}, m = |{i : Y_i = 1}| and n = N − m. δ_x represents the Dirac measure at x.

Theorem 2 provides a bound on the margin of the hard-margin SVM in terms of MMD. (5) shows that a smaller MMD between P_m and Q_n enforces a smaller margin (i.e., a less smooth classifier f_svm, where smoothness is measured as ∥f_svm∥_H). We can observe that the bound in (5) may be loose if the number of support vectors is small. Suppose k is not characteristic; then γ_k(P_m, Q_n) can be zero for P_m ≠ Q_n, and therefore the margin is zero, which means even unlike distributions can become inseparable in this feature representation. Another justification for using characteristic kernels in kernel-based classification algorithms can be provided by studying the conditions on H for which the Bayes risk is realized for all µ. Steinwart and Christmann [16, Corollary 5.37] have shown that under certain conditions on L, the Bayes risk is achieved for all µ if and only if H is dense in L^p(M, η) for all η, where η = εP + (1 − ε)Q. Here, L^p(M, η) represents the Banach space of p-power integrable functions, where p ∈ [1, ∞) depends on the loss function L. Denseness of H in L^p(M, η) implies H + R is dense in L^p(M, η), which therefore yields that k is characteristic [4, 5].
On the other hand, if constant functions are included in H, then it is easy to show that the characteristic property of k is also sufficient to achieve the Bayes risk. As an example, it can be shown that characteristic kernels are necessary (and sufficient if constant functions are in H) for SVMs to achieve the Bayes risk [16, Example 5.40]. Therefore, the characteristic property of k is fundamental in kernel-based classification algorithms. Having shown how characteristic kernels play a role in kernel-based classification, in the following section we provide a novel characterization for them.

3 Novel Characterization for Characteristic Kernels

A positive definite (pd) kernel k is said to be characteristic to P if and only if γ_k(P, Q) = 0 ⇔ P = Q, ∀P, Q ∈ P. The following result provides a novel characterization for characteristic kernels, which shows that strictly pd kernels are characteristic to P. An advantage of this characterization is that it holds for any arbitrary topological space M, unlike the earlier characterizations where a group structure on M is assumed [14, 5]. First, we define strictly pd kernels as follows.

Definition 3 (Strictly positive definite kernels). Let M be a topological space. A measurable and bounded kernel k is said to be strictly positive definite if and only if ∫_M ∫_M k(x, y) dµ(x) dµ(y) > 0 for all finite non-zero signed Borel measures µ defined on M.

Note that the above definition is not equivalent to the usual definition of strictly pd kernels that involves finite sums [16, Definition 4.15]. The above definition is a generalization of integrally strictly positive definite functions [17, Section 6]: ∫∫ k(x, y) f(x) f(y) dx dy > 0 for all f ∈ L²(R^d), which is the strict positive definiteness of the integral operator given by the kernel. Definition 3 is stronger than the finite-sum definition, as [16, Theorem 4.62] shows a kernel that is strictly pd in the finite-sum sense but not in the integral sense.
Theorem 4 (Strictly pd kernels are characteristic). If k is strictly positive definite on M, then k is characteristic to P.

The proof idea is to derive necessary and sufficient conditions for a kernel not to be characteristic. We show that choosing k to be strictly pd violates these conditions, and k is therefore characteristic to P. Examples of strictly pd kernels on R^d include exp(−σ∥x − y∥₂²), σ > 0; exp(−σ∥x − y∥₁), σ > 0; (c² + ∥x − y∥₂²)^{−β}, β > 0, c > 0; B_{2l+1}-splines; etc. Note that k̃(x, y) = f(x) k(x, y) f(y) is a strictly pd kernel if k is strictly pd, where f : M → R is a bounded continuous function. Therefore, translation-variant strictly pd kernels can be obtained by choosing k to be a translation invariant strictly pd kernel. A simple example of a translation-variant kernel that is strictly pd on compact sets of R^d is k̃(x, y) = exp(σ x^⊤ y), σ > 0, where we have chosen f(·) = exp(σ∥·∥₂²/2) and k(x, y) = exp(−σ∥x − y∥₂²/2), σ > 0. Therefore, k̃ is characteristic on compact sets of R^d, which is the same result that follows from the universality of k̃ [15, Section 3, Example 1]. The following result in [10], which is based on the usual definition of strictly pd kernels, can be obtained as a corollary to Theorem 4.

Corollary 5 ([10]). Let X = {x_i}_{i=1}^m ⊂ M, Y = {y_j}_{j=1}^n ⊂ M, and assume that x_i ≠ x_j and y_i ≠ y_j for all i ≠ j. Suppose k is strictly positive definite. Then Σ_{i=1}^m α_i k(·, x_i) = Σ_{j=1}^n β_j k(·, y_j) for some α_i, β_j ∈ R\{0} ⇒ X = Y.

Suppose we choose α_i = 1/m, ∀i, and β_j = 1/n, ∀j, in Corollary 5. Then Σ_{i=1}^m α_i k(·, x_i) and Σ_{j=1}^n β_j k(·, y_j) represent the mean functions in H. Note that the Parzen classifier in (4) is a mean classifier (one that separates the mean functions) in H, i.e., sign(⟨k(·, x), w⟩_H), where w = (1/m) Σ_{i=1}^m k(·, x_i) − (1/n) Σ_{i=1}^n k(·, y_i). Suppose k is strictly pd (more generally, suppose k is characteristic).
Then, by Corollary 5, the normal vector w to the hyperplane in H passing through the origin is zero, i.e., the mean functions coincide (and are therefore not classifiable), if and only if X = Y.

4 Generalizing the MMD for Classes of Characteristic Kernels

The discussion so far has been related to the characteristic property of k that makes γ_k a metric on P. We have seen that this characteristic property is of prime importance both in distribution testing, and to ensure classifiability of dissimilar distributions in the RKHS. We have not yet addressed how to choose among a selection/family of characteristic kernels, given a particular pair of distributions we wish to discriminate between. We introduce one approach to this problem in the present section.

Let M = R^d and k_σ(x, y) = exp(−σ∥x − y∥₂²), σ ∈ R₊, where σ represents the bandwidth parameter. {k_σ : σ ∈ R₊} is the family of Gaussian kernels and {γ_{k_σ} : σ ∈ R₊} is the family of MMDs indexed by the kernel parameter σ. Note that k_σ is characteristic for any σ ∈ R₊₊, and therefore γ_{k_σ} is a metric on P for any σ ∈ R₊₊. However, in practice, one would prefer a single number that defines the distance between P and Q. The question therefore to be addressed is how to choose an appropriate σ. The choice of σ has important implications for the statistical behavior of γ_{k_σ}. Note that as σ → 0, k_σ → 1, and as σ → ∞, k_σ → 0 a.e., which means γ_{k_σ}(P, Q) → 0 as σ → 0 or σ → ∞ for all P, Q ∈ P (this behavior is also exhibited by k_σ(x, y) = exp(−σ∥x − y∥₁) and k_σ(x, y) = σ²/(σ² + ∥x − y∥₂²), which are also characteristic). This means choosing sufficiently small or sufficiently large σ (depending on P and Q) makes γ_{k_σ}(P, Q) arbitrarily small. Therefore, σ has to be chosen appropriately in applications to effectively distinguish between P and Q. Presently, the applications involving MMD set σ heuristically [6, 7].
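The vanishing of γ_{k_σ} at extreme bandwidths can be observed numerically; a small sketch (the distributions, sample sizes, and the three bandwidth values are illustrative assumptions):

```python
import numpy as np

def mmd2_gauss(X, Y, sigma):
    """Empirical squared MMD for the Gaussian kernel exp(-sigma * (x - y)^2),
    computed from the three Gram-matrix means (biased V-statistic)."""
    g = lambda U, V: np.exp(-sigma * (U[:, None] - V[None, :]) ** 2)
    return g(X, X).mean() + g(Y, Y).mean() - 2 * g(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=300)
Y = rng.normal(1.0, 1.0, size=300)   # clearly different distribution

# As sigma -> 0 the kernel tends to 1 and as sigma -> inf it tends to 0 a.e.,
# so the estimated squared MMD collapses at both extremes
vals = {s: mmd2_gauss(X, Y, s) for s in (1e-6, 1.0, 1e6)}
```

Even for these two easily distinguishable Gaussians, the moderate bandwidth yields a much larger squared MMD than either extreme, illustrating why σ cannot be set arbitrarily.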
To generalize the MMD to families of kernels, we propose the following modification to γ_k, which yields a pseudometric on P:

γ(P, Q) = sup{γ_k(P, Q) : k ∈ K} = sup{∥Pk − Qk∥_H : k ∈ K}.    (6)

Note that γ is the maximal RKHS distance between P and Q over a family K of positive definite kernels. It is easy to check that if any k ∈ K is characteristic, then γ is a metric on P. Examples for K include: K_g := {e^{−σ∥x−y∥₂²}, x, y ∈ R^d : σ ∈ R₊}; K_l := {e^{−σ∥x−y∥₁}, x, y ∈ R^d : σ ∈ R₊}; K_ψ := {e^{−σψ(x,y)}, x, y ∈ M : σ ∈ R₊}, where ψ : M × M → R is a negative definite kernel; and K_rbf := {∫₀^∞ e^{−λ∥x−y∥₂²} dµ_σ(λ), x, y ∈ R^d, µ_σ ∈ M⁺ : σ ∈ Σ ⊂ R^d}, where M⁺ is the set of all finite nonnegative Borel measures µ_σ on R₊ that are not concentrated at zero; etc.

The proposal of γ(P, Q) in (6) can be motivated by the connection that we have established in Section 2 between γ_k and the Parzen window classifier. Since the Parzen window classifier depends on the kernel k, one can propose to learn the kernel as in support vector machines [8], wherein the kernel is chosen such that R^L_{F_k} in Theorem 1 is minimized over k ∈ K, i.e., inf_{k∈K} R^L_{F_k} = −sup_{k∈K} γ_k(P, Q) = −γ(P, Q). A similar motivation for γ can be provided based on (5), as learning the kernel in a hard-margin SVM by maximizing its margin. At this point, we briefly discuss the issue of normalized vs. unnormalized kernel families K in (6). We say a translation-invariant kernel k on R^d is normalized if ∫_M ψ(y) dy = c (some positive constant independent of the kernel parameter), where k(x, y) = ψ(x − y). K is a normalized kernel family if every kernel in K is normalized; otherwise we say it is unnormalized. For example, it is easy to see that K_g and K_l are unnormalized kernel families. Let us consider the normalized Gaussian family, K_g^n = {(σ/π)^{d/2} e^{−σ∥x−y∥₂²}, x, y ∈ R^d : σ ∈ [σ₀, ∞)}. It can be shown that for any k_σ, k_τ ∈ K_g^n with 0 < σ < τ < ∞, we have γ_{k_σ}(P, Q) ≥ γ_{k_τ}(P, Q), which means γ(P, Q) = γ_{k_{σ₀}}(P, Q).
Therefore, the generalized MMD reduces to a single-kernel MMD. A similar result also holds for the normalized inverse-quadratic kernel family, {√(2σ²/π) (σ² + ∥x − y∥₂²)^{−1}, x, y ∈ R : σ ∈ [σ₀, ∞)}. These examples show that the generalized MMD definition is usually not very useful if K is a normalized kernel family. In addition, σ₀ should be chosen beforehand, which is equivalent to heuristically setting the kernel parameter in γ_k. Note that σ₀ cannot be zero, because in the limiting case of σ → 0 the kernels approach a Dirac distribution, which means the limiting kernel is not bounded and therefore the definition of MMD in (1) does not hold. So, in this work, we consider unnormalized kernel families to render the definition of generalized MMD in (6) useful.

To use γ in statistical applications where P and Q are known only through i.i.d. samples {X_i}_{i=1}^m and {Y_j}_{j=1}^n respectively, we require its estimator γ(P_m, Q_n) to be consistent, where P_m and Q_n represent the empirical measures based on {X_i}_{i=1}^m and {Y_j}_{j=1}^n. For k measurable and bounded, [6, 12] have shown that γ_k(P_m, Q_n) is a √(mn/(m + n))-consistent estimator of γ_k(P, Q). The statistical consistency of γ(P_m, Q_n) is established in the following theorem, which uses tools from U-process theory [2, Chapters 3, 5]. We begin with the following definition.
Then, with probability at least 1 −δ, |γ(Pm, Qn) −γ(P, Q)| ≤A, where A = r 16Um(K; {Xi}) m + 16Un(K; {Yi}) n + ( √ 8ν + q 36ν log 4 δ )√m + n √mn . (8) From (8), it is clear that if Um(K; {Xi}) = OP(1) and Un(K; {Yi}) = OQ(1), then γ(Pm, Qn) a.s. → γ(P, Q). The following result provides a bound on Um(K; {Xi}) in terms of the entropy integral. Lemma 8 (Entropy bound). For any K as in Theorem 7 with 0 ∈K, there exists a universal constant C such that Um(K; {Xi}m i=1) ≤C Z ν 0 log N(K, D, ϵ) dϵ, (9) where D(k1, k2) = 1 m hPm i<j(k1(Xi, Xj) −k2(Xi, Xj))2i 1 2 . N(K, D, ϵ) represents the ϵcovering number of K with respect to the metric D. Assuming K to be a VC-subgraph class, the following result, as a corollary to Lemma 8 provides an estimate of Um(K; {Xi}m i=1). Before presenting the result, we first provide the definition of a VC-subgraph class. Definition 9 (VC-subgraph class). The subgraph of a function g : M × R is the subset of M × R given by {(x, t) : t < g(x)}. A collection G of measurable functions on a sample space is called a VC-subgraph class, if the collection of all subgraphs of the functions in G forms a VC-class of sets (in M × R). The VC-index (also called the VC-dimension) of a VC-subgraph class, G is the same as the pseudodimension of G. See [1, Definition 11.1] for details. Corollary 10 (Um(K; {Xi}) for VC-subgraph, K). Suppose K is a VC-subgraph class with V (K) being the VC-index. Assume K satisfies the conditions in Theorem 7 and 0 ∈K. Then Um(K; {Xi}) ≤Cν log(C1V (K)(16e9)V (K)), (10) for some universal constants C and C1. Using (10) in (8), we have |γ(Pm, Qn) −γ(P, Q)| = OP,Q( p (m + n)/mn) and by the BorelCantelli lemma, |γ(Pm, Qn) −γ(P, Q)| a.s. →0. Now, the question reduces to which of the kernel classes, K have V (K) < ∞. [18, Lemma 12] showed that V (Kg) = 1 (also see [19]) and Um(Krbf) ≤C2Um(Kg), where C2 < ∞. It can be shown that V (Kψ) = 1 and V (Kl) = 1. 
All these classes satisfy the conditions of Theorem 7 and Corollary 10 and therefore provide consistent estimates of γ(P, Q) for any P, Q ∈ P. Examples of kernels on Rᵈ that are covered by these classes include the Gaussian, Laplacian, inverse multiquadrics, Matérn class, etc. Other choices for K that are popular in machine learning are linear combinations of kernels, K_lin := {kλ = Σᵢ₌₁ˡ λi ki | kλ is pd, Σᵢ₌₁ˡ λi = 1} and K_con := {kλ = Σᵢ₌₁ˡ λi ki | λi ≥ 0, Σᵢ₌₁ˡ λi = 1}. [13, Lemma 7] have shown that V(K_con) ≤ V(K_lin) ≤ l. Therefore, instead of using a class based on a fixed, parameterized kernel, one can also use a finite linear combination of kernels to compute γ.

So far, we have presented the metric property and the statistical consistency (of the empirical estimator) of γ. The question now is how to compute γ(Pm, Qn) in practice. To show this, in the following, we present two examples.

Example 11. Suppose K = K_g. Then γ(Pm, Qn) can be written as
$$\gamma^2(P_m, Q_n) = \sup_{\sigma \in \mathbb{R}_+} \Big[ \frac{1}{m^2} \sum_{i,j=1}^{m} e^{-\sigma\|X_i - X_j\|^2} + \frac{1}{n^2} \sum_{i,j=1}^{n} e^{-\sigma\|Y_i - Y_j\|^2} - \frac{2}{mn} \sum_{i,j=1}^{m,n} e^{-\sigma\|X_i - Y_j\|^2} \Big]. \quad (11)$$
The optimum σ* can be obtained by solving (11), and γ(Pm, Qn) = ∥Pm kσ* − Qn kσ*∥_{H_{σ*}}.

Example 12. Suppose K = K_con. Then γ(Pm, Qn) becomes
$$\gamma^2(P_m, Q_n) = \sup_{k \in K_{con}} \|P_m k - Q_n k\|^2_{\mathcal{H}} = \sup_{k \in K_{con}} \int\!\!\int k \, d(P_m - Q_n) \otimes (P_m - Q_n) = \sup\{\lambda^T a : \lambda^T \mathbf{1} = 1, \, \lambda \succeq 0\}, \quad (12)$$
where we have replaced k by Σᵢ₌₁ˡ λi ki. Here λ = (λ₁, . . . , λ_l) and (a)ᵢ = ∥Pm ki − Qn ki∥²_{Hᵢ} = (1/m²) Σ_{a,b=1}^{m} ki(Xa, Xb) + (1/n²) Σ_{a,b=1}^{n} ki(Ya, Yb) − (2/mn) Σ_{a,b=1}^{m,n} ki(Xa, Yb). It is easy to see that γ²(Pm, Qn) = max_{1≤i≤l} (a)ᵢ.

Similar examples can be provided for other K, where γ(Pm, Qn) can be computed by solving a semidefinite program (K = K_lin) or by constrained gradient descent (K = K_l, K_rbf). Finally, while the approach in (6) to generalizing γk is our focus in this paper, an alternative Bayesian strategy would be to define a non-negative finite measure λ over K, and to average γk over that measure, i.e., β(P, Q) := ∫_K γk(P, Q) dλ(k).
This also yields a pseudometric on P. That said, β(P, Q) ≤λ(K)γ(P, Q), ∀P, Q, which means if P and Q can be distinguished by β, they can be distinguished by γ, but not vice-versa. In this sense, γ is stronger than β. One further complication with the Bayesian approach is in defining a sensible λ over K. Note that γk0 (single kernel MMD based on k0) can be obtained by defining λ(k) = δ(k −k0) in β(P, Q). 5 Experiments In this section, we present a benchmark experiment that illustrates the generalized MMD proposed in Section 4 is preferred above the single kernel MMD where the kernel parameter is set heuristically. The experimental setup is as follows. Let p = N(0, σ2 p), a normal distribution in R with zero mean and variance, σ2 p. Let q be the perturbed version of p, given as q(x) = p(x)(1 + sin νx). Here p and q are the densities associated with P and Q respectively. It is easy to see that q differs from p at increasing frequencies with increasing ν. Let k(x, y) = exp(−(x −y)2/σ). Now, the goal is that given random samples drawn i.i.d. from P and Q (with ν fixed), we would like to test H0 : P = Q vs. H1 : P ̸= Q. The idea is that as ν increases, it will be harder to distinguish between P and Q for a fixed sample size. Therefore, using this setup we can verify whether the adaptive bandwidth selection achieved by γ (as the test statistic) helps to distinguish between P and Q at higher ν compared to γk with a heuristic σ. To this end, using γ(Pm, Qn) and γk(Pm, Qn) (with various σ) as test statistics Tmn, we design a test that returns H0 if Tmn ≤cmn, and H1 otherwise. The problem therefore reduces to finding cmn. cmn is determined as the (1 −α) quantile of the asymptotic distribution of Tmn under H0, which therefore fixes the type-I error (the probability of rejecting H0 when it is true) to α. The consistency of this test under γk (for any fixed σ) is proved in [6]. A similar result can be shown for γ under some conditions on K. We skip the details here. 
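As a concrete sketch of how Examples 11–12 and the test design above fit together, the code below estimates γ(Pm, Qn) for the Gaussian family by a grid search over bandwidths (standing in for the exact supremum over σ ∈ R₊; for a finite kernel set as in Example 12, the same grid-max computes max_i (a)_i), and thresholds the statistic against a permutation approximation of the null quantile c_mn. The permutation scheme (in place of the paper's bootstrap), the 1-D samples, and the bandwidth grid are our illustrative choices.

```python
import numpy as np

def mmd2(X, Y, sigma):
    # Biased V-statistic estimate of the squared MMD for the Gaussian
    # kernel k(x, y) = exp(-sigma * (x - y)^2), as in the objective (11).
    def gram(A, B):
        return np.exp(-sigma * (A[:, None] - B[None, :]) ** 2)
    m, n = len(X), len(Y)
    return (gram(X, X).sum() / m**2 + gram(Y, Y).sum() / n**2
            - 2.0 * gram(X, Y).sum() / (m * n))

def generalized_mmd(X, Y, sigmas):
    # gamma(Pm, Qn): supremum of the squared-MMD objective over the
    # bandwidth grid (a grid search stands in for optimizing over R+).
    vals = [mmd2(X, Y, s) for s in sigmas]
    best = int(np.argmax(vals))
    return np.sqrt(max(vals[best], 0.0)), sigmas[best]

def two_sample_test(X, Y, sigmas, alpha=0.05, n_perm=250, seed=0):
    # Reject H0: P = Q when gamma(Pm, Qn) exceeds the (1 - alpha)
    # quantile of its null distribution, here approximated by
    # recomputing the statistic on random splits of the pooled sample.
    rng = np.random.default_rng(seed)
    m, pooled = len(X), np.concatenate([X, Y])
    stat, _ = generalized_mmd(X, Y, sigmas)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(generalized_mmd(pooled[idx[:m]], pooled[idx[m:]], sigmas)[0])
    return stat > np.quantile(null, 1 - alpha)  # True = reject H0
```

Note that the grid search returns the maximizing bandwidth as well, which is the distribution-dependent σ* discussed in the experiments.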
In our experiments, we set m = n = 1000, σ²_p = 10, and draw two sets of independent random samples from Q. The distribution of Tmn is estimated by bootstrapping on these samples (250 bootstrap iterations are performed) and the associated 95th quantile (we choose α = 0.05) is computed. Since the performance of the test is judged by its type-II error (the probability of accepting H0 when H1 is true), we draw a random sample, one each from P and Q, and test whether P = Q. This process is repeated 300 times, and estimates of type-I and type-II errors are obtained for both γ and γk. 14 different values for σ are considered on a logarithmic scale of base 2 with exponents (−3, −2, −1, 0, 1, 3/2, 2, 5/2, 3, 7/2, 4, 5, 6), along with the median distance between samples as one more choice. 5 different choices for ν are considered: (1/2, 3/4, 1, 5/4, 3/2).

Figure 1: (a) Type-I and type-II errors (in %) for γ for varying ν. (b, c) Type-I and type-II errors (in %) for γk (with different σ) for varying ν. The dotted line in (c) corresponds to the median heuristic, which shows that its associated type-II error is very large at large ν. (d) Box plot of log σ grouped by ν, where σ is selected by γ. (e) Box plot of the median distance between points (which is also a choice for σ), grouped by ν. Refer to Section 5 for details.

Figure 1(a) shows the estimated type-I and type-II errors using γ as the test statistic for varying ν. Note that the type-I error is close to its design value of 5%, while the type-II error is zero for all ν, which means γ distinguishes between P and Q for all perturbations.
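Since 0 ≤ 1 + sin(νx) ≤ 2, samples from the perturbed density q(x) = p(x)(1 + sin νx) can be drawn by rejection from p. The sampler below is a minimal sketch we add for illustration; the paper does not specify how q is sampled, and the function name and batching are our choices.

```python
import numpy as np

def sample_q(n, nu, sigma_p, rng=None):
    # Draw n i.i.d. samples from q(x) = p(x) * (1 + sin(nu * x)) with
    # p = N(0, sigma_p^2), by rejection sampling: propose x ~ p and
    # accept with probability (1 + sin(nu * x)) / 2, valid because
    # 0 <= 1 + sin(nu * x) <= 2 (acceptance rate is about 1/2).
    rng = np.random.default_rng(rng)
    out = []
    while len(out) < n:
        x = rng.normal(0.0, sigma_p, size=2 * n)
        u = rng.uniform(size=2 * n)
        out.extend(x[u < (1.0 + np.sin(nu * x)) / 2.0])
    return np.asarray(out[:n])
```

A quick sanity check: under q the expectation of sin(νX) equals E_p[sin²(νx)], which is close to 1/2 for σ²_p = 10, so the accepted samples should be visibly tilted toward regions where sin(νx) > 0.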
Figures 1(b,c) show the estimates of type-I and type-II errors using γk as the test statistic for different σ and ν. Figure 1(d) shows the box plot for log σ, grouped by ν, where σ is the bandwidth selected by γ. Figure 1(e) shows the box plot of the median distance between points (which is also a choice for σ), grouped by ν. From Figures 1(c) and (e), it is easy to see that the median heuristic exhibits high type-II error for ν = 3 2, while γ exhibits zero type-II error (from Figure 1(a)). Figure 1(c) also shows that heuristic choices of σ can result in high type-II errors. It is intuitive to note that as ν increases, (which means the characteristic function of Q differs from that of P at higher frequencies), a smaller σ is needed to detect these changes. The advantage of using γ is that it selects σ in a distribution-dependent fashion and its behavior in the box plot shown in Figure 1(d) matches with the previously mentioned intuition about the behavior of σ with respect to ν. These results demonstrate the validity of using γ as a distance measure in applications. 6 Conclusions In this work, we have shown how MMD appears in binary classification, and thus that characteristic kernels are important in kernel-based classification algorithms. We have broadened the class of characteristic RKHSs to include those induced by strictly positive definite kernels (with particular application to kernels on non-compact domains, and/or kernels that are not translation invariant). We have further provided a convergent generalization of MMD over families of kernel functions, which becomes necessary even in considering relatively simple families of kernels (such as the Gaussian kernels parameterized by their bandwidth). The usefulness of the generalized MMD is illustrated experimentally with a two-sample testing problem. 
Acknowledgments The authors thank anonymous reviewers for their constructive comments and especially the reviewer who pointed out the connection between characteristic kernels and the achievability of Bayes risk. B. K. S. was supported by the MPI for Biological Cybernetics, National Science Foundation (grant DMS-MSPA 0625409), the Fair Isaac Corporation and the University of California MICRO program. A. G. was supported by grants DARPA IPTO FA8750-09-1-0141, ONR MURI N000140710747, and ARO MURI W911NF0810242. 8 References [1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, UK, 1999. [2] V. H. de la Pe˜na and E. Gin´e. Decoupling: From Dependence to Independence. Springer-Verlag, NY, 1999. [3] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York, 1996. [4] K. Fukumizu, A. Gretton, X. Sun, and B. Sch¨olkopf. Kernel measures of conditional dependence. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489–496, Cambridge, MA, 2008. MIT Press. [5] K. Fukumizu, B. K. Sriperumbudur, A. Gretton, and B. Sch¨olkopf. Characteristic kernels on groups and semigroups. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 473–480, 2009. [6] A. Gretton, K. M. Borgwardt, M. Rasch, B. Sch¨olkopf, and A. Smola. A kernel method for the two sample problem. In B. Sch¨olkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, 2007. [7] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Sch¨olkopf, and A. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592. MIT Press, 2008. [8] G. R. G. Lanckriet, N. Christianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. 
Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:24–72, 2004. [9] B. Sch¨olkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002. [10] B. Sch¨olkopf, B. K. Sriperumbudur, A. Gretton, and K. Fukumizu. RKHS representation of measures. In Learning Theory and Approximation Workshop, Oberwolfach, Germany, 2008. [11] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, UK, 2004. [12] A. J. Smola, A. Gretton, L. Song, and B. Sch¨olkopf. A Hilbert space embedding for distributions. In Proc. 18th International Conference on Algorithmic Learning Theory, pages 13–31. Springer-Verlag, Berlin, Germany, 2007. [13] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In G. Lugosi and H. U. Simon, editors, Proc. of the 19th Annual Conference on Learning Theory, pages 169–183, 2006. [14] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. G. Lanckriet, and B. Sch¨olkopf. Injective Hilbert space embeddings of probability measures. In R. Servedio and T. Zhang, editors, Proc. of the 21st Annual Conference on Learning Theory, pages 111–122, 2008. [15] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2002. [16] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008. [17] J. Stewart. Positive definite functions and generalizations, an historical survey. Rocky Mountain Journal of Mathematics, 6(3):409–433, 1976. [18] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. In Proc. of the 22nd Annual Conference on Learning Theory, 2009. [19] Y. Ying and D. X. Zhou. Learnability of Gaussians with flexible variances. Journal of Machine Learning Research, 8:249–276, 2007. 9
2009
Semi-Supervised Learning with the Graph Laplacian: The Limit of Infinite Unlabelled Data Boaz Nadler Dept. of Computer Science and Applied Mathematics Weizmann Institute of Science Rehovot, Israel 76100 boaz.nadler@weizmann.ac.il Nathan Srebro Toyota Technological Institute Chicago, IL 60637 nati@uchicago.edu Xueyuan Zhou Dept. of Computer Science University of Chicago Chicago, IL 60637 zhouxy@cs.uchicago.edu Abstract We study the behavior of the popular Laplacian Regularization method for SemiSupervised Learning at the regime of a fixed number of labeled points but a large number of unlabeled points. We show that in Rd, d ⩾2, the method is actually not well-posed, and as the number of unlabeled points increases the solution degenerates to a noninformative function. We also contrast the method with the Laplacian Eigenvector method, and discuss the “smoothness” assumptions associated with this alternate method. 1 Introduction and Setup In this paper we consider the limit behavior of two popular semi-supervised learning (SSL) methods based on the graph Laplacian: the regularization approach [15] and the spectral approach [3]. We consider the limit when the number of labeled points is fixed and the number of unlabeled points goes to infinity. This is a natural limit for SSL as the basic SSL scenario is one in which unlabeled data is virtually infinite. We can also think of this limit as “perfect” SSL, having full knowledge of the marginal density p(x). The premise of SSL is that the marginal density p(x) is informative about the unknown mapping y(x) we are trying to learn, e.g. since y(x) is expected to be “smooth” in some sense relative to p(x). Studying the infinite-unlabeled-data limit, where p(x) is fully known, allows us to formulate and understand the underlying smoothness assumptions of a particular SSL method, and judge whether it is well-posed and sensible. 
Understanding the infinite-unlabeled-data limit is also a necessary first step to studying the convergence of the finite-unlabeled-data estimator. We consider the following setup: Let p(x) be an unknown smooth density on a compact domain Ω ⊂ Rᵈ with a smooth boundary. Let y : Ω → Y be the unknown function we wish to estimate. In the case of regression Y = R, whereas in binary classification Y = {−1, 1}. The standard (transductive) semi-supervised learning problem is formulated as follows: Given l labeled points, (x₁, y₁), . . . , (x_l, y_l), with yᵢ = y(xᵢ), and u unlabeled points x_{l+1}, . . . , x_{l+u}, with all points xᵢ sampled i.i.d. from p(x), the goal is to construct an estimate of y(x_{l+i}) for any unlabeled point x_{l+i}, utilizing both the labeled and the unlabeled points. We denote the total number of points by n = l + u. We are interested in the regime where l is fixed and u → ∞.

2 SSL with Graph Laplacian Regularization

We first consider the following graph-based approach formulated by Zhu et al. [15]:
$$\hat{y}(x) = \arg\min_{y} I_n(y) \quad \text{subject to} \quad y(x_i) = y_i, \; i = 1, \ldots, l \quad (1)$$
where
$$I_n(y) = \frac{1}{n^2} \sum_{i,j} W_{i,j} \, (y(x_i) - y(x_j))^2 \quad (2)$$
is a Laplacian regularization term enforcing "smoothness" with respect to the n × n similarity matrix W. This formulation has several natural interpretations in terms of, e.g., random walks and electrical circuits [15]. These interpretations, however, refer to a fixed graph, over a finite set of points with given similarities. In contrast, our focus here is on the more typical scenario where the points xᵢ ∈ Rᵈ are a random sample from a density p(x), and W is constructed based on this sample. We would like to understand the behavior of the method in terms of the density p(x), particularly in the limit where the number of unlabeled points grows. Under what assumptions on the target labeling y(x) and on the density p(x) is the method (1) sensible? The answer, of course, depends on how the matrix W is constructed.
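For concreteness, the empirical regularizer (2) can be computed directly from the points and labels; the sketch below uses Gaussian similarities of the kind introduced in (3) (the filter choice and bandwidth handling are illustrative, not prescribed here).

```python
import numpy as np

def laplacian_regularizer(X, y, sigma):
    # I_n(y) = (1/n^2) * sum_{i,j} W_ij * (y_i - y_j)^2, as in (2),
    # with Gaussian similarities W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    diff2 = (y[:, None] - y[None, :]) ** 2
    return (W * diff2).sum() / len(X) ** 2
```

As expected from (2), any constant labeling has zero penalty, and labelings that change between nearby points are penalized most.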
We consider the common situation where the similarities are obtained by applying some decay filter to the distances:
$$W_{i,j} = G\Big(\frac{\|x_i - x_j\|}{\sigma}\Big) \quad (3)$$
where G : R₊ → R₊ is some function with an adequately fast decay. Popular choices are the Gaussian filter G(z) = e^{−z²/2} or the ε-neighborhood graph obtained by the step filter G(z) = 1_{z<1}. For simplicity, we focus here on the formulation (1) where the solution is required to satisfy the constraints at the labeled points exactly. In practice, the hard labeling constraints are often replaced with a softer loss-based data term, which is balanced against the smoothness term In(y), e.g. [14, 6]. Our analysis and conclusions apply to such variants as well.

Limit of the Laplacian Regularization Term

As the number of unlabeled examples grows, the regularization term (2) converges to its expectation, where the summation is replaced by integration w.r.t. the density p(x):
$$\lim_{n\to\infty} I_n(y) = I^{(\sigma)}(y) = \int_\Omega \int_\Omega G\Big(\frac{\|x - x'\|}{\sigma}\Big) (y(x) - y(x'))^2 \, p(x) p(x') \, dx \, dx'. \quad (4)$$
In the above limit, the bandwidth σ is held fixed. Typically, one would also drive the bandwidth σ to zero as n → ∞. There are two reasons for this choice. First, from a practical perspective, this makes the similarity matrix W sparse so it can be stored and processed. Second, from a theoretical perspective, this leads to a clear and well-defined limit of the smoothness regularization term In(y), at least when σ → 0 slowly enough¹, namely when σ = ω((log n / n)^{1/d}). If σ → 0 as n → ∞, and as long as nσᵈ/log n → ∞, then after appropriate normalization, the regularizer converges to a density-weighted gradient penalty term [7, 8]:
$$\lim_{n\to\infty} \frac{d}{C\sigma^{d+2}} I_n(y) = \lim_{\sigma\to 0} \frac{d}{C\sigma^{d+2}} I^{(\sigma)}(y) = J(y) = \int_\Omega \|\nabla y(x)\|^2 \, p(x)^2 \, dx \quad (5)$$
where C = ∫_{Rᵈ} ∥z∥² G(∥z∥) dz, and assuming 0 < C < ∞ (which is the case for both the Gaussian and the step filters).
This energy functional J(f) therefore encodes the notion of "smoothness" with respect to p(x) that is the basis of the SSL formulation (1) with the graph constructions specified by (3). To understand the behavior and appropriateness of (1) we must understand this functional and the associated limit problem:
$$\hat{y}(x) = \arg\min_{y} J(y) \quad \text{subject to} \quad y(x_i) = y_i, \; i = 1, \ldots, l \quad (6)$$

¹When σ = o((1/n)^{1/d}), all non-diagonal weights Wᵢ,ⱼ vanish (points no longer have any "close by" neighbors). We are not aware of an analysis covering the regime where σ decays roughly as (1/n)^{1/d}, but would be surprised if a qualitatively different meaningful limit is reached.

3 Graph Laplacian Regularization in R¹

We begin by considering the solution of (6) for one-dimensional data, i.e. d = 1 and x ∈ R. We first consider the situation where the support of p(x) is a continuous interval Ω = [a, b] ⊂ R (a and/or b may be infinite). Without loss of generality, we assume the labeled data is sorted in increasing order a ⩽ x₁ < x₂ < · · · < x_l ⩽ b. Applying the theory of variational calculus, the solution ŷ(x) satisfies inside each interval (xᵢ, xᵢ₊₁) the Euler-Lagrange equation
$$\frac{d}{dx}\Big(p^2(x)\,\frac{dy}{dx}\Big) = 0.$$
Performing two integrations and enforcing the constraints at the labeled points yields
$$y(x) = y_i + \frac{\int_{x_i}^{x} 1/p^2(t)\,dt}{\int_{x_i}^{x_{i+1}} 1/p^2(t)\,dt}\,(y_{i+1} - y_i) \quad \text{for } x_i \le x \le x_{i+1} \quad (7)$$
with y(x) = y₁ for a ⩽ x ⩽ x₁ and y(x) = y_l for x_l ⩽ x ⩽ b. If the support of p(x) is a union of disjoint intervals, the above analysis and the form of the solution apply in each interval separately. The solution (7) seems reasonable and desirable from the point of view of the "smoothness" assumptions: when p(x) is uniform, the solution interpolates linearly between labeled data points, whereas across low-density regions, where p(x) is close to zero, y(x) can change abruptly.
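The closed-form solution (7) is easy to evaluate numerically; the sketch below approximates the integrals of 1/p² by the trapezoid rule (the density argument, grid resolution, and function names are our illustrative choices).

```python
import numpy as np

def interpolate_1d(x, xi, xip1, yi, yip1, p, num=2001):
    # Evaluate the minimizer (7) on [x_i, x_{i+1}]:
    #   y(x) = y_i + (int_{x_i}^x p^-2) / (int_{x_i}^{x_{i+1}} p^-2) * (y_{i+1} - y_i)
    # p is a vectorized density function; integrals use the trapezoid rule.
    grid = np.linspace(xi, xip1, num)
    w = 1.0 / p(grid) ** 2
    cum = np.concatenate([[0.0], np.cumsum((w[1:] + w[:-1]) / 2 * np.diff(grid))])
    frac = np.interp(x, grid, cum / cum[-1])
    return yi + frac * (yip1 - yi)
```

For a uniform density this reproduces linear interpolation between the two labels, while a density that is small in part of the interval concentrates the change of y in that low-density region, matching the discussion above.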
Furthermore, the regularizer J(y) can be interpreted as a Reproducing Kernel Hilbert Space (RKHS) squared semi-norm, giving us additional insight into this choice of regularizer: Theorem 1. Let p(x) be a smooth density on Ω= [a, b] ⊂R such that Ap = 1 4 R b a 1/p2(t)dt < ∞. Then, J(f) can be written as a squared semi-norm J(f) = ∥f∥2 Kp induced by the kernel Kp(x, x′) = Ap −1 2 Z x′ x 1 p2(t)dt . (8) with a null-space of all constant functions. That is, ∥f∥Kp is the norm of the projection of f onto the RKHS induced by Kp. If p(x) is supported on several disjoint intervals, Ω= ∪i[ai, bi], then J(f) can be written as a squared semi-norm induced by the kernel Kp(x, x′) = ( 1 4 R bi ai dt p2(t) −1 2 R x′ x dt p2(t) if x, x′ ∈[ai, bi] 0 if x ∈[ai, bi], x′ ∈[aj, bj], i ̸= j (9) with a null-space spanned by indicator functions 1[ai,bi](x) on the connected components of Ω. Proof. For any f(x) = P i αiKp(x, xi) in the RKHS induced by Kp: J(f) = Z df dx 2 p2(x)dx = X i,j αiαjJij (10) where Jij = Z d dxKp(x, xi) d dxKp(x, xj)p2(x)dx When xi and xj are in different connected components of Ω, the gradients of Kp(·, xi) and Kp(·, xj) are never non-zero together and Jij = 0 = Kp(xi, xj). When they are in the same connected component [a, b], and assuming w.l.o.g. a ⩽xi ⩽xj ⩽b: Jij = 1 4 "Z xi a 1 p2(t)dt + Z xj xi −1 p2(t)dt + Z b xj 1 p2(t)dt # = 1 4 Z b a 1 p2(t)dt −1 2 Z xj xi 1 p2(t)dt = Kp(xi, xj). (11) Substituting Jij = Kp(xi, xj) into (10) yields J(f) = P αiαjKp(xi, xj) = ∥f∥Kp. 3 Combining Theorem 1 with the Representer Theorem [13] establishes that the solution of (6) (or of any variant where the hard constraints are replaced by a data term) is of the form: y(x) = l X j=1 αjKp(x, xj) + X i βi1[ai,bi](x), where i ranges over the connected components [ai, bi] of Ω, and we have: J(y) = l X i,j=1 αiαjKp(xi, xj). 
(12) Viewing the regularizer as ∥y∥2 Kp suggests understanding (6), and so also its empirical approximation (1), by interpreting Kp(x, x′) as a density-based “similarity measure” between x and x′. This similarity measure indeed seems sensible: for a uniform density it is simply linearly decreasing as a function of the distance. When the density is non-uniform, two points are relatively similar only if they are connected by a region in which 1/p2(x) is low, i.e. the density is high, but are much less “similar”, i.e. related to each other, when connected by a low-density region. Furthermore, there is no dependence between points in disjoint components separated by zero density regions. 4 Graph Laplacian Regularization in Higher Dimensions The analysis of the previous section seems promising, at it shows that in one dimension, the SSL method (1) is well posed and converges to a sensible limit. Regretfully, in higher dimensions this is not the case anymore. In the following theorem we show that the infimum of the limit problem (6) is zero and can be obtained by a sequence of functions which are certainly not a sensible extrapolation of the labeled points. Theorem 2. Let p(x) be a smooth density over Rd, d ⩾2, bounded from above by some constant pmax, and let (x1, y1), . . . , (xl, yl) be any (non-repeating) set of labeled examples. There exist continuous functions yǫ(x), for any ǫ > 0, all satisfying the constraints yǫ(xj) = yj, j = 1, . . . , l, such that J(yǫ) ǫ→0 −→0 but yǫ(x) ǫ→0 −→0 for all x ̸= xj, j = 1, . . . , l. Proof. We present a detailed proof for the case of l = 2 labeled points. The generalization of the proof to more labeled points is straightforward. Furthermore, without loss of generality, we assume the first labeled point is at x0 = 0 with y(x0) = 0 and the second labeled point is at x1 with ∥x1∥= 1 and y(x1) = 1. In addition, we assume that the ball B1(0) of radius one centered around the origin is contained in Ω= {x ∈Rd | p(x) > 0}. 
We first consider the case d > 2. Here, for any ε > 0, consider the function
$$y_\epsilon(x) = \min\Big(\frac{\|x\|}{\epsilon},\, 1\Big)$$
which indeed satisfies the two constraints y_ε(xᵢ) = yᵢ, i = 0, 1. Then,
$$J(y_\epsilon) = \int_{B_\epsilon(0)} \frac{p^2(x)}{\epsilon^2}\, dx \;\le\; \frac{p_{\max}^2}{\epsilon^2} \int_{B_\epsilon(0)} dx \;=\; p_{\max}^2 \, V_d \, \epsilon^{d-2} \quad (13)$$
where V_d is the volume of a unit ball in Rᵈ. Hence, the sequence of functions y_ε(x) satisfies the constraints, but for d > 2, inf_ε J(y_ε) = 0.

For d = 2, a more extreme example is necessary: consider the functions
$$y_\epsilon(x) = \frac{\log\big((\|x\|^2 + \epsilon)/\epsilon\big)}{\log\big((1 + \epsilon)/\epsilon\big)} \quad \text{for } \|x\| \le 1$$
and y_ε(x) = 1 for ∥x∥ > 1. These functions satisfy the two constraints y_ε(xᵢ) = yᵢ, i = 0, 1, and:
$$J(y_\epsilon) = \frac{4}{\big[\log\big(\tfrac{1+\epsilon}{\epsilon}\big)\big]^2} \int_{B_1(0)} \frac{\|x\|^2}{(\|x\|^2 + \epsilon)^2}\, p^2(x)\, dx \;\le\; \frac{4 p_{\max}^2}{\big[\log\big(\tfrac{1+\epsilon}{\epsilon}\big)\big]^2} \int_0^1 \frac{r^2}{(r^2 + \epsilon)^2}\, 2\pi r \, dr \;\le\; \frac{4\pi p_{\max}^2}{\log\big(\tfrac{1+\epsilon}{\epsilon}\big)} \;\xrightarrow{\epsilon\to 0}\; 0.$$

The implication of Theorem 2 is that regardless of the values at the labeled points, as u → ∞, the solution of (1) is not well posed. Asymptotically, the solution has the form of an almost everywhere constant function, with highly localized spikes near the labeled points, and so no learning is performed. In particular, an interpretation in terms of a density-based kernel Kp, as in the one-dimensional case, is not possible. Our analysis also carries over to a formulation where a loss-based data term replaces the hard label constraints, as in
$$\hat{y} = \arg\min_{y(x)} \frac{1}{l} \sum_{j=1}^{l} (y(x_j) - y_j)^2 + \gamma I_n(y)$$
In the limit of infinite unlabeled data, functions of the form y_ε(x) above have a zero data penalty term (since they exactly match the labels) and also drive the regularization term J(y) to zero. Hence, it is possible to drive the entire objective functional (the data term plus the regularization term) to zero with functions that do not generalize at all to unlabeled points.

4.1 Numerical Example

We illustrate the phenomenon detailed by Theorem 2 with a simple example. Consider a density p(x) in R², which is a mixture of two unit-variance spherical Gaussians, one per class, centered at the origin and at (4, 0).
We sample a total of n = 3000 points, and label two points from each of the two components (four total). We then construct a similarity matrix using a Gaussian filter with σ = 0.4. Figure 1 depicts the predictor ˆy(x) obtained from (1). In fact, two different predictors are shown, obtained by different numerical methods for solving (1). Both methods are based on the observation that the solution ˆy(x) of (1) satisfies: ˆy(xi) = n X j=1 Wij ˆy(xj) / n X j=1 Wij on all unlabeled points i = l + 1, . . . , l + u. (14) Combined with the constraints of (1), we obtain a system of linear equations that can be solved by Gaussian elimination (here invoked through MATLAB’s backslash operator). This is the method used in the top panels of Figure 1. Alternatively, (14) can be viewed as an update equation for ˆy(xi), which can be solved via the power method, or label propagation [2, 6]: start with zero labels on the unlabeled points and iterate (14), while keeping the known labels on x1, . . . , xl. This is the method used in the bottom panels of Figure 1. As predicted, ˆy(x) is almost constant for almost all unlabeled points. Although all values are very close to zero, thresholding at the “right” threshold does actually produce sensible results in terms of the true -1/+1 labels. However, beyond being inappropriate for regression, a very flat predictor is still problematic even from a classification perspective. First, it is not possible to obtain a meaningful confidence measure for particular labels. Second, especially if the size of each class is not known apriori, setting the threshold between the positive and negative classes is problematic. In our example, setting the threshold to zero yields a generalization error of 45%. The differences between the two numerical methods for solving (1) also point out to another problem with the ill-posedness of the limit problem: the solution is numerically very un-stable. 
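The observation (14) underlying both numerical methods can be sketched as a direct linear solve: on the unlabeled block, the harmonic condition is equivalent to L_uu y_u = −L_ul y_l with L = D − W. The code below is our illustrative NumPy translation of the "direct inversion" route, not the authors' MATLAB implementation.

```python
import numpy as np

def harmonic_solution(W, labeled_idx, labels):
    # Solve y_i = sum_j W_ij y_j / sum_j W_ij on the unlabeled points (14),
    # with y fixed to the given labels on labeled_idx, by Gaussian
    # elimination on the block system  L_uu y_u = -L_ul y_l,  L = D - W.
    n = W.shape[0]
    y = np.zeros(n)
    y[labeled_idx] = labels
    unlab = np.setdiff1d(np.arange(n), labeled_idx)
    L = np.diag(W.sum(axis=1)) - W
    y[unlab] = np.linalg.solve(L[np.ix_(unlab, unlab)],
                               -L[np.ix_(unlab, labeled_idx)] @ y[labeled_idx])
    return y
```

Iterating (14) from zero initial labels, as in label propagation, converges to the same fixed point; the direct solve simply reaches it in one step (at the cost of a dense linear system). By the maximum principle, the solution on unlabeled points always lies between the extreme label values.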
A more quantitative evaluation, which also validates that the effect in Figure 1 is not a result of choosing a "wrong" bandwidth σ, is given in Figure 2. We again simulated data from a mixture of two Gaussians, one Gaussian per class, this time in 20 dimensions, with one labeled point per class, and an increasing number of unlabeled points. In Figure 2 we plot the squared error and the classification error of the resulting predictor ŷ(x). We plot the classification error both when a threshold of zero is used (i.e. the class is determined by sign(ŷ(x))) and with the ideal threshold minimizing the test error. For each unlabeled sample size, we choose the bandwidth σ yielding the best test performance (this is a "cheating" approach which provides a lower bound on the error of the best method for selecting the bandwidth). As the number of unlabeled examples increases, the squared error approaches 1, indicating a flat predictor. Using a threshold of zero leads to an increase in the classification error, possibly due to numerical instability. Interestingly, although the predictors become very flat, the classification error using the ideal threshold actually improves slightly. Note that ideal classification performance is achieved with a significantly larger bandwidth than the bandwidth minimizing the squared loss, i.e. when the predictor is even flatter.

Figure 1: Left plots: minimizer of Eq. (1). Right plots: the resulting classification according to sign(y). The four labeled points are shown by green squares. Top: minimization via Gaussian elimination (MATLAB backslash); sign error 45%. Bottom: minimization via label propagation with 1000 iterations; sign error 17.1%. The label-propagation solution has not yet converged, despite small residuals of the order of 2 · 10⁻⁴.

Figure 2: Squared error (top), classification error with a threshold of zero (center), and minimal classification error using the ideal threshold (bottom), of the minimizer of (1) as a function of the number of unlabeled points. For each error measure and sample size, the bandwidth minimizing the test error was used, and is plotted.

4.2 Probabilistic Interpretation, Exit and Hitting Times

As mentioned above, the Laplacian regularization method (1) has a probabilistic interpretation in terms of a random walk on the weighted graph. Let x(t) denote a random walk on the graph with transition matrix M = D⁻¹W, where D is a diagonal matrix with Dᵢᵢ = Σⱼ Wᵢⱼ. Then, for the binary classification case with yᵢ = ±1 we have [15]:
$$\hat{y}(x_i) = 2\,\Pr\big[\, x(t) \text{ hits a point labeled } +1 \text{ before hitting a point labeled } -1 \;\big|\; x(0) = x_i \,\big] - 1$$
We present an interpretation of our analysis in terms of the limiting properties of this random walk. Consider, for simplicity, the case where the two classes are separated by a low-density region. Then, the random walk has two intrinsic quantities of interest. The first is the mean exit time from one cluster to the other, and the other is the mean hitting time to the labeled points in that cluster. As the number of unlabeled points increases and σ → 0, the random walk converges to a diffusion process [12].
While the mean exit time then converges to a finite value corresponding to its diffusion analogue, the hitting time to a labeled point increases to infinity (as these become absorbing boundaries of measure zero). With more and more unlabeled data the random walk will fully mix, forgetting where it started, before it hits any label. Thus, the probability of hitting +1 before −1 will become uniform across the entire graph, independent of the starting location xi, yielding a flat predictor. 5 Keeping σ Finite At this point, a reader may ask whether the problems found in higher dimensions are due to taking the limit σ →0. One possible objection is that there is an intrinsic characteristic scale for the data σ0 where (with high probability) all points at a distance ∥xi −xj∥< σ0 have the same label. If this is the case, then it may not necessarily make sense to take values of σ < σ0 in constructing W. However, keeping σ finite while taking the number of unlabeled points to infinity does not resolve the problem. On the contrary, even the one-dimensional case becomes ill-posed in this case. To see this, consider a function y(x) which is zero everywhere except at the labeled points, where y(xj) = yj. With a finite number of labeled points of measure zero, I(σ)(y) = 0 in any dimension 6 −2 0 2 4 6 −1 −0.5 0 0.5 1 50 points x y −2 0 2 4 6 −1 −0.5 0 0.5 1 500 points −2 0 2 4 6 −1 −0.5 0 0.5 1 3500 points Figure 3: Minimizer of (1) for a 1-d problem with a fixed σ = 0.4, two labeled points and an increasing number of unlabeled points. and for any fixed σ > 0. While this limiting function is discontinuous, it is also possible to construct a sequence of continuous functions yǫ that all satisfy the constraints and for which I(σ)(yǫ) ǫ→0 −→0. This behavior is illustrated in Figure 3. We generated data from a mixture of two 1-D Gaussians centered at the origin and at x = 4, with one Gaussian labeled −1 and the other +1. 
We used two labeled points at the centers of the Gaussians and an increasing number of randomly drawn unlabeled points. As predicted, with a fixed σ, although the solution is reasonable when the number of unlabeled points is small, it becomes flatter, with sharp spikes on the labeled points, as u → ∞.

6 Fourier-Eigenvector Based Methods

Before we conclude, we discuss a different approach for SSL, also based on the Graph Laplacian, suggested by Belkin and Niyogi [3]. Instead of using the Laplacian as a regularizer, constraining candidate predictors y(x) non-parametrically to those with small $I_n(y)$ values, here the predictors are constrained to the low-dimensional space spanned by the first few eigenvectors of the Laplacian: The similarity matrix W is computed as before, and the Graph Laplacian matrix L = D − W is considered (recall D is a diagonal matrix with $D_{ii} = \sum_j W_{ij}$). Only predictors

$\hat{y}(x) = \sum_{j=1}^{p} a_j e_j$   (15)

spanned by the first p eigenvectors $e_1, \ldots, e_p$ of L (with smallest eigenvalues) are considered. The coefficients $a_j$ are chosen by minimizing a loss function on the labeled data, e.g. the squared loss:

$(\hat{a}_1, \ldots, \hat{a}_p) = \arg\min \sum_{j=1}^{l} (y_j - \hat{y}(x_j))^2.$   (16)

Unlike the Laplacian Regularization method (1), the Laplacian Eigenvector method (15)–(16) is well posed in the limit u → ∞. This follows directly from the convergence of the eigenvectors of the graph Laplacian to the eigenfunctions of the corresponding Laplace-Beltrami operator [10, 4]. Eigenvector based methods were shown empirically to provide competitive generalization performance on a variety of simulated and real world problems. Belkin and Niyogi [3] motivate the approach by arguing that 'the eigenfunctions of the Laplace-Beltrami operator provide a natural basis for functions on the manifold and the desired classification function can be expressed in such a basis'.
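The eigenvector method (15)–(16) can be sketched directly in numpy. The 1-d point cloud, kernel bandwidth, labeled indices, and choice of p below are illustrative assumptions of ours, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 60))          # toy 1-d point cloud
sigma = 0.1
# Gaussian similarity matrix W and unnormalized Laplacian L = D - W.
W = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
D = np.diag(W.sum(axis=1))
L = D - W

# eigh returns eigenvalues in ascending order for a symmetric matrix,
# so the first p columns are the smallest-eigenvalue eigenvectors.
eigvals, eigvecs = np.linalg.eigh(L)
p = 5
E = eigvecs[:, :p]

labeled = np.array([5, 50])                  # two labeled indices (toy)
y_l = np.array([-1.0, 1.0])
# (16): least-squares fit of the coefficients a on the labeled rows of E.
a, *_ = np.linalg.lstsq(E[labeled], y_l, rcond=None)
y_hat = E @ a                                # (15): predictor at all points
```

With fewer labeled points than eigenvectors the fit interpolates the labels exactly; generalization then depends entirely on how well the target is captured by the p leading eigenvectors, which is exactly the issue discussed next.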
In our view, the success of the method is actually not due to data lying on a low-dimensional manifold, but rather due to the low density separation assumption, which states that different class labels form high-density clusters separated by low density regions. Indeed, under this assumption and with sufficient separation between the clusters, the eigenfunctions of the graph Laplace-Beltrami operator are approximately piecewise constant in each of the clusters, as in spectral clustering [12, 11], providing a basis for a labeling that is constant within clusters but variable across clusters. In other settings, such as data uniformly distributed on a manifold but without any significant cluster structure, the success of eigenvector based methods critically depends on how well the unknown classification function can be approximated by a truncated expansion with relatively few eigenvectors. We illustrate this issue with the following three-dimensional example: Let p(x) denote the uniform density in the box [0, 1] × [0, 0.8] × [0, 0.6], where the box lengths are different to prevent eigenvalue multiplicity. Consider learning three different functions, $y_1(x) = 1_{x_1 > 0.5}$, $y_2(x) = 1_{x_1 > x_2/0.8}$ and $y_3(x) = 1_{x_2/0.8 > x_3/0.6}$. Even though all three functions are relatively simple, all having a linear separating boundary between the classes on the manifold, as shown in the experiment described in Figure 4, the Eigenvector based method (15)–(16) gives markedly different generalization performances on the three targets. This happens both when the number of eigenvectors p is set to p = l/5, as suggested by Belkin and Niyogi, and for the optimal (oracle) value of p selected on the test set (i.e. a "cheating" choice representing an upper bound on the generalization error of this method).

Figure 4: Left three panels: Generalization performance of the Eigenvector Method (15)–(16) for the three different functions described in the text (panel titles: p = #labeled points/5; optimal p; 20 labeled points). All panels use n = 3000 points. Prediction counts the number of sign agreements with the true labels. Rightmost panel (Approx. Error): best fit when many (all 3000) points are used, representing the best we can hope for with a few leading eigenvectors.

The reason for this behavior is that $y_2(x)$, and even more so $y_3(x)$, cannot be as easily approximated by the very few leading eigenfunctions: even though they seem "simple" and "smooth", they are significantly more complicated than $y_1(x)$ in terms of the measure of simplicity implied by the Eigenvector Method. Since the density is uniform, the graph Laplacian converges to the standard Laplacian and its eigenfunctions have the form $\psi_{i,j,k}(x) = \cos(i \pi x_1) \cos(j \pi x_2 / 0.8) \cos(k \pi x_3 / 0.6)$, making it hard to represent simple decision boundaries which are not axis-aligned.

7 Discussion

Our results show that a popular SSL method, the Laplacian Regularization method (1), is not well-behaved in the limit of infinite unlabeled data, despite its empirical success in various SSL tasks. The empirical success might be due to two reasons. First, it is possible that with a large enough number of labeled points relative to the number of unlabeled points, the method is well behaved. This regime, where the number of both labeled and unlabeled points grows while l/u is fixed, has recently been analyzed by Wasserman and Lafferty [9]. However, we do not find this regime particularly satisfying, as we would expect that having more unlabeled data available should improve performance rather than require more labeled points or make the problem ill-posed. It also places the user in the delicate situation of choosing the "just right" number of unlabeled points without any theoretical guidance.
Second, in our experiments we noticed that although the predictor $\hat{y}(x)$ becomes extremely flat, in binary tasks it is still typically possible to find a threshold leading to a good classification performance. We do not know of any theoretical explanation for such behavior, nor how to characterize it. Obtaining such an explanation would be very interesting, and in a sense crucial to the theoretical foundation of the Laplacian Regularization method. On a very practical level, such a theoretical understanding might allow us to correct the method so as to avoid the numerical instability associated with flat predictors, and perhaps also make it appropriate for regression.

The reason that the Laplacian regularizer (1) is ill-posed in the limit is that the first order gradient is not a sufficient penalty in high dimensions. This fact is well known in spline theory, where the Sobolev Embedding Theorem [1] indicates one must control at least (d+1)/2 derivatives in $\mathbb{R}^d$. In the context of Laplacian regularization, this can be done using the iterated Laplacian: replacing the graph Laplacian matrix L = D − W, where D is the diagonal degree matrix, with $L^{(d+1)/2}$ (the matrix raised to the (d+1)/2 power). In the infinite unlabeled data limit, this corresponds to regularizing all order-(d+1)/2 (mixed) partial derivatives. In the typical case of a low-dimensional manifold in a high dimensional ambient space, the order of iteration should correspond to the intrinsic, rather than ambient, dimensionality, which poses a practical problem of estimating this usually unknown dimensionality. We are not aware of much practical work using the iterated Laplacian, nor of a good understanding of its appropriateness for SSL. A different approach leading to a well-posed solution is to also include an ambient regularization term [5]. However, the properties of the solution, and in particular its relation to various assumptions about the "smoothness" of y(x) relative to p(x), remain unclear.
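The iterated-Laplacian idea can be illustrated numerically. The path graph and the two test functions below are a toy construction of ours: raising L to a higher power penalizes rough functions more strongly relative to smooth ones, which is the mechanism that restores well-posedness in higher dimensions.

```python
import numpy as np

n = 50
# Unnormalized Laplacian of a path graph (Neumann-like boundary), so that
# y @ L @ y equals the sum of squared first differences of y.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

t = np.linspace(0, 1, n)
smooth = np.cos(np.pi * t)       # one slow half-oscillation
rough = np.cos(9 * np.pi * t)    # nine times the frequency

def penalty(y, m):
    # Iterated-Laplacian penalty y^T L^m y; m = 1 is the ordinary
    # regularizer (1), m = 2 the first iterate.
    return y @ np.linalg.matrix_power(L, m) @ y

ratio1 = penalty(rough, 1) / penalty(smooth, 1)
ratio2 = penalty(rough, 2) / penalty(smooth, 2)
# Iterating the Laplacian sharpens the separation between rough and smooth:
# for near-eigenvectors the ratio is roughly squared.
```

Since both test functions are close to Laplacian eigenvectors, `ratio2` is close to the square of `ratio1`, showing how each extra power of L amplifies the penalty gap between rough and smooth candidates.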
Acknowledgments

The authors would like to thank the anonymous referees for valuable suggestions. The research of BN was supported by the Israel Science Foundation (grant 432/06).

References

[1] R.A. Adams, Sobolev Spaces, Academic Press (New York), 1975.
[2] A. Azran, The rendezvous algorithm: multiclass semi-supervised learning with Markov random walks, ICML, 2007.
[3] M. Belkin, P. Niyogi, Using manifold structure for partially labelled classification, NIPS, vol. 15, 2003.
[4] M. Belkin, P. Niyogi, Convergence of Laplacian eigenmaps, NIPS, vol. 19, 2007.
[5] M. Belkin, P. Niyogi, S. Sindhwani, Manifold regularization: a geometric framework for learning from labeled and unlabeled examples, JMLR, 7:2399-2434, 2006.
[6] Y. Bengio, O. Delalleau, N. Le Roux, Label propagation and quadratic criterion, in Semi-Supervised Learning, Chapelle, Schölkopf and Zien, editors, MIT Press, 2006.
[7] O. Bousquet, O. Chapelle, M. Hein, Measure based regularization, NIPS, vol. 16, 2004.
[8] M. Hein, Uniform convergence of adaptive graph-based regularization, COLT, 2006.
[9] J. Lafferty, L. Wasserman, Statistical analysis of semi-supervised regression, NIPS, vol. 20, 2008.
[10] U. von Luxburg, M. Belkin, O. Bousquet, Consistency of spectral clustering, Annals of Statistics, vol. 36(2), 2008.
[11] M. Meila, J. Shi, A random walks view of spectral segmentation, AI and Statistics, 2001.
[12] B. Nadler, S. Lafon, I.G. Kevrekidis, R.R. Coifman, Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators, NIPS, vol. 18, 2006.
[13] B. Schölkopf, A. Smola, Learning with Kernels, MIT Press, 2002.
[14] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, B. Schölkopf, Learning with local and global consistency, NIPS, vol. 16, 2004.
[15] X. Zhu, Z. Ghahramani, J. Lafferty, Semi-supervised learning using Gaussian fields and harmonic functions, ICML, 2003.
Maximin affinity learning of image segmentation

Srinivas C. Turaga* (MIT), Kevin L. Briggman (Max-Planck Institute for Medical Research), Moritz Helmstaedter (Max-Planck Institute for Medical Research), Winfried Denk (Max-Planck Institute for Medical Research), H. Sebastian Seung (MIT, HHMI)

Abstract

Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity.

1 Introduction

Supervised learning has emerged as a serious contender in the field of image segmentation, ever since the creation of training sets of images with "ground truth" segmentations provided by humans, such as the Berkeley Segmentation Dataset [15].
Supervised learning requires 1) a parametrized algorithm that maps images to segmentations, 2) an objective function that quantifies the performance of a segmentation algorithm relative to ground truth, and 3) a means of searching the parameter space of the segmentation algorithm for an optimum of the objective function. In the supervised learning method presented here, the segmentation algorithm consists of a parametrized classifier that predicts the weights of a nearest neighbor affinity graph over image pixels, followed by a graph partitioner that thresholds the affinity graph and finds its connected components. Our objective function is the Rand index [18], which has recently been proposed as a quantitative measure of segmentation performance [23]. We "soften" the thresholding of the classifier output and adjust the parameters of the classifier by gradient learning based on the Rand index.

*sturaga@mit.edu

Figure 1: (left) Our segmentation algorithm. We first generate a nearest neighbor weighted affinity graph representing the degree to which nearest neighbor pixels should be grouped together. The segmentation is generated by finding the connected components of the thresholded affinity graph. (right) Affinity misclassification rates are a poor measure of segmentation performance. Affinity graph #1 makes only 1 error (dashed edge) but results in poor segmentations, while graph #2 generates a perfect segmentation despite making many affinity misclassifications (dashed edges).

Because maximin edges of the affinity graph play a key role in our learning method, we call it maximin affinity learning of image segmentation, or MALIS.
The minimax path and edge are standard concepts in graph theory, and maximin is the opposite-sign sibling of minimax. Hence our work can be viewed as a machine learning application of these graph theoretic concepts. MALIS focuses on improving classifier output at maximin edges, because classifying these edges incorrectly leads to genuine segmentation errors, the splitting or merging of segments. To the best of our knowledge, MALIS is the first supervised learning method that is based on optimizing a genuine measure of segmentation performance. The idea of training a classifier to predict the weights of an affinity graph is not novel. Affinity classifiers were previously trained to minimize the number of misclassified affinity edges [9, 16]. This is not the same as optimizing segmentations produced by partitioning the affinity graph. There have been attempts to train affinity classifiers to produce good segmentations when partitioned by normalized cuts [17, 2]. But these approaches do not optimize a genuine measure of segmentation performance such as the Rand index. The work of Bach and Jordan [2] is the closest to our work. However, they only minimize an upper bound to a renormalized version of the Rand index. Both approaches require many approximations to make the learning tractable. In other related work, classifiers have been trained to optimize performance at detecting image pixels that belong to object boundaries [16, 6, 14]. Our classifier can also be viewed as a boundary detector, since a nearest neighbor affinity graph is essentially the same as a boundary map, up to a sign inversion. However, we combine our classifier with a graph partitioner to produce segmentations. The classifier parameters are not trained to optimize performance at boundary detection, but to optimize performance at segmentation as measured by the Rand index. There are also methods for supervised learning of image labeling using Markov or conditional random fields [10]. 
But image labeling is more similar to multi-class pixel classification than to image segmentation, as the latter task may require distinguishing between multiple objects in a single image that all have the same label. In the cases where probabilistic random field models have been used for image parsing and segmentation, the models have either been simplistic for tractability reasons [12] or have been trained piecemeal. For instance, Tu et al. [22] separately train low-level discriminative modules based on a boosting classifier, and train high-level modules of their algorithm to model the joint distribution of the image and the labeling. These models have never been trained to minimize the Rand index.

2 Partitioning a thresholded affinity graph by connected components

Our class of segmentation algorithms is constructed by combining a classifier and a graph partitioner (see Figure 1). The classifier is used to generate the weights of an affinity graph. The nodes of the graph are image pixels, and the edges are between nearest neighbor pairs of pixels. The weights of the edges are called affinities. A high affinity means that the two pixels tend to belong to the same segment. The classifier computes the affinity of each edge based on an image patch surrounding the edge. The graph partitioner first thresholds the affinity graph by removing all edges with weights less than some threshold value θ. The connected components of this thresholded affinity graph are the segments of the image. For this class of segmentation algorithms, it is obvious that a single misclassified edge of the affinity graph can dramatically alter the resulting segmentation by splitting or merging two segments (see Fig. 1). This is why it is important to learn by optimizing a measure of segmentation performance rather than affinity prediction. We are well aware that connected components is an exceedingly simple method of graph partitioning.
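This simple partitioner (threshold the affinity graph, then find connected components) can be sketched in a few lines; the graph and threshold below are toy values of our choosing:

```python
from collections import deque

def segment(n, edges, theta):
    """edges: list of (u, v, affinity). Returns a component label per node."""
    # Keep only edges whose affinity survives the threshold theta.
    adj = {i: [] for i in range(n)}
    for u, v, a in edges:
        if a >= theta:
            adj[u].append(v)
            adj[v].append(u)
    # Label connected components of the thresholded graph by BFS.
    labels = [-1] * n
    next_label = 0
    for start in range(n):
        if labels[start] != -1:
            continue
        labels[start] = next_label
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if labels[v] == -1:
                    labels[v] = next_label
                    queue.append(v)
        next_label += 1
    return labels

# Toy chain 0-1-2-3: one weak edge (affinity 0.2) splits it into two segments.
labels = segment(4, [(0, 1, 0.9), (1, 2, 0.2), (2, 3, 0.8)], theta=0.5)
```

At test time this is all the partitioner does, which is why segmentation runs in time linear in the number of edges.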
More sophisticated algorithms, such as spectral clustering [20] or graph cuts [3], might be more robust to misclassifications of one or a few edges of the affinity graph. Why not use them instead? We have two replies to this question. First, because of the simplicity of our graph partitioning, we can derive a simple and direct method of supervised learning that optimizes a true measure of image segmentation performance. So far learning based on more sophisticated graph partitioning methods has fallen short of this goal [17, 2]. Second, even if it were possible to properly learn the affinities used by more sophisticated graph partitioning methods, we would still prefer our simple connected components. The classifier in our segmentation algorithm can also carry out sophisticated computations, if its representational power is sufficiently great. Putting the sophistication in the classifier has the advantage of making it learnable, rather than hand-designed. The sophisticated partitioning methods clean up the affinity graph by using prior assumptions about the properties of image segmentations. But these prior assumptions could be incorrect. The spirit of the machine learning approach is to use a large amount of training data and minimize the use of prior assumptions. If the sophisticated partitioning methods are indeed the best way of achieving good segmentation performance, we suspect that our classifier will learn them from the training data. If they are not the best way, we hope that our classifier will do even better.

3 The Rand index quantifies segmentation performance

Image segmentation can be viewed as a special case of the general problem of clustering, as image segments are clusters of image pixels. Long ago, Rand proposed an index of similarity between two clusterings [18]. Recently it has been proposed that the Rand index be applied to image segmentations [23]. Define a segmentation S as an assignment of a segment label $s_i$ to each pixel i.
The indicator function $\delta(s_i, s_j)$ is 1 if pixels i and j belong to the same segment ($s_i = s_j$) and 0 otherwise. Given two segmentations S and $\hat{S}$ of an image with N pixels, define the function

$1 - RI(\hat{S}, S) = \binom{N}{2}^{-1} \sum_{i<j} \left| \delta(s_i, s_j) - \delta(\hat{s}_i, \hat{s}_j) \right|$   (1)

which is the fraction of image pixel pairs on which the two segmentations disagree. We will refer to the function $1 - RI(\hat{S}, S)$ as the Rand index, although strictly speaking the Rand index is $RI(\hat{S}, S)$, the fraction of image pixel pairs on which the two segmentations agree. In other words, the Rand index is a measure of similarity, but we will often apply that term to a measure of dissimilarity. In this paper, the Rand index is applied to compare the output $\hat{S}$ of a segmentation algorithm with a ground truth segmentation S, and will serve as an objective function for learning. Figure 1 illustrates why the Rand index is a sensible measure of segmentation performance. The segmentation of affinity graph #1 incurs a huge Rand index penalty relative to the ground truth. A single wrongly classified edge of the affinity graph leads to an incorrect merger of two segments, causing many pairs of image pixels to be wrongly assigned to the same segment. On the other hand, the segmentation corresponding to affinity graph #2 has a perfect Rand index, even though there are misclassifications in the affinity graph. In short, the Rand index makes sense because it strongly penalizes errors in the affinity graph that lead to split and merger errors.

Figure 2: The Rand index quantifies segmentation performance by comparing the difference in pixel pair connectivity between the groundtruth and test segmentations. Pixel pair connectivities can be visualized as symmetric binary block-diagonal matrices $\delta(s_i, s_j)$. Each diagonal block corresponds to connected pixel pairs belonging to one of the image segments.
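The pairwise-disagreement form of Eq. (1) can be computed directly by brute force over pixel pairs; the toy segment labels below are illustrative and show how a merger is penalized by the product of the segment sizes:

```python
from itertools import combinations

def rand_disagreement(s, s_hat):
    """1 - RI from Eq. (1): fraction of pixel pairs on which the two
    segmentations disagree about connectivity. O(N^2), fine for small N."""
    n = len(s)
    disagree = sum(
        (s[i] == s[j]) != (s_hat[i] == s_hat[j])
        for i, j in combinations(range(n), 2)
    )
    return disagree / (n * (n - 1) / 2)

truth = [0, 0, 1, 1, 2, 2]
merged = [0, 0, 1, 1, 1, 1]   # groundtruth segments 1 and 2 wrongly merged
split = [0, 0, 1, 1, 2, 3]    # groundtruth segment 2 wrongly split
```

The merger wrongly connects 2 × 2 = 4 pixel pairs (penalty 4/15), while the split disconnects only the single pair inside the broken segment (penalty 1/15), matching the size-product penalty described in the Figure 2 caption.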
The Rand index incurs penalties when pixel pairs that must not be connected are connected, or vice versa. This corresponds to locations where the two matrices disagree. An erroneous merger of two groundtruth segments incurs a penalty proportional to the product of the sizes of the two segments. Split errors are similarly penalized.

4 Connectivity and maximin affinity

Recall that our segmentation algorithm works by finding connected components of the thresholded affinity graph. Let $\hat{S}$ be the segmentation produced in this way. To apply the Rand index to train our classifier, we need a simple way of relating the indicator function $\delta(\hat{s}_i, \hat{s}_j)$ in the Rand index to classifier output. In other words, we would like a way of characterizing whether two pixels are connected in the thresholded affinity graph. To do this, we introduce the concept of maximin affinity, which is defined for any pair of pixels in an affinity graph (the definition is generally applicable to any weighted graph). Let $A_{kl}$ be the affinity of pixels k and l. Let $\mathcal{P}_{ij}$ be the set of all paths in the graph that connect pixels i and j. For every path P in $\mathcal{P}_{ij}$, there is an edge (or edges) with minimal affinity. This is written as $\min_{\langle k,l \rangle \in P} A_{kl}$, where $\langle k,l \rangle \in P$ means that the edge between pixels k and l is in the path P. A maximin path $P^*_{ij}$ is a path between pixels i and j that maximizes the minimal affinity,

$P^*_{ij} = \arg\max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl}$   (2)

The maximin affinity of pixels i and j is the affinity of the maximin edge, or the minimal affinity of the maximin path,

$A^*_{ij} = \max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl}$   (3)

We are now ready for a trivial but important theorem.

Theorem 1. A pair of pixels is connected in the thresholded affinity graph if and only if their maximin affinity exceeds the threshold value.

Proof. By definition, a pixel pair is connected in the thresholded affinity graph if and only if there exists a path between them.
Such a path is equivalent to a path in the unthresholded affinity graph for which the minimal affinity is above the threshold value. This path in turn exists if and only if the maximin affinity is above the threshold value.

As a consequence of this theorem, pixel pairs can be classified as connected or disconnected by thresholding maximin affinities. Let $\hat{S}$ be the segmentation produced by thresholding the affinity graph $A_{ij}$ and then finding connected components. Then the connectivity indicator function is

$\delta(\hat{s}_i, \hat{s}_j) = H(A^*_{ij} - \theta)$   (4)

where H is the Heaviside step function. Maximin affinities can be computed efficiently using minimum spanning tree algorithms [8]. A maximum spanning tree is equivalent to a minimum spanning tree, up to a sign change of the weights. Any path in a maximum spanning tree is a maximin path. For our nearest neighbor affinity graphs, the maximin affinity of a pixel pair can be computed in $O(|E| \cdot \alpha(|V|))$, where |E| is the number of graph edges, |V| is the number of pixels, and $\alpha(\cdot)$ is the inverse Ackermann function, which grows sub-logarithmically. The full matrix $A^*_{ij}$ can be computed in time $O(|V|^2)$, since the computation can be shared. Note that maximin affinities are required for training, but not testing. For segmenting the image at test time, only a connected components computation need be performed, which takes time linear in the number of edges |E|.

5 Optimizing the Rand index by learning maximin affinities

Since the affinities and maximin affinities are both functions of the image I and the classifier parameters W, we will write them as $A_{ij}(I; W)$ and $A^*_{ij}(I; W)$, respectively. By Eq. (4) of the previous section, the Rand index of Eq. (1) takes the form

$1 - RI(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} \left| \delta(s_i, s_j) - H(A^*_{ij}(I; W) - \theta) \right|$

Since this is a discontinuous function of the maximin affinities, we make the usual relaxation by replacing $|\delta(s_i, s_j) - H(A^*_{ij}(I; W) - \theta)|$ with a continuous loss function $l(\delta(s_i, s_j), A^*_{ij}(I; W))$.
Any standard loss, such as the square loss, $\frac{1}{2}(x - \hat{x})^2$, or the hinge loss, can be used for $l(x, \hat{x})$. Thus we obtain a cost function suitable for gradient learning,

$E(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} l(\delta(s_i, s_j), A^*_{ij}(I; W)) = \binom{N}{2}^{-1} \sum_{i<j} l\left(\delta(s_i, s_j), \max_{P \in \mathcal{P}_{ij}} \min_{\langle k,l \rangle \in P} A_{kl}(I; W)\right)$   (5)

The max and min operations are continuous and differentiable (though not continuously differentiable). If the loss function l is smooth, and the affinity $A_{kl}(I; W)$ is a smooth function, then the gradient of the cost function is well-defined, and gradient descent can be used as an optimization method. Define $(k, l) = mm(i, j)$ to be the maximin edge for the pixel pair (i, j). If there is a tie, choose between the maximin edges at random. Then the cost function takes the form

$E(S, I; W) = \binom{N}{2}^{-1} \sum_{i<j} l(\delta(s_i, s_j), A_{mm(i,j)}(I; W))$

It is instructive to compare this with the cost function for standard affinity learning,

$E_{standard}(S, I; W) = \frac{2}{cN} \sum_{\langle i,j \rangle} l(\delta(s_i, s_j), A_{ij}(I; W))$

where the sum is over all nearest neighbor pixel pairs $\langle i, j \rangle$ and c is the number of nearest neighbors [9]. In contrast, the sum in the MALIS cost function is over all pairs of pixels, whether or not they are adjacent in the affinity graph. Note that a single edge can be the maximin edge for multiple pairs of pixels, so its affinity can appear multiple times in the MALIS cost function. Roughly speaking, the MALIS cost function is similar to the standard cost function, except that each edge in the affinity graph is weighted by the number of pixel pairs that it causes to be incorrectly classified.

6 Online stochastic gradient descent

Computing the cost function or its gradient requires finding the maximin edges for all pixel pairs. Such a batch computation could be used for gradient learning. However, online stochastic gradient learning is often more efficient than batch learning [13]. Online learning makes a gradient update of the parameters after each pair of pixels, and is implemented as described in the box.
Maximin affinity learning:
1. Pick a random pair of (not necessarily nearest neighbor) pixels i and j from a randomly drawn training image I.
2. Find a maximin edge mm(i, j).
3. Make the gradient update: $W \leftarrow W + \eta \frac{d}{dW} l(\delta(s_i, s_j), A_{mm(i,j)}(I; W))$.

Standard affinity learning:
1. Pick a random pair of nearest neighbor pixels i and j from a randomly drawn training image I.
2. Make the gradient update: $W \leftarrow W + \eta \frac{d}{dW} l(\delta(s_i, s_j), A_{ij}(I; W))$.

For comparison, we also show the standard affinity learning [9]. For each iteration, both learning methods pick a random pair of pixels from a random image. Both compute the gradient of the weight of a single edge in the affinity graph. However, the standard method picks a nearest neighbor pixel pair and trains the affinity of the edge between them. The maximin method picks a pixel pair of arbitrary separation and trains the minimal affinity on a maximin path between them. Effectively, our connected components procedure performs spatial integration over the nearest neighbor affinity graph to make connectivity decisions about pixel pairs at large distances. MALIS trains these global decisions, while standard affinity learning trains only local decisions. MALIS is superior because it truly learns segmentation, but this superiority comes at a price. The maximin computation requires that on each iteration the affinity graph be computed for the whole image. Therefore it is slower than the standard learning method, which requires only a local affinity prediction for the edge being trained. Thus there is a computational price to be paid for the optimization of a true segmentation error.
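Step 2 of the maximin procedure, finding maximin edges, can be sketched with the maximum-spanning-tree idea from Section 4. The union-find structure and the toy graph below are illustrative assumptions, not the paper's implementation:

```python
class UnionFind:
    """Disjoint-set structure with path halving (no rank, kept minimal)."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]
            a = self.parent[a]
        return a
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def maximin_affinities(n, edges):
    """edges: list of (affinity, u, v). Returns {(u, v): A*_uv} for u < v.

    Process edges in decreasing affinity order (Kruskal on the maximum
    spanning tree). An edge that merges two components is the maximin
    edge for every pixel pair that the merge newly connects.
    """
    uf = UnionFind(n)
    members = {i: [i] for i in range(n)}
    A_star = {}
    for a, u, v in sorted(edges, reverse=True):
        ru, rv = uf.find(u), uf.find(v)
        if ru == rv:
            continue
        for p in members[ru]:
            for q in members[rv]:
                A_star[(min(p, q), max(p, q))] = a
        uf.union(ru, rv)
        members[uf.find(ru)] = members.pop(ru) + members.pop(rv)
    return A_star

# Toy chain 0-1-2-3 with a weak link between pixels 1 and 2.
A_star = maximin_affinities(4, [(0.9, 0, 1), (0.2, 1, 2), (0.8, 2, 3)])
# Theorem 1 check: at threshold 0.5, exactly the pairs with maximin
# affinity above 0.5 are connected, giving components {0, 1} and {2, 3}.
connected_at_half = {pair for pair, a in A_star.items() if a > 0.5}
```

Note how the single weak edge (affinity 0.2) is the maximin edge for all four cross-component pairs, which is exactly why MALIS weights an edge by the number of pixel pairs it misclassifies.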
7 Application to electron microscopic images of neurons

7.1 Electron microscopic images of neural tissue

By 3d imaging of brain tissue at sufficiently high resolution, as well as identifying synapses and tracing all axons and dendrites in these images, it is possible in principle to reconstruct connectomes, complete "wiring diagrams" for a brain or piece of brain [19, 4, 21]. Axons can be narrower than 100 nm in diameter, necessitating the use of electron microscopy (EM) [19]. At such high spatial resolution, just one cubic millimeter of brain tissue yields teravoxel scale image sizes. Recent advances in automation are making it possible to collect such images [19, 4, 21], but image analysis remains a challenge. Tracing axons and dendrites is a very large-scale image segmentation problem requiring high accuracy. The images used for this study were from the inner plexiform layer of the rabbit retina, and were taken using Serial Block-Face Scanning Electron Microscopy [5]. Two large image volumes of 100^3 voxels were hand segmented and reserved for training and testing purposes.

7.2 Training convolutional networks for affinity classification

Any classifier that is a smooth function of its parameters can be used for maximin affinity learning. We have used convolutional networks (CN), but our method is not restricted to this choice. Convolutional networks have previously been shown to be effective for similar EM images of brain tissue [11]. We trained two identical four-layer CNs, one with standard affinity learning and the second with MALIS. The CNs contained 5 feature maps in each layer, with sigmoid nonlinearities. All filters in the CN were 5 × 5 × 5 in size. This led to an affinity classifier that uses a 17 × 17 × 17 cubic image patch to classify an affinity edge. We used the square-square loss function $l(x, \hat{x}) = x \cdot \max(0, 1 - \hat{x} - m)^2 + (1 - x) \cdot \max(0, \hat{x} - m)^2$, with a margin m = 0.3.
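The square-square loss above translates directly to code; a minimal sketch with the stated margin m = 0.3:

```python
def square_square_loss(x, x_hat, m=0.3):
    """Square-square loss: x in {0, 1} is the target connectivity,
    x_hat the predicted affinity.

    Positive pairs are penalized quadratically unless x_hat > 1 - m;
    negative pairs unless x_hat < m (a margin on both sides, with a
    flat zero-loss region in between for confident predictions)."""
    return (x * max(0.0, 1.0 - x_hat - m) ** 2
            + (1 - x) * max(0.0, x_hat - m) ** 2)
```

The flat regions mean confident, correct affinities contribute no gradient, so training effort concentrates on edges that are still near or beyond the margin.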
As noted earlier, maximin affinity learning can be significantly slower than standard affinity learning, due to the need for computing the entire affinity graph on each iteration, while standard affinity training need only predict the weight of a single edge in the graph. For this reason, we constructed a proxy training image dataset by picking all possible 21 × 21 × 21 sized overlapping sub-images from the original training set. Since each 21 × 21 × 21 sub-image is smaller than the original image, the size of the affinity graph needed to be predicted for the sub-image is significantly smaller, leading to faster training. A consequence of this approximation is that the maximum separation between image pixel pairs chosen for training is less than about 20 pixels. A second means of speeding up the maximin procedure is pretraining the maximin CN for 500,000 iterations using the fast standard affinity classification cost function. At the end, both CNs were trained for a total of 1,000,000 iterations, by which point the training error plateaued.

7.3 Maximin learning leads to dramatic improvement in segmentation performance

Figure 3: Quantification of segmentation performance on 3d electron microscopic images of neural tissue. A) Clustering accuracy measuring the number of correctly classified pixel pairs. B) and C) ROC curve and precision-recall quantification of pixel-pair connectivity classification shows near perfect performance. D) Segmentation error as measured by the number of splits and mergers. Legend: Standard (Train), Standard (Test), Minimax (Test), Minimax (Train).
We benchmarked the performance of the standard and maximin affinity classifiers by measuring pixel-pair connectivity classification performance using the Rand index. After training the standard and MALIS affinity classifiers, we generated affinity graphs for the training and test images. In principle, the training algorithm suggests a single threshold for the graph partitioning. In practice, one can generate a full spectrum of segmentations leading from over-segmentations to under-segmentations by varying the threshold parameter. In Fig. 3, we plot the Rand index for segmentations resulting from a range of threshold values. In images with large numbers of segments, most pixel pairs will be disconnected from one another, leading to a large imbalance between the number of connected and disconnected pixel pairs. This is reflected in the fact that the Rand index is over 95% for both segmentation algorithms. While this imbalance between positive and negative examples is not a significant problem for training the affinity classifier, it can make comparisons between classifiers difficult to interpret. Instead, we can use the ROC and precision-recall methodologies, which provide accurate quantification of classifier accuracy even in the presence of large class imbalance. From these curves, we observe that our maximin affinity classifier dramatically outperforms the standard affinity classifier. Our positive results have an intriguing interpretation. The poor performance of connected components when applied to a standard learned affinity classifier could be interpreted to imply that 1) a local classifier lacks the context important for good affinity prediction; 2) connected components is a poor strategy for image segmentation, since mistakes in the affinity prediction of just a few edges can merge or split segments.
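The Rand index [18] used above scores agreement between two segmentations over all pixel pairs. A minimal pair-counting sketch follows (our own naming; the O(n²) enumeration is for illustration only and would not scale to teravoxel volumes, where the equivalent contingency-table formulation is used instead):

```python
from itertools import combinations

def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs on which two segmentations agree about
    connectivity (same segment vs. different segments). seg_a and seg_b
    are flat sequences of segment labels over the same pixels."""
    n = len(seg_a)
    agree = sum((seg_a[i] == seg_a[j]) == (seg_b[i] == seg_b[j])
                for i, j in combinations(range(n), 2))
    return agree / (n * (n - 1) / 2)
```

Because most pairs in a many-segment image are disconnected in both segmentations, this score saturates near 1, which is exactly the imbalance discussed above.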
On the contrary, our experiments suggest that when trained properly, thresholded affinity classification followed by connected components can be an extremely competitive method of image segmentation.

8 Discussion

In this paper, we have trained an affinity classifier to produce affinity graphs that result in excellent segmentations when partitioned by the simple graph partitioning algorithm of thresholding followed by connected components. The key to good performance is training with a segmentation-based cost function, and the use of a powerful trainable classifier to predict affinity graphs. Once trained, our segmentation algorithm is fast. In contrast to classic graph-based segmentation algorithms where the partitioning phase dominates, our partitioning algorithm is simple and can partition graphs in time linearly proportional to the number of edges in the graph. We also do not require any prior knowledge of the number of image segments or image segment sizes at test time, in contrast to other graph partitioning algorithms [7, 20]. The formalism of maximin affinities used to derive our learning algorithm has connections to single-linkage hierarchical clustering, minimum spanning trees and ultrametric distances. Felzenszwalb and Huttenlocher [7] describe a graph partitioning algorithm based on a minimum spanning tree computation which resembles our segmentation algorithm, in part.

Figure 4: A 2d cross-section through a 3d segmentation of the test image. The maximin segmentation correctly segments several objects which are merged in the standard segmentation, and even correctly segments objects which are missing in the groundtruth segmentation. Not all segments merged in the standard segmentation are merged at locations visible in this cross section. Pixels colored black in the machine segmentations correspond to pixels completely disconnected from their neighbors and represent boundary regions.
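The partitioning step described here, thresholding followed by connected components, can be sketched with a union-find structure; the function name and edge-list representation are our own, not the authors' code:

```python
def threshold_components(n_nodes, edges, weights, theta):
    """Partition an affinity graph by keeping edges with weight > theta
    and taking connected components, via union-find; near-linear in the
    number of edges. `edges` is a list of (i, j) node-index pairs with
    matching `weights`. Returns one segment label per node."""
    parent = list(range(n_nodes))

    def find(i):
        # Path-halving find: walk to the root, shortcutting as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for (i, j), w in zip(edges, weights):
        if w > theta:                      # only supra-threshold edges connect
            parent[find(i)] = find(j)

    roots = [find(i) for i in range(n_nodes)]
    relabel = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [relabel[r] for r in roots]
```

Varying `theta` sweeps out the spectrum from over- to under-segmentation described in Section 7.3.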
The Ultrametric Contour Map algorithm [1] generates hierarchical segmentations nearly identical to those generated by varying the threshold of our graph partitioning algorithm. Neither of these methods incorporates a means for learning from labeled data, but our work shows how the performance of these algorithms can be improved by use of our maximin affinity learning.

Acknowledgements

SCT and HSS were supported in part by the Howard Hughes Medical Institute and the Gatsby Charitable Foundation.

References

[1] P. Arbelaez. Boundary extraction in natural images using ultrametric contour maps. Proc. POCV, 2006. [2] F. Bach and M. Jordan. Learning spectral clustering, with application to speech separation. The Journal of Machine Learning Research, 7:1963–2001, 2006. [3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(11):1222–1239, 2001. [4] K. L. Briggman and W. Denk. Towards neural circuit reconstruction with volume electron microscopy techniques. Curr Opin Neurobiol, 16(5):562–70, 2006. [5] W. Denk and H. Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol, 2(11):e329, 2004. [6] P. Dollár, Z. Tu, and S. Belongie. Supervised learning of edges and object boundaries. In CVPR, June 2006. [7] P. Felzenszwalb and D. Huttenlocher. Efficient Graph-Based Image Segmentation. International Journal of Computer Vision, 59(2):167–181, 2004. [8] B. Fischer, V. Roth, and J. Buhmann. Clustering with the connectivity kernel. In Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference. Bradford Book, 2004. [9] C. Fowlkes, D. Martin, and J. Malik. Learning affinity functions for image segmentation: combining patch-based and gradient-based approaches. Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, 2, 2003. [10] X. He, R.
Zemel, and M. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2. IEEE Computer Society, 2004. [11] V. Jain, J. Murray, F. Roth, S. Turaga, V. Zhigulin, K. Briggman, M. Helmstaedter, W. Denk, and H. Seung. Supervised learning of image restoration with convolutional networks. ICCV 2007, 2007. [12] S. Kumar and M. Hebert. Discriminative random fields: a discriminative framework for contextual interaction in classification. Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on, pages 1150–1157, 2003. [13] Y. LeCun, L. Bottou, G. Orr, and K. Müller. Efficient backprop. Lecture notes in computer science, pages 9–50, 1998. [14] M. Maire, P. Arbelaez, C. Fowlkes, and J. Malik. Using contours to detect and localize junctions in natural images. In IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008, pages 1–8, 2008. [15] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. Eighth Int’l Conf. Computer Vision, volume 2, pages 416–423, 2001. [16] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans Pattern Anal Mach Intell, 26(5):530–549, May 2004. [17] M. Meila and J. Shi. Learning segmentation by random walks. Advances in Neural Information Processing Systems, pages 873–879, 2001. [18] W. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, pages 846–850, 1971. [19] H. Seung. Reading the Book of Memory: Sparse Sampling versus Dense Mapping of Connectomes. Neuron, 62(1):17–29, 2009. [20] J. Shi and J. Malik. Normalized cuts and image segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [21] S. J. Smith. Circuit reconstruction tools today. Curr Opin Neurobiol, 17(5):601–608, Oct 2007. [22] Z. Tu, X. Chen, A. Yuille, and S. Zhu. Image parsing: Unifying segmentation, detection, and recognition. International Journal of Computer Vision, 63(2):113–140, 2005. [23] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 929–944, 2007.
Perceptual Multistability as Markov Chain Monte Carlo Inference

Samuel J. Gershman, Department of Psychology and Neuroscience Institute, Princeton University, Princeton, NJ 08540, sjgershm@princeton.edu
Edward Vul & Joshua B. Tenenbaum, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, {evul,jbt}@mit.edu

Abstract

While many perceptual and cognitive phenomena are well described in terms of Bayesian inference, the necessary computations are intractable at the scale of real-world tasks, and it remains unclear how the human mind approximates Bayesian computations algorithmically. We explore the proposal that for some tasks, humans use a form of Markov Chain Monte Carlo to approximate the posterior distribution over hidden variables. As a case study, we show how several phenomena of perceptual multistability can be explained as MCMC inference in simple graphical models for low-level vision.

1 Introduction

People appear to make rational statistical inferences from noisy, uncertain input in a wide variety of perceptual and cognitive domains [1, 9]. However, the computations for such inference, even for relatively small problems, are often intractable. For larger problems like those people face in the real world, the space of hypotheses that must be entertained is infinite. So how can people achieve solutions that seem close to the Bayesian ideal? Recent work has suggested that people may use approximate inference algorithms similar to those used for solving large-scale problems in Bayesian AI and machine learning [23, 4, 14]. “Rational models” of human cognition at the level of computational theories are often inspired by models for analogous inferences in machine learning.
In the same spirit of reverse engineering cognition, we can also look to the general-purpose approximation methods used in these engineering fields as the inspiration for “rational process models”: principled algorithmic models for how Bayesian computations are implemented approximately in the human mind. Several authors have recently proposed that humans approximate complex probabilistic inferences by sampling [19, 14, 21, 6, 4, 24, 23], constructing Monte Carlo estimates similar to those used in Bayesian statistics and AI [16]. A variety of psychological phenomena have natural interpretations in terms of Monte Carlo methods, such as resource limitations [4], stochastic responding [6, 23] and order effects [21, 14]. The Monte Carlo methods that have received most attention to date as rational process models are importance sampling and particle filtering, which are traditionally seen as best suited to certain classes of inference problems: static low-dimensional models and models with explicit sequential structure, respectively. Many problems in perception and cognition, however, require inference in high-dimensional models with sparse and noisy observations, where the correct global interpretation can only be achieved by propagating constraints from the ambiguous local information across the model. For these problems, Markov Chain Monte Carlo (MCMC) methods are often the method of choice in AI and machine vision [16]. Our goal in this paper is to explore the prospects for rational process models of perceptual inference based on MCMC. MCMC refers to a family of algorithms that sample from the joint posterior distribution in a high-dimensional model by gradually drifting through the hypothesis space of complete interpretations, following a Markov chain that asymptotically spends time at each point in the hypothesis space in proportion to its posterior probability.
MCMC algorithms are quite flexible, suitable for a wide range of approximate inference problems that arise in cognition, but with a particularly long history of application in visual inference problems ([8] and many subsequent papers). The chains of hypotheses generated by MCMC show characteristic dynamics distinct from other sampling algorithms: the hypotheses will be temporally correlated, and as the chain drifts through hypothesis space, it will tend to move from regions of low posterior probability to regions of high probability; hence hypotheses will tend to cluster around the modes. Here we show that the characteristic dynamics of MCMC inference in high-dimensional, sparsely coupled spatial models correspond to several well-known phenomena in visual perception, specifically the dynamics of multistable percepts. Perceptual multistability [13] has long been of interest both phenomenologically and theoretically for models of perception as Bayesian inference [7, 20, 22, 10]. The classic example of perceptual multistability is the Necker cube, a 2D line drawing of a cube perceived to alternate between two different depth configurations (Figure 1A). Another classic phenomenon, extensively studied in psychophysics but less well known outside the field, is binocular rivalry [2]: when incompatible images are presented to the two eyes, subjects report a percept that alternates between the image presented to the left eye and that presented to the right (e.g., Figure 1B). Bayesian modelers [7, 20, 22, 10] have interpreted these multistability phenomena as reflections of the shape of the posterior distribution arising from ambiguous observations, images that could have plausibly been generated by two or more distinct scenes. For the Necker cube, two plausible depth configurations have indistinguishable 2D projections; with binocular rivalry, two mutually exclusive visual inputs have equal perceptual fidelity.
Under these conditions, the posterior over scene interpretations is bimodal, and rivalry is thought to reflect periodic switching between the modes. Exactly how this “mode-switching” relates to the mechanisms by which the brain implements Bayesian perceptual inference is less clear, however. Here we explore the hypothesis that the dynamics of multistability can be understood in terms of the output of an MCMC algorithm drawing posterior samples in spatially structured probabilistic models for image interpretation. Traditionally, bistability has been explained in non-rational mechanistic terms, for example, in terms of physiological mechanisms for adaptation or reciprocal inhibition between populations of neurons. Dayan [7] studied network models for Bayesian perceptual inference that estimate the maximum a posteriori scene interpretation, and proposed that multistability might occur in the presence of a multimodal posterior due to an additional neural oscillatory process whose function is specifically to induce mode-switching. He speculated that this mechanism might implement a form of MCMC inference, but he did not pursue the connection formally. Our proposal is most closely related to the work of Sundareswara and Schrater [20, 22], who suggested that mode-switching in Necker cube-type images reflects a rational sampling-based algorithm for approximate Bayesian inference and decision making. They presented an elegant sampling scheme that could account for Necker cube bistability, with several key assumptions: (1) that the visual system draws a sequence of samples from the posterior over scene interpretations; (2) that the posterior probability of each sample is known; (3) that samples are weighted based on the product of their posterior probabilities and a memory decay process favoring more recently drawn samples; and (4) that perceptual decisions are made deterministically based on the sample with highest weight.
Our goal here is a simpler analysis that comes closer to the standard MCMC approaches used for approximate inference in Bayesian AI and machine vision, and that establishes a clearer link between the mechanisms of perception in the brain and rational approximate inference algorithms on the engineering side. As in most applications of Bayesian inference in machine vision [8, 16], we do not assume that the visual system has access to the full posterior distribution over scene interpretations, which is expected to be extremely high-dimensional and complex. The visual system might be able to evaluate only relative probabilities of two similar hypotheses (as in Metropolis-Hastings), or to compute local conditional posteriors of one scene variable conditioned on its neighbors (as in Gibbs sampling). We also do not make extra assumptions about weighting samples based on memory decay, or require that conscious perceptual decisions be based on a memory for samples; consciousness has access to only the current state of the Markov chain, reflecting the observer's current brain state. Here we show that several characteristic phenomena of multistability derive naturally from applying standard MCMC inference to Markov random fields (MRFs), high-dimensional, loosely coupled graphical models with the spatial structure characteristic of many low-level and mid-level vision problems. Specifically, we capture the classic findings of Gamma-distributed mode-switching times in bistable perception; the biasing effects of contextual stimuli; the situations in which fused (rather than bistable) percepts occur; and the propagation of perceptual switches in traveling waves across the visual field.

Figure 1: (A) Necker cube. (B) Binocular rivalry stimuli. (C) Markov random field image model with lattice and (D) ring topologies. Shaded nodes correspond to observed variables; unshaded nodes correspond to hidden variables.
Although it is unlikely that this MCMC scheme corresponds exactly to any process in the visual system, and it is almost surely too simplified or limited as a general account of perceptual multistability, our results suggest that MCMC could provide a promising foundation on which to build rational process-level accounts of human perception, and perhaps cognition more generally.

2 Markov random field image model

Our starting point is a simple and schematic model of vision problems embodying the idea that images are generated by a set of hidden variables with local dependencies. Specifically, we assume that each observed image element x_i is connected to a hidden variable z_i by a directed edge, and each hidden variable is connected to its neighbors (in the set c_i) by an undirected edge (thus implying that each hidden variable is conditionally independent of all others given its neighbors). This Markov property is often exploited in computer vision [8] because elements of an image tend to depend on their adjacent neighbors, but are less influenced by more distant elements. Formally, this assumption corresponds to a Markov random field (MRF). Different topologies of the MRF (e.g., lattice or ring) can be used to capture the structure of different visual objects (Figure 1C,D). The joint distribution over configurations of hidden and observed variables is given by

P(z, x) = Z^{-1} exp[ −∑_i R(x_i|z_i) − V(z_i|z_{c_i}) ],   (1)

where Z is a normalizing constant, and R and V are potential functions. In a Gaussian MRF, the conditional potential function over hidden node i is given by

V(z_i|z_{c_i}) = µ_i − λ ∑_{j∈c_i} (z_i − z_j)²,   (2)

where λ is a precision (inverse variance) parameter specifying the coupling between neighboring hidden nodes; when λ is large, a node will be strongly influenced by its neighbors. The µ_i term represents the prior mean of z_i, which can be used to encode contextual biases, as we discuss below.
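Equation (2) can be transcribed directly. The sketch below uses the lattice parameters reported later in the paper (µ = 0, λ = 0.25) as defaults; the function name is our own:

```python
def coupling_potential(z_i, z_neighbors, mu_i=0.0, lam=0.25):
    """Conditional potential V(z_i | z_{c_i}) of Eq. (2): the prior-mean
    term mu_i minus a quadratic coupling to the neighboring hidden nodes,
    weighted by the precision lam."""
    return mu_i - lam * sum((z_i - z_j) ** 2 for z_j in z_neighbors)
```

With a large `lam`, disagreement with neighbors dominates the potential; setting `mu_i` away from zero implements the contextual bias used later for the rivalry simulations.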
We construct the likelihood potential R(x_i|z_i) to express the ambiguity of the image by making it multimodal: several different hidden causes are equally likely to have generated the image. Since for our purposes only the likelihood of x_i matters, we can arbitrarily set x_i = 0 and formalize the multimodal likelihood as a mixture of Gaussians evaluated at points a and b:

R(x_i|z_i) = N(z_i; a, σ²) + N(z_i; b, σ²).   (3)

The computational problem for vision (as we are framing it) is to infer the hidden causes of an observed image. Given an observed image x, the posterior distribution over hidden causes z is

P(z|x) = P(x|z)P(z) / ∫_z P(x|z)P(z) dz.   (4)

There are a number of reasons why Equation 4 may be computationally intractable. One is that the integration in the denominator may be high-dimensional and lack an analytical solution. Another is that there may not exist a simple functional form for the posterior. Assuming it is intractable to perform exact inference, we now turn to approximate solutions based on sampling.

3 Markov chain Monte Carlo

The basic idea behind Monte Carlo methods is to approximate a distribution with a set of samples drawn from that distribution. In order to use Monte Carlo approximations, one must be able to draw samples from the posterior, but it is often impossible to do so directly. MCMC methods address this problem by drawing samples from a Markov chain that converges to the posterior distribution [16]. There are many variations of MCMC methods, but here we will focus on the simplest: the Metropolis algorithm [18]. Each step of the algorithm consists of two stages: a proposal stage and an acceptance stage. An accepted proposal is a sample from a Markov chain that provably converges to the posterior. We will refer to z^{(l)} as the “state” at step l. In the proposal stage, a new state z′ is proposed by generating a random sample from a proposal density Q(z′; z^{(l)}) that depends on the current state.
In the acceptance stage, this proposal is accepted with probability

P( z^{(l+1)} = z′ | z^{(l)} ) = min[ 1, P(z′|x) / P(z^{(l)}|x) ],   (5)

where we have assumed for simplicity that the proposal is symmetric: Q(z′; z) = Q(z; z′). If the proposal is rejected, the current state is repeated in the chain.

4 Results

We now show how the Metropolis algorithm applied to the MRF image model gives rise to a number of phenomena in binocular rivalry experiments. Unless mentioned otherwise, we use the following parameters in our simulations: µ = 0, λ = 0.25, σ = 0.1, a = 1, b = −1. For the ring topology, we used λ = 0.2 to compensate for the fewer neighbors around each node as compared to the lattice topology. The sampler was run for 200,000 iterations. For some simulations, we systematically manipulated certain parameters to demonstrate their role in the model. We have found that the precise values of these parameters have relatively little effect on the model's behavior. For all simulations we used a Gaussian proposal (with standard deviation 1.5) that alters the state of one hidden node (selected at random) on each iteration.

4.1 Distribution of dominance durations

One of the most robust findings in the literature on perceptual multistability is that switching times in binocular rivalry between different stable percepts tend to follow a Gamma-like distribution. In other words, the “dominance” durations of stability in one mode tend to be neither overwhelmingly short nor long. This effect is so characteristic of binocular rivalry that there have been countless psychophysical experiments measuring the differences in Gamma switching-time parameters across manipulations, and testing whether Gamma or log-normal distributions fit best [2]. To account for this characteristic behavior, many papers have described neural circuits that could produce switching oscillations with the right stochastic dynamics (e.g., [25]).
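As an aside, the single-node Metropolis sampler described in Sections 3-4 is compact enough to sketch in full, using the parameters above (σ = 0.1, a = 1, b = −1, λ = 0.25, proposal standard deviation 1.5, one randomly chosen node per iteration). The sign conventions here, treating R as a log-likelihood bonus and the quadratic coupling as a smoothness penalty, are our reading of Eqs. (1)-(3), and the function names are ours:

```python
import math
import random

# Parameters from the text: likelihood modes a = 1, b = -1,
# likelihood width sigma = 0.1, coupling precision lambda = 0.25.
A, B, SIGMA, LAM = 1.0, -1.0, 0.1, 0.25

def log_likelihood(z_i):
    """Log of the bimodal likelihood of Eq. (3): Gaussians at a and b."""
    g = lambda m: math.exp(-(z_i - m) ** 2 / (2 * SIGMA ** 2))
    return math.log(g(A) + g(B) + 1e-300)   # guard against log(0)

def log_post(z, neighbors):
    """Unnormalized log-posterior: likelihood bonus minus coupling penalty.
    `neighbors[i]` lists the indices coupled to node i."""
    ll = sum(log_likelihood(z_i) for z_i in z)
    pair = sum((z[i] - z[j]) ** 2
               for i in range(len(z)) for j in neighbors[i] if j > i)
    return ll - LAM * pair

def metropolis_step(z, neighbors, rng):
    """One Metropolis update (Eq. 5): perturb one randomly chosen node with
    a Gaussian proposal (sd 1.5); accept with probability min(1, P(z')/P(z))."""
    z_new = list(z)
    i = rng.randrange(len(z))
    z_new[i] += rng.gauss(0.0, 1.5)
    if math.log(rng.random() + 1e-300) <= log_post(z_new, neighbors) - log_post(z, neighbors):
        return z_new
    return z
```

Running many such steps on a 4 × 4 lattice (with `neighbors` listing each node's grid neighbors) and recording how many z_i are positive produces the kind of mode-hopping trace analyzed in this section.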
Existing rational process models of multistability [7, 20, 22] likewise appeal to specific implementational-level constraints to produce this effect. In contrast, here we show how Gamma-distributed dominance durations fall naturally out of MCMC operating on an MRF. We constructed a 4 × 4 grid to model a typical binocular rivalry grating. In the typical experiment reporting a Gamma distribution of dominance durations, subjects are asked to say which of two images corresponds to their “global” percept. To make the same query of the current state of our simulated MCMC chain, we defined a perceptual switch to occur when at least 2/3 of the hidden nodes turn positive or negative. Figure 2A shows a sample of the timecourse¹, and Figure 2B shows the distribution of dominance durations and maximum-likelihood estimates for the Gamma parameters α (shape) and β (scale), demonstrating that the durations produced by MCMC are well described by a Gamma distribution.

Figure 2: (A) Simulated timecourse of bistability in the lattice MRF. Plotted on the y-axis is the number of nodes with value greater than 0. The horizontal lines show the thresholds for a perceptual switch. (B) Distribution of simulated dominance durations (mean-normalized) for the MRF with lattice topology. Curves show Gamma distributions fitted to simulated data (with parameter values shown on the right) and empirical data, replotted from [17].

It is interesting to note that the MRF structure of the problem (representing the multivariate structure of low-level vision) is an important pre-condition to obtaining a Gamma-like distribution of dominance durations: when considering MCMC on only a single node, the measured dominance durations tend to be exponentially distributed. The Gamma distribution may arise in MCMC on an MRF because each hidden node takes an exponentially-distributed amount of time to switch (and these switches follow roughly one after another).
In these settings, the total amount of time until enough nodes switch to one mode will be Gamma-distributed (i.e., the sum of exponentially-distributed random variables is Gamma-distributed). [20, 22] also used this idea to explain mode-switching. In their model, each sample is paired with a weight initialized to the sample's posterior probability, and the sample with the largest weight is designated as the dominant percept. Since multiple samples may correspond to the same percept, a particular percept will lose dominance only when the weights on all such samples decrease below the weights on samples of the non-dominant percept. By assuming an exponential decay on the weights, the time it takes for a single sample to lose dominance will be approximately exponentially distributed, leading to a Gamma distribution on the time it takes for multiple samples of the same percept to lose dominance. Here we have attempted to capture this effect within a rational inference procedure by attributing the exponential dynamics to the operation of MCMC on individual nodes in the MRF, rather than a memory decay process on individual samples.

¹It may seem surprising that the model spends relatively little time near the extremes, and that switches are fairly gradual. This is not the phenomenology of bistability in a Necker cube, but it is the phenomenology of binocular rivalry with grating-like stimuli, where experiments have shown that substantial time is spent in transition periods [3]. It seems that this is the case in scenarios where a simple planar MRF with nearest-neighbor smoothness like the one we're considering is a good model. To capture the perception of depth in the Necker cube, or rivalry with more complex higher-level stimuli (like natural scenes), a more complex and densely interconnected graphical model would be required; in such cases the perceptual switching dynamics will be different.

4.2 Contextual biases

Figure 3: (A) Stimuli used by [5] in their experiment. On the top are the standard tilted grating patches presented dichoptically. On the bottom are the tilted grating patches superimposed on a background of rightward-tilting gratings, a contextual cue that biases dominance towards the rightward-tilting grating patch. (B) Simulated timecourse of transient preference for a lattice-topology MRF with and without a contextual cue (averaged over 100 runs of the sampler). (C) Empirical timecourse of transient preference fitted with a scaled cumulative Gaussian function, reprinted with permission from [17].

Much discussion in research on multistability revolves around the extent to which it is influenced by top-down processes like prior knowledge and attention [2]. In support of the existence of top-down influences, several studies have shown that contextual cues can bias the relative dominance of rival stimuli. For example, [5] superimposed rivalrous tilted grating patches on a background of either rightward or leftward tilting gratings (Figure 3A) and showed that the direction of background tilt shifted dominance towards the monocular stimulus with context-compatible tilt. Following [20, 22], we modeled this result by assuming that the effect of context is to shift the prior mean towards the contextually-biased interpretation. We simulated this contextual bias by setting the prior mean µ = 1. Figure 3B shows the timecourse of transient preference (probability of a particular interpretation at each timepoint) for the “context” and “no-context” simulations, illustrating this persistent bias. Another property of this timeseries is the initial bias exhibited by both the context and no-context conditions, a phenomenon observed experimentally [17, 22] (Figure 3C).
In fact, this is a distinctive property of Markov chains (as pointed out by [22]): MCMC algorithms generally take multiple iterations before they converge to the stationary distribution [16]. This initial period is known as the “burn-in.” Thus, human perceptual inference may similarly require an initial burn-in period to reach the stationary distribution.

4.3 Deviations from stable rivalry: fusion

Most models have focused on the “stable” portions of the bistable dynamics of rivalry; however, in addition to the mode-hopping behavior that characterizes this phenomenon, bistable percepts often produce other states. In some conditions the two percepts are known to fuse, rather than rival: the percept then becomes a composite or superposition of the two stimuli (and hence no alternation is perceived). This fused perceptual state can be induced most reliably by decreasing the distance in feature space between the two stimuli [11] (Figure 4B) or decreasing the contrast of both stimuli [15]. These relations are shown schematically in Figure 4A. Neither neural, nor algorithmic, nor computational models of rivalry have thus far attempted to explain these findings. In experiments on “fusion”, subjects are given three options to report their percept: one of two global percepts or something in between. We define such a fused percept as a perceptual state lying between the two “bistable” modes, that is, an interpretation between the two rivalrous, high-probability interpretations. We can interpret manipulation of feature space distance in terms of the distance between the modes, and reductions of contrast as increases in the variance around the modes. When such manipulations are introduced to the MRF model, the posterior distribution changes as in Figure 4A (inset). By making the modes closer together or increasing the variance around the modes, greater probability mass is assigned to an intermediate zone between the modes, i.e. a fused percept.
Thus, manipulating stimulus separation (feature distance) or stimulus fidelity (contrast) changes the parameterizations of the likelihood function, and these manipulations produce systematically increasing odds of fused percepts, matching the phenomenology of these stimuli (Figure 4B).

Figure 4: (A) Schematic illustration of manipulating orientation (feature space distance) and contrast in binocular rivalry stimuli. The inset shows effects of different likelihood parameterizations on the posterior distribution, designed to mimic these experimental manipulations. (B) Experimental effects of increasing feature space distance (depth and color difference) between rivalrous gratings on exclusivity of monocular percepts, reprinted with permission from [11]. Increasing the distance in feature space between rivalrous stimuli (C) or the contrast of both stimuli (D), modeled as increasing the variance around the modes, increases the probability of observing an exclusive percept in simulations.

4.4 Traveling waves

Fused percepts are not the only deviations from bistability. In other circumstances, particularly in binocular rivalry, stability is often incomplete across the visual field, producing “piecemeal” rivalry, in which one portion of the visual field looks like the image in one eye, while another portion looks like the image in the other eye. One tantalizing feature of these piecemeal percepts is the phenomenon known as traveling waves: subjects tend to perceive a perceptual switch as a “wave” propagating over the visual field [26, 12]: the suppressed stimulus becomes dominant in an isolated location of the visual field and then gradually spreads. These traveling waves reveal interesting local dynamics during an individual switch itself, rather than just the Gamma-distributed dynamics of the time between complete switches of dominance. Like fused percepts, these intra-switch dynamics have been generally ignored by models of multistability.
Demonstrating the dynamics of traveling waves within patches of the percept requires a different method of probing perception. Wilson et al. [26] used annular stimuli (Figure 5A), and probed a particular patch along the annulus; they showed that the time at which the suppressed stimulus in the test patch becomes dominant is a function of the distance (around the circumference of the annulus) between the test patch and the patch where a dominance switch was induced by transiently increasing the contrast of the suppressed stimulus. This dependence of switch-time on distance (Figure 5B) suggested to Wilson et al. that stimulus dominance was propagating around the annulus. Using fMRI, Lee et al. [12] showed that the propagation of this “traveling wave” can be measured in primary visual cortex (V1; Figure 5): they used the retinotopic structure of V1 to identify brain regions corresponding to different portions of the visual field, then measured the timing of the response in these regions to the induced dominance switch as a function of the cortical distance from the location of the initial switch. They found that the temporal delay in the response increased as a function of cortical distance from the V1 representation of the top of the annulus (Figure 5C). To simulate such traveling waves within the percept of a stimulus, we constructed an MRF with ring topology and measured the propagation time (the time at which a mode-switch occurs) at different hidden nodes along the ring. To simulate the transient increase in contrast at one location to induce a switch, we initialized one node’s state to be +1 and the rest to be −1. Consistent with the idea of wave propagation, Figure 5D shows the average time for a simulated node to switch modes as a function of distance around the ring.
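A minimal sketch of such a ring simulation is below. It uses an illustrative Ising-style MRF with Metropolis dynamics; the coupling J, the bias h toward the newly dominant interpretation, and the inverse temperature beta are assumed values, not the paper's parameters:

```python
import numpy as np

def switch_times(n=40, J=1.0, h=0.5, beta=3.0, sweeps=400, seed=0):
    """Metropolis dynamics on a ring of +/-1 nodes. Node 0 is seeded to +1
    to mimic the transient contrast increment; we record the first sweep
    after which each node is observed in the +1 state."""
    rng = np.random.default_rng(seed)
    s = -np.ones(n)
    s[0] = 1.0                                  # induced switch at one location
    t_switch = np.full(n, np.inf)
    t_switch[0] = 0.0
    for t in range(1, sweeps + 1):
        for _ in range(n):                      # one sweep = n site updates
            i = rng.integers(n)
            # Energy change for flipping s[i]: ring-neighbor coupling plus bias h.
            dE = 2.0 * s[i] * (J * (s[i - 1] + s[(i + 1) % n]) + h)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        t_switch[np.isinf(t_switch) & (s > 0)] = t
    return t_switch

t = switch_times()
dist = np.minimum(np.arange(40), 40 - np.arange(40))   # ring distance from seed
print(t[1], t[10], t[20])   # nearby nodes tend to switch before distant ones
```

With these assumed parameters, flipping a node whose neighbors all disagree is energetically costly, while flipping at the boundary of the +1 domain is downhill, so switches propagate outward from the seed roughly one node per sweep in each direction.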
Intuitively, nodes will tend to switch in a kind of “domino effect” around the ring; the local dependencies in the MRF ensure that nodes will be more likely to switch modes once their neighbors have switched. Thus, once a switch at one node has been accepted by the Metropolis algorithm, a switch at its neighbor is likely to follow.

Figure 5: Traveling waves in binocular rivalry. (A) Annular stimuli used by Lee et al. (left and center panels) and the subjective percept reported by observers (right panel), in which the low contrast stimulus was seen to spread around the annulus, starting at the top. Figure reprinted with permission from [12]. (B) Propagation time as a function of distance around the annulus, replotted from [26]. Filled circles represent radial gratings, open circles represent concentric gratings. (C) Anatomical image (left panel) showing the retinotopically-mapped coordinates of the initial and probe locations in V1. Right panel shows the measured fMRI responses for the two outlined subregions. (D) A transient increase in contrast of the suppressed stimulus induces a perceptual switch at the location of contrast change. The propagation time for a switch at a probe location increases with distance (around the annulus) from the switch origin.

5 Conclusion

We have proposed a “rational process” model of perceptual multistability based on the idea that humans approximate the posterior distribution over the hidden causes of their visual input with a set of samples. In particular, the dynamics of the sample-generating process gives rise to much of the rich dynamics in multistable perception observed experimentally. These dynamics may be an approximation to the MCMC algorithms standardly used to solve difficult inference problems in machine learning and statistics [16]. The idea that perceptual multistability can be construed in terms of sampling in a Bayesian model was first proposed by [20, 22], and our work follows theirs closely in several respects.
However, we depart from that work in the theoretical underpinnings of our model: It is not transparent how well the sampling scheme in [22, 24] approximates Bayesian inference, or how it corresponds to standard algorithms where the full posterior is not assumed to be available when drawing samples. Our goal here is to show how some of the basic phenomena of multistable perception can be understood straightforwardly as the output of familiar, simple and effective methods for approximate inference in Bayesian machine vision. A related point of divergence between our model and that of [20, 22], as well as other Bayesian models of multistable perception [7, 10], is that we are able to explain multistable perception in terms of a well-defined inference procedure that does not require ad hoc appeals to neurophysiological processes like noise, adaptation, inhibition, etc. Thus, our contribution is to show how an inference algorithm widely used in statistics and computer science can give rise naturally to perceptual multistability phenomena. Of course, we do not wish to argue that neurophysiological processes are irrelevant. Our goal here was to abstract away from implementational details and make claims about the algorithmic level. Clearly an important avenue for future work is relating algorithms like MCMC to neural processes (indeed this connection was suggested previously by [7]). Another important direction in which to extend this work is from rivalry with low-level stimuli to more complex vision problems that involve global coherence over the image (such as in natural scenes). Although similar perceptual dynamics have been observed with a wide range of ambiguous stimuli, the absence of obvious transition periods with the Necker cube suggests that these dynamics may differ in important ways from perception of rivalry stimuli.
Acknowledgments: This work was supported by ONR MURI: Complex Learning and Skill Transfer with Video Games N00014-07-1-0937 (PI: Daphne Bavelier); NDSEG fellowship to EV and NSF DRMS Dissertation grant to EV.

References
[1] J.R. Anderson. The adaptive character of thought. Lawrence Erlbaum Associates, 1990.
[2] R. Blake. A primer on binocular rivalry, including current controversies. Brain and Mind, 2(1):5–38, 2001.
[3] J.W. Brascamp, R. van Ee, A.J. Noest, R.H. Jacobs, and A.V. van den Berg. The time course of binocular rivalry reveals a fundamental role of noise. Journal of Vision, 6(11):8, 2006.
[4] S.D. Brown and M. Steyvers. Detecting and predicting changes. Cognitive Psychology, 58(1):49–67, 2009.
[5] O.L. Carter, T.G. Campbell, G.B. Liu, and G. Wallis. Contradictory influence of context on predominance during binocular rivalry. Clinical and Experimental Optometry, 87:153–162, 2004.
[6] N.D. Daw and A.C. Courville. The pigeon as particle filter. Advances in Neural Information Processing Systems, 20, 2007.
[7] P. Dayan. A hierarchical model of binocular rivalry. Neural Computation, 10(5):1119–1135, 1998.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
[9] T.L. Griffiths and J.B. Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767–773, 2006.
[10] J. Hohwy, A. Roepstorff, and K. Friston. Predictive coding explains binocular rivalry: An epistemological review. Cognition, 108(3):687–701, 2008.
[11] T. Knapen, R. Kanai, J. Brascamp, J. van Boxtel, and R. van Ee. Distance in feature space determines exclusivity in visual rivalry. Vision Research, 47(26):3269–3275, 2007.
[12] S.H. Lee, R. Blake, and D.J. Heeger. Traveling waves of activity in primary visual cortex during binocular rivalry. Nature Neuroscience, 8(1):22–23, 2005.
[13] D.A. Leopold and N.K. Logothetis. Multistable phenomena: changing views in perception. Trends in Cognitive Sciences, 3(7):254–264, 1999.
[14] R.P. Levy, F. Reali, and T.L. Griffiths. Modeling the effects of memory on human online sentence processing with particle filters. Advances in Neural Information Processing Systems, 21:937, 2009.
[15] L. Liu, C.W. Tyler, and C.M. Schor. Failure of rivalry at low contrast: evidence of a suprathreshold binocular summation process. Vision Research, 32(8):1471–1479, 1992.
[16] D.J.C. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003.
[17] P. Mamassian and R. Goutcher. Temporal dynamics in bistable perception. Journal of Vision, 5(4):7, 2005.
[18] N. Metropolis and S. Ulam. The Monte Carlo method. Journal of the American Statistical Association, pages 335–341, 1949.
[19] A.N. Sanborn, T.L. Griffiths, and D.J. Navarro. A more rational model of categorization. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 726–731, 2006.
[20] P.R. Schrater and R. Sundareswara. Theory and dynamics of perceptual bistability. Advances in Neural Information Processing Systems, 19:1217, 2007.
[21] L. Shi, N.H. Feldman, and T.L. Griffiths. Performing Bayesian inference with exemplar models. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 745–750, 2008.
[22] R. Sundareswara and P.R. Schrater. Perceptual multistability predicted by search model for Bayesian decisions. Journal of Vision, 8(5):12, 2008.
[23] E. Vul, N.D. Goodman, T.L. Griffiths, and J.B. Tenenbaum. One and done? Optimal decisions from very few samples. Proceedings of the 31st Annual Meeting of the Cognitive Science Society, 2009.
[24] E. Vul and H. Pashler. Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19(7):645–647, 2008.
[25] H.R. Wilson. Minimal physiological conditions for binocular rivalry and rivalry memory. Vision Research, 47(21):2741–2750, 2007.
[26] H.R. Wilson, R. Blake, and S.H. Lee. Dynamics of travelling waves in visual perception. Nature, 412(6850):907–910, 2001.
Probabilistic Relational PCA

Wu-Jun Li, Dit-Yan Yeung (Dept. of Comp. Sci. and Eng., Hong Kong University of Science and Technology, Hong Kong, China; {liwujun,dyyeung}@cse.ust.hk) and Zhihua Zhang (School of Comp. Sci. and Tech., Zhejiang University, Zhejiang 310027, China; zhzhang@cs.zju.edu.cn)

Abstract

One crucial assumption made by both principal component analysis (PCA) and probabilistic PCA (PPCA) is that the instances are independent and identically distributed (i.i.d.). However, this common i.i.d. assumption is unreasonable for relational data. In this paper, by explicitly modeling covariance between instances as derived from the relational information, we propose a novel probabilistic dimensionality reduction method, called probabilistic relational PCA (PRPCA), for relational data analysis. Although the i.i.d. assumption is no longer adopted in PRPCA, the learning algorithms for PRPCA can still be devised as easily as those for PPCA, which make explicit use of the i.i.d. assumption. Experiments on real-world data sets show that PRPCA can effectively utilize the relational information to dramatically outperform PCA and achieve state-of-the-art performance.

1 Introduction

Using a low-dimensional embedding to summarize a high-dimensional data set has been widely used for exploring the structure in the data. The methods for discovering such low-dimensional embedding are often referred to as dimensionality reduction (DR) methods. Principal component analysis (PCA) [13] is one of the most popular DR methods with great success in many applications. As a more recent development, probabilistic PCA (PPCA) [21] provides a probabilistic formulation of PCA [13] based on a Gaussian latent variable model [1]. Compared with the original nonprobabilistic derivation of PCA in [12], PPCA possesses a number of practical advantages.
For example, PPCA can naturally deal with missing values in the data; the expectation-maximization (EM) algorithm [9] used to learn the parameters in PPCA may be more efficient for high-dimensional data; it is easy to generalize the single model in PPCA to the mixture model case; furthermore, PPCA as a probabilistic model can naturally exploit Bayesian methods [2]. Like many existing DR methods, both PCA and PPCA are based on some assumptions about the data. One assumption is that the data should be represented as feature vectors all of the same dimensionality. Data represented in this form are sometimes referred to as flat data [10]. Another one is the so-called i.i.d. assumption, which means that the instances are assumed to be independent and identically distributed (i.i.d.). However, the data in many real-world applications, such as web pages and research papers, contain relations or links between (some) instances in the data in addition to the textual content information which is represented in the form of feature vectors. Data of this sort, referred to as relational data¹ [10, 20], can be found in such diverse application areas as web mining [3, 17, 23, 24], bioinformatics [22], social network analysis [4], and so on. On one hand, the link structure among instances cannot be exploited easily when traditional DR methods such as PCA are applied to relational data. Very often, the useful relational information is simply discarded.

¹ In this paper, we use document classification as a running example for relational data analysis. Hence, for convenience of illustration, the specific term ‘textual content information’ is used in the paper to refer to the feature vectors describing the instances. However, the algorithms derived in this paper can be applied to any relational data in which the instance feature vectors can represent any attribute information.
For example, a citation/reference relation between two papers provides very strong evidence for them to belong to the same topic even though they may bear low similarity in their content due to the sparse nature of the bag-of-words representation, but the relational information is not exploited at all when applying PCA or PPCA. One possible use of the relational information in PCA or PPCA is to first convert the link structure into the format of flat data by extracting some additional features from the links. However, as argued in [10], this approach fails to capture some important structural information in the data. On the other hand, the i.i.d. assumption underlying PCA and PPCA is unreasonable for relational data. In relational data, the attributes of the connected (linked) instances are often correlated and the class label of one instance may have an influence on that of a linked instance. For example, in biology, interacting proteins are more likely to have the same biological functions than those without interaction. Therefore, PCA and PPCA, or more generally most existing DR methods based on the i.i.d. assumption, are not suitable for relational data analysis. In this paper, a novel probabilistic DR method called probabilistic relational PCA (PRPCA) is proposed for relational data analysis. By explicitly modeling the covariance between instances as derived from the relational information, PRPCA seamlessly integrates relational information and textual content information into a unified probabilistic framework. Two learning algorithms, one based on a closed-form solution and the other based on an EM algorithm [9], are proposed to learn the parameters of PRPCA. Although the i.i.d. assumption is no longer adopted in PRPCA, the learning algorithms for PRPCA can still be devised as easily as those for PPCA, which make explicit use of the i.i.d. assumption.
Extensive experiments on real-world data sets show that PRPCA can effectively utilize the relational information to dramatically outperform PCA and achieve state-of-the-art performance.

2 Notation

We use boldface uppercase letters, such as K, to denote matrices, and boldface lowercase letters, such as z, to denote vectors. The ith row and the jth column of a matrix K are denoted by $K_{i*}$ and $K_{*j}$, respectively. $K_{ij}$ denotes the element at the ith row and jth column of K. $z_i$ denotes the ith element of z. $K^T$ is the transpose of K, and $K^{-1}$ is the inverse of K. $K \succeq 0$ means that K is positive semi-definite (psd) and $K \succ 0$ means that K is positive definite (pd). $\mathrm{tr}(\cdot)$ denotes the trace of a matrix and $\mathrm{etr}(\cdot) \triangleq \exp(\mathrm{tr}(\cdot))$. $P \otimes Q$ denotes the Kronecker product [11] of P and Q. $|\cdot|$ denotes the determinant of a matrix. $I_n$ is the identity matrix of size $n \times n$. e is a vector of 1s, the dimensionality of which depends on the context. We overload $\mathcal{N}(\cdot)$ for both multivariate normal distributions and matrix variate normal distributions [11]. $\langle \cdot \rangle$ denotes the expectation operation and $\mathrm{cov}(\cdot)$ denotes the covariance operation. Note that in relational data, there exist both content and link observations. As in [21], $\{t_n\}_{n=1}^N$ denotes a set of observed d-dimensional data (content) vectors, the $d \times q$ matrix W denotes the q principal axes (also called factor loadings), $\mu$ denotes the data sample mean, and $x_n = W^T(t_n - \mu)$ denotes the corresponding q principal components (also called latent variables) of $t_n$. We further use the $d \times N$ matrix T to denote the content matrix with $T_{*n} = t_n$, and the $q \times N$ matrix X to denote the latent variables of T with $X_{*n} = W^T(t_n - \mu)$. For relational data, the $N \times N$ matrix A denotes the adjacency (link) matrix of the N instances. In this paper, we assume that the links are undirected. For those data with directed links, we will convert the directed links into undirected links in a way that keeps the original physical meaning of the links.
This will be described in detail in Section 4.1.1, and an example will be given in Section 5. Hence, $A_{ij} = 1$ if there exists a relation between instances i and j, and otherwise $A_{ij} = 0$. Moreover, we always assume that there exist no self-links, i.e., $A_{ii} = 0$.

3 Probabilistic PCA

To set the stage for the next section which introduces our PRPCA model, we first briefly present the derivation for PPCA [21], which was originally based on (vector-based) multivariate normal distributions, from the perspective of matrix variate normal distributions [11]. If we use $\Upsilon$ to denote the Gaussian noise process and assume that $\Upsilon$ and the latent variable matrix X follow these distributions:

$$\Upsilon \sim \mathcal{N}_{d,N}(0, \sigma^2 I_d \otimes I_N), \qquad X \sim \mathcal{N}_{q,N}(0, I_q \otimes I_N), \qquad (1)$$

we can express a generative model as follows: $T = WX + \mu e^T + \Upsilon$. Based on some properties of matrix variate normal distributions in [11], we get the following results:

$$T \mid X \sim \mathcal{N}_{d,N}(WX + \mu e^T, \sigma^2 I_d \otimes I_N), \qquad T \sim \mathcal{N}_{d,N}\big(\mu e^T, (WW^T + \sigma^2 I_d) \otimes I_N\big). \qquad (2)$$

Let $C = WW^T + \sigma^2 I_d$. The corresponding log-likelihood of the observation matrix T is then

$$\mathcal{L} = \ln p(T) = -\frac{N}{2}\Big[d \ln(2\pi) + \ln|C| + \mathrm{tr}(C^{-1}S)\Big], \qquad (3)$$

where $S = \frac{(T - \mu e^T)(T - \mu e^T)^T}{N} = \frac{\sum_{n=1}^N (T_{*n} - \mu)(T_{*n} - \mu)^T}{N}$. We can see that S is just the sample covariance matrix of the content observations. It is easy to see that this log-likelihood form is the same as that in [21]. Using matrix notations, the graphical model of PPCA based on matrix variate normal distributions is shown in Figure 1(a).

Figure 1: Graphical models of PPCA (a) and PRPCA (b), in which T is the observation matrix, X is the latent variable matrix, $\mu$, W and $\sigma^2$ are the parameters to learn, and the other quantities are kept constant.

4 Probabilistic Relational PCA

PPCA assumes that all the observations are independent and identically distributed. Although this i.i.d.
assumption can make the modeling process much simpler and has achieved great success in many traditional applications, this assumption is however very unreasonable for relational data [10]. In relational data, the attributes of connected (linked) instances are often correlated. In this section, a probabilistic relational PCA model, called PRPCA, is proposed to integrate both the relational information and the content information seamlessly into a unified framework by eliminating the i.i.d. assumption. Based on our reformulation of PPCA using matrix variate notations as presented in the previous section, we can obtain PRPCA just by introducing some relatively simple (but very effective) modifications. A promising property is that the computation needed for PRPCA is as simple as that for PPCA even though we have eliminated the restrictive i.i.d. assumption.

4.1 Model Formulation

Assume that the latent variable matrix X has the following distribution:

$$X \sim \mathcal{N}_{q,N}(0, I_q \otimes \Phi). \qquad (4)$$

According to Corollary 2.3.3.1 in [11], we can get $\mathrm{cov}(X_{i*}) = \Phi$ for $i \in \{1, \ldots, q\}$, which means that $\Phi$ actually reflects the covariance between the instances. From (1), we can see that $\mathrm{cov}(X_{i*}) = I_N$ for PPCA, which also coincides with the i.i.d. assumption of PPCA. Hence, to eliminate the i.i.d. assumption for relational data, one direct way is to use a non-identity covariance matrix $\Phi$ for the distribution of X in (4). This $\Phi$ should reflect the physical meaning (semantics) of the relations between instances, which will be discussed in detail later. Similarly, we can also change the $I_N$ in (1) to $\Phi$ for $\Upsilon$ to eliminate the i.i.d. assumption for the noise process.

4.1.1 Relational Covariance Construction

Because the covariance matrix $\Phi$ in PRPCA is constructed from the relational information in the data, we refer to it as relational covariance here. The goal of PCA and PPCA is to find those principal axes onto which the retained variance under projection is maximal [13, 21].
For one specific X, the retained variance is $\mathrm{tr}[XX^T]$. If we rewrite p(X) in (1) as

$$p(X) = \frac{\exp\{-\frac{1}{2}\mathrm{tr}[XX^T]\}}{(2\pi)^{qN/2}},$$

we have the following observation:

Observation 1 For PPCA, the larger the retained variance of X, i.e., the more X approaches the destination point, the lower is the probability density at X given by the prior.

Here, the destination point refers to the point where the goal of PPCA is achieved, i.e., the retained variance is maximal. Moreover, we use the retained variance as a measure to define the gap between two different points. The smaller is the gap between the retained variance of two points, the more they approach each other. Because the design principle of PRPCA is similar to that of PPCA, our working hypothesis here is that Observation 1 can also guide us to design the relational covariance of PRPCA. Its effectiveness will be empirically verified in Section 5. In PRPCA, we assume that the attributes of two linked instances are positively correlated.² Under this assumption, the ideal goal of PRPCA should be to make the latent representations of two instances as close as possible if there exists a relation (link) between them. Hence, the measure to define the gap between two points refers to the closeness of the linked instances, i.e., the summation of the Euclidean distances between the linked instances. Based on Observation 1, the more X approaches the destination point, the lower should be the probability density at X given by the prior. Hence, under the latent space representation X, the closer the linked instances are, the lower should be the probability density at X given by the prior. We will prove that if we set $\Phi = \Delta^{-1}$, where

$$\Delta \triangleq \gamma I_N + (I_N + A)^T (I_N + A)$$

with $\gamma$ being typically a very small positive number to make $\Delta \succ 0$, we can get an appropriate prior for PRPCA. Note that $A_{ij} = 1$ if there exists a relation between instances i and j, and otherwise $A_{ij} = 0$.
Because $A^T = A$, we can also express $\Delta$ as $\Delta = \gamma I_N + (I_N + A)(I_N + A)$. Let $\tilde{D}$ denote a diagonal matrix whose diagonal elements $\tilde{D}_{ii} = \sum_j A_{ij}$. It is easy to prove that $(AA)_{ii} = \tilde{D}_{ii}$. Let $B = AA - \tilde{D}$, which means that $B_{ij} = (AA)_{ij}$ if $i \neq j$ and $B_{ii} = 0$. We can get

$$\Delta = (1+\gamma) I_N + 2A + AA = (1+\gamma) I_N + \tilde{D} + (2A + B).$$

Because $B_{ij} = \sum_{k=1}^N A_{ik} A_{kj}$ for $i \neq j$, we can see that $B_{ij}$ is the number of paths, each with path length 2, from instance i to instance j in the original adjacency graph A. Because the attributes of two linked instances are positively correlated, $B_{ij}$ actually reflects the degree of correlation between instance i and instance j. Let us take the paper citation graph as an example to illustrate this. The existence of a citation relation between two papers often implies that they are about the same topic. If paper i cites paper k and paper k cites paper j, it is highly likely that paper i and paper j are about the same topic. If there exists another paper $a \neq k$ linking both paper i and paper j as well, the confidence that paper i and paper j are about the same topic will increase. Hence, the larger $B_{ij}$ is, the stronger is the correlation between instance i and instance j. Because $B_{ij} = \sum_{k=1}^N A_{ik} A_{kj} = A_{*i}^T A_{*j}$, $B_{ij}$ can also be seen as the similarity between the link vectors of instance i and instance j. Therefore, B can be seen as a weight matrix (corresponding to a weight graph) derived from the original adjacency matrix A, and B is also consistent with the physical meaning underlying A. Letting $G = 2A + B$,³ we can find that G actually combines the original graph reflected by A and the derived graph reflected by B to get a new graph, and puts a weight $2A_{ij} + B_{ij}$ on the edge between instance i and instance j in the new graph. The new weight graph reflected by G is also consistent with the physical meaning underlying A.
Letting $L \triangleq D - G$, where D is a diagonal matrix whose diagonal elements $D_{ii} = \sum_j G_{ij}$ and L is called the Laplacian matrix [6] of G, we can get $\Delta = (1+\gamma) I_N + \tilde{D} + D - L$. If we define another diagonal matrix $\hat{D} \triangleq (1+\gamma) I_N + \tilde{D} + D$, we can get $\Delta = \hat{D} - L$. Then we have

$$\mathrm{tr}[X \Delta X^T] = \sum_{i=1}^N \hat{D}_{ii} \|X_{*i}\|^2 - \frac{1}{2} \sum_{i=1}^N \sum_{j=1}^N G_{ij} \|X_{*i} - X_{*j}\|^2. \qquad (5)$$

Letting $\Phi = \Delta^{-1}$, we can get

$$p(X) = \frac{\exp\{-\frac{1}{2}\mathrm{tr}[X \Delta X^T]\}}{(2\pi)^{qN/2} |\Delta|^{-q/2}}.$$

The first term $\sum_{i=1}^N \hat{D}_{ii} \|X_{*i}\|^2$ in (5) can be treated as a measure of weighted variance of all the instances in the latent space. We can see that the larger $\hat{D}_{ii}$ is, the more weight will be put on instance i, which is reasonable because $\hat{D}_{ii}$ mainly reflects the degree of instance i in the graph. It is easy to see that, for those latent representations having a fixed value of weighted variance $\sum_{i=1}^N \hat{D}_{ii} \|X_{*i}\|^2$, the closer the latent representations of two linked entities are, the larger is their contribution to $\mathrm{tr}[X \Delta X^T]$, and subsequently the less is their contribution to p(X). This means that under the latent space representation X, the closer the linked instances are, the lower is the probability density at X given by the prior. Hence, we can get an appropriate prior for X by setting $\Phi = \Delta^{-1}$ in (4).

² Links with other physical meanings, such as the directed links in web graphs [25], can be transformed into links satisfying the assumption in PRPCA via some preprocessing strategies. One such strategy to preprocess the WebKB data set [8] will be given as an example in Section 5.

³ This means that we put a 2:1 ratio between A and B. Other ratios can be obtained by setting $\Delta = \gamma I_N + (\alpha I_N + A)(\alpha I_N + A) = \gamma I_N + \alpha^2 I_N + 2\alpha A + \tilde{D} + B$. Preliminary results show that PRPCA is not sensitive to $\alpha$ as long as $\alpha$ is not too large, but we omit the detailed results here because they are out of the scope of this paper.
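The construction of the relational covariance can be checked numerically. The sketch below uses a random, hypothetical adjacency matrix; it verifies both the expansion of Δ in terms of D̃, A and B, and the Laplacian identity for tr[XΔXᵀ] stated in (5):

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, gamma = 30, 4, 1e-6

# Random symmetric adjacency with no self-links (toy data, not a real data set).
A = np.triu(rng.random((N, N)) < 0.1, k=1).astype(float)
A = A + A.T

I = np.eye(N)
Delta = gamma * I + (I + A).T @ (I + A)

# Equivalent expansion: Delta = (1+gamma) I + D~ + 2A + B, with B = AA - D~.
D_tilde = np.diag((A @ A).diagonal())       # (AA)_ii equals the degree of node i
B = A @ A - D_tilde
assert np.allclose(Delta, (1 + gamma) * I + D_tilde + 2 * A + B)

# Laplacian form: G = 2A + B, Delta = D_hat - L with L = D - G.
G = 2 * A + B
D = np.diag(G.sum(axis=1))
D_hat = (1 + gamma) * I + D_tilde + D

# Identity (5): tr(X Delta X^T) = sum_i Dhat_ii ||X_i||^2
#               - (1/2) sum_ij G_ij ||X_i - X_j||^2
X = rng.normal(size=(q, N))
lhs = np.trace(X @ Delta @ X.T)
sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)  # pairwise ||X_i - X_j||^2
rhs = (np.diag(D_hat) * (X ** 2).sum(axis=0)).sum() - 0.5 * (G * sq).sum()
print(abs(lhs - rhs))
```

The two sides of (5) agree to floating-point precision, confirming that the prior indeed penalizes latent representations that place linked instances far apart.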
4.1.2 Model

With the constructed relational covariance $\Phi$, the generative model of PRPCA is defined as follows:

$$\Upsilon \sim \mathcal{N}_{d,N}(0, \sigma^2 I_d \otimes \Phi), \qquad X \sim \mathcal{N}_{q,N}(0, I_q \otimes \Phi), \qquad T = WX + \mu e^T + \Upsilon,$$

where $\Phi = \Delta^{-1}$. We can further obtain the following results:

$$T \mid X \sim \mathcal{N}_{d,N}(WX + \mu e^T, \sigma^2 I_d \otimes \Phi), \qquad T \sim \mathcal{N}_{d,N}\big(\mu e^T, (WW^T + \sigma^2 I_d) \otimes \Phi\big). \qquad (6)$$

The graphical model of PRPCA is illustrated in Figure 1(b), from which we can see that the difference between PRPCA and PPCA lies solely in the difference between $\Phi$ and $I_N$. Comparing (6) to (2), we can find that the observations of PPCA are sampled independently while those of PRPCA are sampled with correlation. In fact, PPCA may be seen as a degenerate case of PRPCA as detailed below in Remark 1:

Remark 1 When the i.i.d. assumption holds, i.e., all $A_{ij} = 0$, PRPCA degenerates to PPCA by setting $\gamma = 0$.

Note that the only role that $\gamma$ plays is to make $\Delta \succ 0$. Hence, in our implementation, we always set $\gamma$ to a very small positive value, such as $10^{-6}$. Actually, we may even set $\gamma$ to 0, because $\Delta$ does not have to be pd. When $\Delta \succeq 0$, we say T follows a singular matrix variate normal distribution [11], and all the derivations for PRPCA are still correct. In our experiments, we find that the performance under $\gamma = 0$ is almost the same as that under $\gamma = 10^{-6}$. Further deliberation is out of the scope of this paper. As in PPCA, we set $C = WW^T + \sigma^2 I_d$. Then the log-likelihood of the observation matrix T in PRPCA is

$$\mathcal{L}_1 = \ln p(T) = -\frac{N}{2}\Big[d \ln(2\pi) + \ln|C| + \mathrm{tr}(C^{-1}H)\Big] + c, \qquad (7)$$

where $c = -\frac{d}{2}\ln|\Phi|$ can be seen as a constant independent of the parameters $\mu$, W and $\sigma^2$, and $H = \frac{(T - \mu e^T)\Delta(T - \mu e^T)^T}{N}$. It is interesting to compare (7) with (3). We can find that to learn the parameters W and $\sigma^2$, the only difference between PRPCA and PPCA lies in the difference between H and S. Hence, all the learning techniques derived previously for PPCA are also potentially applicable to PRPCA simply by substituting S with H.
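Because learning only requires substituting S with H, the eigendecomposition-based closed form and the EM update familiar from PPCA [21] carry over directly. A minimal sketch on assumed toy data (random content matrix and random adjacency, both hypothetical) is below; it computes the MLE of μ, forms H, obtains W and σ² from the top eigenvectors of H, and checks that one PPCA-style EM step (with S replaced by H) does not decrease the likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
d, q, N, gamma = 10, 3, 200, 1e-6

# Toy relational data (hypothetical): random adjacency and random content.
A = np.triu(rng.random((N, N)) < 0.05, k=1).astype(float); A = A + A.T
I = np.eye(N)
Delta = gamma * I + (I + A) @ (I + A)
T = rng.normal(size=(d, N))
e = np.ones(N)

# MLE for mu, then H = (T - mu e^T) Delta (T - mu e^T)^T / N.
mu = (T @ Delta @ e) / (e @ Delta @ e)
Tc = T - np.outer(mu, e)
H = Tc @ Delta @ Tc.T / N

# Closed-form solution (rotation R taken to be the identity):
lam, U = np.linalg.eigh(H)
lam, U = lam[::-1], U[:, ::-1]              # sort eigenvalues descending
sigma2 = lam[q:].mean()                     # average of the discarded eigenvalues
W = U[:, :q] @ np.diag(np.sqrt(lam[:q] - sigma2))

def loglik(W, sigma2):
    # L1 up to the constant c = -(d/2) ln|Phi|.
    C = W @ W.T + sigma2 * np.eye(d)
    return -N / 2 * (d * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                     + np.trace(np.linalg.solve(C, H)))

# One EM step started from a perturbed point should not lower the likelihood.
W0, s0 = W + 0.1 * rng.normal(size=W.shape), sigma2 * 2
M = W0.T @ W0 + s0 * np.eye(q)
W1 = H @ W0 @ np.linalg.inv(s0 * np.eye(q) + np.linalg.solve(M, W0.T @ H @ W0))
s1 = np.trace(H - H @ W0 @ np.linalg.solve(M, W1.T)) / d
print(loglik(W0, s0), loglik(W1, s1))
```

Setting Δ to the identity (all links removed, γ = 0) recovers the ordinary PPCA computation on the sample covariance S, matching Remark 1 below.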
4.2 Learning

By setting the gradient of $\mathcal{L}_1$ with respect to $\mu$ to 0, we can get the maximum-likelihood estimator (MLE) for $\mu$ as follows: $\mu = \frac{T \Delta e}{e^T \Delta e}$. As in PPCA [21], we devise two methods to learn W and $\sigma^2$ in PRPCA, one based on a closed-form solution and the other based on EM.

4.2.1 Closed-Form Solution

Theorem 1 The log-likelihood in (7) is maximized when

$$W_{ML} = U_q (\Lambda_q - \sigma^2_{ML} I_q)^{1/2} R, \qquad \sigma^2_{ML} = \frac{\sum_{i=q+1}^d \lambda_i}{d - q},$$

where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_d$ are the eigenvalues of H, $\Lambda_q$ is a $q \times q$ diagonal matrix containing the first q largest eigenvalues, $U_q$ is a $d \times q$ matrix in which the q column vectors are the principal eigenvectors of H corresponding to $\Lambda_q$, and R is an arbitrary $q \times q$ orthogonal rotation matrix.

The proof of Theorem 1 makes use of techniques similar to those in Appendix A of [21] and is omitted here.

4.2.2 EM Algorithm

During the EM learning process, we treat $\{W, \sigma^2\}$ as parameters, X as missing data and $\{T, X\}$ as complete data. The EM algorithm operates by alternating between the E-step and M-step. Here we only briefly describe the updating rules; their derivation can be found in a longer version which can be downloaded from http://www.cse.ust.hk/~liwujun. In the E-step, the expectation of the complete-data log-likelihood with respect to the distribution of the missing data X is computed. To compute this expectation, we only need the following sufficient statistics:

$$\langle X \rangle = M^{-1} W^T (T - \mu e^T), \qquad \langle X \Delta X^T \rangle = N \sigma^2 M^{-1} + \langle X \rangle \Delta \langle X \rangle^T, \qquad (8)$$

where $M = W^T W + \sigma^2 I_q$. Note that all these statistics are computed based on the parameter values obtained from the previous iteration. In the M-step, to maximize the expectation of the complete-data log-likelihood, the parameters $\{W, \sigma^2\}$ are updated as follows:

$$\widetilde{W} = HW(\sigma^2 I_q + M^{-1} W^T H W)^{-1}, \qquad \widetilde{\sigma}^2 = \frac{\mathrm{tr}(H - HWM^{-1}\widetilde{W}^T)}{d}. \qquad (9)$$

Note that we use W here to denote the old value and $\widetilde{W}$ the updated new value.

4.3 Complexity Analysis

Suppose there are $\delta$ nonzero elements in $\Delta$.
We can see that the computation cost for H is $O(dN + d\delta)$. In many applications $\delta$ is typically a constant multiple of N. Hence, we can say that the time complexity for computing H is $O(dN)$. For the closed-form solution, we have to invert a $d \times d$ matrix. Hence, the computation cost is $O(dN + d^3)$. For EM, because d is typically larger than q, we can see that the computation cost is $O(dN + d^2 q T)$, where T is the number of EM iterations. If the data are of very high dimensionality, EM will be more efficient than the closed-form solution.

5 Experiments

Although PPCA possesses additional advantages when compared with the original non-probabilistic formulation of PCA, they will get similar DR results when there exist no missing values in the data. If the task is to classify instances in the low-dimensional embedding, the classifiers based on the embedding results of PCA and PPCA are expected to achieve comparable results. Hence, in this paper, we only adopt PCA as the baseline to study the performance of PRPCA. For the EM algorithm of PRPCA, we use PCA to initialize W, $\sigma^2$ is initialized to $10^{-6}$, and $\gamma = 10^{-6}$. Because the EM algorithm and the closed-form solution achieve similar results, we only report the results of the EM algorithm of PRPCA in the following experiments.

5.1 Data Sets and Evaluation Scheme

Here, we only briefly describe the data sets and evaluation scheme to save space. More detailed information about them can be found in the longer version. We use three data sets to evaluate PRPCA. The first two data sets are Cora [16] and WebKB [8]. We adopt the same strategy as that in [26] to preprocess these two data sets. The third data set is the PoliticalBook data set used in [19]. For WebKB, according to the semantics of authoritative pages and hub pages [25], we first preprocess the link structure of this data set as follows: if two web pages are co-linked by or link to another common web page, we add a link between these two pages.
Then all the original links are removed. After preprocessing, all the directed links have been converted into undirected links. The Cora data set contains four subsets: DS, HA, ML and PL. The WebKB data set also contains four subsets: Cornell, Texas, Washington and Wisconsin. We adopt the same strategy as that in [26] to evaluate PRPCA on the Cora and WebKB data sets. For the PoliticalBook data set, we use the testing procedure of the latent Wishart process (LWP) model [15] for evaluation.

5.2 Convergence Speed of EM

We use the DS and Cornell data sets to illustrate the convergence speed of the EM learning procedure of PRPCA. The performance on the other data sets has similar characteristics and is omitted here. With q = 50, the average classification accuracy based on 5-fold cross validation is plotted against the number of EM iterations T in Figure 2. We can see that PRPCA achieves very promising and stable performance after a very small number of iterations. We set T = 5 in all our following experiments.

5.3 Visualization

We use the PoliticalBook data set to visualize the DR results of PCA and PRPCA. For the sake of visualization, q is set to 2. The results are depicted in Figure 3. It is not easy to separate the two classes in the latent space of PCA, whereas the two classes are much better separated in the latent space of PRPCA. Hence, better clustering or classification performance can be expected when the examples are clustered or classified in the latent space of PRPCA.

Figure 2: Convergence speed of the EM learning procedure of PRPCA (accuracy against T on DS and Cornell).

Figure 3: Visualization of data points in the latent spaces of PCA and PRPCA for the PoliticalBook data set. The positive and negative examples are shown as red crosses and blue circles, respectively.
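The WebKB link preprocessing just described can be sketched directly from the adjacency matrix. This is our own reading of the rule (the function name and dense-matrix setting are illustrative): pages i and j get an undirected link iff some page links to both of them, or both link to a common page; the original directed links are then dropped.

```python
import numpy as np

def preprocess_links(A):
    """A[i, j] = 1 iff page i links to page j (directed adjacency).

    Returns a symmetric 0/1 matrix linking co-cited / co-citing pages;
    the original directed links themselves are discarded.
    """
    co_cited = A.T @ A       # (i, j) > 0 iff some third page links to both i and j
    co_citing = A @ A.T      # (i, j) > 0 iff i and j link to a common page
    B = ((co_cited + co_citing) > 0).astype(int)
    np.fill_diagonal(B, 0)   # no self-links
    return B
```

For example, if pages 0 and 1 both link to page 2 (and there are no other links), the preprocessed graph contains the single undirected link 0–1, and the original 0→2 and 1→2 links disappear.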
5.4 Performance

The dimensionality of Cora and WebKB is moderately high, while that of PoliticalBook is very high. We evaluate PRPCA on these two different kinds of data to verify its effectiveness in general settings.

Performance on Cora and WebKB The average classification accuracy, with its standard deviation, based on 5-fold cross validation is plotted against the dimensionality of the latent space q in Figure 4. PRPCA dramatically outperforms PCA on all the data sets at every dimensionality, which confirms that the relational information is very informative and that PRPCA utilizes it effectively.

We also compare PRPCA with the methods evaluated in [26]: SVM on content, which ignores the link structure in the data and applies an SVM only to the content information in the original bag-of-words representation; SVM on links, which ignores the content information and treats the links as features, i.e., the i-th feature is link-to-page_i; SVM on link-content, in which the content features and link features of the two methods above are combined into a single feature representation; directed graph regularization (DGR), introduced in [25]; PLSI+PHITS, described in [7]; and link-content MF, the joint link-content matrix factorization (MF) method of [26]. Note that link-content sup. MF from [26] is not adopted here for comparison: because link-content sup. MF employs additional label information during the DR procedure that is not used by the other DR methods, a direct comparison would be unfair.

Figure 4: Comparison between PRPCA and PCA on Cora and WebKB (accuracy against q on the Cora subsets DS, HA, ML, PL and the WebKB subsets Cornell, Texas, Washington, Wisconsin).

As in the link-content MF method, we set q = 50 for PRPCA. The results are shown in Figure 5. PRPCA and link-content MF achieve the best performance among all the evaluated methods. Compared with link-content MF, PRPCA performs slightly better on DS and HA, slightly worse on ML and Texas, and comparably on the other data sets. We conclude that the overall performance of PRPCA is comparable with that of link-content MF. Unlike link-content MF, which is transductive in nature, PRPCA naturally supports inductive inference: we can apply the learned transformation matrix of PRPCA to perform DR on unseen test data, whereas link-content MF can only perform DR on data available during the training phase. Very recently, another method proposed by us, called relation regularized matrix factorization (RRMF) [14], has achieved better performance than PRPCA on the Cora data set. However, similar to link-content MF, RRMF cannot be used for inductive inference either.
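The inductive claim is easy to make concrete: given the learned W, σ² and µ, an unseen instance can be embedded with the standard PPCA posterior-mean projection ⟨x⟩ = M⁻¹Wᵀ(t − µ), which needs no link information at test time. A minimal sketch (our own, assuming this PPCA-style projection carries over to PRPCA):

```python
import numpy as np

def embed(T_new, W, sigma2, mu):
    """Project unseen columns of T_new (d x N_new) into the q-dim latent space.

    Posterior-mean projection <x> = M^{-1} W^T (t - mu), as in PPCA.
    """
    q = W.shape[1]
    M = W.T @ W + sigma2 * np.eye(q)
    return np.linalg.solve(M, W.T @ (T_new - mu))   # q x N_new
```

In the noise-free limit (σ² → 0, orthonormal columns of W), the projection exactly inverts the generative map t = Wx + µ, which makes it easy to test.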
Figure 5: Comparison between PRPCA and other methods on Cora and WebKB (accuracy on the Cora subsets DS, HA, ML, PL and the WebKB subsets Cornell, Texas, Washington, Wisconsin for SVM on content, SVM on links, SVM on link-content, DGR, PLSI+PHITS, link-content MF and PRPCA).

Performance on PoliticalBook As in mixed graph Gaussian process (XGP) [19] and LWP [15], we randomly choose half of the whole data for training and the rest for testing. This subsampling process is repeated for 100 rounds, and the average area under the ROC curve (AUC) with its standard deviation is reported in Table 1, where GPC is a Gaussian process classifier [18] trained on the original feature representation, and relational Gaussian process (RGP) is the method in [5]. For PCA and PRPCA, we first perform DR and then train a Gaussian process classifier on the low-dimensional representation. Here, we set q = 5 for both PCA and PRPCA. On this data set, PRPCA also dramatically outperforms PCA and achieves performance comparable with the state of the art. Note that RGP and XGP cannot learn a low-dimensional embedding for the instances. Although LWP can learn a low-dimensional embedding, the computation cost to obtain an embedding for a test instance is O(N^3) because it has to invert the kernel matrix defined on the training data.

Table 1: Performance (AUC) on the PoliticalBook data set. Results for GPC, RGP and XGP are taken from [19], where the standard deviation is not reported.

GPC    RGP    XGP    LWP           PCA           PRPCA
0.92   0.98   0.98   0.98 ± 0.02   0.92 ± 0.03   0.98 ± 0.02

Acknowledgments

Li and Yeung are supported by General Research Fund 621407 from the Research Grants Council of Hong Kong. Zhang is supported in part by the 973 Program (Project No. 2010CB327903). We thank Yu Zhang for some useful comments.

References

[1] D. J. Bartholomew and M. Knott. Latent Variable Models and Factor Analysis. Kendall's Library of Statistics 7, second edition, 1999.
[2] C. M. Bishop. Bayesian PCA. In NIPS 11, 1998.
[3] J. Chang and D. M. Blei. Relational topic models for document networks. In AISTATS, 2009.
[4] J. Chang, J. L. Boyd-Graber, and D. M. Blei. Connections between the lines: augmenting social networks with text. In KDD, pages 169–178, 2009.
[5] W. Chu, V. Sindhwani, Z. Ghahramani, and S. S. Keerthi. Relational learning with Gaussian processes. In NIPS 19, 2007.
[6] F. Chung. Spectral Graph Theory. Number 92 in Regional Conference Series in Mathematics. American Mathematical Society, 1997.
[7] D. A. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In NIPS 13, 2000.
[8] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. M. Mitchell, K. Nigam, and S. Slattery. Learning to extract symbolic knowledge from the world wide web. In AAAI/IAAI, pages 509–516, 1998.
[9] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38, 1977.
[10] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning. The MIT Press, 2007.
[11] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 2000.
[12] H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 27:417–441, 1933.
[13] I. T. Jolliffe. Principal Component Analysis. Springer, second edition, 2002.
[14] W.-J. Li and D.-Y. Yeung. Relation regularized matrix factorization. In IJCAI, 2009.
[15] W.-J. Li, Z. Zhang, and D.-Y. Yeung. Latent Wishart processes for relational kernel learning. In AISTATS, pages 336–343, 2009.
[16] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000.
[17] R. Nallapati, A. Ahmed, E. P. Xing, and W. W. Cohen. Joint latent topic models for text and citations. In KDD, pages 542–550, 2008.
[18] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[19] R. Silva, W. Chu, and Z. Ghahramani. Hidden common cause relations in relational learning. In NIPS 20, 2008.
[20] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, pages 485–492, 2002.
[21] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[22] J.-P. Vert. Reconstruction of biological networks by supervised machine learning approaches. In Elements of Computational Systems Biology, 2009.
[23] T. Yang, R. Jin, Y. Chi, and S. Zhu. A Bayesian framework for community detection integrating content and link. In UAI, 2009.
[24] T. Yang, R. Jin, Y. Chi, and S. Zhu. Combining link and content for community detection: a discriminative approach. In KDD, pages 927–936, 2009.
[25] D. Zhou, B. Schölkopf, and T. Hofmann. Semi-supervised learning on directed graphs. In NIPS 17, 2004.
[26] S. Zhu, K. Yu, Y. Chi, and Y. Gong. Combining content and link for classification using matrix factorization. In SIGIR, 2007.